Automatic, Illumination-Invariant and Real-Time Green-Screen Keying Using Deeply Guided Linear Models

The conventional green screen keying method requires users' interaction to guide the whole process and usually assumes a well-controlled illumination environment. In the era of "we-media", millions of short videos are shared online every day, and most of them are produced by amateurs in relatively poor conditions. As a result, a fully automatic, real-time, and illumination-robust keying method would be very helpful and commercially promising in this era. In this paper, we propose a linear model guided by deep learning prediction to solve this problem. The simple, yet effective algorithm inherits the robustness of the deep-learning-based segmentation method, as well as the high matting quality of energy-minimization-based matting algorithms. Furthermore, thanks to the introduction of linear models, the proposed minimization problem is much less complex, and thus, real-time green screen keying is achieved. In the experiment, our algorithm achieved keying performance comparable to manual keying software and deep-learning-based methods while beating other shallow matting algorithms in terms of accuracy. As for the matting speed and robustness, which are critical for a practical matting system, the proposed method significantly outperformed all the compared methods and showed superiority over all the off-the-shelf approaches.

Introduction

Thanks to the rapid development of computer graphics, the compositing shot has become a common choice in the film and television industry. Green/blue screen keying plays a crucial role in image/video compositing [1] and has already shown its production-level matting quality in many applications. This "well-developed" technology, however, requires professional users' guidance and other ad hoc settings, such as a specially designed lighting apparatus for even illumination and a matte screen material to reduce light reflection.
In recent years, with the surge of "we-media", millions of short videos are shared online every day, and most of them are produced by amateurs in relatively poor conditions. As a result, an Automatic, Illumination-invariant, and Real-time (AIR) keying method could be very commercially promising in the age of the mobile Internet. In this paper, we propose a fully automatic, real-time green screen keying algorithm for unconstrained scenarios, such as screens under natural light, with shadows, or with marks on them. Though the problem has received little attention from the research community, achieving an "AIR" keying algorithm is, as we show later in this paper, not a trivial task. Firstly, it is hard to directly employ the existing keying methods [2,3] in AIR keying, as there are no human marks or interaction in the process. Secondly, the sophisticated matting algorithms [4][5][6][7] also need initialization annotations from humans and cannot perform sufficiently well in production video processing. Most recently, deep-learning-based matting algorithms have illustrated their high robustness in very challenging scenarios [8][9][10][11]. However, due to their high computational complexity, they cannot achieve real-time speed even on low-resolution (typically below 512 × 512) images. This resolution cannot meet the basic requirements of today's video or image applications, which usually require at least 1080P frames. One can, of course, upsample the low-resolution matting result to a higher resolution, but the pixelwise matting accuracy will decrease significantly. Actually, the contradiction between the requirements of pixelwise accuracy and real-time speed is a long-standing and essential problem in deep learning research. In this work, we address this problem by introducing deeply guided linear models and a framework for smartly combining deep models and shallow models.
In the training stage, a deep network was trained to robustly classify each pixel into foreground and background on low-resolution images. At test time, linear models were trained online under the supervision of the deep network, and then the α value of each pixel was determined in a coarse-to-fine style. The resulting green screen keying method is totally Automatic, Illumination-invariant, and Real-time (AIR). It achieved much better matting results than the existing shallow and deep matting approaches in terms of accuracy, speed, and robustness. When compared to state-of-the-art commercial keying software operated with human interaction, our method illustrated comparable accuracy and overwhelming superiority in speed. The contribution of this work is three-fold:
• First, to the best of our knowledge, our keying algorithm is the first AIR keying method in the literature;
• Second, the combination of the coarse output of deep learning and an online-trained linear model is novel and also inspiring from the perspective of machine learning [12,13];
• Finally, to conduct a more comprehensive evaluation, we designed and generated a new green screen dataset, Green-2018. This dataset is not only larger than the existing ones [3], but also contains much more variance in terms of foreground object categories, illumination changes, and the texture patterns of the green screens. This dataset is suitable for designing better algorithms for more challenging tasks such as outdoor green screen keying.
The rest of this paper is organized as follows. In Section 2, the motivation of the proposed method as well as its flowchart are introduced. Section 3 proposes a small, yet effective CNN. Section 4 presents the algorithmic details of the deeply guided linear model. Section 5 introduces the new green screen dataset, while the last two sections give the experiments (Section 6) and conclusions (Section 7), respectively.
Overview of the Proposed Method

Without controlled illumination and effective guidance by humans, one firstly needs a highly robust segmentation algorithm to distinguish background from foreground. Motivated by the success of deep learning [14,15], in this work, we also employed deep neural networks for AIR green screen keying. However, as we explain later, a robust CNN model can hardly achieve high robustness and high pixelwise accuracy simultaneously, especially when the time budget is limited.

A Dilemma Existing in Deep Learning Matting

Although deep learning has achieved great success in the field of computer vision, it still faces some fundamental difficulties. For pixelwise classification/regression problems, it is hard for a single deep network to perform prediction precisely given a limited time budget, e.g., 40 ms per image (the real-time criterion). The dilemma is two-fold: the running time of most deep networks increases quickly as the input image size grows; it is also not easy to obtain a pixelwise precise prediction for a high-resolution image from a low-resolution prediction. More essentially, in deep networks, each pixel of a prediction map is rendered from a large neighboring region of the input image. This neighboring region, formally termed the "receptive field" [16][17][18], plays a significant role in explaining the high robustness of deep learning [19][20][21]. However, its drawback is also obvious: as the receptive fields of two neighboring pixels are very alike, it is very hard to generate a prediction map with sharp boundaries, on which adjacent map pixels are assigned distinct values. Researchers have been making considerable effort to alleviate the problem via more complex network topologies [22][23][24][25][26], while introducing even more computational complexity. We demonstrate this dilemma in Figure 1.
From the figure, we can see that, although the alpha matte predicted by the deep network is globally robust, it has ambiguous boundaries, which reduces the "user experience" significantly. In contrast, shallow methods (KNN matting [6] and information flow [27]) can generate more precise alpha values in some local regions. (Figure 1 panels, from left to right: input image; deep learning; KNN matting.)

Our Solution

In this work, we propose to address the above problem by smartly fusing the deep and shallow learning approaches. The flowchart of our algorithm is shown in Figure 2. From the chart, we can see that the high-resolution test image (I_h) is downsampled into one middle-resolution image (I_m) and one low-resolution image (I_l). In the first stage, an offline-trained, light-weight, and symmetrical CNN is applied to I_l to roughly classify each small region into foreground and background. The initial prediction is then upsampled to match the middle-sized I_m as learning guidance for the following shallow model. In the second step, a linear model is trained online based on the raw features (RGB values and texture features in this work) extracted only from this particular image to fine-tune the initial classification result. As we show in Section 4, the loss function employed in this stage can be considered as a Linear Discriminant Analysis (LDA) loss regularized by an affinity term, which usually yields a smoother mask while maintaining the prediction accuracy. The third step is conducted on the high-resolution image (I_h), where we focus on the "uncertain" region U defined by the previous linear classification. The soft matting values in this region, α_i ∈ [0, 1], ∀i ∈ U, are determined by a sigmoid function whose hyperparameters are selected via brute force searching with the standard KNN matting loss, as we describe in Section 4.
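The three-stage coarse-to-fine flow described above can be sketched as a minimal skeleton. Everything here is illustrative: the function names, the downsampling factors, and the nearest-neighbor up/downsampling stubs are assumptions, not the paper's actual implementation.

```python
import numpy as np

def downsample(img, factor):
    # Crude stride-based downsampling, standing in for proper resizing.
    return img[::factor, ::factor]

def upsample(img, factor):
    # Nearest-neighbor upsampling along both spatial axes.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def air_keying(I_h, cnn_predict, train_linear, refine_alpha):
    """Sketch of the three-stage pipeline (names and factors illustrative)."""
    # Stage 1: coarse foreground/background prediction at low resolution.
    I_l = downsample(I_h, factor=8)      # low-resolution copy
    coarse = cnn_predict(I_l)            # offline-trained CNN
    guide = upsample(coarse, factor=4)   # upsampled to match I_m

    # Stage 2: online linear model on raw RGB/texture features of I_m,
    # supervised by the CNN guidance (LDA loss + affinity term).
    I_m = downsample(I_h, factor=2)
    w, b = train_linear(I_m, guide)

    # Stage 3: sigmoid refinement of alpha in the uncertain region of I_h,
    # with hyperparameters chosen by brute force search.
    return refine_alpha(I_h, w, b)
```

The skeleton only fixes the data flow; each stage's internals are detailed in Sections 3 and 4.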
A Small, yet Effective CNN for Segmentation on Green Screens

In recent years, much effort has been made to handle the natural matting problem, in which the foreground and background are not predefined. Though the problem is ill-posed, deep-learning-based methods [8][9][10][28] still illustrate high accuracy in this task. Recent approaches have also focused on matting without any external input [29][30][31][32] and matting with a known natural background [33,34]. It seems we could easily pick one of the above "off-the-shelf" matting networks for our green screen matting. However, those networks are relatively large so as to extract more abstract semantic information, which is important for robust natural matting. On the contrary, in green screen matting, some low-level features are already informative enough, and thus, the above networks are unnecessarily complex and slow for our task. In [35], Liu et al. proposed a small network for edge detection. Considering the similar motivation of exploiting multiscale information, we designed our segmentation network based on their RCF model. To achieve an even higher forward speed, so that the whole system is real-time, we further shrank the RCF model by reducing the channel numbers, as well as removing some redundant skip connections. In this work, we term this reduced RCF as R²CF, whose structure is shown in Figure 3. We can see that the backbone of the R²CF network is a shrunken version of the VGG-16 network [36,37] with three extra branches and their corresponding intermediate loss layers. In practice, we trained the R²CF model on the training set of the proposed new green screen dataset (described in Section 5). We initialized the network's parameters via the "Xavier" strategy and employed conventional Stochastic Gradient Descent (SGD) for optimization. The minibatch size was 32, and the base learning rate was 0.003, divided by 10 every 30,000 iterations.
The momentum and weight decay were set to 0.9 and 0.00004, respectively. One needs to perform SGD for 100,000 iterations to obtain good performance. The learned deep model performed sufficiently well in practice, though one can still observe some segmentation flaws (see Figure 4), which could be almost totally corrected by the following linear classifier, as we introduce in Section 4. On the other hand, the network was very efficient, with a speed below 10 ms per image on a middle-level GPU.

Figure 3. The R²CF network, composed of 13 convolutional layers and 3 fully connected layers. Similar to its prototype, VGG-16 [36], all the convolutional layers are divided into 5 groups, conv1, ..., conv5. Feature maps from conv3, conv4, and conv5 are integrated together after being filtered by 1 × 1 convolutional layers. The three obtained feature maps are then finally summed up elementwise, after another 1 × 1 convolutional layer. An upsampling process is conducted to guarantee that all feature maps have the same size.

Training Features

As explained in Section 2.1, one cannot expect deep learning to predict pixelwise accurate segmentations or alpha mattes, especially with a limited time budget. Given the output of R²CF, we extracted the two-channel feature map just before the final softmax layer to calculate the "trimap" T_l ∈ ℝ^(w_l × h_l), where T_l^i is the i-th pixel of the low-resolution trimap T_l and the value η_i is obtained from f_i and b_i, the values of the two-channel output of the R²CF network at the i-th pixel's location. They stand for the confidence of being foreground and background at this pixel, respectively. Then, the low-resolution trimap is resized to the mid-resolution version: T_l ∈ ℝ^(w_l × h_l) → T_m ∈ ℝ^(w_m × h_m). In the second stage, as shown in Figure 2, training samples are collected randomly from both the background region (T_m^i ≤ 0.01) and the foreground region (T_m^i ≥ 0.99).
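The exact formula for η_i is not reproduced in this excerpt; a softmax over the two confidence channels is one plausible choice, which the following sketch assumes. The 0.01/0.99 thresholds mirror the foreground/background sampling thresholds above; the intermediate band is the "unknown" region.

```python
import numpy as np

def make_trimap(f, b, lo=0.01, hi=0.99):
    """Illustrative trimap from the network's two confidence channels.
    eta is assumed to be a softmax over (f, b); the paper's exact
    formula may differ."""
    eta = np.exp(f) / (np.exp(f) + np.exp(b))  # foreground confidence in [0, 1]
    trimap = np.full(eta.shape, 0.5)           # 0.5 marks the unknown band
    trimap[eta <= lo] = 0.0                    # confident background
    trimap[eta >= hi] = 1.0                    # confident foreground
    return trimap
```

In the pipeline, this low-resolution trimap would then be resized to the mid-resolution T_m before sampling training pixels.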
In this paper, the feature of each training sample contains two parts: the normalized RGB value and a texture feature extracted on a small adjacent region (3 × 3 in this work). In mathematical form, the feature f_i ∈ ℝ^15 is written with β_i denoting the local texture feature of the pixel, defined via the function Hist_Δθ(m_grad, d_grad), the histogram of gradient directions weighted by the corresponding gradient magnitudes; Z_i is the normalization parameter, so 1^T β_i = 1. In this work, we set Δθ = 30°; thus, the dimensions of β_i and f_i are 12 and 15, respectively.

Two Types of Loss Functions

Given the training sample set {f_1, f_2, ..., f_N} with the corresponding labels {l_1, l_2, ..., l_N}, l_j ∈ {0, 1}, which are actually the sampled pixel values of the trimap T_m, we tried to train a linear model. To obtain a good estimation of ω and b, we first built the classification loss following Linear Discriminant Analysis (LDA) [38], where P and N stand for the positive and negative subsets of the training samples and S_w^± denotes the "within-class scatter matrix" defined in the LDA algorithm [38]. Recall that LDA was proposed for general classification, which differs from the matting problem, where the pixels are actually related geometrically. We thus introduced an affinity loss from the family of spectral-based matting [4,6] into the above optimization problem. Specifically, we employed the strategy of KNN matting [6] to build the affinity matrix L_rgb (here, the subscript rgb indicates that the kernel values in this affinity matrix are calculated on the RGB values), with the hyperparameter k = 7. Given the affinity matrix L, the affinity loss and the combined loss function follow. In practice, we set λ = 1000, and the introduction of the affinity loss leads to a smoother alpha output, which benefits the following matting step.
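The 15-D feature described above (normalized RGB plus a 12-bin, magnitude-weighted gradient-orientation histogram with Δθ = 30°) can be sketched as follows. The gradient operator and the exact normalization are assumptions; the paper only specifies the 3 × 3 neighborhood and the bin width.

```python
import numpy as np

def texture_feature(gray, y, x, n_bins=12):
    """12-D beta_i: magnitude-weighted gradient-orientation histogram
    over the 3x3 neighborhood of pixel (y, x), bin width 30 degrees."""
    patch = gray[y - 1:y + 2, x - 1:x + 2].astype(float)
    gy, gx = np.gradient(patch)                    # per-axis gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 360.0)
    hist = np.zeros(n_bins)
    for m, a in zip(mag.ravel(), ang.ravel()):
        hist[int(a // 30) % n_bins] += m           # magnitude-weighted vote
    s = hist.sum()
    return hist / s if s > 0 else hist             # normalize so 1^T beta = 1

def pixel_feature(rgb, gray, y, x):
    """15-D f_i: normalized RGB (3) concatenated with the texture histogram (12)."""
    return np.concatenate([rgb[y, x] / 255.0, texture_feature(gray, y, x)])
```

With 1500 positive and 1500 negative pixels sampled as described below, these features form the design matrix for the online linear model.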
Note that building the affinity system and solving the optimization are the most time-consuming parts of the KNN matting algorithm; each of them usually takes more than 1000 ms on a mid-resolution image. In our case, however, this problem does not exist. The reason is two-fold. First, as we assume a linear model to represent each pixel's alpha value, one does not need to sample all the pixels of the image, whose number is usually over a million. Actually, in our experiment, we only sampled 1500 positive samples and 1500 negative samples, which were sufficient to obtain good results. Secondly, and more importantly, thanks to the linear assumption, the quadratic matrix L_rgb collapses into an extremely small one, L̃_rgb, which is only 15 × 15 in this work. As a result, the optimization problem of Equation (9) can be easily solved via off-the-shelf quadratic programming solvers within 5 ms.

Fine-Tuning the Alpha Values via Brute Force Searching

As shown in Figure 2, in Step 3, we first calculate the binarized version of the last step's output, α̂_j = ω^T f_j + b, and then an "unknown" region of the image is obtained via a simple Gaussian filtering and thresholding process. We fix the binary value of α̂ outside the unknown area and recalculate the inside values via the sigmoid function. The hyperparameters λ and µ are determined via a brute force searching procedure whose loss function is exactly the one defined in KNN matting [6]. Note that when performing the brute force search, it is not necessary to take all the unknown pixels into consideration. In this work, we only randomly sampled 2000 unknown pixels to estimate the best λ and µ. Another 10,000 pixels in the known region were sampled to calculate the affinity matrix of KNN matting. The whole of Step 3 typically takes only 15 to 20 ms.

The New Green Screen Dataset

To the best of our knowledge, the only publicly available green screen dataset was the one proposed in [3], which contains four videos captured in controlled environments.
To test the algorithm in more challenging scenarios, in this work, we generated a bigger and more comprehensive green screen dataset, called "Green-2018" in this paper. We illustrate the dataset in Figure 5. To obtain high-quality ground-truth alpha, all the images in the new dataset were synthetically composed from a foreground image (with a precise alpha matte) and a background image. Unlike the existing dataset, which only focuses on human subjects, the Green-2018 dataset has various foreground types, including animals, humans, and furniture. On the other hand, the background images in the new dataset also involve more variance. As we show in Figure 5, there are two main attributes: textured versus pure green screens (we only focus on green backgrounds here; thus, the textured backgrounds are also generated by using a number (two in our case) of different green colors), and natural versus controlled lighting conditions. We rendered our dataset by randomly placing the foreground objects at random scales. To make the synthetic images closer to real ones, shadows were also rendered on some of the background images. The whole dataset contains 657 foreground images and 2693 background images. We divided them into two subsets, for training and testing, respectively. Our training subset contains 20,370 merged images, which were generated from 485 foreground and 2010 background images, while the test subset includes the remaining 172 foreground images and 683 background images, from which 3096 composed test images were rendered.

Experiments and Results

In this section, we compare the proposed method with different types of approaches that can solve the green screen matting problem. Three state-of-the-art shallow matting algorithms were compared: closed-form matting [4], KNN matting [6], and the most recently proposed information flow matting [27].
Two typical deep-learning-based matting methods, i.e., deep image matting [8] and IndexNet matting [39], were also included in the comparison. Meanwhile, we also illustrate the comparison between our automatic method and off-the-shelf manual keying software, i.e., After Effects (AE) from Adobe. Following the conventional settings in the matting literature [6,27,40], we report the performance via four evaluation metrics: SAD, MSE, Connectivity, and Gradient. As mentioned in Section 5, we evaluated all the involved methods on two datasets, i.e.,
• The original dataset introduced in [3]. This is a pure green screen dataset including only four videos. We call this dataset TOG-16;
• Our Green-2018 dataset, which contains textured and pure green screens, as well as more foreground categories.
Note that no ground-truth matting α is offered in the TOG-16 dataset; we manually labeled 100 images of this dataset and evaluated the matting performance on this shrunken version of TOG-16. The experiments were conducted on a PC with an Intel i5-8600 CPU, 32 GB of memory, and an NVIDIA GTX-1080Ti GPU.

The Running Speed

In a practical matting system, one usually requires real-time running speed. Consequently, we first compare the running speed of all the involved methods in Table 1.

Table 1. Running time (ms/img):
closed-form [4]: 3950; KNN matting [6]: 20,000; information flow [27]: 15,000; deep matting [8]: 312; IndexNet matting [39]: 6613; AE-Keylight: 30,000; this work: 42.

From the speed comparison, we can see that only our method can be considered real-time, the second fastest matting algorithm being deep matting [8], which only ran at around 3 fps. Note that, for all the methods except ours, the reported running time does not include the time needed to generate the "trimap". Our method illustrates an obvious superiority in efficiency.

6.2. The Matting Accuracy

6.2.1.
The Comparison to Other Matting Algorithms

As introduced above, the proposed method is "end-to-end". However, that is not true for the other compared methods: they all require "trimaps" for matting. For a fair comparison, the required "trimaps" were obtained by using our R²CF model. The test results are shown in Tables 2 and 3. As we can see, for both the simple and the complicated scenarios, our method showed performance comparable to the deep-learning-based methods and obvious superiority over the shallow approaches. More comparison results are shown in Figure 6. From the images, one can see that the proposed method performed well in most scenarios and showed high robustness, as can also be seen in Tables 2 and 3. Besides the automatic matting algorithms proposed in the literature, manual matting software dominates the current market. This software is mostly designed around a single key color (green or blue) background. We also evaluated our method by comparing it to the manual method on two randomly picked videos from TOG-16. The quantitative results are shown in Table 4, from which one can see that the accuracy of our method is comparable to that of the manual commercial software. Note that the software was operated by an amateur user with one week of AE experience. When testing, the operator only performed manual keying on the first frame and used the same keying parameters for all the following frames of the sequence. From the comparison results shown in Section 6.2.1, one could say that the proposed method enjoys a fast running speed while usually performing slightly worse than the deep-learning-based methods, which have also demonstrated state-of-the-art matting performance on some well-known matting datasets [8,39]. However, the situation changed dramatically when the same experiment was conducted on real-life images, rather than the "synthetic" images employed in the Green-2018 dataset.
We captured eight video sequences with a real human shown in front of the same background settings as in Green-2018 (see Figure 7). As can be seen, the "trimap" obtained using the R²CF model became imperfect and sometimes even incorrect. In this scenario, the deep-learning-based methods deteriorated rapidly, while the proposed method still maintained a relatively high matting accuracy. Our method illustrated much higher matting robustness than the "state-of-the-art" matting approaches. (Figure panels, from left to right: the input image; the imperfect "trimap" obtained by using the R²CF model; the matting result of deep image matting [8]; the result of IndexNet matting [39]; and the result of this work. One can see that as the "trimap" becomes incorrect, the deep-learning-based methods are influenced dramatically, while the proposed method performs much more stably.)

Conclusions

In this paper, we proposed a novel way to achieve automatic, illumination-invariant, and real-time keying on green screens. Linear models and deep learning results were smartly combined to generate robust matting results at a nearly real-time speed (around 42 ms per image). Besides, a new green screen dataset, which contains more foreground variance and more challenging backgrounds, was built. To the best of our knowledge, this is the first algorithm that can perform AIR keying, and the proposed dataset is also the first in-the-wild green screen dataset. The superiority in efficiency, accuracy, and robustness of the proposed method was also proven in our experiments. In the future, our work will focus on improving the quality of the coarse output of the offline-trained CNN, which is critical to the final keying quality. In addition, we will apply the proposed approach to higher image resolutions and more complex scenes to verify its effectiveness.

Conflicts of Interest: The authors declare no conflict of interest.
Sliding Impact Mechanism of Square Roadway Based on Complex Function Theory

To clarify the process of stress change and plastic zone evolution of square roadways under high-stress conditions, the rotational square expansion plastic zone evolution model of the square roadway was established by theoretical analysis, numerical simulation, and engineering verification. The shear slip impact stress criterion of the square roadway based on complex variable function theory was studied, and the laws of surrounding rock stress distribution, plastic zone expansion, elastic energy density, local energy release rate (LERR), and total energy release of the square roadway were analyzed. The results show that compressive stress is concentrated in the four corners of the roadway after the roadway is excavated and transfers with the change of the plastic zone. Main shear failures start from the four corners and develop in a rotating square shape, forming square failure zones I and II. Square failure zone I is connected with the roadway contour and rotated 45°. Square failure zone II is connected with square failure zone I and rotated 45°. When the original rock stress is low, the surrounding rock tends to be stable after the square shear slip line field is formed. When the original rock stress is high, shear failure of the surrounding rock continues to occur after square failure zone II is formed, showing a spiral slip line. The corners of the square roadway and square failure zones I and II are the main energy accumulation and release areas. The maximum elastic energy density and LERR increase exponentially with the ratio of vertical stress to uniaxial compressive strength (Ic). When the square corners of the roof are changed to round corners, the plastic zone of the roof expands to form an arch structure. The maximum elastic energy density decreases by 22%, which reduces the energy level and the possibility of rock burst. This study enriches the failure mechanism of roadway sliding impact.
It can provide a basic theoretical reference for the design of new roadway sections and support forms based on the prevention of rock burst.

Introduction

The causes of rock burst are various, and their manifestations are also different [1][2][3][4]. The cross section of many premining roadways in a coal mine is square or rectangular, and many rock bursts have happened in these roadways. For example, on February 22, 2020, a rock burst occurred in Xinjulong Coal Mine, resulting in four deaths and extensive roadway damage. Many scholars have studied the failure of rectangular and square roadways and achieved fruitful results. Shi et al. [5] studied the stress distribution law of rectangular roadways under different span-height ratios and lateral pressure coefficients. Guo et al. [6] studied the evolution law of the plastic zone and the large-scale failure criterion of roadway surrounding rock by the FLAC numerical simulation method. Wu et al. [7,8] found that V-shaped belt failure occurred on both sides of the tunnel through true triaxial experiments and numerical simulation. Wang et al. [9] established the elastic energy spatial zoning evolution model of surrounding rock under different driving speeds. Yang et al. [10] divided the rock burst roadway into a head-on area, a dynamic evolution area of the plastic circle, and a stable area of the plastic circle. Li et al. [11] divided the rectangular roadway roof into a fracture-through area, fracture development area, microfracture area, and nonfracture area. Yu et al. [12] studied the loose range of rectangular roadway surrounding rock under different lithologic conditions by combining the method of multipoint displacement of deep base points and borehole peeping. Yu et al. [13] studied the distribution and evolution law of stress, displacement, and plastic zone of a roadway with a composite roof. Pan et al.
[14] studied the stress distribution, surrounding rock displacement, and plastic zone distribution characteristics of the bottom gas drainage roadway through numerical simulation. Meng et al. [15] obtained the evolution process of roadway surrounding rock impact by studying the stress, displacement, and plastic zone distribution of soft rock roadway. Wang et al. [16] studied the analytical expressions of stress distribution, plastic zone, and disturbed zone width on both sides of the roadway in a gas coal seam. Hou et al. [17] studied the deformation characteristics and acoustic emission characteristics of rectangular roadway surrounding rock under different initial in situ stresses through a laboratory test. Yin et al. [18] studied the critical plastic softening zone depth and critical load of the rectangular roadway under rock burst. Rock burst occurs when the softening zone of tunnel rock mass with rock burst tendency reaches the critical depth [19]. Li et al. [20] studied the deformation and failure process of rectangular roadways through numerical simulation and field observation. Zheng et al. [21] analyzed the calculation formula of the Hoek Brown constitutive model based on the stress of rectangular roadways and studied the effect of anchorage system on roadway roof under different preloads. Chen et al. [22] studied the working face stability of shallow buried square tunnels in heterogeneous soil. Zhao et al. [23] applied the theory of complex variable function to solve the stress of square tunnel surrounding rock in a homogeneous isotropic elastic rock mass. Zhang et al. [24] established the friction work calculation model of roadway plastic zone and studied the influence of the relationship between total energy and friction work in roadway impact area on rock burst. Yi et al. [25] studied the transmission and dissipation of strain energy in the surrounding rock of deep roadway by numerical simulation. Wen et al. 
[26] studied the rock burst evaluation method based on the ratio of released energy to absorbed energy. However, the law of plastic zone evolution and energy dissipation around square roadways needs to be studied further. In this paper, the rotational square expansion plastic zone evolution model of the square roadway was established, and the shear slip stress conditions of the square roadway based on the theory of complex variable functions are studied. The stress distribution, plastic zone expansion, elastic energy density, local energy release rate, and total energy released by the surrounding rock are analyzed, and an optimization scheme for the roadway section is proposed.

Roadway Shear Slip Impact Model and Surrounding Rock Stress Distribution

2.1. Shear Slip Impact Model of Roadway. After the excavation of the square roadway, a high stress concentration is formed at the corners, and shear slip occurs and expands continuously from the corners, as shown in Figure 1. When the shear slip zone passes through, a square failure zone is formed, which is called square failure zone I. After the formation of square failure zone I, the stress concentration zones shift from the corners of the roadway to the corners of square failure zone I, and the shear slip continues to expand in a square form from the corners of square failure zone I, forming square failure zone II. Each square failure zone is circumscribed with the tunnel contour (or the previous square failure zone) and rotated 45°; the expansion of the plastic failure area is called rotational square expansion, and the shear slip line field is a rotational square slip line field. After the formation of square failure zone II, the stress distribution at the corners of square failure zone II changes due to the reaction of the destroyed coal and rock mass in the roadway. The shear failure then no longer propagates as a rotational square but tends toward the logarithmic spiral [27,28] shear slip mode.
When the roadway failure has not yet formed square failure zone I, the plastic zone is small and will not produce a large-scale rock burst. When the roadway failure forms square failure zone I, the failure area is connected for the first time, forming a large range of overall weak blocks. If high elastic energy has accumulated around the roadway, a large-scale roadway rock burst is easily formed. The centers of the two sides, roof, and floor are the main impact areas, while the impact at the corners is weak. When the roadway failure forms square failure zone II, the overall weak block of the surrounding rock further increases, again easily forming a large-scale roadway rock burst. The two sides, roof, floor, and corners are the main impact areas. Stress Distribution and Impact Determination of Surrounding Rock Based on Complex Function Theory. In [23], the stress distribution of the surrounding rock of a square roadway is studied by complex variable function and integral transformation theory. The circular stress distribution function of the surrounding rock in polar coordinates of the square roadway is given as follows. The length of the roadway is much larger than its cross section, so the deformation and failure of the roadway can be approximated as a plane strain problem. According to the plane strain characteristics, equation (2) can be obtained from the generalized Hooke's law, where σ z is the stress along the length of the roadway and σ r is the radial stress of the roadway. σ r is 0 on the surface of the roadway, and σ m , the average stress of the roadway, is given as follows: According to the characteristics along the slip line, Δσ m is directly proportional to Δω, where Δω′ is the angle from the starting point of a spiral line to any point of the spiral line. According to the Mohr-Coulomb criterion, under the limit equilibrium state, the stress state at any point can be expressed as follows, where σ is the normal stress, τ is the shear stress, and φ is the internal friction angle.
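The Mohr-Coulomb limit-equilibrium check described above can be sketched numerically. This is an illustrative sketch only: the cohesion and friction-angle values below are hypothetical assumptions, not parameters taken from the paper.

```python
import math

def shear_strength(sigma_n, cohesion, phi_deg):
    """Mohr-Coulomb limit shear stress: tau_lim = c + sigma_n * tan(phi)."""
    return cohesion + sigma_n * math.tan(math.radians(phi_deg))

def slip_impact(sigma_n, tau, cohesion, phi_deg):
    """Shear slip impact occurs where the acting shear stress breaks
    through the limit equilibrium state (tau > tau_lim)."""
    return tau > shear_strength(sigma_n, cohesion, phi_deg)

# Hypothetical values (stresses in MPa, angle in degrees), not from the paper:
print(round(shear_strength(10.0, 2.0, 30.0), 3))  # limit tau for sigma_n = 10 MPa
print(slip_impact(10.0, 9.0, 2.0, 30.0))          # True: limit exceeded, slip occurs
```

Scanning each point of the surrounding rock with such a check is, in essence, how the region of shear slip impact is delimited once the stress field from the complex-variable solution is known.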
Shock and Vibration
Substituting equation (5) into equation (6), the stress state at any point in the limit equilibrium state is given by equation (7). When the stress state of the surrounding rock breaks through the limit equilibrium state, shear slip impact occurs in the corresponding region. Modeling. The strain-softening model was established in FLAC 3D software. The model length × width × height is 100 m × 1 m × 50 m. The physical and mechanical parameters of the rock strata are mainly taken from Xinjulong Coal Mine, and conventional parameters are used to supplement the missing coal and rock parameters. Table 1 lists the physical and mechanical parameters of each rock stratum. The bottom and lateral boundaries are fixed, and different forces are applied on the top of the model, forming 21 groups of different initial stress states; the vertical stress at the boundary between the coal seam and roof is 1.0-3.0 [σ c ] (the ratio of vertical stress to uniaxial compressive strength Ic = 1.0-3.0). The coefficient of lateral pressure is 1.0. Law of Stress Change. After the model is balanced, the roadway is excavated along the coal seam floor with a section size of 4 m × 4 m. Four vertical stress measuring points are arranged in the far field of the roadway sidewall to monitor the change of regional stress. The distance between the stress measuring points and the roadway sidewall is 28-37 m, and vertically they are located on the horizontal plane of the middle point of the roadway. When Ic = 1.5, the vertical stress change of each measuring point is shown in Figure 2. Each measuring point has experienced two obvious stress reduction and recovery stages, which indicates that the surrounding rock of the roadway has undergone two obvious failures, causing regional stress adjustment. The evolution of the compressive stress concentration area of the roadway surrounding rock is shown in Figure 3.
When the roadway is just excavated, the compressive stress is concentrated near the four corners of the roadway. After the first obvious failure of the roadway, the compressive stress concentration area rotates 45° and expands outward. After the second obvious failure of the roadway, the compressive stress concentration area rotates 45° again and expands outward. Evolution Law of Failure Area. When Ic = 1.5, the plastic zone expansion of the surrounding rock is shown in Figure 4. After the first obvious failure of the roadway, square failure zone I is formed, which shows mainly shear and tensile failure. After the second obvious failure of the roadway, square failure zone II is formed, which shows mainly shear failure. The main shear failure areas of the surrounding rock are shown in Figure 5, and the main shear failure zones extend in a rotating square shape. When Ic = 1.5, the shear strain nephogram of the roadway surrounding rock is shown in Figure 6. The shear strain is largest at the four corners of the roadway, and a square shear slip line field connected with the roadway and rotated 45° is formed, which is the boundary of square failure zone I in the theoretical analysis; square failure zone II, with a rotation of 45°, is formed outside square failure zone I. When the original rock stress is low, the surrounding rock tends to be stable after the formation of square failure zone II. When the original rock stress is high, after the formation of square failure zone II, the failure area continues to expand outward, as shown in Figure 7, which shows the shear strain nephogram of the surrounding rock with Ic = 3.0. After the formation of square failure zone II, the stress distribution has changed, and the propagation direction of the failure zone has changed. The slip line field is similar to the spiral expansion form. Energy Monitoring Methods.
When the elastic moduli in the three principal stress directions are the same, the elastic strain energy released by an element can be expressed as [29,30] where U e is the elastic strain energy released by the element, E 0 is the elastic modulus, σ 1 , σ 2 , and σ 3 are the three principal stresses of the element, and v is Poisson's ratio. The elastic strain energy released by an element is expressed by the local energy release rate (LERR), and the sum of the elastic strain energy released by all elements is the total energy released by the surrounding rock, recorded as ERE. The calculation method is as follows [31]: where LERR i is the local energy release rate of the ith element, U e i max is the peak value of the elastic strain energy density of the ith element before failure, U e i min is the valley value of the elastic strain energy density after the failure of the ith element, and V i is the volume of the ith element. According to the above method, FLAC 3D software is used, with a program written in the Fish language, to monitor the energy parameters of the model. Energy Evolution Law. When Ic = 1.5, the evolution law of the elastic energy density of the surrounding rock is shown in Figure 8. After the formation of square failure zone I, elastic energy accumulates at its four corners. After the formation of square failure zone II, elastic energy accumulates at the four corners of square failure zone II. When Ic = 1.5, the local energy release rate of the surrounding rock is shown in Figure 9. The four corners of the roadway and the four corners of square failure zone I are the areas where a large amount of energy is released. The maximum local energy release rate is 30.6 kJ/m3. When Ic = 1.5, the total energy release curve of the surrounding rock is shown in Figure 10. There are two peaks in the total energy released by the surrounding rock. When the total energy released by the surrounding rock reaches peak I, square failure zone I is basically formed.
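The energy bookkeeping described above (elastic strain energy density U e per element, LERR as the peak-to-valley drop across failure, and ERE as the volume-weighted sum) can be sketched as follows. The stress, modulus, and volume values are illustrative assumptions, not data from the paper's FLAC 3D model.

```python
def elastic_energy_density(s1, s2, s3, e0, v):
    """Releasable elastic strain energy per unit volume:
    U_e = [s1^2 + s2^2 + s3^2 - 2v(s1*s2 + s2*s3 + s3*s1)] / (2*E0)."""
    return (s1**2 + s2**2 + s3**2
            - 2.0 * v * (s1 * s2 + s2 * s3 + s3 * s1)) / (2.0 * e0)

def lerr(ue_peak, ue_valley):
    """Local energy release rate of one element: drop from the peak to the
    valley of its elastic strain energy density across failure."""
    return ue_peak - ue_valley

def total_released_energy(elements):
    """ERE: sum of LERR_i * V_i over all failed elements.
    `elements` is an iterable of (ue_peak, ue_valley, volume) tuples."""
    return sum(lerr(p, q) * vol for p, q, vol in elements)

# Hypothetical element (stresses and E0 in MPa, so U_e is in MJ/m3):
ue = elastic_energy_density(30.0, 15.0, 10.0, 20000.0, 0.25)
print(f"{ue * 1000:.1f} kJ/m3")  # same order of magnitude as the paper's LERR values
```

In the paper this bookkeeping is done inside FLAC 3D via Fish scripting; the sketch above only restates the arithmetic of the two equations.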
The surrounding rock tends to be stable and the stress is readjusted, resulting in the accumulation of new strain energy, which leads to a decrease in the total energy released in the region. When the total energy released by the surrounding rock reaches peak II, square failure zone II is basically formed. The stress is adjusted again, the surrounding rock tends to its final stable state, and the total energy released tends to be stable. The statistical results of the maximum elastic energy density (PE) and local energy release rate (LERR) of each model are shown in Figure 11. Through data fitting, it can be seen that PE and LERR increase exponentially with Ic. Case 1. Liangbaosi Coal Mine is located in the Juye coalfield. The average buried depth of the 3507 working face is 1067 m, and the average thickness of the coal seam is 6.2 m. The coal seam, roof, and floor have a weak impact tendency. A coal burst occurred during the tunneling of the 3507 track gateway, which resulted in roof subsidence of the right shoulder and bolt fracture, as shown in Figure 12. The roof subsidence of the right shoulder of the roadway is 200-300 mm, while the middle part of the roof shows no deformation, which is consistent with the theoretical analysis that the corners of the roadway are damaged first. As shown in Figure 13, the middle parts of the sides and the middle part of the roof protrude, while the deformation of the roadway corners is relatively small. This is consistent with the theoretical analysis that the main impact areas are the roadway sides and middle roof. Optimization of Roadway Section The right angle in a structure easily produces stress concentration, while the stress concentration at a fillet is low. Filleted corners should be adopted as far as possible in roadway excavation. A roadway driven along the bottom should avoid forming a right angle at the top corners, while a roadway driven along the top should avoid forming a right angle at the bottom corners.
The two roof corners of the roadway in the model with Ic = 1.5 are changed into rounded corners with a fillet radius of 0.5 m. When the model is recalculated, the law of plastic zone expansion changes: the surrounding rock forms an arch-shaped failure area. The newly formed arched surrounding rock structure can bear more force and reduces deformation. As shown in Figures 14 and 15, the maximum depth of the plastic zone in the roof of the filleted roadway is reduced by 20%, the maximum depth of the plastic zone in the two sides is reduced by 10%, and the location of the maximum horizontal displacement moves from the middle points of the two sides to the bottom corners of the two sides. Figure 16 shows the cloud chart of the maximum elastic energy density after roadway excavation. The maximum elastic energy density of the square roadway is 61.7 kJ/m3, located on the roadway floor. The maximum elastic energy density of the round corner roadway is 48.2 kJ/m3, located at the top coal of the roadway. The maximum elastic energy density of the filleted corner roadway is thus 22% less than that of the square roadway, and the location of the maximum elastic energy density is transferred from the floor to the top coal. Figure 17 shows the local energy release rate nephogram of the round corner roadway. The energy release rate and its range in the floor are larger than those in the top coal. The corners of the roadway floor are still right angles, and the plastic zone there still expands in a rotating square shape in the early stage; a large amount of energy is released at the corners of square failure zone I. When the corners of the roadway roof are changed to round corners, the plastic zone of the roof expands to form an arch structure, which increases the bearing capacity and releases less energy.
Therefore, although the accumulated elastic energy of the floor is less than that of the roof, a greater energy release and damage occur there. At the same time, it can be determined that the bearing ability of the top coal with an arch structure is higher than that of the floor sandstone with right-angle corners. Discussion This paper studies the impact failure mechanism of the square roadway under high stress. The roadway mainly presents tensile-shear failure. The shear failure starts from the four corners of the roadway, develops at 45° to the roadway contour, and finally merges into a square shear failure area, which is circumscribed with the roadway contour and rotated 45° relative to it. Then, the four corners of the square shear failure zone begin to expand, forming a new square shear failure zone, which is circumscribed with the original square shear failure zone and rotated 45°, forming a rotating square slip line field. After two rounds of rotational square shear failure, the stress distribution at the corners changes due to the reaction of the damaged coal and rock mass in the roadway. The shear failure no longer expands in the form of a rotating square; its expansion tends to logarithmic spiral shear slip, which is similar to the spiral shear slip line studied by earlier scholars. When the corners of the square roadway are changed into rounded corners, the evolution law of the plastic zone changes, and the square plastic zone is transformed into an arch shape, which is more conducive to bearing capacity and can reduce the accumulation of elastic energy in the surrounding rock. In addition, the installation of shear bolts in the main shear area of the roadway can strengthen the ability of the roadway to withstand shear failure. More prevention and control technologies require more in-depth research.
The research results enrich the failure mechanism of roadway sliding and impact and can provide a basic theoretical reference for the design of new roadway cross sections and support forms for rock burst prevention and control. Conclusions In this paper, the rotational square expansion plastic zone evolution model of the square roadway was established, and the stress criterion of shear slip impact of the square roadway based on complex function theory was given. When the roadway is just excavated, the compressive stress is concentrated near the four corners of the roadway, and the main shear failure starts from the four corners and gradually forms square failure zones I and II. Square failure zone I is circumscribed with the roadway contour and rotated by 45°, and square failure zone II is circumscribed with square failure zone I and rotated 45°. When the original rock stress is low, after the formation of square failure zone II, the surrounding rock tends to be stable, forming a rotating square shear slip line field. When the original rock stress is high, the shear failure of the surrounding rock continues after the formation of square failure zone II and tends to expand in the form of a logarithmic spiral. When square failure zone I is not formed, the plastic zone is small and a large-scale rock burst will not occur. When square failure zone I is formed, the failure area is connected for the first time, forming a large-scale overall weak block. If there is a high elastic energy accumulation around the roadway, a large-scale roadway rock burst is easily formed. The centers of the two sides and the centers of the roof and floor are the main impact areas, and the impact at each corner is weak. When square failure zone II is formed, the weak block of the surrounding rock further increases, again easily forming a large-scale roadway rock burst. The two sides, roof, floor, and corners of the roadway are the main impact areas.
The corners of the square roadway and the corners of square failure zones I and II are the main energy accumulation and release areas. The maximum elastic energy density and local energy release rate increase exponentially with the ratio of vertical stress to uniaxial compressive strength. When the corners of the roof are changed to round corners, the plastic zone of the roof expands to form an arch structure, the bearing capacity increases, and the plastic zone is smaller when it reaches stability. The maximum elastic energy density of the round corner roadway is 22% less than that of the square roadway, which reduces the energy level and the possibility of rock burst. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
Antimicrobial Activities of Extracts of Some Species of Mangrove Plants and a New Compound Isolated Towards some Selected Strains The bio-materials of four marine mangrove medicinal plants, viz., Aegiceras corniculatum (AGC), Excoecaria agallocha (EA), Rhizophora mucronata (RM), and Xylocarpus granatum (XG), were extracted with methanol and hexane. These extracts were tested for antibacterial activity towards the strains Bacillus pumilus, Bacillus subtilis, Bacillus coagulans, Staphylococcus aureus, Bacillus licheniformis, Corynebacterium diphtheriae, Klebsiella pneumoniae, Pseudomonas aeruginosa, Shigella flexneri, Sphingomonas paucimobilis, Escherichia coli, and Vibrio cholerae, adopting the agar well diffusion method. It is found that a new flavone compound isolated from the hexane extract of EA is effective towards all twelve strains, as is the RM MeOH extract.
The XG MeOH and AGC MeOH extracts are likewise found to be effective towards all twelve strains. The order of effectiveness is found to be: EA hexane > RM MeOH > XG MeOH > AGC MeOH. Finally, the new flavone compound is found to be more effective than the extracts. INTRODUCTION Recent investigations concentrating on the bio-screening of natural products have revived due to the paucity of safe antimicrobial drugs, anti-reverse-transcriptase and anti-HIV agents, and the perilous upsurge of new and re-emerging infectious diseases 1,2,3. Antibiotics from natural sources are efficacious, biodegradable, less toxic, and cost-effective, and could therefore supplement costly synthesized antibiotic drugs 4,5,6. The biopotentiality of mangrove vegetation makes it a reserve for the development of pharmaceuticals, fish and animal feed additives, agrichemicals, and natural pigments 7,8,9. The mangrove preparations used successfully in the treatment of infectious diseases and ailments are envisaged to possess antimicrobial potency 10,11,12.
In the present investigation, the different biological parts of four mangrove species, namely Aegiceras corniculatum, Excoecaria agallocha, Rhizophora mucronata, and Xylocarpus granatum, have been extracted with different solvents, namely hexane and methanol. These extracts have been screened for antimicrobial activity towards the strains Bacillus pumilus, Bacillus subtilis, Bacillus coagulans, Staphylococcus aureus, Bacillus licheniformis, Corynebacterium diphtheriae, Klebsiella pneumoniae, Pseudomonas aeruginosa, Shigella flexneri, Sphingomonas paucimobilis, Escherichia coli, and Vibrio cholerae; the results are encouraging and hence are presented comprehensively in this article. Collection of Mangrove Medicinal plants The different species of mangrove plants, viz., Excoecaria agallocha and Xylocarpus granatum, were collected from the Coringa mangrove forest near Bhiravapalem in the Godavari estuary (latitude 16° 15′ N and longitude 82° 15′ E), and Aegiceras corniculatum and Rhizophora mucronata (latitude 8.99° N and longitude 76.87° E) were collected from the Kollam mangrove forest near Krishnapatnam Port, Nellore. Plant preparation and extraction The fresh plants were washed under running tap water and dried in a warm room for 2 to 6 d. The samples were ground into fine powder and extracted with n-hexane and methanol successively to get n-hexane and methanol extracts. All the crude extracts were then kept at -20 °C until further use. The flavone compound was obtained by column chromatography over a column of silica gel (Acme brand, 100-200 mesh, 450 g) using solvents of increasing polarity from n-hexane through EtOAc. In all, 200 fractions (500 mL each) were collected. The fractions displaying similar spots in TLC were combined, and the residues therefrom were subjected to re-chromatography over a silica gel column to yield one pure compound (Fig. 1) 13 in the form of an off-white solid.
Preparation of a sample A sample of 100 mg from each extract and the compound was dissolved in 1 mL DMSO. The extract and compound were then sterilized by filtration through a sterile syringe filter (0.2 µm pore). Finally, the filtered extract and compound were stored as aliquots until use. Agar disc diffusion method The agar disc diffusion method was employed for the determination of the antimicrobial activities of the extracts according to Qaralleh et al. 14 with some modification. Briefly, the inoculum was spread on nutrient agar medium (sterilized by autoclaving at 120 °C, 15 lb/in²) with the respective bacterial strains, and on potato dextrose agar medium for fungal strains. Using sterile forceps, sterile filter papers (6 mm diameter) containing the crude extracts (1 or 1.5 mg), standard antibiotics (30 µg of chloramphenicol or 100 µg of amphotericin B), or the negative control (DMSO) were laid on the surface of the inoculated agar plate. The plates were incubated at 37 ± 2 °C for 24 h for the bacteria and at room temperature (28 ± 2 °C) for 12 h for the yeast strains. Each sample was tested in duplicate, and the diameter of the zone of inhibition was measured.
Screening for Antimicrobial Activity The antimicrobial activity was determined by employing 24 h young cultures with the given compounds using the agar well diffusion method. The medium was sterilized by autoclaving at 120 °C (15 lb/in²). About 20 mL of the medium (nutrient agar medium) with the respective bacterial strains, and of the medium (potato dextrose agar) for fungal strains, was transferred aseptically into each sterilized Petri plate. The plates were left at room temperature for solidification. In each plate, 5 wells were made at equal distances with a 6 mm sterile borer. The test compounds were freshly reconstituted in a suitable solvent (DMSO) and tested at various concentrations. The samples and the control, along with the standard (ciprofloxacin), were placed in the 6 mm diameter wells. In the antimicrobial assays, plates were incubated at 28 ± 2 °C for about 24 h for fungi and at 37 ± 2 °C for 12 h for bacteria. The standard at 5 µg/mL was used as a positive control for antibacterial activity. The diameter of the zone of inhibition was measured using a Himedia antibiotic zone scale. Observations and results are presented in Table 2. RESULTS and DISCUSSION The results of the agar well diffusion method for Gram-positive and Gram-negative bacteria with the different plant extracts and the flavone compound towards different strains are presented in Table 2. The following observations are significant: of all the extracts and the compound tested, AGC, EA, RM, and XG have shown some remarkable antimicrobial behaviour. Among the Gram-positive bacteria, Bacillus subtilis showed values of 12 and 10, Bacillus coagulans values of 13, 11, and 10, and Staphylococcus aureus a value of 7. No activity was found against the Gram-negative bacteria.
CONCLUSION The extracts and the new flavone compound from parts of different species of mangrove plants have been tested for their antimicrobial activity towards the strains Bacillus pumilus, Bacillus subtilis, Bacillus coagulans, Staphylococcus aureus, Bacillus licheniformis, Corynebacterium diphtheriae, Klebsiella pneumoniae, Pseudomonas aeruginosa, Shigella flexneri, Sphingomonas paucimobilis, Escherichia coli, and Vibrio cholerae. With the XG extract, the Gram-positive bacteria Bacillus pumilus and Corynebacterium diphtheriae showed no activity, while Bacillus subtilis showed values of 12 and 10, Bacillus coagulans values of 15 and 11, Staphylococcus aureus values of 14, 13, and 12, and Bacillus licheniformis values of 11 and 10. Among the Gram-negative bacteria, Escherichia coli showed values of 12 and 11; all remaining Gram-negative strains showed no activity. The order of effectiveness is found to be: EA hexane > RM MeOH > XG MeOH > AGC MeOH, and the new flavone compound is found to be more effective than the extracts. The order of activity is: EA hexane (4) > RM MeOH (1) > XG MeOH (2) > AGC MeOH (3). Table 2: Mangrove plant extracts and a new flavone compound: activity on some selected strains. The Gram-positive bacteria Bacillus subtilis and Bacillus coagulans showed values of 12 and 15, 13, and 11, respectively; no activity was found against Staphylococcus aureus, Bacillus licheniformis, and Corynebacterium diphtheriae.
Klebsiella pneumoniae showed values of 20, 13, and 10; Pseudomonas aeruginosa a value of 15; Shigella flexneri values of 16 and 12; Sphingomonas paucimobilis values of 19, 13, and 11; Escherichia coli values of 16 and 12; and Vibrio cholerae values of 19, 13, and 10. Bacillus subtilis showed values of 19, 18, and 17, while no activity was found against Bacillus coagulans. Staphylococcus aureus showed values of 13, 12, and 11; Bacillus licheniformis values of 14, 12, and 11; and Corynebacterium diphtheriae values of 11 and 10. Among the Gram-negative bacteria, Klebsiella pneumoniae and Shigella flexneri each showed a value of 11, Pseudomonas aeruginosa a value of 10, and Vibrio cholerae and Sphingomonas paucimobilis values of 13 and 15, respectively.
Effects of geopolitical risk on environmental sustainability and the moderating role of environmental policy stringency This study investigates the impact of geopolitical risk (GPR) on consumption-based carbon (CCO2) emissions as well as the moderating role of environmental policy stringency (EPS) on the above relationship. Based on data collected from 27 countries from 1990 to 2020, the basic results from the sample of the study indicate that GPR accelerates CCO2 emissions. Quantile regression results reveal that the effect of GPR is more pronounced in countries with higher CCO2 emissions. Moreover, EPS weakens the escalating effect of GPR on CCO2 emissions. The robustness test results validate the findings reported in the basic regression model. The heterogeneity test indicates that the impact of GPR on CCO2 emissions is greater in developing countries compared to developed countries. The study also proposes the following policy implications based on the findings: (1) countries should ensure a stable political environment, establish a robust legal system, and promote energy transition; and (2) the scope of environmental taxes should be expanded, with different tax rates imposed, in order to be useful in reducing CCO2 emissions.
Environmental sustainability has become a prominent issue as it is essential to both economic progress and human health. This has led to alliances among nations and international institutions to adopt efficient measures driven by concerns about environmental deterioration. With the goal of reducing greenhouse gas emissions and promoting environmental sustainability, nations from around the world have taken part in conferences like the Conference of the Parties (COP) series as well as international agreements like the Kyoto Protocol, the Paris Agreement, and the United Nations Framework Convention on Climate Change. COP 26, for instance, set a global goal to reduce existing carbon dioxide emissions by 50% of 2010 levels 1 . Following that, COP 27 underscored the significance of climate change and the necessity for global cooperation to attain carbon neutrality. In addition, COP 28 introduced the global stocktake and pledges to transition away from fossil fuels in energy systems. In extant literature, scholars have paid attention to drivers of environmental degradation, such as trade diversification 2 , energy consumption 3 , and foreign direct investment (FDI) 4 . Meanwhile, solutions for carbon neutrality have also been widely explored. These solutions include renewable energy 5 , green innovation 6 , environmental taxes 7 , and environmental policy stringency (EPS) 8 . However, some scholars contend that environmental degradation is challenging to address due to uncertainty 9,10 . In addition to economic uncertainty, geopolitical risk (GPR) stands as one of the most pervasive uncertainties worldwide. It encompasses tensions and uncertainty that arise from factors like war, threats to peace, military buildups, nuclear threats, and terrorism. According to Fig.
1, the Historical GPR Index has seen peaks since the Cuban Missile Crisis in 1962, followed by events such as the Gulf War, September 11, and the Paris terror attacks; all these events significantly impact economic activities and investments. GPR has two opposite effects on the environment: an escalating effect and a mitigating effect 11,12 . In the former, GPR reduces the use of renewable energy sources and increases the use of non-renewable ones like petroleum, which leads to higher CO 2 emissions. The latter effect operates in the opposite direction. In terms of theoretical analysis, there is no consensus on the impact of GPR on environmental quality. Despite the importance of examining how GPR influences environmental problems, little empirical research has been done on the issue. Hence, this study aims to analyse the influence of GPR on consumption-based carbon (CCO 2 ) emissions. In addition, as EPS plays a crucial role in the global path towards improving environmental quality, this study incorporates the moderating effect of EPS on the above nexus. The research makes several contributions to the existing body of literature. First, the trend of GPR worldwide has been volatile and more dynamic over the last few years. As such, GPR has garnered significant attention from numerous experts in the environmental economics literature. The Ukraine/Russia conflict and the ongoing supply chain challenges that stem from the COVID-19 pandemic underscore the significance of GPR in shaping economic and environmental dynamics. Although studies have been conducted on GPR in the environmental field, the environmental degradation proxies used have not entailed CCO 2 . Hence, this study enriches the literature on the environmental impacts of GPR by considering CCO 2 as the proxy of environmental degradation. Following the method of past researchers 13 , 11 economies were randomly selected from the entire research sample, which led to Figs.
2 and 3. This is to effectively illustrate how geopolitical threats have evolved in these economies.

Literature review

Literature on the relationship between GPR and environmental sustainability

GPR, such as trade conflicts and military activities, can influence economic activity and energy use, which subsequently impacts CO2 emissions 18 . GPR can influence CO2 emissions by affecting investment decisions in green technology and causing disruptions in energy supplies. Additionally, it redirects the government's focus towards managing GPR, which is an important factor that can influence the investment decisions of firms 19 . For instance, investments in cleaner technologies or renewable energy can be cut due to high levels of GPR 20 , which could result in continued reliance on fossil fuels, leading to higher CO2 emissions 21 . Additionally, a high level of GPR can lead governments to prioritise addressing geopolitical issues over improving environmental quality 22 . This may result in a relaxation of CO2 regulations, which would then lead to higher CO2 emissions.
www.nature.com/scientificreports/
Finally, stability and security in an area are necessary for access to and the free flow of energy resources 23 . However, GPR can disrupt energy supplies. Hence, countries may resort to using less environmentally friendly energy sources or ramp up production in their existing fossil fuel sectors. Both scenarios have the potential to elevate CO2 emissions.

In terms of the empirical results regarding the relationship between GPR and environmental sustainability, scholars have held divergent views. Some scholars contend that GPR is positively correlated with CO2 emissions. For example, the impact of GPR on environmental quality in the BRICS has previously been examined with continuously updated and fully modified estimators 22 , and GPR was concluded to degrade environmental quality. In addition, the nexus in the same sample countries has been studied through an augmented mean group estimator, which found that a 1% increase in GPR leads to an increase in CO2 emissions of 13% 11 . Furthermore, GPR has been found to be positively correlated with CO2 emissions in an examination of the nexus among 25 OECD countries 24 . Notably, heightened risks related to mineral resources were found to be the primary contributors to the carbon-increasing effect of GPR. GMM results have also confirmed that GPR accelerates environmental pollution in 38 developing and industrialised countries 25 . Bootstrapped ARDL was also used in one study to examine the role of GPR in a sustainable environment in China 26 , which found that GPR is positively correlated with CO2 emissions in both the long and short run.

Some studies, in contrast, have documented a negative impact. For instance, GPR has been found to be negatively correlated with environmental degradation, proxied by ecological footprint consumption, in the E7 economies 27 . The decline in investment and consumption activity brought about by high GPR was thought to be the cause of the negative link. A similar negative relationship between GPR and the environment has also been documented in the context of the residential and commercial sectors in the US 28 and France 29 . GPR has also been found to have no influence on environmental sustainability in an examination of its impact on environmental quality, proxied by the load capacity factor, in India using the ARDL method 30 .

Finally, a non-linear relationship has also been documented, whereby GPR increases CO2 emissions in countries with lower CO2 emissions levels and lowers them in countries with higher CO2 emissions levels in a sample of BRICST countries 31 . GPR depresses CO2 emissions in Russia and South Africa, while the effect is the opposite in the other BRICS countries, in a study employing the non-linear autoregressive distributed lag model 32 .

Based on the inconclusive impact of GPR on environmental sustainability, this study hypothesises that GPR increases CCO2 emissions in the sample countries of this study (hypothesis H1).
Literature on the relationship between EPS and environmental sustainability

EPS measures the level of stringency, defined as the implicit or explicit cost of environmentally harmful behaviour. The data originate from a comprehensive database that focuses primarily on policy tools addressing climate change and air pollution. Hence, it is anticipated that stricter regulations can offset the negative effects of GPR on the environment. EPS is thought to have the ability to lessen the negative impacts of pollution by encouraging the development of "clean" technologies and discouraging the use of "dirty" ones 33 . The mechanism through which EPS operates to reduce CO2 emissions is by increasing the cost of producing "dirty" products to a point where they are no longer attractive 34 . A well-designed policy can assist firms in implementing eco-friendly technologies, which can result in a reduction in pollution 35 . Following this, if the benefits of regulatory compliance outweigh the expenses, there will be net productivity benefits, which aligns with the "narrow" version of the theory 36 . However, the expenses associated with EPS are worth noting. EPS may potentially deter investments in green innovative technologies, which consequently influences environmental quality 37 . Additionally, EPS may encourage only certain kinds of innovation, which can lead to net productivity losses, aligning with the "weak" version of the theory 36 . Hence, the empirical evidence is not harmonious as to whether EPS enhances environmental quality, even though EPS was found to be effective in reducing CO2 emissions in 20 European countries between 1995 and 2012 17 .
In the case of the BRICST countries, the improvement in environmental quality has been attributed to EPS 2 . Similarly, for 32 OECD countries, the emission level was found to be negatively connected with EPS 8 . However, the "green paradox" 38 also exists, which claims that EPS may have unanticipated and undesired effects that worsen environmental degradation. GMM results have also indicated that environmental regulations have not been successful in regulating and reducing pollution as intended 39 . The increase in carbon emissions in Asia in particular has been found to be caused by environmental regulations 40 . Similarly, EPS has also been found to have little to no impact on CO2 reductions 41 .

Based on the above discussion, this study hypothesises that EPS weakens the positive relationship between GPR and CCO2 emissions (hypothesis H2).

In a nutshell, existing literature has recognised the growing importance of GPR and EPS in promoting environmental sustainability. However, current literature has not analysed their combined effect. Hence, this study addresses the first gap in the body of existing research by examining the moderating effect of EPS on the impact of GPR on environmental quality. Additionally, in the existing literature, environmental damage has often been represented by CO2 emissions per capita from the World Development Indicators (WDI) 11,25,31 and the ecological footprint 12,42 . However, a substantial body of research suggests that it is crucial to explore environmental conditions using alternative proxy variables, such as CCO2 emissions. This metric, adjusted for trade effects, takes into account the role of international trade, making it a more comprehensive index of environmental degradation. Hence, this study addresses the second gap by using CCO2 emissions as a proxy for the environment. Furthermore, to the best of our knowledge, analysis of the relationship between GPR and environmental quality using the panel quantile model has been limited to the BRICST countries 31 , OECD countries 24 , and GCC countries 43 . Hence, this study addresses the third gap by examining the GPR-environmental quality nexus using the quantile regression approach in 27 countries. The employment of this technique sets this study apart from existing research.

Independent variable: geopolitical risk

GPR was selected from the GPR Index database 14 . The number of articles related to unfavourable geopolitical events in each newspaper for each month, across the archives of ten newspapers, was counted to create this index. The index comprises two sub-indices, Geopolitical Threats and Geopolitical Acts, based on different categories of words. It has a number of benefits over existing indices, although it has certain intrinsic drawbacks 14 . Firstly, the index covers a broad range of geopolitical events, including wars, major economic crises, political conflicts, and climate change 45 . Secondly, the index also holds more reference value and timeliness, as its data are derived from real-time media text-search results. These media sources collect the viewpoints of global investors, policymakers, and the public, reflecting the real-time level of GPR. A higher value indicates a more unstable geopolitical environment. The data are available at https://www.matteoiacoviello.com/gpr.htm. The simple average of the twelve months was taken to formulate a yearly index, following the method of past studies 13,24,46 .
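The yearly aggregation just described, a simple average of the twelve monthly index values, can be sketched as follows. The monthly numbers here are illustrative only, not values from the actual GPR database.

```python
import pandas as pd

# Hypothetical monthly GPR readings for one country in one year
# (illustrative values, not taken from the actual GPR database).
df = pd.DataFrame({
    "year": [2020] * 12,
    "month": list(range(1, 13)),
    "gpr": [0.21, 0.25, 0.30, 0.28, 0.26, 0.24,
            0.35, 0.40, 0.33, 0.29, 0.27, 0.22],
})

# Yearly index = simple (arithmetic) average of the twelve monthly values.
annual = df.groupby("year")["gpr"].mean()
print(annual.loc[2020])
```

With a multi-country panel, grouping by ["country", "year"] performs the same aggregation for each country separately.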
Control variables

Previous research has indicated that the external macroeconomic environment has an impact on environmental sustainability. Foreign capital with high pollution levels has sought "pollution havens" to avoid the high costs of adhering to stringent pollution control regulations. These investors often turn to less developed nations with more lenient environmental policies; therefore, FDI enhances environmental degradation in less developed nations 47 . Next, expansionary fiscal policy involves increasing government spending, which provides the government with more funds to invest in the research and development of renewable technology and the purchase of green products 48 . Since a large number of ICT devices have high energy consumption, using ICT may result in increased CO2 emissions 49 . Also, renewable energy technology meets people's energy demands while also mitigating pollution 50 . Hence, with reference to past research 11,[48][49][50][51][52][53][54] , the following variables were chosen: (1) FDI, measured by foreign direct investment, net inflows (% of GDP); (2) fiscal policy, measured by the general government's final consumption expenditure (% of GDP); (3) ICT, measured by mobile cellular subscriptions (per 100 people); and (4) renewable energy consumption (Renew), measured as a percentage of total final energy consumption.

Moderating variable: environmental policy stringency

There is a demand for instruments to compare nations' EPS as countries implement more stringent environmental rules. This study uses the EPS index database developed by the OECD, based on a measurement of stringency defined as the implicit or explicit cost of environmentally harmful behaviour. This database compiles data on selected environmental policy tools, particularly those that deal with climate change and air pollution. A smaller value indicates a less strict policy, with 0 denoting non-stringent regulation.
The data for the variables above are summarised in Table 1.

Econometric model

This study employs OLS, FEM, and REM to thoroughly examine the relationship between GPR and CCO2 emissions. The Breusch-Pagan and Lagrangian Multiplier (BP and LM) tests are the foremost step, as they detect whether pooled or panel estimation is optimal. If the p-value of the BP test and the chi-square of the LM test are significant at the 5% level, panel estimation is chosen. Both FEM and REM are employed in this study to deal with the panel data. The Hausman test is used to choose between them based on the following null hypothesis, with the FEM chosen if the null hypothesis is rejected (i.e., when p < 0.05):

H0: the random effect is appropriate.
H1: the random effect is not appropriate.

Quantile regression was then utilised to obtain a comprehensive result. This method is preferred by scholars 55,56 over mean-based estimation techniques such as OLS for the following reasons. Firstly, it can yield robust results even when the data exhibit heavy tails. Secondly, this statistical approach examines the influence of GPR on CCO2 emissions across various quantiles, as illustrated in Eq. (2). Consequently, it can explain how GPR affects CCO2 emissions at relatively lower, middle, and upper levels.

The following empirical equations were thus proposed:

CCO2_it = β0 + β1 GPR_it + β2 EPS_it + β3 CC_it + ƹ_it    (1)

Q_T(CCO2_it) = β0(T) + β1(T) GPR_it + β2(T) EPS_it + β3(T) CC_it + ƹ_it    (2)

where CCO2_it is the log term of CCO2 emissions of country i at time t, GPR_it is the level of geopolitical risk of country i at time t, EPS_it is the environmental policy stringency of country i at time t, CC_it is the vector of control variables of country i at time t, ƹ_it is the error term, Q_T is the conditional quantile, and T represents the quantile.

Descriptive statistics

Table 2 presents the descriptive statistics of the variables for all countries in the sample.
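As a rough illustration of the fixed-effects (within) estimator described in the econometric model above, the sketch below demeans a synthetic 27-country panel by country and runs OLS on the transformed data. The data-generating coefficients are invented for the example and are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_years = 27, 31                 # panel shape matching the study's sample
country = np.repeat(np.arange(n_countries), n_years)

# Synthetic regressor plus unobserved country effects (all values illustrative)
gpr = rng.normal(0.3, 0.1, n_countries * n_years)
alpha = rng.normal(0.0, 1.0, n_countries)[country]
cco2 = 5.0 + alpha + 0.067 * gpr + rng.normal(0.0, 0.01, gpr.size)

def within(x, group):
    """Within transformation: subtract each group's mean (removes fixed effects)."""
    means = np.bincount(group, weights=x) / np.bincount(group)
    return x - means[group]

# OLS on demeaned data recovers the slope net of country fixed effects
y_t, x_t = within(cco2, country), within(gpr, country)
beta = (x_t @ y_t) / (x_t @ x_t)
print(round(beta, 3))   # close to the assumed 0.067
```

Pooled OLS on the raw data would instead absorb the country effects into the slope; the within transformation is what distinguishes the FEM chosen by the Hausman test.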
In terms of the dependent variable, the mean value of CCO2 emissions was 5.88 with a standard deviation of 1.24. The independent variable's mean value in the sample was 0.28. Meanwhile, there was a significant range between the maximum and minimum values of GPR, which reflects the diversity in GPR among nations. Significant fluctuations around the sample mean can also be observed for the other control variables.

Table 3 presents Pearson's correlation matrix, which displays the coefficients between variables. Concerning the dependent variable, a negative correlation was found between CCO2 emissions and FDI, Fiscal, ICT, EPS, and Renew. Conversely, there was a positive correlation between CCO2 emissions and GPR. Additionally, the maximum correlation between the explanatory variables was found to be lower than 0.8, demonstrating that the regression estimation does not suffer from multicollinearity; in a multivariate study, multicollinearity exists if the correlation coefficients between the explanatory variables exceed 0.8.

The VIF values in Table 4 indicate that there was no significant multicollinearity among the variables in the regression model, given that the maximum VIF value is 2.31.

The Pesaran CD 57 , Pesaran scaled LM, and Breusch-Pagan LM tests 58 were used to test the cross-sectional dependence of the data. All results reported in Table 5 were statistically significant at the 1% level, rejecting the null hypothesis of cross-sectional independence and suggesting significant interdependence and cross-sectional correlation among the variables. The tests' overall finding was that all variables are cross-sectionally dependent.
To explore the integration order of the separate variables, the LLC 59 , IPS 60 , HT 61 , ADF-Fisher 62 , and PP-Fisher 63 tests were performed. It was found that not all variables were stationary in level form, but, as shown in Table 6, they became stationary at their first difference.

Basic results

In Table 7, GPR and the control variables were included in the model. In the preliminary stage, the results in Table 7 show that the fixed effect model was the most suitable for this study, as the p-value of the BP test and the chi-square of the LM test were significant at the 1% level or lower, and the p-value of the Hausman test was significant at the 1% level or lower.

The FEM in Table 7 shows that GPR has a significant and positive effect on CCO2 emissions: a 1% increase in GPR leads to an increase in CCO2 emissions of 0.067%. The regression results indicate that GPR negatively impacts environmental sustainability in the sample countries, confirming hypothesis H1. This effect can be explained by the following reasons. Firstly, as the level of GPR increases, the risk premium of an investment rises. This means that investors may postpone or even reconsider the viability of the investment. For the private sector, there may also be concerns about the returns on investments in environmentally friendly technologies and projects. In the case of the public sector, this can lead to a decrease in investments in green technology, with a tendency to prioritise short-term financial gains over long-term sustainability projects. In addition, GPR can impact the sustainability of the international carbon reduction process. For example, when it comes to environmental challenges, tensions between countries can make international cooperation difficult. Cooperative efforts to combat climate change, preserve ecosystems, and exchange sustainable technologies may face delays or encounter obstacles due to geopolitical conflicts. Additionally, geopolitical tensions have led nations to prioritise energy independence. Instead of relying on environmentally friendly but highly import-dependent products, some nations may opt for more accessible yet carbon-intensive energy sources, potentially leading to an increase in carbon emissions. Finally, geopolitical disputes make it challenging to implement and maintain consistent environmental regulations due to political instability or weak governance 24 . This finding is consistent with a previous study 24 on the relationship between GPR and CO2 emissions within the context of the OECD countries. It also aligns with another study 11 which confirmed that GPR escalates CO2 emissions in the BRICS economies. Similar findings were also reported for the BRICST countries 31 and for the transportation sector in the US 28 .

Regarding the control variables, the regression results showed that renewable energy and FDI contributed negatively to CCO2 emissions, while fiscal policy and ICT had a positive impact on CCO2 emissions.
The results suggest a negative association between renewable energy and CCO2 emissions. Renewable energy sources such as wind, solar, and hydroelectric power can generate electricity without emitting pollutants into the atmosphere 64 . It is worth noting that these findings confirm the conclusions of several past research works 65,66 . Regarding the role of FDI, the regression results indicate that FDI can alleviate environmental pressure. This phenomenon can be attributed to the pollution halo hypothesis, according to which FDI brings advanced technologies and green practices from developed countries to developing countries, enabling these nations to produce in a more environmentally friendly manner, although the evidence on this hypothesis remains contested 67 . In terms of fiscal policy, its positive coefficient indicates that fiscal policy, which increases government spending 68 , aggravates CCO2 emissions: greater spending stimulates overall economic demand, thereby increasing CCO2 emissions, in line with the findings of previous researchers 69 . In addition, the use of ICT may contribute to increased CCO2 emissions due to the high energy consumption associated with the large number of ICT devices 49 .
Quantile regression

To attain a more robust result, the model was run with panel quantile regression, which offers a more thorough study for model estimation at multiple quantiles 70 . This method has two advantages. First, quantile regression is considered more reliable when the data are not normally distributed 71 . Secondly, quantile regression is a useful tool for estimating the significant impact of extreme values 24 . While the traditional econometric model provides the average effect of the independent variable, panel quantile regression not only provides results at different quantiles and complies with non-normality requirements, but also addresses issues of varying slope coefficients and cross-sectional dependence 72 . This technique is widely used in the field of environmental economics. Hence, three groups of quantiles were chosen based on past research 73 , namely the lower (10th-30th), middle (40th-60th), and upper (70th-90th) quantiles, as depicted in Table 8 and Fig. 4.
The magnitude supports the application of quantile regression, because the impact of GPR on CCO2 emissions is heterogeneous across quantiles. The coefficients of GPR were found to be positive and significant across the distribution. As we transition from the lower quantiles to the middle quantiles, the magnitude increases: initially, a 1% rise in GPR leads to a 1.139% increase in CCO2 emissions at the 10th percentile of CCO2 emissions, but the elasticity surges to 1.440% at the 60th percentile. In contrast, the magnitude decreases in the upper quantiles, yet remains positive and statistically significant. This contradicts findings from past research 31,32 which proposed that GPR deflates environmental quality at lower quantiles while the effect is reversed at other quantiles. GPR has also, contradictorily, been documented 32 to depress CO2 emissions in Russia and South Africa while escalating emissions in the other BRICS countries, using the non-linear autoregressive distributed lag model. The consistently positive association between GPR and CCO2 emissions confirms hypothesis H1, and its variation across quantiles suggests that the impact of GPR is contingent upon the degree of environmental degradation. The increasing trend before the upper quantiles can be explained as follows. First, in countries in higher CCO2 emissions quantiles, technology is usually less advanced, which creates a gap between countries in higher and lower CCO2 emissions quantiles. A wider technology gap results in a country holding a lower position in the global value chain and engaging in less environmentally friendly production 74 . The lower global value chain position also makes these countries more susceptible to disruptions caused by geopolitical events. Secondly, countries in higher CCO2 emissions quantiles may have more resource-intensive industries and are more likely to be influenced by GPR events that relate to resources and trade. Firms also tend to resort to polluting production methods due to concerns over GPR 11 . The diminishing effect in the upper quantiles can be explained by market pressures and environmental policy constraints. In terms of the former, industries in countries in higher CCO2 emissions quantiles may experience pressure to adopt cleaner practices due to consumer and market demands for sustainability and environmental responsibility. In terms of the latter, stringent environmental regulations are implemented by governments in response to severe environmental pollution. This, to some extent, counteracts the dependence on polluting production methods and heavily polluting energy sources caused by GPR.

In conclusion, as we progress from the lower to the higher quantiles, the coefficients exhibit an increasing trend. This suggests that GPR has a particularly notable impact in countries with higher CCO2 emissions levels.

Moderating effect of EPS

Different exogenous shocks, such as global economic uncertainty and EPS, may affect the way GPR influences CCO2 emissions, and various levels of EPS may produce different results. For instance, when EPS increases, the profit from using polluting production methods and polluting energy may be halted or reduced. In line with the basic regression findings, the coefficient of GPR was found to be significantly positive at the 1% level. Furthermore, the interaction term (GPR*EPS) exhibited a significant negative coefficient, as demonstrated in Table 9. This signifies that the negative effects of GPR on environmental sustainability can be somewhat mitigated by an increase in the EPS level. Hence, the negative coefficient confirms hypothesis H2. This can be explained by regulatory compliance: EPS backed by robust enforcement mechanisms can elevate the cost of polluting production and the use of non-clean energy sources, resulting in a mitigating effect on CCO2 emissions. Furthermore, stringent regulations impose limits on emissions and encourage the adoption of greener practices and technologies. A similar carbon
reduction effect of EPS has been documented 75 when analysing the moderating effect of EPS on the impact of financial development on environmental quality.

Robust test

In this section, four robustness tests were conducted to validate the basic findings. Initially, the independent variable was replaced. Then, an extra control variable was added to re-examine the underlying link. In addition, the dependent variable was replaced to assess the robustness of the basic regression model. Finally, the sample was segmented into pre- and post-Kyoto Protocol periods to investigate the potential influence of external events on the nexus.

Robust test 1: replacing the independent variable

Following a method used in past literature 13 , an annual GPR index calculated by the geometric mean method (GPR-G) was generated to test robustness. The results in Table 10 show that the main findings still hold.

Robust test 2: adding an extra control variable

The absence of relevant variables was likely to decrease the validity of the empirical findings and introduce estimation bias. Therefore, a factor was added to the model to see whether the main findings change. Financial development (FD) facilitates green projects in obtaining loans, thereby reducing CO2 emissions 49 . Hence, the variable FD, proxied by domestic credit to the private sector as a percentage of GDP, was incorporated into the analysis following the method of past studies 49,76 and introduced into the regression model.

Robust test 3: replacing the dependent variable

The model was re-estimated by replacing the dependent variable with CO2 emissions in metric tons per capita (log) (CE). In line with past studies 49,[77][78][79][80][81] , CE was chosen as an indicator of the quality of the environment. The results are reported in Table 12 and show that the basic regression remains robust.
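The alternative aggregation used in Robust test 1 can be sketched as follows, with illustrative monthly values. By the AM-GM inequality, the geometric mean never exceeds the arithmetic mean, so GPR-G damps the influence of spike months.

```python
import numpy as np

# Hypothetical monthly GPR values for one year (illustrative only)
monthly = np.array([0.21, 0.25, 0.30, 0.28, 0.26, 0.24,
                    0.35, 0.40, 0.33, 0.29, 0.27, 0.22])

arithmetic = monthly.mean()                    # baseline yearly index
geometric = np.exp(np.log(monthly).mean())     # GPR-G: geometric mean of the months

print(round(arithmetic, 4), round(geometric, 4))
```

Because the two aggregations weight extreme months differently, agreement between the baseline and GPR-G results indicates the findings are not driven by a few high-GPR months.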
Robust test 4: the impact of the Kyoto Protocol's enforcement

All sample countries in this study were signatories to the Kyoto Protocol and were therefore influenced by it. The Kyoto Protocol, adopted in 1997, officially entered into force in 2005. Therefore, the pre-Kyoto era was examined first, and, unexpectedly, the GPR coefficient was found to be positive but not significant, indicating that earlier GPR was ineffective in escalating CO2 emissions. However, when the post-Kyoto era was examined, signs similar to the basic regression model were observed, as shown in Table 13, which confirms the basic regression results.

Heterogeneity test

Based on the classification of the United Nations (https://unstats.un.org/unsd/methodology/m49/historical-classification-of-developed-and-developing-regions.xlsx), this study divided the sample countries into developed and developing countries, as demonstrated in Table 14, and re-performed the regression model. For the developing country sample, the REM was the most suitable, while the FEM was the most suitable for developed countries, according to Table 15. The impact of GPR on environmental sustainability was found to be smaller in developed countries compared to developing countries. This may be because developed countries often have more advanced infrastructure and technology to support renewable energy development 82 . A well-established domestic renewable energy system enables a relatively independent energy supply; when facing GPR, such countries are less likely to heavily consume fuel energy. This results in smaller coefficients for developed countries.
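Both the Kyoto-period split and the developed/developing heterogeneity test amount to re-estimating the same model on subsamples. A schematic with synthetic data is below; the period indicator and coefficients are invented for illustration, with GPR assumed to matter only in the post-Kyoto subsample.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
post_kyoto = rng.integers(0, 2, n).astype(bool)   # hypothetical period indicator
gpr = rng.normal(0.3, 0.1, n)
# Synthetic outcome: GPR affects emissions only post-Kyoto (assumed for illustration)
cco2 = 5.0 + np.where(post_kyoto, 0.8, 0.0) * gpr + rng.normal(0.0, 0.05, n)

def slope(y, x):
    """Univariate OLS slope with intercept."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

pre = slope(cco2[~post_kyoto], gpr[~post_kyoto])
post = slope(cco2[post_kyoto], gpr[post_kyoto])
print(round(pre, 2), round(post, 2))  # pre-period slope near zero, post-period positive
```

The same subsetting pattern, with a development-status indicator instead of a period indicator, reproduces the developed-versus-developing comparison in Table 15.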
Conclusion and policy recommendations

Conclusion

In the current era, GPR is a significant issue with strong environmental and economic impacts on nations. Hence, this paper studies the impact of GPR on CCO2 emissions based on data collected from 27 countries over the period 1990 to 2020. First, the GPR-CCO2 emissions nexus was tested, and the heterogeneous impact of GPR at different quantiles of CCO2 emissions was checked. Furthermore, this study examined the moderating effect of EPS on the above nexus. Several robustness tests were used to check the basic results. Finally, a heterogeneity test was performed on developing and developed countries.

The results of the study suggest that GPR can significantly increase CCO2 emissions and has a greater and more substantial impact at higher quantiles of CCO2 emissions. Meanwhile, EPS negatively moderates the nexus between GPR and CCO2 emissions. In other words, the negative effects of GPR on environmental quality can be somewhat offset by improvements in the EPS level. The robustness tests confirm the basic regression results. Additionally, upon dividing the sample period into pre- and post-Kyoto periods, it was observed that the impact of GPR aligns with the basic regression model during the post-Kyoto period, while, interestingly, the effect of GPR in the pre-Kyoto period is positive but insignificant. The heterogeneity test indicates that the impact of GPR on CCO2 emissions is greater in developing countries compared to developed countries.
Policy recommendations

This research has several implications for policy. First, the relationship between GPR and a country's environmental degradation suggests that higher GPR may be detrimental to efforts aimed at improving environmental sustainability. To address this issue, governments should provide a stable political environment and a sound legal system, thereby attracting more investors to participate in environmental projects. Besides that, the energy transition could be promoted by adopting policies that reduce reliance on high-carbon-emission energy sources in favour of cleaner and renewable energy sources. Finally, governments can strengthen environmental regulation to ensure the enforcement of environmental laws and enhance the effectiveness of environmental protection measures.

Second, the heterogeneous impact of GPR at different quantiles serves as a reminder that low- and middle-emissions countries should pay closer attention to reducing CCO2 emissions, as the influence of GPR becomes stronger in countries with higher levels of CCO2 emissions. Besides that, every country faces different circumstances; thus, policies must be developed, put into practice, and continuously improved to reflect those particulars.
Third, governments need to make every effort to optimise the benefits of EPS. EPS weakens the negative impact of GPR by enforcing regulatory compliance and promoting green production. As a result, it may be more efficient to put strict measures into place, such as extending the scope of environmental taxes and imposing different tax rates according to the degree of environmental harm. However, softer methods can also prove highly effective. It is imperative to make proactive investments in clean technology research and development, as these technologies directly support the source reduction of emissions by providing workable substitutes for high-emission processes. Promoting waste recycling and sustainable consumption can also greatly increase public participation in the battle against environmental deterioration. By implementing these strategies, policymakers will be able to successfully manage the complex relationship between GPR, environmental quality, and EPS.

Limitations

First, this study's sample is constrained by data availability: it includes only countries for which the independent, dependent, and control variables intersect. As a result, data for only 27 countries, spanning from 1990, were used in this study.

Second, the empirical model only employed the techniques of OLS, FE, RE, and panel quantile regression.

Table 2. Descriptive statistics for all countries.

Table 6. Stationarity test results. ***Indicates a 1% significance level. The Stata commands xtunitroot llc (for LLC), xtunitroot ips (for IPS), xtunitroot ht (for HT), xtunitroot fisher dfuller (for ADF-Fisher), and xtunitroot fisher pperron (for PP-Fisher) were employed to estimate the results in this table. LLC and HT could not be performed for CCO2 emissions, as strongly balanced data were needed.

Table 7. Basic regression results. Standard errors in parentheses.

Table 13. Robust test 4: the impact of the Kyoto Protocol's enforcement. Standard errors in parentheses.
The Effect of Geometrical Factors on the Surface Pressure Distribution on a Human Phantom Model Following Shock Exposure: A Computational and Experimental Study Experimental data and finite element simulations of an anthropometric surrogate headform were used to evaluate the effect of specimen location and orientation on surface pressures following shock exposures of varying intensity. It was found that surface pressure distributions changed with local flow field disturbances, making it necessary to use data reduction strategies to facilitate comparisons between test locations, shock wave intensities and headform orientations. Non-dimensional parameters, termed amplification factors, were developed to permit direct comparisons of pressure waveform characteristics between incident shock waves differing in intensity, irrespective of headform location and orientation. This approach proved to be a sensitive metric, highlighting the flow field disturbances which exist in different locations and indicating how geometric factors strongly influence the flow field and surface pressure distribution. Introduction The shock tube is a convenient way to generate shock waves in a controlled fashion, and it has been employed in various research areas for more than a century [1][2][3][4][5][6]. The design of a compressed gas driven shock tube includes three standard components: driver (breech) and driven sections, with an optional end wave eliminator [7,8]. Differences in the dimensions (volume of the breech, breech-to-test section diameter ratio, length of the driven section) and operation of the tube (type of driven gas, mechanism of driver gas release) have a significant impact on the resulting pressure history measured inside of the tube [9]. The classical design of the shock tube employs plastic or metal membranes, which are used to confine the driver gas and prevent it from entering the driven test section.
The driver section pressure is gradually increased until the point of mechanical failure of the membrane, at which point the driver gas enters the test section, pressurizing the ambient gas and forming a shock wave. Alternative designs employ membraneless drivers, where a piston [10][11][12][13] or a fast-acting valve [14][15][16] is used, eliminating the need for membrane replacement between consecutive tests. Both designs have been demonstrated to allow generation of shock waves with diverse magnitudes and characteristics. It is worth mentioning that various instrumental factors, discussed in detail in our recent contribution [17], can affect the quality of recorded pressure waveforms and impact the interpretation of the experimental data. In the biomedical field, research utilizing shock tubes to investigate mechanisms of blast TBI (bTBI) was invigorated only 20 years ago [18,19]. The primary goal in this area of research is to replicate conditions associated with field explosions; in particular, the primary blast injuries caused exclusively by the interaction of a shock wave with the brain are of interest [20]. Simulation of explosive blast implies that a shock wave closely resembling the Friedlander waveform should be produced, and this has become a standard in contemporary bTBI models [7,21]. Two experimental parameters of paramount importance are the specimen restraint and the location of the test section where the specimen is exposed to a shock wave. For inanimate specimens, the method of restraint is usually not an issue; however, human phantom models are frequently mounted on a biofidelic neck, e.g., Hybrid III, and when subjected to shock wave loading, rapid acceleration can result in specimen displacement that affects the pressure loading on the surface. When animal models are used, head restraint becomes extremely important, particularly for rodents with relatively small body dimensions and low weight.
If proper head restraint measures are not included in the experimental design, erroneous injury modalities might develop. It has been demonstrated that head acceleration might lead to the development of tertiary blast injuries, which have different injury characteristics than those resulting from shock wave loading [22,23]. The importance of the test location in the shock tube has previously been the subject of experimental evaluation by our group [24][25][26]. These results illustrate significant differences between testing the specimen inside of the shock tube, i.e., at a distance from the exit sufficient to eliminate the influence of any end effects, versus at the end and outside. Testing outside is undoubtedly more convenient, but carries a number of unwanted drawbacks: (1) the shock wave profile is eroded and typically only short duration waveforms are achievable, (2) there is a large dynamic pressure component which might contribute to a variety of errors [27], (3) the loading of the specimen strongly depends on the location with respect to the shock tube exit and diameter due to highly heterogeneous conditions [28,29]. Testing inside provides a much higher level of control over the shock wave profile, with dynamic components resembling those encountered in field explosions. Numerical simulations are invaluable tools for mechanistic investigation of short-lived phenomena like shock wave interaction with complex biological structures. The numerical models in the bTBI research area provide insight into: (1) the transmission and propagation of the blast waves in the brain [8,30,31], and (2) mitigation of blast effects by helmets [32][33][34][35][36][37][38][39][40] or other PPE designed to safeguard the craniofacial area [36].
However, the accuracy of the numerical simulations relies on validation using high-quality experimental data, e.g., the pressure measured on the surface of the helmet or phantom [8,35,39], or intracranial pressure [31,40]. A related branch of experimental work which is yet to be explored to its full potential for numerical model validation is the use of post-mortem human specimens (PMHSs) instrumented with surface and intracranial pressure sensors [37,41,42]. Existing studies in this area are rare and hindered by experimental difficulties, not to mention being completely impractical for the evaluation of PPE performance. This leaves anthropometric phantoms, which are made of non-biological materials but replicate the geometry to a high degree, as the only alternative [8,31,39,43,44]. In this contribution, we performed a comprehensive experimental characterization of a human phantom model instrumented with 10 pressure sensors to measure the response to shock wave loading. The specimen was tested in a large cross-sectional area shock tube at three locations characterized by divergent flow characteristics. The loading of the specimen was administered via a single shock wave with three nominal intensities (70, 140 and 210 kPa), and surface pressure was probed in three headform orientations with respect to the incident shock wave, i.e., 0, 90 and 180°. The shock tube A 7 m long shock tube with a square (0.71 × 0.71 m) cross section was used in all experiments. This device was previously characterized in detail [24,26]. The driver gas was compressed helium (ultra-high purity, 99.99%, Airgas, Oakland, NJ), which was allowed to flow into the breech, separated from the driven section of the shock tube by membranes made of Mylar (Grafix, Cleveland, OH). Upon rupture of the membranes, the driver gas enters the driven section and compresses the ambient air, which in turn generates a shock wave.
Three discrete Friedlander waveform shock waves with nominal intensities of approximately 70, 140 and 210 kPa peak overpressure in the test section (T5 sensor, Figure 1) were used. All tests were performed at ambient conditions. Pressure measurement, headform preparation, and instrumentation The temporal evolution of the incident shock wave waveforms was recorded using seven high-frequency-response pressure sensors, model 134A24 (PCB Piezotronics, Inc., Depew, NY, USA), distributed along the shock tube (Figure 1). The pencil probe model ICP® 137B24B (PCB Piezotronics Inc., Depew, NY, USA) was used to measure the incident pressure on the outside (PP location, Figure 1). The phantom headform [45] was instrumented with 10 PCB Piezotronics model 102B06 pressure sensors, as illustrated in Figure 2A. Five medial sensors are located along the anterior-posterior midline (H1-H5), together with five circumferential sensors: two on the right parietal side (H6 and H7), two in the eye sockets (H8 and H9, Figure 2A) and one on the left parietal side (H10). These sensors were mounted flush to the surface using tapped holes. The headform was mounted on the Hybrid III neck (Humanetics, Plymouth, MI) [46] in a rigid configuration to eliminate the motion of the headform during shock wave impact. The FOCUS headform-Hybrid III neck assembly was attached to the adapter plate and bolted to the bottom of the shock tube in the test section in three different locations (Figure 1). A custom LabView program was used to record the pressure waveforms. The data acquisition system is based on a PXIe-1082 PCI Express chassis and PXI-6133 S Series multifunction DAQ modules (National Instruments, Austin, TX, USA). The signals of the pressure sensors were filtered using 8-channel signal conditioners, model 483C05 (PCB Piezotronics Inc., Depew, NY, USA). The pressure waveforms were recorded at 1.0 MHz sampling frequency with an acquisition time of 50 ms.
All pressure waveforms were processed and quantified in Origin 2018 software (OriginLab Corp., Northampton, MA). All data are presented as mean and standard deviation (n = 4). The data normalization and reduction were performed as follows. Four pressure waveform characteristics were tabulated. These include (1) the peak overpressure, the increase in pressure observed at the arrival of the shock front, (2) the rise time, or the time required for the pressure to increase from 10 to 90% of the peak overpressure, (3) the positive phase duration, or the time required to return to ambient pressure, and (4) the impulse, or the area under the pressure-time curve during the positive phase duration. Each pressure waveform characteristic value was divided by its incident shock wave equivalent (Eq. (1)): AF = x_p / x_i (1) where x_p is the peak overpressure, rise time, duration or impulse of the resulting pressure waveform on the headform, and x_i is the peak overpressure, rise time, duration or impulse of the incident shock wave at the headform location (T5, D7 or PP, Figure 1). As a result, a set of normalized non-dimensional peak overpressure, rise time, duration and impulse values is generated for each test location, headform orientation and shock wave intensity. Flow-field simulations Finite element models were used to identify the influence of the three experimental variables on the flow field around the specimen. The flow field around the headform was simulated using a coupled Eulerian-Lagrangian approach to fluid-structure interaction. This solution technique is ideal for the simulation of large deformation finite element analyses and has been used extensively in the simulation of shock-structure interaction physics [8,47,48]. The shock wave is modeled in a Eulerian mesh of air and interacts with the headform, modeled using a Lagrangian mesh. The interaction of the two domains results in a solution which depicts the air flow around the headform.
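The waveform characterization and the Eq. (1) normalization described above can be sketched in code. This is an illustrative reconstruction: the function names and the synthetic trace are our own, not part of the authors' Origin-based processing pipeline.

```python
import numpy as np

def waveform_characteristics(t, p):
    """Extract the four waveform characteristics used in the text:
    peak overpressure, 10-90% rise time, positive phase duration, impulse.
    t is time in s, p is gauge pressure in kPa."""
    p = np.asarray(p, float)
    i_peak = int(p.argmax())
    peak = p[i_peak]
    # Rise time: pressure climbing from 10% to 90% of peak on the rising front.
    rising = p[:i_peak + 1]
    t10 = t[int(np.argmax(rising >= 0.1 * peak))]
    t90 = t[int(np.argmax(rising >= 0.9 * peak))]
    # Positive phase: from shock arrival until return to ambient (gauge p <= 0).
    i_arr = int(np.argmax(p > 0.0))
    back = np.where(p[i_peak:] <= 0.0)[0]
    i_end = i_peak + int(back[0]) if back.size else p.size - 1
    # Impulse: area under the positive phase (trapezoid rule).
    seg_p, seg_t = p[i_arr:i_end + 1], t[i_arr:i_end + 1]
    impulse = float(np.sum(0.5 * (seg_p[1:] + seg_p[:-1]) * np.diff(seg_t)))
    return {"peak": peak, "rise_time": t90 - t10,
            "duration": t[i_end] - t[i_arr], "impulse": impulse}

def amplification_factors(headform, incident):
    """Eq. (1): each headform characteristic divided by its incident equivalent."""
    return {k: headform[k] / incident[k] for k in headform}
```

Dividing a headform characteristic by its incident equivalent yields exactly the non-dimensional amplification factors discussed in the later sections.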
The Lagrangian headform model was generated from a 3D geometrical model of the FOCUS headform and the Hybrid III neck created using Autodesk Recap Pro 2018 (Autodesk, Inc.). The three-dimensional geometry was meshed using linear tetrahedrons (Simpleware, Synopsys) at a converged mesh density with an average minimum edge length of 8.73 mm (Figure 3A). The headform was assumed to be a linear elastic material with a density of 2700 kg/m3, an elastic modulus of 6.89 GPa, and a Poisson's ratio of 0.33. A pre-existing, validated model of a shock tube was enhanced for use as the Eulerian domain [49]. This model used a biased linear hexahedral mesh which converged at a minimum edge length of 14 mm at the region of interest. Two shock tube meshes were used: one designed to best model the inside specimen placement, and one to model the exit and outside specimen placements. The model used to simulate the inside of the shock tube was 5.9 m in length and contained over 580 thousand elements (Figure 3B). The model used to simulate the end and outside specimen placements was 2.9 m in length and contained over 2.24 million elements (Figure 3C). The Eulerian mesh was assumed to be filled with air, approximated using the ideal gas equation of state at 296 K, a density of 1.2 kg/m3, and a specific heat ratio of 1.4. Twenty-seven simulations were conducted, mirroring the experimental configurations of three specimen placement locations (inside, at the exit of, and outside the shock tube), three specimen orientations (0, 90, and 180°), and three blast overpressures (70, 140, and 210 kPa). The shock was simulated as a planar pressure wave, applied to a surface upstream from the headform. For each configuration, pressure measurements taken at the sensor coincident with the loading surface were averaged over the four repeated exposures to create an input incident pressure pulse.
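Since the air in the Eulerian domain is treated as an ideal gas with a specific heat ratio of 1.4, the nominal incident overpressures can be translated into shock Mach numbers using the standard normal-shock pressure-ratio relation. This is textbook gas dynamics added for context, not a calculation taken from the paper:

```python
import math

def shock_mach(overpressure_kpa, p_ambient_kpa=101.325, gamma=1.4):
    """Mach number of a normal shock in air from its peak overpressure,
    via the normal-shock relation p2/p1 = 1 + 2*gamma/(gamma+1)*(M**2 - 1)."""
    ratio = (p_ambient_kpa + overpressure_kpa) / p_ambient_kpa
    m2 = 1.0 + (ratio - 1.0) * (gamma + 1.0) / (2.0 * gamma)
    return math.sqrt(m2)

for op in (70.0, 140.0, 210.0):  # nominal intensities used in the study
    print(f"{op:5.0f} kPa -> M = {shock_mach(op):.2f}")
```

At sea-level ambient pressure, the three nominal intensities correspond to shock Mach numbers of roughly 1.26, 1.48 and 1.67.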
The pressure pulse for a 210 kPa exposure with the headform located at the exit was used as the pressure pulse for the untested configuration of 210 kPa with the headform outside of the shock tube. All nodes coincident to the walls of the Eulerian domain and the base of the Lagrangian headform were constrained against all translational and rotational degrees of freedom. The enhanced immersed boundary method allowed the Lagrangian mesh to occupy void regions within the Eulerian mesh, enabling the computation of the interfacial surface. Interaction between the two domains was defined as hard general contact in the direction normal to the interacting surfaces and frictionless contact in the tangential direction. All simulations were conducted in Abaqus 6.13-4 (Dassault Systèmes) on two Intel Xeon 2.10 GHz processors. For each configuration, the pressure in each element along the vertical longitudinal plane was mapped to plot the blast overpressure (BOP), impulse, and rise time amplification factors (MATLAB R2019a, Mathworks). Element-wise values for the peak BOP were defined as the maximum simulated pressures, the impulses were calculated as the area under the pressure-time curves, and the rise times were calculated as the time required for the signal to increase from 10 to 90% of the peak value. Element-wise values were normalized with values of the incident waveform at that location, resulting in unitless amplification factors. Experimental surface pressures on the headform Representative pressure profiles recorded by the surface sensors mounted on the headform are presented in Figure 2B-D. These data were recorded at a nominal shock wave intensity of 70 kPa, and the general trends in the pressure waveform distributions are similar for the other two incident shock wave intensities (140 and 210 kPa). The effect of the headform rotation on the surface loading is illustrated in Figure 2 for the inside test location.
In general, the recorded peak overpressures are highest on the face exposed to the shock wave: 245 kPa (H1 sensor) for the 0° orientation (Figure 2B; the H8 and H9 sensors are a special case, considering that these sensors are located in the concave "eye sockets", which results in pressure entrapment and extremely high peak overpressure values), 220 kPa (H7 sensor) for the 90° orientation (Figure 2C, inset), and 225 kPa (H5 sensor) for the 180° orientation (Figure 2D). The same trends are observed for the other two test locations and headform orientations (Figure 4A-C). It appears that the rise time is a sensitive metric of the headform loading (Figure 4D-F). In general, the rise time is shortened on the front face exposed to the shock wave (H1 at 0° orientation, Figure 4D; H6 and H7 at 90° orientation, Figure 4E; and H5 at 180° orientation, Figure 4F) or extended on the opposite side of the headform (H4 and H5 at 0° orientation in Figure 4D, H10 at 90° orientation in Figure 4E, and H1 and H2 at 180° orientation in Figure 4F), compared to the rise time of the incident shock wave. This effect is seen regardless of the headform test location. For the duration and impulse, a different kind of relationship is noticeable. This is expected, considering that both the peak overpressure and the rise time describe the behavior of the front face of the waveform, while the duration and the impulse are related to the properties of the entire waveform. In general, the duration and impulse values decrease with distance from the breech, corresponding with the test location in the order: inside > end > outside. This is associated with the erosion of the incident wave tail by the end effect.
The shock wave exiting the shock tube creates a region of underpressure which travels back into the shock tube [7], and the unconfined conditions outside allow for free expansion of the previously constrained shock front, resulting in a conversion of the static pressure to 'jet wind' and hence shorter durations and lower impulse values at the end and outside locations (Figure 5). Experimental data normalization It is thus obvious that further data reduction is necessary in order to compare data collected at three different intensities and three different headform locations. The simplest and most natural approach is to take the characteristic parameters of the incident shock wave waveform (input) recorded by a sensor mounted at a specific test location, i.e., T5 for the inside, D7 for the end, and PP for the outside location, and compare them with the waveforms recorded by the pressure sensors on the headform (output). The resulting dimensionless parameters (calculated using Eq. (1)) are a measure of the disturbance caused by the introduction of the headform into the flow field of the shock wave traveling in the shock tube. Values other than 1 indicate divergence of the waveform characteristics at a specific location on the headform from those of the incident shock wave at the given test location in or outside of the shock tube. These divergences can be attributed to geometric factors, changes in shock wave characteristics, and the presence of additional high-velocity flows which are below the detection levels of the small cross section sensors sparsely distributed on the surface of the headform. However, if the shock wave properties evolve only gradually while traveling in the shock tube, it is reasonable to expect a similar distribution of the non-dimensional parameters as a function of their physical location on the headform, which then becomes the only defining parameter of the system. (Figure: data presented as mean and standard deviation, n = 4; headform orientations of 0°, 90°, and 180°.)
The non-dimensional parameters (or amplification factors) allow for direct comparison of pressure waveform characteristic parameters generated by a range of incident shock waves differing in intensity. With this concept in mind, we performed further data reduction for the peak overpressures, rise times, impulses and durations in all datasets. The representative bar plot of the amplification factors for the peak overpressure, rise time, duration and impulse for the headform in the 90° orientation, exposed to a shock wave with three nominal intensities (70, 140 and 210 kPa) in three different locations (inside, end and outside), is presented in Figure 6. This figure illustrates that the normalization is indeed an effective strategy for comparing the data obtained from a variety of exposure conditions. The normalized peak overpressure and rise time follow well-defined trends, and the largest divergence is observed for the normalized duration and impulse at the end test location (Figure 6C and D). This is expected, considering that the duration and impulse values for the end and outside locations vary more significantly than those reported by the headform sensors in the inside location (Figure 5). We previously reported similar trends for the headform tested in the 0° orientation (see Figure 7 in Ref. [26]), and this work expands upon those results by incorporating two additional orientations (90 and 180°). The data are presented as a function of the sensor distance on the 2D projection of the headform (for details refer to Figure 7B in Ref. [26]). The best results were noted for the amplification factors of the peak overpressure recorded on the headform at 0° orientation (Figures 7, 8 and Table 1): the data collected in all three test locations have a narrow distribution and follow a well-defined trend.
The front sensor H1 has increased values in the range of 2.4-3.6, which gradually decrease along the headform, reaching minimum values of about 1 for the H4 and H6 sensors on the back of the headform, and a value of 2 for the H5 sensor at the very end of the headform. This increase is purely due to the combined effect of the shock wave wrapping around the headform and its two streams joining together at the back. It is accompanied by a rise time for the H5 sensor increased by a factor of 10 compared to the incident shock wave, markedly higher than for all other sensors, where the rise time amplification factor never exceeds a value of 2 (Figure 7). Amplification factors for the sensors mounted in the eye sockets are extremely high (in the range of 5-8), and they do not follow the same trend as the sensors mounted on the flat surface of the headform. The concave geometry of the eye sockets is responsible for compressed air entrapment and momentary stagnation during shock wave exposure, which results in extreme pressures compared to other locations on the headform. Similar trends are also noticeable for the 90 and 180° orientations, with differences related to the geometrical factors. The amplification factors on the face exposed to the shock wave (H6 and H7 for the 90°, and H5 for the 180° orientation) have lower values (<3) compared to the 0° orientation. The rise times for these two orientations have much lower values (<1 μs) and gradually increase with increasing distance (Figure 7). Far less consistent results are obtained for the duration and impulse values (Figure 8). The spread of amplification factor values in all six cases is 1.0 or more, which indicates that the loading conditions at a specific test location play an important role. These variations are expressed in the goodness-of-fit parameters (adjusted R2) which are presented in Table 1.
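The adjusted R2 reported in Table 1 follows the standard definition, which penalizes the plain R2 for the number of fitted parameters. A minimal sketch (the authors' fits were performed in Origin; this helper is only illustrative):

```python
import numpy as np

def adjusted_r2(y, y_fit, n_params):
    """Adjusted coefficient of determination for a fit with n_params
    fitted parameters. Values near 1 indicate a good fit; values near
    or below 0 indicate the fit explains little of the variance."""
    y, y_fit = np.asarray(y, float), np.asarray(y_fit, float)
    n = y.size
    ss_res = float(np.sum((y - y_fit) ** 2))       # residual sum of squares
    ss_tot = float(np.sum((y - y.mean()) ** 2))    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)
```

A stochastic scatter such as the duration and impulse data yields a small or even negative adjusted R2, consistent with the values below 0.5 reported in Table 1.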
It is clear that the fit parameters for the peak overpressure and rise time indicate a good fit, while for the duration and impulse the parameters are below 0.5, which is indicative of a poor-quality fit. The dispersion of the data points is the main reason, and the respective data sets appear to follow a stochastic distribution pattern rather than a single well-defined function. Numerical simulation data normalization The trends observed experimentally were confirmed using the numerical simulations. Normalized results for the spatial distribution of peak overpressures confirmed that the highest peak overpressures are seen at the front face of the headform (Figure 9). Following data normalization, trends were found to be common between the three shock wave intensities, 70, 140, and 210 kPa, and, therefore, only results for the 140 kPa exposures are shown. The area of stagnation is largest in the 90° orientation, followed by the 180° orientation. Additionally, for similar headform orientations, the area of stagnation increased with increasing distance from the breech, with the largest area of stagnation observed in the outside test location and the smallest area observed at the inside location. A region of increased peak overpressures is observed behind the headform, regardless of the headform orientation or the testing location. This region indicates the area affected after the shock front wraps around the headform. Above and below that area of elevated peak overpressures, the lowest pressures are observed. This phenomenon is most apparent in the inside test location (Figure 9A-C), as a fan radiating behind the headform. These results corroborate the observed trends reported in Figure 7. The rise time was highest around the posterior face of the headform in all headform orientations and underneath the chin of the headform in the 0° orientation (Figure 10). In the area around the headform, the rise time was higher than that of the incident waveform.
These trends were observed regardless of specimen location. Spatial maps of the impulse amplification factors highlight the importance of the testing location on specimen loading (Figure 11). The end location exhibited the largest impulse amplification factors and the inside location exhibited the smallest. This finding corroborates the wide spread of impulse and duration values reported in Figure 8. Furthermore, the impulse was highest in the 90° headform orientation, followed by the 180° orientation. The spatial distribution of the increased impulse was similar among specimen orientations and testing locations, differing in magnitude only. A region of high impulse is seen on the front face of the headform, and a region of low impulse, lower than the impulse of the incident wave, is seen behind the headform. Conclusions A headform instrumented with 10 pressure sensors (mounted to measure surface pressure) was exposed to single shock waves with three nominal intensities: 70, 140, and 210 kPa. The headform was mounted in three different orientations with respect to the direction of shock wave propagation: 0, 90 and 180°. The effect of the headform location was evaluated by positioning it inside of the shock tube, at the end, and outside of the shock tube. The headform was mounted on a Hybrid III biofidelic surrogate neck, which was tightened to eliminate the inertial motion of the headform caused by blast exposure. Comparison of the test results at three different shock wave intensities complicates data analysis even further. To resolve these issues, we developed a simple data reduction strategy: the respective pressure parameters recorded by the headform sensors are divided by the equivalent parameters of the incident shock wave, as defined in Eq. (1). As a result, a comprehensive set of non-dimensional parameters is generated.
These non-dimensional parameters (or amplification factors) allow for direct comparison of pressure waveform characteristic parameters generated by a range of incident shock waves differing in intensity and with the headform positioned in different locations. Using this approach, we found that there is a correlation function which allows prediction of the peak pressure on the headform from only the peak pressure of the incident shock wave (for a specific sensor location on the headform); it is independent of the headform location and, to a certain degree, of the orientation. A similar relationship also exists for the rise time. However, for the duration and impulse, similar correlation functions do not exist. We demonstrated via comprehensive experimental and numerical studies that the three different testing locations are characterized by non-equivalent loading
Full Tetragonal Phase Stabilization in ZrO2 Nanoparticles Using Wet Impregnation: Interplay of Host Structure, Dopant Concentration and Sensitivity of Characterization Technique Here, we show that wet impregnation of ZrO2 nanoparticles with 10% and 20% Eu oxide followed by thermal anneal in air above 500 °C produces full stabilization of the tetragonal phase of ZrO2 without evidencing any phase separation. The bare ZrO2 nanoparticles were obtained using three synthetic methods: oil in water microemulsion, rapid hydrothermal, and citrate complexation methods. The homogeneity of the solid solutions was assessed using X-ray diffraction, Raman spectroscopy, high resolution transmission electron microscopy, and advanced luminescence spectroscopy. Our findings show that wet impregnation, which is a recognized method for obtaining surface-doped oxides, can be successfully used for obtaining bulk-doped oxides with good homogeneity at the atomic scale. The limits of the characterization techniques in detecting minor phases and the roles of dopant concentration and host structure in the formation of phase-stabilized solid solutions are also analyzed and discussed. Introduction Zirconium oxide (ZrO2) is a well-established ceramic material whose physical and chemical properties depend strongly on the structural phase, leading to a variety of applications [1,2]. Both the tetragonal and cubic phases can be stabilized at ambient temperatures upon doping with trivalent ions such as Y3+ or lanthanides (Ln) [3,4]. Due to the facile doping of the Ln metals in the ZrO2 lattice, there are many reports that describe the potential applications of Ln-doped ZrO2 in dielectric film transistors [5], white light emitting diodes [6], catalysis [7,8], fuel cells [9], temperature sensors [10], oxygen sensors [11], dosimetry [12], photocatalysts [13], and bioimaging [14].
Among the Ln series, Eu is considered an ideal dopant/stabilizer, as the average structural properties of ZrO2 can be correlated with the local-scale properties around Eu. As such, Eu shows distinct changes in the emission/excitation spectra and excited state dynamics with changes in the local symmetry, with fingerprint emissions characteristic of the monoclinic and tetragonal phases identified and extensively described [15][16][17][18][19][20]. The mechanism of tetragonal and cubic phase stabilization by lanthanide doping is well established. According to Li et al. [4], the oversized aliovalent lanthanide metals are effective for the stabilization of the tetragonal and cubic phases at room temperature via the generation of oxygen ion vacancies. To maintain its effective coordination number close to 7, as required by the (partial) covalent nature of the Zr-O bond, the ZrO2 lattice assumes a crystal structure which offers an 8-coordination number (typically tetragonal or cubic structures) and simultaneously incorporates the generated oxygen vacancies into the lattice as the nearest neighbors to the Zr4+ cations, and thus as next-nearest neighbors to the trivalent Ln. So far, ZrO2 nanoparticles stabilized in the tetragonal/cubic phase by lanthanide doping have been obtained using a multitude of synthetic approaches that include: microemulsion oil in water [18,19,21], hydrothermal/rapid hydrothermal [22][23][24][25], coprecipitation [26,27], sol-gel [28,29], the Pechini process [30], sol-emulsion-gel [11,31,32], combustion synthesis [13,[33][34][35][36], solar physical vapor deposition [37], and the complex polymerization method [38]. To the best of the authors' knowledge, there is no study that investigates the effect of wet impregnation with a lanthanide oxide on the tetragonal/cubic phase stabilization of ZrO2 nanoparticles.
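The vacancy bookkeeping behind this stabilization mechanism can be made concrete with a short sketch. The assumptions here are ours: the nominal Eu loading is taken as a cation fraction, all Eu is incorporated substitutionally, and every pair of Eu3+ ions is charge-compensated by one oxygen vacancy:

```python
def oxygen_vacancy_fraction(eu_cation_fraction):
    """Fraction of oxygen lattice sites left vacant when a cation fraction x
    of Eu3+ substitutes Zr4+ in ZrO2. Standard Kroger-Vink bookkeeping
    (Eu2O3 -> 2 Eu'_Zr + 3 O_O + V_O**) gives x/2 vacancies per formula
    unit, and ZrO2 has two oxygen sites per formula unit."""
    x = eu_cation_fraction
    return (x / 2.0) / 2.0

for x in (0.10, 0.20):  # the two doping levels used in this study
    print(f"{x:.0%} Eu -> {oxygen_vacancy_fraction(x):.1%} of O sites vacant")
```

Under these assumptions, the 10% and 20% Eu loadings correspond to 2.5% and 5% of the oxygen sites being vacant, which illustrates why such loadings can be sufficient to stabilize the tetragonal phase.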
Wet impregnation, otherwise a well-known method used for the synthesis of heterogeneous catalysts, exposes the host oxide to a liquid containing the precursor of the dopant, which is then dried and heated in air. In the best scenario, the final result is a surface-doped oxide without formation of a separate phase of the dopant oxide, or a submonolayer of the dopant oxide on the host oxide [39]. In the case of ZrO2, homogeneous doping in the bulk by wet impregnation with the high amount of lanthanide oxide typically needed for phase stabilization therefore seems highly improbable. Here, we investigate the effects of wet impregnation on the structural phase of ZrO2 nanoparticles by use of X-ray diffraction (XRD), Raman spectroscopy, high-resolution transmission electron microscopy (HRTEM), and advanced luminescence spectroscopy. We show that wet impregnation of ZrO2 with 10% and 20% Eu oxide followed by thermal anneal in air above 500 °C leads to solid solutions of the tetragonal phase that are homogeneous at the atomic scale. The doping efficiency does not depend on the synthetic route used in the synthesis of the bare ZrO2 (rapid hydrothermal, oil in water microemulsion, or citrate complexation method), or on the surface area (which varies from 250-300 m2/g to a few m2/g). The complementarity between the characterization techniques highlights the ability of luminescence to detect a minor monoclinic phase disregarded by both X-ray diffraction and Raman spectroscopy. A comparison with another significant fluorite oxide, CeO2, highlights the key role played by the fluorite structure in the efficiency of "dissolving" the lanthanide oxides. Nanoparticles Synthesis Bare ZrO2 nanoparticles were prepared using three distinct synthetic routes: oil in water microemulsion (OW), rapid hydrothermal (RH), and citrate complexation (CIT) methods. Details on each of the methods are published elsewhere [18,28,40]. These supports were impregnated with 10% Eu (in the case of OW and RH) and 20% Eu (in the case of CIT).
A brief description of the synthesis procedures is nevertheless presented here. For the microemulsion synthesis, the following precursors were chosen: Synperonic® 10/6, zirconium(IV) ethylhexanoate (Alfa Aesar, Ward Hill, MA, USA), and hexane (Merck, Whitehouse Station, NJ, USA). The employed microemulsion system was water/Synperonic® 10/6/hexane. An isotropic solution at 35 °C was obtained by mixing all the above-mentioned components. The maturation time was 48 h after the pH was adjusted to 11. The obtained precipitate was washed using a mixture of ethanol and chloroform (1:1). Zirconium nitrate (Alfa Aesar, Ward Hill, MA, USA) was chosen as a precursor alongside citric acid (Alfa Aesar, Ward Hill, MA, USA) for the citrate complexation method. A homogeneous solution was obtained by mixing the zirconium precursor with the citric acid (1:1.2 molar ratio) in water. Zirconium ethoxide (Sigma Aldrich, St. Louis, MO, USA) was chosen as the precursor for the rapid hydrothermal method. The zirconium precursor was rapidly added as a powder to boiling water in order to obtain zirconium oxide nanoparticles. All the intermediate products were dried in air at 70 °C overnight. Impregnation was carried out by exposing 1 g of ZrO2 to a 0.004 M solution of EuCl3·6H2O (Fluka, Waltham, MA, USA), corresponding to 10% Eu (OW and RH) or 20% Eu (CIT). The suspensions were stirred for 12 h at 60 °C, and then the separated solid was dried for 4 h at 80 °C under vacuum. Samples were calcined at 500 and 1000 °C in air with a heating/cooling rate of 10 °C/min and kept for 4 h at the maximum annealing temperature. The Brunauer-Emmett-Teller (BET) method was used to calculate the surface area from the data obtained at P/P0 (P = partial vapor pressure of the adsorbate gas in equilibrium with the surface at 77.4 K; P0 = saturated pressure of the adsorbate gas) between 0.01 and 0.995. Surface area (BET) measurements were conducted on precalcined samples (at 450 °C).
Compositional, Structural, and Morphological Characterization

The SEM micrographs and EDX spectra were acquired using an FEI Inspect S scanning electron microscope (FEI, Hillsboro, OR, USA). Microbeam X-ray fluorescence (micro-XRF) spectrometry was performed on a custom-made instrument with an Oxford Instruments (Abingdon, United Kingdom) Apogee 5011 X-ray tube (Mo target, focal spot ≈40 µm, max. high voltage 50 kV, max. current 1 mA) and an Amptek X-123 complete X-ray spectrometer with a Si-PIN detector. The key element of the micro-XRF instrument is an X-ray polycapillary minilens (IfG-Institute for Scientific Instruments, Berlin, Germany), which provides a focal spot size on the sample of 15-20 µm. Powder X-ray diffraction (XRD) patterns were recorded on a Shimadzu XRD-7000 diffractometer (Shimadzu Corp., Kyoto, Japan) using Cu Kα radiation (λ = 1.5418 Å, 40 kV, 40 mA) at a scanning speed of 0.10 degrees/min in the 10°-90° 2Θ range. The crystallite size was estimated using the Scherrer equation. Raman spectra were acquired in the extended spectral region from 150 to 4000 cm−1. Raman analysis was carried out with a Horiba Jobin Yvon LabRAM HR UV-Visible-NIR Raman microscope spectrometer (Horiba Ltd., Kyoto, Japan) using excitation wavelengths of 488, 514, and 633 nm. For TEM measurements, samples were prepared by grinding them in a mortar, followed by ultrasonic dispersion in ethanol and drop casting on a TEM grid provided with a holey carbon membrane. The specimens were analyzed using two electron microscopes. The low-magnification and HRTEM images, as well as the EDS (energy-dispersive X-ray spectroscopy) spectra, were recorded using a JEM 2100 analytical TEM (JEOL Ltd., Tokyo, Japan) operated at 200 kV.

Luminescence

The photoluminescence measurements were carried out at room temperature and at T = 80 K (by use of an exchange gas cryostat) using a Fluoromax 4 spectrofluorometer (Horiba Ltd., Kyoto, Japan) operated in both the fluorescence and the phosphorescence mode.
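The Scherrer estimate mentioned above can be sketched numerically. This is a minimal illustration in Python: the peak width and position are hypothetical values chosen for demonstration, not measurements from this work, and a shape factor K = 0.9 is assumed.

```python
import math

def scherrer_size(fwhm_deg: float, two_theta_deg: float,
                  wavelength_nm: float = 0.15418, k: float = 0.9) -> float:
    """Scherrer equation: D = K * lambda / (beta * cos(theta)).

    fwhm_deg      -- peak width (FWHM) in degrees 2-theta
    two_theta_deg -- peak position in degrees 2-theta
    wavelength_nm -- Cu K-alpha, 1.5418 A = 0.15418 nm (as used in the text)
    k             -- shape factor (0.9 assumed, roughly spherical crystallites)
    """
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle = half of 2-theta
    return k * wavelength_nm / (beta * math.cos(theta))

# e.g. a hypothetical tetragonal ZrO2 reflection near 30 deg 2-theta
# with a 0.6 deg FWHM gives a crystallite size of roughly 13-14 nm
print(round(scherrer_size(0.6, 30.0), 1))
```

Note that instrumental broadening is ignored here; in practice the measured FWHM would first be corrected against a standard.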
For the excitation spectra, the integration window varied between 0.1 and 0.5 s; the slits were varied from 0.1 to 1 nm in excitation and from 1 to 3 nm in emission. The emission decays were measured using the "decay by delay" feature of the phosphorescence mode. The repetition rate of the xenon flash lamp was 25 Hz, the integration window was set to 10 ms, the delay after flash varied between 0.03 and 25 ms, and up to 30 flashes were accumulated per data point. The average decay lifetime was calculated as the integrated area of the normalized emission decay. Time-resolved (gated) emission spectra (TRES) were recorded at low temperature, T = 80 K (by use of an exchange gas cryostat), using a wavelength-tunable NT340 Series EKSPLA OPO (optical parametric oscillator, EKSPLA, Vilnius, Lithuania) for sample excitation (210-2300 nm), operated at 10 Hz. The tunable laser has a narrow linewidth (<5 cm−1, which makes it a highly selective excitation source), with a scanning step and output energy depending on the spectral region. As the detection system, an intensified CCD (iCCD) camera (Andor Technology Ltd., Belfast, Ireland) coupled to a spectrograph (Shamrock 303i, Andor, Belfast, Ireland) was used. The TRES were collected using the boxcar technique. The gain of the micro-channel plate (MCP gain) was set to 100. The emission was detected in the spectral range of 500 nm < λem < 750 nm with a spectral resolution from 0.05 to 0.45 nm; the input slit of the spectrograph was set to 10 µm, with a delay after the laser pulse varying from a few µs to 40 ms. The temperature of the iCCD was lowered to −20 °C for a better signal-to-noise ratio (S/N). For all measurements done using the iCCD, cut-off filters were used to protect the detector from the excitation light.
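The average-lifetime definition above (integrated area of the normalized emission decay) can be illustrated with a synthetic decay curve. A minimal sketch in Python, assuming nothing about the actual instrument data format; for a single-exponential decay the integral recovers the decay constant itself:

```python
import numpy as np

def average_lifetime(t, intensity):
    """Average lifetime = area under the decay curve normalized to its
    initial intensity (trapezoidal integration); for exp(-t/tau) this
    integral equals tau."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(intensity, dtype=float)
    y = y / y[0]                                   # normalize to I(0)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

# synthetic single-exponential decay, tau = 2 ms, sampled over the
# 0-40 ms gating window mentioned in the text
t = np.linspace(0.0, 40.0, 4001)
decay = np.exp(-t / 2.0)
print(round(average_lifetime(t, decay), 2))  # recovers ~2.0 ms
```

For multi-exponential decays the same integral yields an intensity-weighted average lifetime, which is why it is a convenient model-free estimate.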
Assessment of Solid Solution Homogeneity by X-Ray Diffraction, Raman Spectroscopy, and Transmission Electron Microscopy

Throughout the text, the ZrO2 impregnated with 10% or 20% Eu is labelled as 10Eu(I)-ZrO2 (OW), 10Eu(I)-ZrO2 (RH), and 20Eu(I)-ZrO2 (CIT), where OW, RH, and CIT refer to the oil-in-water microemulsion, rapid hydrothermal, and citrate complexation methods, respectively, used in the synthesis of the bare ZrO2 nanoparticles. Oil-in-water microemulsion proved superior to the water-in-oil microemulsion method since the major (continuous) phase was water, while keeping the great advantages of the microemulsion method for particle preparation (excellent control of nanoparticle characteristics such as size, composition, good crystallinity, high surface area, etc.). However, this method has a few disadvantages due to the separation and numerous washing steps and the use of different oil phases, which lead to low reproducibility [41,42]. The rapid hydrothermal synthesis method assumes the immersion of alkoxide precursor powders into boiling water [40]. The advantages of this experimental method relate to the simple experimental setup and the easily controllable particle size over time. Low reproducibility of the products, due to their high dependence on the experimental conditions, and the fact that high reaction rates cannot be expected represent important disadvantages of this procedure. The sol-gel citrate route produces homogeneous nanopowders through the growth of metal oxo-polymers in a solvent. The sol-gel process involves hydrolysis and condensation, which are generally fast and need to be inhibited to avoid precipitation and allow sol or gel formation [43]. Simple synthesis steps and good reproducibility are the main characteristics of this method; however, particle agglomeration may be significant. All methods deliver amorphous particles with small, below-10 nm crystallite sizes.
Surface area (BET) measurements on precalcined samples (at 450 °C) delivered values of a few m2/g (CIT) and up to 250-300 m2/g for the OW and RH samples. Prior to the investigations, the bulk elemental composition of the 10% (OW and RH) and 20% (CIT) impregnated ZrO2 was measured using EDX-SEM and X-ray fluorescence. As shown in Figure 1a,b and Table 1, the estimated values were in good agreement with the nominal concentration determined from the precursors. Figure 1c gathers the XRD patterns of 10Eu(I)-ZrO2 (OW), 10Eu(I)-ZrO2 (RH), and 20Eu(I)-ZrO2 (CIT) following annealing in air at 500 and 1000 °C. Except for the 20Eu(I)-ZrO2 annealed at 500 °C, all patterns display a pure tetragonal phase (PDF card: 04-013-4748, space group P42/nmc). Within the instrumental sensitivity limit of XRD of a few %, the presence of additional impurity phases (e.g., monoclinic ZrO2, PDF card: 00-036-0420) was not detected. Apparently, homogeneous solid solutions were formed even following mild calcination at 500 °C (Figure 1c). In the case of 20Eu(I)-ZrO2 (CIT), the broadened X-ray diffraction patterns due to the small crystallite size (around 3 nm) hindered the discrimination between the cubic and tetragonal phases, as the tetragonal (P42/nmc) phase is a slightly distorted variant of the cubic phase (Fm-3m) [1]. As listed in Table 1, the crystallite sizes estimated using the Scherrer equation increased from 13 nm and 15 nm to 23 nm and 17 nm for 10Eu(I)-ZrO2 (OW) and 10Eu(I)-ZrO2 (RH), respectively, and from a few nm to 16 nm for 20Eu(I)-ZrO2 (CIT) with the increase of the annealing temperature from 500 to 1000 °C. Selected Raman spectra following annealing at 1000 °C are gathered in Figure 1d. To eliminate any interference from the relatively more intense luminescence lines of Eu, the Raman spectra were measured at three excitation wavelengths (488, 514, and 633 nm).
For comparison purposes, the Raman spectra of monoclinic and tetragonal ZrO2 (the RH support annealed at 1000 and 500 °C, respectively), used as references, are also included. The crystal structure of tetragonal ZrO2 is a body-centered lattice with the space group P42/nmc, with Zr4+ eight-fold coordinated to oxygen atoms in D2d point-group symmetry. The Raman spectra show five of the six fundamental optical modes in the range of 200 to 700 cm−1, namely at 265 cm−1 (B1g), 318 cm−1 (Eg), 470 cm−1 (Eg), and 649 cm−1 (A1g), at similar positions regardless of the excitation wavelength, and therefore assigned unambiguously to the Raman bands of tetragonal ZrO2 [44]. For the OW sample, a small contribution from the monoclinic phase was clearly observed via the phonon band doublet modes Bg and Ag at 178 and 190 cm−1, respectively, which do not overlap with any tetragonal bands. Additional phonon modes characteristic of the Eu2O3 C-phase were not detected in any of the impregnated samples.
Due to the small monoclinic content detected using Raman spectroscopy, we selected the OW sample (1000 °C annealing) for TEM characterization. As shown in Figure 2a, a relatively narrow range of particle sizes was measured, ranging from 20 to 40 nm, which suggested mild agglomeration when compared with the values of the crystallite sizes estimated from XRD (Table 1). The selected area electron diffraction (SAED) pattern exhibited a mixture of crystallographic structures, with tetragonal and monoclinic ZrO2 as the major and minor phase, respectively (Figure 2b,c). In Figure 2b, half rings have been drawn through the scattered diffraction spots of the tetragonal phase, indicating also the corresponding Miller indices, in agreement with the PDF card 88-1007 of tetragonal ZrO2. A series of spots observable next to these rings was related to the monoclinic phase of ZrO2, in agreement with the PDF card 88-2390 (Figure 2c). The formation of the secondary monoclinic phase was also confirmed using the high-resolution TEM images in Figure 2d. Lattice parameters and the overall scale factor were determined by refinement of whole-powder-pattern fitting, which allowed the tuning of the lattice parameters via iterations in order to find a fit with known lattice parameters [45].
The quantitative analysis of the EDS spectra revealed some non-uniformity in the Eu local concentrations, which vary from as low as 2.2 at% to 9.2 at%, explaining the occurrence of the monoclinic and tetragonal polymorphs. The minor monoclinic phase of the impregnated 10Eu(I)-ZrO2, disregarded by XRD (Figure 1b) but detected by Raman spectroscopy (Figure 1c), could also be revealed by HRTEM, in a nice example of the complementarity of techniques.

Overview of the Luminescence Properties

The main structural difference between the monoclinic, tetragonal, and cubic phases of zirconia is given by the displacements of the lattice oxygen atoms; therefore, consideration of the local atomic structure is essential to fully understand the effect of doping on ZrO2 [3,4]. An effective approach for local structure investigations is based on Eu luminescence, which is highly sensitive to the local symmetry. The local symmetry at the Eu site varies from the low C2 (Cs or C1) symmetry of the seven-fold coordination in the monoclinic phase to the higher symmetry (eight-fold coordination) of the tetragonal (D2d) or cubic (Oh) phases.
The emission properties of Eu in fully stabilized tetragonal/cubic ZrO2 (all obtained exclusively by bulk doping methods) are well documented in the literature, and the reader can refer to References [6,10,11,15-20,25,27,33,35,36,38,46-48]. Usually, in these studies, the excitation was performed using UV light that contained the absorption profiles of tetragonal ZrO2 [1] and the O2−-Eu oxygen charge transfer [49].
(Figure 3 caption, partial: the dotted green line highlights the appearance of the tetragonal fingerprint emission, defined by the 606 nm peak; the green-highlighted spectra correspond to well-separated tetragonal emission; the yellow-highlighted spectra represent the contribution from the monoclinic fingerprint emission.)

Several general characteristics emerge from the comparison of the emission spectra in Figure 3: (1) The emission shape characteristic of the 5D0 level of Eu changes with the excitation wavelength, while the associated width varies from 1 to 10 nm, which sustains the coexistence of both ordered and disordered local environments. (2) The spectral feature around 606 nm characteristic of Eu substituting for Zr in tetragonal sites [17-19,27,30,49] (with no defect nearby, or not locally charge-compensated) can be observed at selected wavelengths. In tetragonal ZrO2, the eight-fold coordinated Zr4+ has two sets of short (2.17 Å) and long (2.36 Å) Zr-O bond lengths, which define a D2d symmetry, that is, a less symmetrical environment around Zr [3].
The well-known emission fingerprint of Eu in tetragonal zirconia is given by two emission bands of comparable intensity at 591 nm (corresponding to the merging of the two permitted 5D0-7F1 transition lines) and ≈606 nm (the strongest among the four allowed 5D0-7F2 lines) (the green-highlighted spectra in Figure 3). (3) Basically, there are no differences observed between the emission shapes when comparing the wet-impregnated and bulk-doped samples (OW). (4) The excitation modes and emission dynamics are similar across the OW and RH samples (Figure 4). This means that not only the electronic interactions between the dopant and the zirconia host (excitation spectra), but also the balance between the radiative (local symmetry) and non-radiative transition probabilities (due to interaction between Eu ions or between Eu and structural and surface defects), are similar across the three samples. (5) For 20Eu(I)-ZrO2 (CIT), the Eu local environment is remarkably homogeneous, as shown by the constancy of the emission shape with the excitation wavelength. The emission is broader compared to the typical emission of the RH and OW samples, as a result of the enhanced amount of oxygen vacancies generated by the higher Eu concentration. Most of these are apparently located near Eu, since the fingerprint emission of Eu in tetragonal sites at 606 nm can be separated only under time-gated emission conditions (using at least a 10 ms delay after the laser pulse).

Assessment of Solid Solution Homogeneity at the Atomic Scale. Comparison with CeO2

We further checked the homogeneity at the atomic scale of the impregnated tetragonal zirconia samples. More specifically, we looked for the sources that might interfere with the emission of Eu substituting for Zr lattice sites, identified as (1) Eu in the minor monoclinic phase, (2) Eu segregated as a Eu2O3 phase, (3) Eu-Zr pyrochlore oxide, Eu2Zr2O7 [50], and (4) Eu segregated at the surface.
(1) A small amount of monoclinic content is determined using Raman spectroscopy only for the OW sample (Figure 1c). In luminescence, the contribution of the monoclinic-type emission to the overall luminescence spectra is clearly detected in both the RH and OW samples via its fingerprint emission [18,49]. The Eu emission in monoclinic sites is dominated by the 5D0-7F2 electric dipole transition centered at 614 nm. In the spectral range of the 5D0-7F1 transition, one line at ≈591 nm and a doublet of closely spaced lines at 597 and 598 nm represent the signature of the monoclinic emission. Since the characteristic luminescence of the monoclinic phase is measured also in the bulk-doped counterpart, 10Eu(B)-ZrO2 (OW), it appears that such "incomplete" tetragonal phase stabilization is not specific to the wet impregnation route. It is also observed that the contribution of the monoclinic emission to the overall emission is rather strong, despite all samples being purely tetragonal as seen by XRD. This is likely due to the larger radiative emission rates of europium in the low, monoclinic symmetry (C1) compared to the more symmetrical tetragonal (D2d) sites [51]. (2) Usually, the identification of segregated Eu2O3 is strongly dependent on the instrument type and sensitivity, as well as on the synthetic approach, which determines the dopant distribution homogeneity. It is highly probable that, upon impregnation, some of the Eu segregates as cubic C-type Eu2O3 (Ia-3). The amount of this parasitic phase may be small enough to remain undetected by XRD and Raman spectroscopy (due to the instrumental sensitivities), as well as by TEM (due to the limited volume of the analyzed sample).
However, the measured emission of cubic Eu2O3 (obtained by calcining the as-received Eu(NO3)3·xH2O (x ≈ 6) from Alfa Aesar at 1000 °C) using several excitation wavelengths resembles the characteristic emission of Eu in Y2O3 [52] (spectra not included), distinct from any of the spectral shapes of the impregnated zirconia illustrated in Figure 3. (3) The same conclusion is drawn considering the emission shapes of Eu characteristic of Eu2Zr2O7 with disordered fluorite or ordered pyrochlore structures reported in the literature [53]. (4) Finally, the presence of significant surface Eu, which is typically characterized by a short-lived, strongly distorted emission [54], is ruled out in all impregnated samples by use of time-gated luminescence. As shown in Figure 4e-h, the emission decays are in the ms range, suggesting that Eu substitutes for Zr in the inner lattice sites. The estimated average lifetimes of the 5D0 level monitoring the emissions at 606 and 614 nm (…). It should be noted that the ability of a nanosized fluorite oxide to dissolve a high amount of lanthanide oxides (up to 30%) by wet impregnation was first suggested by Corma et al. [55] for the fluorite-structured oxide of Ce (CeO2) and confirmed in subsequent studies by some of us [56,57]. We note that the tetragonal structure (along with the monoclinic one) originates from the parent fluorite (CaF2) structure [4]. However, fluorite ZrO2 differs from fluorite CeO2 both in lattice size [58] and in ionic-radii mismatch, as the lanthanides are significantly bulkier relative to Zr than to the Ce host cation (ionic radii of Eu = 1.066 Å, Zr = 0.84 Å, and Ce = 0.97 Å in eight-fold coordination) [59]. Therefore, we initially expected ZrO2 to be less efficient in dissolving Eu oxide by wet impregnation and more prone to phase separation.
Still, preliminary XRD and luminescence data indicate that wet impregnation of ZrO2 with 20% Eu leads to a homogeneous solid solution following annealing at 500 °C (as confirmed by XRD) and 1000 °C (as confirmed by XRD and luminescence). Apparently, such behavior is due to the common ability of fluorite-structured oxides to accommodate a large amount of oxygen vacancies [60]. The size mismatch between the lanthanide dopant and the Zr host cation does not represent a limiting factor in obtaining homogeneously doped zirconia solid solutions via wet impregnation.

Conclusions

Wet impregnation is a well-known method used for the synthesis of heterogeneous catalysts, that is, surface-doped oxides. Here, we show that wet impregnation with 10 and 20% Eu followed by calcination in air above 500 °C produces full tetragonal phase stabilization of ZrO2 nanoparticles in the bulk. The preformed ZrO2 nanoparticles were obtained using three synthetic routes: oil-in-water microemulsion, rapid hydrothermal, and citrate complexation. First, the homogeneity of the Eu-doped tetragonal solutions was assessed using X-ray diffraction, Raman spectroscopy, and transmission electron microscopy. The homogeneity at the atomic scale was further confirmed using Eu luminescence as a local probe under time-resolved, site-selective excitation conditions at room temperature and 80 K. We believe that wet impregnation will allow the first systematic and reproducible set of investigations on the structural properties of ZrO2 doped with a fixed trivalent lanthanide at various concentrations, as well as with trivalent lanthanides across the whole 4f series at a fixed concentration.
Metabolism and gene polymorphisms of the folate pathway in Brazilian women with history of recurrent abortion

PURPOSE: To investigate the association between polymorphisms in genes that encode enzymes involved in folate- and vitamin B12-dependent homocysteine metabolism and recurrent spontaneous abortion (RSA). METHODS: We investigated the C677T and A1298C polymorphisms of the methylenetetrahydrofolate reductase gene (MTHFR), the A2756G polymorphism of the methionine synthase gene (MS), and the 844ins68 insertion of the cystathionine beta-synthase gene (CBS). The PCR technique followed by RFLP was used to assess the polymorphisms; the serum levels of homocysteine, vitamin B12, and folate were investigated by chemiluminescence. The EPI Info software, version 6.04, was used for statistical analysis. Parametric variables were compared by Student's t-test and nonparametric variables by the Wilcoxon rank-sum test. RESULTS: The frequencies of the gene polymorphisms in 89 women with a history of idiopathic recurrent miscarriage and 150 controls were 19.1 and 19.6% for the C677T polymorphism, 20.8 and 26% for the A1298C polymorphism, 14.2 and 21.9% for the A2756G polymorphism, and 16.4 and 18% for the 844ins68 insertion, respectively. There were no significant differences between the case and control groups in any of the gene polymorphisms investigated. However, the frequency of the 844ins68 insertion in the CBS gene was higher among women with a history of loss during the third trimester of pregnancy (p=0.003). Serum homocysteine, vitamin B12, and folate levels did not differ between the polymorphisms studied in the case and control groups. However, linear regression analysis showed a dependence of serum folate levels on the maintenance of tHcy levels. CONCLUSION: The investigated gene polymorphisms and serum homocysteine, vitamin B12, and folate levels were not associated with idiopathic recurrent miscarriage in the present study.
Further investigations are needed in order to confirm the role of the CBS 844ins68 insertion in recurrent miscarriage.

Introduction

Folate is essential for normal DNA and RNA biosynthesis, and it is required for homocysteine metabolism. It appears obvious that folate is essential for normal fetal development, and during pregnancy, women have an increased physiological need for this vitamin 1. Homocysteine (tHcy) is a sulphur-containing intermediate in methionine metabolism, which can be catabolized in the transsulphuration pathway (vitamin B6-dependent) or remethylated to methionine (folate- and cobalamin-dependent) 2. Two main factors affect homocysteine concentration in humans: diet, especially the intake of folate and vitamin B12, and polymorphisms in genes which encode enzymes or transport proteins involved in folate- and vitamin B12-dependent homocysteine metabolism 3,4. The dietary intake of essential micronutrients determines the efficiency of homocysteine metabolism in general, and a reduction in the dietary intake of B-complex vitamins such as folate, B6, and B12 leads to an increase in the levels of plasma homocysteine 5,6. The MTHFR gene polymorphisms are commonly associated with hyperhomocysteinemia 7,8. The best characterized MTHFR gene polymorphism consists of a 677 C-to-T transition, which results in an alanine-to-valine substitution in the predicted catalytic domain of the enzyme. This is a risk factor for neural tube defects and recurrent embryo loss in pregnant women 9,10. A second prevalent gene polymorphism, which is associated with an in vitro decrease of the enzyme activity, is an A-to-C transversion at nucleotide 1298, resulting in a glutamate-to-alanine substitution in the enzyme molecule. Although it has been shown that the A1298C gene polymorphism, in either heterozygosity or homozygosity, is not associated with higher plasma tHcy concentrations, combined heterozygosity of the C677T and A1298C polymorphisms is associated with reduced MTHFR
specific activity and higher tHcy concentrations when compared with heterozygosity for either variant 11. Other enzymes, such as methionine synthase (MS), also play an important role in homocysteine metabolism, and a reduction of its activity due to the A2756G polymorphism or to inadequate cofactor (vitamin B12) concentrations may result in elevated homocysteine levels 12. On the other hand, cystathionine beta-synthase (CBS) catalyzes the condensation of serine and homocysteine to form cystathionine, and abnormality in CBS activity is manifested in two clinical conditions: hyperhomocysteinemia and homocystinuria 13. The 844ins68 insertion of the CBS gene apparently does not cause a decrease of the enzyme activity, but the allele carriers have an impaired gene transcription 14. Several complications in pregnancy have been attributed to hyperhomocysteinemia, such as neural tube defects, pre-eclampsia, and recurrent pregnancy loss 9,10,15. The aim of this study was to investigate polymorphisms in genes that encode enzymes involved in folate- and vitamin B12-dependent homocysteine metabolism, namely C677T and A1298C in the MTHFR gene, A2756G in the MS gene, and the 844ins68 insertion in the CBS gene, in a group of women with recurrent pregnancy loss and in a control group of women, investigating their association with serum levels of vitamin B12, folate, and homocysteine.
Subjects

We performed a case-control study in order to investigate the association between polymorphisms in genes that encode enzymes involved in folate- and vitamin B12-dependent homocysteine metabolism and recurrent spontaneous abortion (RSA). The study included 89 women with at least two consecutive miscarriages in the first, second, or third trimester of gestation, without any successful pregnancy. Abortion diagnoses were identified by hCG testing, ultrasound, and/or physical examination. The patients were recruited at the Department of Obstetrics and Gynecology of Maternidad Climério de Oliveira, in Salvador, Bahia, Brazil. The cases were identified and selected when they visited the above-mentioned hospital for investigation of two or more consecutive unexplained terminations of pregnancy. All patients underwent a complete diagnostic work-up for RSA, including screening for hypertension, chronic infections, anatomical disorders, antiphospholipid syndrome, inherited thrombophilia, thyroid dysfunction, and diabetes mellitus. Of the 89 cases, 90% of the RSA women were supplemented with folic acid (5 mg each day, for 3 months before pregnancy and during the first 3 months of pregnancy). RSA patients who were over 40 years of age or had previous live births were excluded from this study. The control population consisted of 150 healthy women of similar age to the patients, with at least one live-born child and no history of pregnancy loss. These women were selected during their post-partum stay in the maternity ward of the Tsylla Balbino. Cases and controls came from public hospitals, so that both groups had a similar socioeconomic status for the search for genetic polymorphisms. Out of the 89 women with recurrent miscarriage, 46 were tested for vitamin B12, folate, and homocysteine levels and compared with 47 controls. A questionnaire was filled out by each patient to record the details of their lifestyle, habits, and family history.
All participating subjects personally provided their consent to participate in the study.

Biochemical and Molecular analysis

Blood samples were drawn from women using sterile tubes with anticoagulant (EDTA) and sterile tubes without additive. Serum samples were separated and kept frozen at -20°C until the analysis was performed.

The folate and vitamin B12 serum levels were measured by chemiluminescence immunoassay on an Access 2 Immunoassay System (Beckman Coulter, CA, USA), according to the manufacturer's instructions. The tHcy serum levels were measured by chemiluminescence immunoassay on an automatic IMMULITE 2000 device, according to the manufacturer's instructions.

DNA was isolated from leukocytes using a Qiagen DNA Mini Kit. Genotype analyses of the polymorphisms were performed by PCR followed by RFLP analyses, except for the CBS 844ins68 insertion, which was detected by PCR alone. The primer sequences and PCR conditions were as previously described for the MTHFR C677T polymorphism 7; MTHFR A1298C polymorphism 11; MS A2756G polymorphism 16 and CBS 844ins68 insertion 17.

Statistical Analysis

The EPI Info Software, version 6.04, was used for the statistical analysis. Parametric variables were compared by Student's t-test and nonparametric variables by the Wilcoxon rank-sum test. The SPSS Software, version 9.0, was used for the linear regression analysis, with the tHcy serum level as the dependent variable and the folate and vitamin B12 serum levels as independent variables. A p-value of <0.05 was considered statistically significant. The Hardy-Weinberg equilibrium was tested to assess the gene polymorphism frequency distributions in the case and control groups.
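The Hardy-Weinberg check described above can be sketched as a chi-square goodness-of-fit test comparing observed genotype counts with the counts expected from the allele frequencies. The genotype counts below are hypothetical, not the study's data.

```python
# Minimal sketch of a Hardy-Weinberg equilibrium check for one
# biallelic polymorphism; counts are illustrative only.

def hardy_weinberg_chi2(n_aa, n_ab, n_bb):
    """Return (chi2, expected genotype counts) comparing observed
    genotype counts with Hardy-Weinberg expectations."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)   # frequency of allele A
    q = 1.0 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    chi2 = sum((obs - exp) ** 2 / exp
               for obs, exp in zip((n_aa, n_ab, n_bb), expected))
    return chi2, expected

chi2, expected = hardy_weinberg_chi2(60, 70, 20)  # hypothetical counts
# With 1 degree of freedom, chi2 < 3.84 indicates no significant
# departure from equilibrium at the 5% level.
```

The expected counts always sum to the sample size, so the test has a single degree of freedom once the allele frequency is estimated from the data.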
Ethical Standards

The study was approved by the Human Ethics Committee of the Oswaldo Cruz Research Foundation. The protocol and procedures presented in the project are in accordance with the ethical standards of the responsible committee on human subjects and the Helsinki Declaration of 1964, as revised in 2008.

Demographic data

The median age of the 89 women in the study group was 29.4 (±5.4) years, ranging from 17 to 40 years, and the control group was aged 23 (±5.5) years. The median number of abortions in the study group was 3.2 (±1.9), ranging from 2 to 13: 41 (47%) women had two spontaneous abortions, 25 (29%) had three, and 21 (24%) had more than three.

The gestational age at which the abortions occurred was classified as first, second or third trimester, or mixed when the events fell in different trimesters. Our data show that in 29 (33%) women the abortions occurred in the first trimester; in 22 (25%) in the second trimester; in 8 (9%) in the third trimester; and 28 (32%) women were included in the mixed group, because they presented these events at different times during their pregnancies.

Obstetric complications in the study group included 4 women with pre-eclampsia (8.5%), 7 with high blood pressure (16%) and 5 with cervical incompetence (10.6%). In the control group, 8 women (5.6%) had a pre-eclampsia event.

The frequency of smoking in the study group was 2% (1/47), and 42.6% (20/47) drank an alcoholic beverage once a week when not pregnant. The frequency of smoking in the control group was higher than in the study group, at 12.6% (19/150), a statistically significant difference (p=0.02). Twenty-six (17%) women in the control group drank alcoholic beverages during pregnancy.
Fifty-nine percent (28/47) of the women in the case group had three daily meals. The most frequent food combination was rice, beans and meat, in 59.6% (28/47) of the cases. Ingestion of fat-containing food was observed in 74.5% (35/47) of this group. A salad was included in the meals of 89.4% (42/47) of the women, from 1 to 4 times a week.

Gene polymorphisms distribution

We determined the MTHFR C677T and A1298C genotypes in 89 women with recurrent spontaneous abortions and 150 controls (Table 1). The frequencies of the T allele of the C677T polymorphism were 19% and 19.6% in the case and control groups, respectively, and those of the C allele of the A1298C polymorphism were 21% and 26%. There was no statistical difference in allele distribution or genotype frequencies between the case and control groups (p>0.05). The double heterozygote 677CT/1298AC was found at frequencies of 8% and 10% in the case and control groups, respectively.

The frequencies of the MS A2756G and CBS 844ins68 genotypes in the case and control groups are shown in Table 1. There were no significant differences in genotype frequencies between the studied groups (p>0.05). However, when we analyzed the mutant heterozygous and homozygous genotypes together, we found a higher frequency of the mutant MS genotype in the control group (p=0.04).

All gene polymorphism frequencies were in Hardy-Weinberg equilibrium.

The CBS 844ins68 insertion was more frequent in women with recurrent pregnancy loss in the third trimester. This polymorphism was associated with the gestational age at which the abortions occurred: there was a significant difference in the distribution of the homozygous genotype among the women with recurrent pregnancy loss (Table 2), suggesting that homozygous 844ins68 is a risk factor for abortion in the third trimester (p=0.03).
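The allele frequencies reported above follow directly from genotype counts, since each heterozygote carries one copy of the mutant allele and each mutant homozygote carries two. A small sketch (the genotype split below is hypothetical, chosen only to land near the ~19% T-allele frequency the paper reports for the cases):

```python
# Sketch: mutant-allele frequency from genotype counts.
def allele_freq(n_hom_wild, n_het, n_hom_mut):
    """Frequency of the mutant allele among the 2N chromosomes."""
    n = n_hom_wild + n_het + n_hom_mut
    return (n_het + 2 * n_hom_mut) / (2 * n)

# Hypothetical split of the 89 cases, not the study's actual counts.
freq_t = allele_freq(58, 28, 3)   # (28 + 2*3) / 178 ≈ 0.191
```

This is the quantity compared between cases and controls before testing genotype distributions.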
Homocysteine, vitamin B12 and folate serum levels did not differ between the polymorphism genotypes. Serum from 47 women with recurrent abortion and 47 controls was tested for homocysteine. The levels did not differ between any of the genotypes investigated, as shown in Tables 3 and 4, respectively.

The relationship between vitamin B12 and folate was also examined in the cases, but there was no association with the gene polymorphism genotypes (Table 4).

Linear regression analysis of the tHcy and folate serum levels in both groups showed a negative correlation between these variables (p=0.03) (Figure 1). The same analysis between vitamin B12 and tHcy serum concentrations was not significant (p=0.58).

Discussion

This is the first study to compare the CBS 844ins68 insertion with recurrent abortion. We observed a higher frequency of this insertion among homozygous women with spontaneous recurrent abortion within the third trimester of pregnancy. The most common type of inherited homocystinuria is an autosomal recessive trait related to a genetic deficiency of CBS 18, and there are no reports that associate this change with fetal loss. Analysis of the C677T and A1298C MTHFR genotypes and the A2756G MS genotype showed no statistical association with recurrent abortion or gestational age, but we observed a trend related to A1298C. This indicates the need for additional research involving a larger number of women, since one study 19 observed an association of the MTHFR A1298C polymorphism (but not MTHFR C677T) with elevated homocysteine levels and placental vasculopathies.
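The regression in Figure 1 is an ordinary least-squares fit of tHcy on folate. A minimal pure-Python sketch, using synthetic stand-in data (not the study's measurements) chosen only to exhibit the negative slope the paper reports:

```python
# Sketch of the Figure 1 regression: tHcy (dependent) on folate
# (independent), via ordinary least squares. Data are synthetic.
def ols(x, y):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

folate = [4.0, 6.0, 8.0, 10.0, 12.0]   # ng/mL (synthetic)
thcy = [12.0, 10.5, 9.0, 8.0, 6.5]     # umol/L (synthetic)
slope, intercept = ols(folate, thcy)   # slope < 0: negative correlation
```

A negative slope here corresponds to the paper's finding that higher folate is associated with lower tHcy.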
Folate and vitamin B12 play an important role in homocysteine metabolism, participating in its most frequent pathway, remethylation. The dietary intake of essential micronutrients determines the overall efficiency of homocysteine metabolism 5. The intake of these vitamins is essential during pregnancy, and it has been suggested that a greater reduction in the risk of pregnancy complications may occur if a combination of folate and vitamin B12 is given 20.

We investigated the serum levels of folate and vitamin B12 in the recurrent-miscarriage and control groups. The normal value for folate is >3 ng/mL and for vitamin B12 is 223 to 1132 pg/mL. Our data show that there was no vitamin deficiency in either group, even when categorized by genotype of the polymorphisms investigated. However, the linear regression analysis showed a dependence of tHcy levels on folate serum levels, as previously described 3. As fetal loss is a multifactorial problem, supplementation with folate and vitamin B12 should reduce the risk of abortion caused by deficiency of these micronutrients.

Risk factors such as socioeconomic status, age, parity, maternal birth weight, past obstetric outcomes, race, nutrition, smoking, and drug use are thought to be key determinants of adverse pregnancy outcomes 21. Our study was undertaken in a public hospital and comprised women of low socioeconomic status. The linear correlation between folate and homocysteine levels indicates an involvement of nutritional status in the studied group. This should be considered, as previous work has suggested an association between poor nutritional intake and recurrent miscarriage 6,22. The finding of a possible role of the cystathionine beta-synthase gene in the fetal-loss group emphasizes a possible contribution of this and other molecules, and of nutritional status, to homocysteine metabolism and abortion.
It is well known that homozygosity for the C677T transition in the MTHFR gene is associated with a rapid increase in plasma tHcy 7,20. Interestingly, in this study, such an association was not shown. This could be due to the lack of vitamin deficiency in both groups. Even so, the C677T thermolabile variant of MTHFR is an important genetic determinant of tHcy metabolism. Individuals homozygous for the mutant T allele exhibit increased elevations of the amino acid in their blood in comparison with other genotypes, but only under conditions of lower folate status 6,23. This shows that B-vitamin status modifies the relationship between the MTHFR C677T mutation and tHcy serum levels. However, a cross-sectional study 24 examined the folate-dependent relation between the MTHFR C677T genotype and plasma tHcy concentration by gender and showed that, among young women, the MTHFR genotype is not a strong predictor of tHcy levels under diverse conditions of folate status. These data were not confirmed among men.

In the present study, tHcy serum levels were similar between the two groups investigated. Previous studies had demonstrated a higher frequency of homozygosity for the MTHFR C677T genotype and higher tHcy serum levels in women with recurrent miscarriage than in control groups 15,22. However, Creus et al. 6 and Pihusch et al. 25 did not demonstrate such an association.

In our study, the frequencies of genetic polymorphisms that encode enzymes involved in folate- and vitamin B12-dependent homocysteine metabolism, the MTHFR C677T and A1298C, MS A2756G and CBS 844ins68 insertion, did not differ between the group of women with recurrent miscarriage and the control group. This is in agreement with Puri et al. 15 and Zonouzi et al., who investigated the MTHFR polymorphisms in North India and Iran, respectively. In contrast, other studies have described relationships between the gene polymorphisms MTHFR 4,20 and MS 26 and recurrent abortion.
However, in the study group, there was an association between the presence of the CBS 844ins68 insertion and recurrent abortion in the third trimester. As we are the first to report the association of the CBS 844ins68 insertion with recurrent abortion, further studies are needed to confirm this finding, as well as its interplay with vitamin-intake levels and fetal loss, in different populations with larger samples.

On the other hand, the inconsistent results between our data and other studies may be attributed to differences in the gestational weeks used to define RSA in each study, to differences in the populations, or to the small sample size, which may have prevented our results from reaching statistical significance.

Table 1. Frequencies of the polymorphisms of the MTHFR, MS and CBS genes in women with recurrent miscarriage and controls. W: Wild type; I: Insertion.

Table 2. Distribution of the CBS 844ins68 insertion in the case group by gestational age at which the miscarriages occurred. *Kruskal-Wallis; **Anova.

Table 3. Homocysteine serum levels among women with recurrent miscarriage, according to the genotypes of each polymorphism.

Table 4. Homocysteine serum levels of the control group according to the genotypes of each polymorphism. *Kruskal-Wallis.

Figure 1. Linear regression analysis of folate (ng/mL) and tHcy (umol/L) serum levels of the study and control groups, p=0.03.
Towards a precision calculation of N_eff in the Standard Model. Part III. Improved estimate of NLO contributions to the collision integral

We compute the dominant QED correction to the neutrino-electron interaction rate in the vicinity of neutrino decoupling in the early universe, and estimate its impact on the effective number of neutrino species N_eff in cosmic microwave background anisotropy observations. We find that the correction to the interaction rate is at the sub-percent level, consistent with a recent estimate by Jackson and Laine. Relative to that work we include the electron mass in our computations, but restrict our analysis to the enhanced t-channel contributions. The fractional change in N_eff^SM due to the rate correction is of order 10^-5 or below, i.e., about a factor of 30 smaller than that recently claimed by Cielo et al., and below the nominal computational uncertainties of the current benchmark value of N_eff^SM = 3.0440 ± 0.0002. We therefore conclude that the aforementioned number remains the state-of-the-art benchmark for N_eff^SM in the standard model of particle physics.
The effective number of neutrinos, N_eff, is an important parameter in standard hot big bang cosmology [1]. Defined as the energy density residing in free-streaming, ultra-relativistic particle species relative to the photon energy density in the post-neutrino-decoupling early universe (i.e., at temperature T ≲ 1 MeV), the primary role of N_eff is to fix the universal expansion rate at T ≲ 1 MeV up to the end of the radiation-domination epoch. Its observable consequences are many, from setting the primordial light element abundances to influencing the correlation statistics of the cosmic microwave background (CMB) anisotropies and the large-scale matter distribution. Coupled with the fact that many beyond-the-standard-model scenarios predict N_eff-like effects (e.g., light sterile neutrinos [2,3], axions [4,5], gravitational waves [6], hidden sectors [7,8], etc.), pinning down its value both theoretically and observationally has enjoyed an unwavering interest for over four decades [1,9]. From the theoretical perspective, the expected value of N_eff in the context of the standard model (SM) of particle physics is 3 (for three generations), plus percent-level corrections due to residual energy transfer between the quantum electrodynamic (QED) plasma and the neutrino sector during neutrino decoupling [10-15], as well as deviations of the QED plasma itself from an ideal gas [16-21]. Historical estimates of these corrections have ranged from 0.011 to 0.052 [12,22-25].
Detailed modelling in recent years [26-29], however, has drastically reduced the spread. Notably, two of us (Drewes and Wong) and collaborators reported in [29] a prediction of N_eff^SM = 3.0440 ± 0.0002 from a fully momentum-dependent precision transport calculation that accounted for (i) neutrino oscillations, (ii) finite-temperature corrections to the QED equation of state (EoS) to O(e^3), and (iii) a first estimate of finite-temperature corrections of type (a) to the weak interaction rates depicted in figure 1 (see also table 1); the error bars are due mainly to numerical resolution and to experimental uncertainties in the input neutrino mixing angles. More remarkably still, this result is in perfect agreement with the independent calculations of [27,28] modelling the same physics, to five significant digits in the central value and with comparable error estimates. The precision computation of N_eff^SM appears therefore to have reached convergence, at least in a limited sense.

There are nonetheless reasons to be cautious. For one, while the general expectation is that the physics summarised in table 1 dominates corrections to N_eff^SM, as yet missing is a systematic study of possible higher-order effects that may be at least as important. Indeed, in estimating only next-to-leading-order (NLO) effects due to diagram (a) on the weak rates, rather than the full range of corrections displayed in figure 1, one could argue that the computations of [27-29] are, at least conceptually, incomplete. Two recent works have taken a first step towards filling this gap: • Cielo et al.
[30] took the finite-temperature rate corrections for e+ e- → ν_α ν̄_α from [31], computed originally in the context of energy loss in a stellar plasma, and appealed to detailed balance to estimate the corrections to the collision integrals. The claimed effect of this correction on N_eff^SM is quite substantial, at the ~0.001 level. We have reservations about this result: aside from mapping rate corrections from an incompatible energy regime (O(1) keV ≪ m_e in a stellar plasma versus O(1) MeV > m_e in the early universe), the analysis of [30] also neglected (i) corrections due to diagram (d) in figure 1 (the "closed fermion loop"), (ii) corrections to elastic scattering reactions like ν_α e → ν_α e, as well as (iii) Pauli blocking effects of the final-state neutrinos. Of particular note is that the neglected closed fermion loop diagram (d) contains a t-channel enhancement, which should, at least naïvely, dominate the weak rate corrections.

Table 1: Leading-digit contributions from various SM corrections, in order of importance, thus far accounted for, that make up the final N_eff^SM − 3 = 0.0440 ± 0.0002 [28,29]. Sources of uncertainty: numerical solution by FortEPiaNO, ±0.0001; input solar neutrino mixing angle θ_12, ±0.0001. Table adapted from [29].
• The more recent work of Jackson and Laine [32] considered all diagrams in figure 1 in a first-principles calculation using the imaginary-time formalism of thermal field theory, which accounts for both vacuum and finite-temperature corrections, 1 along with an estimate of hadronic corrections to diagram (d) following [34]. Because the imaginary-time formalism applies only to systems in thermal equilibrium, the computation of [32] effectively neglected non-equilibrium neutrino phase space distributions. Additionally, the authors assumed a negligible electron mass by setting m_e = 0, which is not necessitated by the formalism. The final result, presented as a set of corrections to the leading-order (LO) neutrino damping rate, confirms the expected t-channel enhancement in diagram (d). Nevertheless, these corrections are minute, of order 0.2 to 0.3%. While the authors of [32] did not report the corresponding change in N_eff^SM, it is clear that corrections of this magnitude cannot effect a shift in N_eff as sizeable as that claimed in [30].

The purpose of the present work is to clarify whether or not QED corrections to the neutrino interaction rate can alter the standard-model N_eff^SM at the ~0.001 level as claimed in [30]. While this correction is small relative to the anticipated sensitivity of the next-generation CMB-S4 program to N_eff, σ(N_eff) ≃ 0.02-0.03 [35], an accurate theoretical prediction for N_eff^SM at per-mille accuracy is nonetheless desirable to justify neglecting the theoretical uncertainty in cosmological parameter inference. In this regard, our work shares a common motivation with Jackson and Laine [32]. However, our work also differs from [32] in three important ways: 1.
Our first-principles computation of the QED corrections uses the so-called real-time formalism and focuses on corrections of type (d), which contain a t-channel enhancement. Computation of the non-enhanced (and hence sub-dominant) diagrams (a)-(c) is postponed to a follow-up work. Like the imaginary-time formalism, the real-time formalism automatically takes care of both vacuum and finite-temperature corrections. While in equilibrium situations the two formalisms are exactly equivalent [36], the latter has the advantage that it can easily be generalised to non-equilibrium settings [37,38]. Thus, although we shall restrict the present analysis to the same equilibrium conditions as in [32] and hence provide an independent partial cross-check of their results, our calculation also paves the way for incorporating NLO effects into neutrino decoupling codes such as FortEPiaNO [26] to deliver a final verdict on N_eff^SM.

2. We retain a finite electron mass m_e in our calculation, which represents a departure from the m_e = 0 approximation made in [32]. Neutrino decoupling occurs at temperatures T ~ 1 MeV, in whose vicinity setting m_e = 0 may not be well justified. We shall examine the validity of the approximation and its impact on N_eff^SM.

3. Finally, we use our NLO results to estimate the corresponding change in N_eff^SM. This estimate was missing in [32].
The rest of the article is organised as follows. In section 2 we describe the physical system and sketch how N_eff^SM is computed. Section 3 outlines the computation of QED corrections to the neutrino damping rate due to diagram (d) and presents our numerical estimates of these rate corrections. The shift in N_eff^SM due to these corrections is presented in section 4, and section 5 contains our conclusions. Five appendices provide details on the bosonic and fermionic propagators at finite temperature, explicit expressions for certain thermal integrals, the derivation of the 1PI-resummed photon propagator, the parameterisation of the collision integrals, and the functions appearing in the continuity equation.

Preliminaries

The SM effective number of neutrinos N_eff^SM is defined via the ratio of the energy density contained in three generations of neutrinos and antineutrinos, ρ_ν, to the photon energy density ρ_γ in the limit T/m_e → 0,

N_eff^SM ≡ (8/7) (11/4)^(4/3) (ρ_ν / ρ_γ) |_(T/m_e → 0).

Here, T is the photon temperature, m_e the electron mass, and the limit T/m_e → 0 is understood to apply only to T ≫ m_i, where m_i is the largest neutrino mass.

To compute the precise value of N_eff^SM requires that we track the evolution of ρ_ν and ρ_γ simultaneously across the neutrino decoupling epoch, i.e., T ~ O(10) → O(0.01) MeV. Assuming that at these temperatures the photons are held in a state of thermodynamic equilibrium together with the electrons/positrons, collectively referred to as the "QED plasma", this is typically achieved by solving two sets of evolution equations: (i) a continuity equation that follows the total energy density of the universe, and (ii) some form of generalised Boltzmann equations, often referred to as the quantum kinetic equations (QKEs), which describe the non-equilibrium dynamics in the neutrino sector across its decoupling.
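The definition of N_eff^SM as a ratio of ρ_ν to ρ_γ can be checked numerically in the instantaneous-decoupling limit. The sketch below assumes the conventional normalisation N_eff = (8/7)(11/4)^(4/3) ρ_ν/ρ_γ and the standard temperature ratio T_ν/T_γ = (4/11)^(1/3) after e+ e- annihilation (both stated here as standard-lore assumptions, since the equation is not reproduced in the text); with these inputs, three instantaneously decoupled neutrino generations give N_eff = 3 exactly.

```python
# Sketch: N_eff in the instantaneous-decoupling limit.
import math

def n_eff(rho_nu, rho_gamma):
    # Assumed standard normalisation (8/7)(11/4)^(4/3).
    return (8.0 / 7.0) * (11.0 / 4.0) ** (4.0 / 3.0) * rho_nu / rho_gamma

T_gamma = 1.0                                   # arbitrary units
T_nu = (4.0 / 11.0) ** (1.0 / 3.0) * T_gamma    # standard temperature ratio
rho_gamma = (math.pi ** 2 / 15.0) * T_gamma ** 4
# Three generations of nu + nu-bar, each carrying 7/8 of a boson's energy:
rho_nu = 3.0 * (7.0 / 8.0) * (math.pi ** 2 / 15.0) * T_nu ** 4
# n_eff(rho_nu, rho_gamma) evaluates to 3 up to floating-point error.
```

The percent-level departures from 3 discussed in the text arise precisely from corrections to this idealised picture.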
Continuity equation

In a Friedmann-Lemaître-Robertson-Walker (FLRW) universe, the continuity equation is given by

dρ_tot/dt = −3H (ρ_tot + P_tot),

where ρ_tot and P_tot are, respectively, the total energy density and pressure, H ≡ (1/a)(da/dt) is the Hubble expansion rate, and a is the scale factor. For the physical system at hand, ρ_tot ≡ ρ_QED + ρ_ν and P_tot ≡ P_QED + P_ν, where ρ_QED ≡ ρ_γ + ρ_e subsumes the photon and the electron/positron energy densities, and similarly for P_QED. The assumption of thermodynamic equilibrium in the QED sector in the time frame of interest means that the standard thermodynamic relation ρ_QED = −P_QED + T (∂/∂T) P_QED applies. Then, the finite-temperature corrections to the QED equation of state summarised in table 1 simply refer to corrections to the QED partition function Z_QED, and hence to P_QED = (T/V) ln Z_QED, that alter ρ_QED + P_QED = T (∂/∂T) P_QED from its ideal-gas expectation. Corrections to Z_QED are known to O(e^3) for arbitrary m_e and chemical potential μ [39], and to O(e^5) for m_e = μ = 0 [40,41]. Their effects on N_eff^SM have been estimated in references [17,20,21] to O(e^4).

Quantum kinetic equations

State-of-the-art neutrino decoupling calculations employ the QKEs to track simultaneously the effects of in-medium neutrino flavour oscillations and particle scatterings on the one-particle reduced density matrix of the neutrino ensemble, ϱ = ϱ(p, t). Schematically, the QKEs take the form [42]

dϱ/dt = −i [H, ϱ] + C[ϱ],

where H = H(p, t) = H_vac + V is the effective Hamiltonian incorporating vacuum flavour oscillations H_vac and in-medium corrections from forward scattering V, and

C[ϱ] = (1/2) {Γ^<, 1 − ϱ} − (1/2) {Γ^>, ϱ}

is the collision integral encapsulating the non-unitary gains (Γ^< = Γ^<(p, t)) and losses (Γ^> = Γ^>(p, t)) of ϱ. In the context of a precision calculation of N_eff^SM, the quantities ϱ, H, and Γ^≷ are understood to be 3 × 3 hermitian matrices in flavour space, with the diagonal entries of ϱ corresponding to the occupation numbers f_α(p, t) ≡ {ϱ(p, t)}_αα, for α = e, μ, τ.
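As a rough numerical illustration of the continuity equation (not the paper's solver), a single radiation fluid with P = ρ/3 can be evolved in ln a; the equation then reads dρ/d(ln a) = −3(ρ + P) = −4ρ, and the familiar ρ ∝ a^-4 scaling emerges. Step count and units below are arbitrary choices.

```python
# Sketch: Euler integration of the FLRW continuity equation for a
# pure radiation fluid, recovering rho ∝ a^-4.
import math

def evolve_radiation(rho0, a0, a1, steps=100_000):
    """Integrate d(rho)/d(ln a) = -3 (rho + P) with P = rho/3."""
    rho = rho0
    dlna = (math.log(a1) - math.log(a0)) / steps
    for _ in range(steps):
        p = rho / 3.0                       # radiation equation of state
        rho += -3.0 * (rho + p) * dlna      # continuity eq. in ln a
    return rho

rho_final = evolve_radiation(1.0, 1.0, 2.0)
# Analytic expectation: rho ∝ a^-4, i.e. rho_final ≈ 1/16 at a = 2.
```

In the actual problem, ρ_QED and ρ_ν are coupled through the collision terms, so the single-fluid scaling above holds only before and well after decoupling.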
We assume a CP-symmetric universe, which is well justified if any asymmetry in the lepton sector mirrors the baryon asymmetry in the observable universe [43]. In practice this means it suffices to follow only one set of QKEs for the neutrinos; the antineutrinos evolve in the same way.

For the problem at hand, the gain and loss terms Γ^≷ incorporate in principle all weak scattering processes wherein at least one neutrino appears in either the initial or final state. In the temperature window of interest, however, it suffices to consider only 2 → 2 processes involving (i) two neutrinos and two electrons, distributed in any way between the initial and final states (labelled "νe"), and (ii) neutrino-neutrino scattering ("νν"). The leading-order Γ^≷ for these processes are well known, and take the form of two-dimensional momentum integrals (2.5). These momentum integrals are hard-coded in the purpose-built neutrino decoupling code FortEPiaNO [26,29], which solves numerically the continuity equation (2.2) and the three-flavour QKEs (2.3) in their entirety in precision N_eff^SM computations.
Damping approximation

We would like to compute QED corrections to Γ^≷_νe and estimate their impact on N_eff^SM. Ideally, these corrections would take the form of corrections to the scattering kernel Π_νe, to be incorporated into a neutrino decoupling code such as FortEPiaNO. As a first pass, however, we may work within the damping approximation, which makes the simplifying assumption that all particle species, besides that at the momentum mode p represented by ϱ(p), are in thermal equilibrium at a common temperature T equal to the photon temperature. Then, defining δϱ = ϱ(p) − ϱ_eq(p), where ϱ_eq(p) = diag(f_eq, f_eq, f_eq) and f_eq(p) is the Fermi-Dirac distribution, the collision integral (2.4) can be expanded to linear order in δϱ to give equation (2.6), where δ_αβ is a Kronecker-δ and the damping coefficients (2.7) are composed of components of the damping rate matrix. Linearisation in δϱ ensures that Γ(p, T) is diagonal in the neutrino interaction basis, and that Γ^>(p, T) = e^(E_p/T) Γ^<(p, T) holds by detailed balance, where E_p = p is the neutrino energy at the momentum mode p of interest. This is the approximation under which we shall compute Γ in section 3. Details of the derivation of (2.6) can be found in, e.g., appendix B of [29].

In the following, we devote section 3 to evaluating QED corrections to the damping rate {Γ_νe(p, T)}_αα due to the closed fermion loop (diagram (d) in figure 1). Our estimates of its impact on N_eff^SM are presented in section 4.
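The role of detailed balance in the damping approximation can be illustrated with a single-mode, single-flavour sketch: with Γ^> = e^(E/T) Γ^<, the gain-loss combination Γ^<(1 − f) − Γ^> f vanishes exactly at the Fermi-Dirac occupation, and its sign drives any perturbed occupation back towards equilibrium. The numbers below are illustrative, not taken from the paper.

```python
# Sketch: detailed balance makes the Fermi-Dirac distribution a
# fixed point of the gain-loss collision term.
import math

def collision(f, E, T, gamma_lt=1.0):
    """Gain-loss term Gamma^< (1 - f) - Gamma^> f with detailed balance."""
    gamma_gt = math.exp(E / T) * gamma_lt   # Gamma^> = e^(E/T) Gamma^<
    return gamma_lt * (1.0 - f) - gamma_gt * f

E, T = 2.5, 1.0                             # illustrative mode energy and T
f_eq = 1.0 / (math.exp(E / T) + 1.0)        # Fermi-Dirac occupation
# collision(f_eq, E, T) vanishes; away from f_eq its sign restores
# the occupation towards equilibrium.
```

This is the single-mode analogue of the statement that linearisation in δϱ yields a relaxation (damping) term with rate Γ.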
Calculation and NLO results

We compute QED corrections to the interaction rates of the weak processes ν_α e → ν_α e and ν_α ν̄_α ↔ e+ e- within the framework of Fermi theory, whose effective Lagrangian can be expressed as

L_int = −2√2 G_F Σ_α [ψ̄_α γ^μ P_L ψ_α] (g^α_L J^L_μ + g_R J^R_μ).

Here, the spinors ψ_α represent neutrino fields of flavour α; J^μ_(L/R) = ψ̄_e γ^μ P_(L/R) ψ_e are the left- and right-handed electron current operators, with the chiral projectors P_(L/R) = (1 ∓ γ^5)/2; the right-handed coupling reads g_R = sin²θ_W, while the left-handed coupling g^α_L = −1/2 + sin²θ_W + δ_αe depends on the neutrino flavour α = e, μ, τ. Strictly speaking, the closed fermion loop in figure 1 receives in principle contributions also from quarks. This interaction is also well described by a Lagrangian of the form (3.1), with the couplings g^α_L and g_R updated for the quarks of interest. We shall, however, omit the contributions from quarks in this work: at finite temperature these contributions are Boltzmann-suppressed at the O(1) MeV temperatures of interest, while hadronic corrections in vacuum have been studied in [34]. As QED is a vector-like theory, we also introduce the vector-axial couplings g^α_(V,A) = (g^α_L ± g_R)/2 as an alternative notation.
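The couplings defined above are simple functions of the weak mixing angle, and a short numerical sketch makes the flavour dependence explicit. The value of sin²θ_W below is an assumed illustrative input, not taken from the paper.

```python
# Sketch: weak neutral/charged-current couplings of the text,
# g_R = sin^2(theta_W), g_L^alpha = -1/2 + sin^2(theta_W) + delta_{alpha e},
# g_{V,A}^alpha = (g_L^alpha +/- g_R)/2.
SIN2_THETA_W = 0.231   # assumed illustrative value

def couplings(flavour):
    """Return (g_L, g_R, g_V, g_A) for neutrino flavour 'e', 'mu' or 'tau'."""
    delta_ae = 1.0 if flavour == "e" else 0.0   # W-exchange piece for nu_e
    g_r = SIN2_THETA_W
    g_l = -0.5 + SIN2_THETA_W + delta_ae
    g_v = 0.5 * (g_l + g_r)
    g_a = 0.5 * (g_l - g_r)
    return g_l, g_r, g_v, g_a

# Electron neutrinos couple more strongly than mu/tau neutrinos:
# couplings("e")[0] ≈ 0.731 while couplings("mu")[0] ≈ -0.269.
```

The delta_αe term is what distinguishes ν_e (which also exchanges a W with electrons) from ν_μ and ν_τ in the effective theory.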
Evaluation of the closed fermion loop

In order to compute the damping coefficients (2.7), we must evaluate the flavour-diagonal entries of the neutrino damping rate matrix in the equilibrium approximation for ϱ, i.e., Γ_α(p, T) ≡ {Γ_νe(p, T)}_αα, which is given by the sum of Γ^≷_α, the α-flavoured neutrino production (<) and destruction (>) rates in the equilibrium limit. In the framework of the real-time formalism of finite-temperature QED, these rates can be extracted from the Wightman self-energies Σ^≷, where Σ^≷ are identified with the self-energies with opposite (Keldysh) contour indices: Σ^< ≡ Σ^12 and Σ^> ≡ Σ^21. In thermal equilibrium, detailed balance is established by the fermionic Kubo-Martin-Schwinger (KMS) relation Σ^> = −e^(p^0/T) Σ^<. The mode-dependent damping rate can then also be written in terms of the discontinuity of the retarded neutrino self-energy evaluated at the quasiparticle pole, −i disc Σ_R = 2 Im Σ_R = Σ^> − Σ^<, in accordance with the optical theorem at finite temperature. While the KMS relation makes explicit use of the fact that Σ^≷ are computed in thermal equilibrium, the real-time formalism used here can be generalised in a straightforward manner to non-equilibrium situations [37]. Where there is no confusion, we shall drop the flavour index α in the following.

On general grounds, the dominant QED correction to Γ_α can be expected from diagram (d) in figure 1 in the regime where the photon propagator is on-shell. This expectation has been confirmed in [32]. In the real-time formalism this process is contained in the contribution to the Wightman self-energy shown in figure 2, where, in terms of finite-temperature cutting rules, diagram (d) can be recovered from a cut through one closed electron loop and the internal neutrino line.
We shall compute only this contribution and denote the corresponding Wightman self-energy by Σ^ba_NLO. For notational convenience, we further split Σ^ba_NLO into a sum Σ^ba_NLO = Σ_(c,d=1,2) Σ^(ba,cd) over the internal real-time indices c and d, with the partial self-energies Σ^(ba,cd) given by equations (3.5)-(3.6), involving traces of the form Tr[iS^ad_e(q) γ^ρ (...)]. Here, we have introduced the abbreviation ∫_p ≡ ∫ d⁴p/(2π)⁴ for loop integrals; the traces are over the Clifford algebra; the subscripts "e" and "ν_α" on the fermionic propagators refer to the associated particle; and the definitions of the propagators can be found in appendix A. As we would like to compute Σ^12_NLO, we set the external contour indices to a = 2 and b = 1. Then, the contributions Tr p̸ Σ^(12,21) and Tr p̸ Σ^(12,12), which correspond to cutting both the photon and neutrino lines, vanish by momentum conservation (see also footnote 8). Of the remaining "11" and "22" contributions, the transformation behaviour of the thermal propagators under hermitian conjugation relates Tr p̸ Σ^(12,11) and Tr p̸ Σ^(12,22), i.e., the Wightman self-energy represented by figure 2 can be determined entirely through the diagonal "11" contribution.
It is instructive to recast the expressions for the self-energy (3.5)-(3.6) into the form of a standard Boltzmann collision integral commonly found in textbooks (e.g., [50]). To do so, we first identify the 4-momenta p, l, q in figure 2 with the momenta p_1, p_2, p_3, p_4 of the external neutrinos and electrons of the underlying 2 → 2 process represented by diagram (d) of figure 1. Then, writing out the propagators, one arrives at the collision-integral form (3.8), in which the population factor contains the equilibrium phase space distributions of the three integrated external particles and is analogous to F^≷ in equation (2.5). The quantity T_NLO plays the role of the QED correction to the LO squared matrix element (if we were to replace T_NLO with T_LO in equation (3.8), we would recover the leading-order equilibrium neutrino production rates), and can be written in terms of the one-loop photon self-energy Π^μν_ab, where P denotes the photon momentum. Since T_NLO has the interpretation of a squared matrix element, we can split it into a vacuum and a thermal part according to temperature dependence, T_NLO = T_vac + T_th. The vacuum correction T_vac has no intrinsic temperature dependence, in the sense that it makes no explicit reference to the temperature or to any phase space distribution. It is simply the correction to the weak matrix elements arising from the interference of the closed
fermion loop diagram (d) with the LO graph in standard T = 0 quantum field theory, and can be expressed in terms of the vacuum photon self-energy, where α_em is the electromagnetic fine-structure constant, and the form factor Π_2 is defined in appendix C; see equation (C.7). The simplicity of the expression follows from the fact that the integration domain is symmetric under the interchange p_3 ↔ p_4. This symmetry, along with momentum conservation, also ensures the absence of all antisymmetric terms containing Levi-Civita symbols arising from traces of γ_5 with four or more γ-matrices. Vacuum QED corrections to the neutrino-electron interaction matrix element were previously computed in reference [33].
The thermal correction T_th, on the other hand, depends explicitly on the equilibrium phase space distributions f_F or f_B of the internal particles ("F" for Fermi-Dirac; "B" for Bose-Einstein), and can be thought of as a temperature-dependent correction to the squared matrix element. This T-dependence originates in the thermal part of the "11" propagator, which is proportional to δ(p² − m²) f_{F/B}(|p_0|) (see appendix A) and, where it is applied, effectively puts an internal line of the closed fermion loop diagram (d) in figure 1 on-shell. Purely from counting, there are altogether seven possible ways to put one, two, or all three internal lines of diagram (d) on-shell. Not all combinations contribute to T_NLO: terms proportional to f_B correspond to putting the photon line on-shell, which is forbidden for diagram (d) for kinematic reasons [49,51]. Similarly, putting both internal electrons on-shell leads to a purely imaginary contribution that is irrelevant to the real part of the self-energy we wish to compute. The only two surviving combinations correspond to putting either internal electron line on-shell, and are proportional to f_F(|k_0|) and f_F(|k_0 + P_0|), respectively.
Observe that T_NLO contains a t-channel contribution from elastic ν_α e scattering (i.e., p^0_2 < 0) that is logarithmically divergent for soft photon momenta. This divergence arises because the finite-temperature photon self-energy scales not as P² as in vacuum, but as T² in the hard-thermal-loop limit, so that it no longer compensates for the 1/P² behaviour of the photon propagator. In addition, soft photons are Bose-enhanced. To remedy the problem, we replace the tree-level photon propagator D^{cd}_{µν} in equation (3.10) with the fully resummed photon propagator D̃^{ab}_{µν}. Furthermore, because both D̃^{ab}_{µν} and Π^{νσ}_{11} split into a longitudinal ("L") and a transverse ("T") part, the same decomposition applies also to T_th, i.e., T_th = T^L_th + T^T_th, where T^{L,T}_th can be brought into a form involving the retarded resummed photon propagator D̃_R and the retarded thermal photon self-energy Π_{R,T≠0}, which comprises the f_F(|k_0|) and f_F(|k_0 + P_0|) terms described above. We have employed the shorthand notation P^{i,j}_{L/T} = P^{µν}_{L/T} p_{i,µ} p_{j,ν}, and a_{L,T} = (3 ∓ 1)/4 further differentiates between the longitudinal and the transverse contribution.
Note that the imaginary part of the photon propagator, Im D̃^{L,T}_{11}, does not appear in equation (3.13) because it is formally of higher order in α_em and we are only interested in the O(α_em) corrections. We also only use the resummed photon propagator in the IR-divergent t-channel contribution; where the divergence is absent, i.e., in the s-channel, we set D̃_R → D_R, where D_R is the un-resummed counterpart of D̃_R. The final expressions for Re Π_11, details of their derivation, and the relevant thermal integrals can be found in appendices B and C.
Numerical results for the NLO neutrino interaction rate
We evaluate the self-energy correction (3.8), and hence the correction to the neutrino damping rate (3.4), by numerically integrating over the two remaining momenta l and q in (3.8) using the parameterisation shown in appendix D. We adopt the experimentally determined values given by the Particle Data Group [52] for the following input parameters:
• Electron mass: m_e = 0.510 998 950 00(15) MeV,
• Electromagnetic fine-structure constant: α^{-1}_em(0) = 137.035 999 180(10), and
• Weinberg angle: sin²θ_W(0)_MS̄ = 0.238 63(5).
The renormalisation scale µ_R appearing in the photon self-energy of the vacuum contribution is identified with the electron mass, µ_R = m_e. Figure 3 shows the closed fermion loop corrections to the damping rates Γ_e(p) and Γ_{µ,τ}(p) at the mean momentum p = 3.15T. Relative to their respective LO rates, we find the corrections at temperatures T ∼ 1 → 3 MeV to fall in the range −0.2 → +0.1% and −0.0005 → +0.0002%, respectively, for ν_e and ν_{µ,τ}. We further note that:
1. At T ∼ 2 MeV, the vacuum and the thermal contributions to the correction are roughly equal in magnitude (green vs blue lines in figure 3), in contrast to the findings of [30], where finite-temperature corrections were determined to be subdominant. We note, however, that a direct comparison is not possible because a different set of diagrams was investigated in [30], namely (a), (b), and (c) in figure 1, as opposed to the type (d) corrections considered in this work.
2.
Reference [30] also found no significant flavour dependence in the rate corrections: their Γ_e and Γ_{µ,τ} corrections differ by less than 1% in the temperature regime around neutrino decoupling. Our corrections, on the other hand, are strongly flavour-dependent, by more than two orders of magnitude, and trace their origin to the fact that electron neutrinos experience charged-current interactions while the muon- and tau-flavoured neutrinos interact only via the neutral current with the e± thermal bath. This difference renders the corresponding vector couplings, g^e_V ∼ 0.49 and g^{µ,τ}_V ∼ −0.012, very roughly two orders of magnitude apart from one another. This strong flavour dependence in the rate corrections was also seen in reference [32].
3. We have computed the NLO contributions to the interaction rates in two different approximations: (i) retaining the full dependence on the electron mass (solid lines in figure 3), and (ii) in the limit m_e → 0 (dashed lines). The massless calculation aims to quantify the effect of the m_e = 0 approximation used in reference [32], along with the Hard Thermal Loop (HTL) approximation of the photon propagator. We observe that the error from neglecting m_e is relatively minor for T ≳ 3m_e, but becomes sizeable at low temperatures. In particular, in the limit T → 0 the ratio Γ^{NLO}_α/Γ^{LO}_α vanishes for m_e > 0, but diverges for m_e = 0 because of the Boltzmann suppression of the LO rates. Precision computations of N^{SM}_eff track the evolution of neutrinos down to temperatures much below m_e [28,29]. Thus, although it is commonly understood that (electron) neutrino decoupling occurs at relativistic temperatures T ∼ (2 → 3) × m_e, what we assume for m_e in the NLO rate computations may yet have some impact on N_eff (see section 4.2).
Figure 4 focuses on the t-channel contribution to the interaction rate, where the enhancement near the photon mass shell occurs. We show four sets of results, based on the resummed photon propagator detailed in appendix C: (i) the complete one-loop result including a finite m_e everywhere, (ii) the complete one-loop result in the limit m_e → 0, (iii) using the HTL photon propagator (which does not depend on the electron mass), but retaining m_e everywhere else, and (iv) using the HTL photon propagator and setting m_e = 0 everywhere. As expected, we observe that (ii) and (iv) match to a very good approximation. Indeed, since the scattering rates are dominated by the kinematic region around the t-channel singularity where photons are soft, the 1PI-resummed propagator is well approximated by the HTL one when, in addition, we set m_e = 0. On the other hand, visible differences can be discerned between (i) and (iii) at T ≲ 1 MeV, which can be explained by the fact that the HTL approximation only holds for T ≫ m_e. In the lower panel of figure 4, we highlight the impact of m_e by displaying the ratio of (i) to (iv).
NLO decoupling temperatures
The lower panels of the two plots in figure 3 show the LO electron neutrino and muon/tau neutrino interaction rates juxtaposed with the Hubble expansion rate. The latter is given by H = (ρ_tot/3M²_Pl)^{1/2}, where M_Pl = 2.435 36 × 10^21 MeV is the reduced Planck mass and ρ_ν = 3 g_ν (7/8)(π²/30) T⁴ is the energy density in three families of neutrinos with g_ν = 2. The resulting shifts in the decoupling temperatures correspond to an increase of ∼0.04% for ν_e and of ∼8 × 10^{−7}% for ν_{µ,τ}. Around the muon neutrino decoupling temperature, the vacuum and thermal contributions to the NLO rates approximately cancel, explaining the smallness of the correction to T^{NLO}_{d(µ,τ)}. Given that [32] computed the NLO weak rates assuming m_e = 0, it is also of interest to study how such an assumption modifies the decoupling temperature shifts; using our m_e = 0 rate corrections, we find the corresponding NLO decoupling temperatures (3.17).
NLO effects on N^{SM}_eff
Having computed the closed fermion loop correction to the damping rate Γ_α, we are now in a position to estimate its effect on N^{SM}_eff. Within the damping approximation, two avenues are available to us:
• Given Γ_α at a representative momentum, we have already estimated in equation (3.16) the corresponding correction to the neutrino decoupling temperature T_d, defined via Γ_α(T_d) = H(T_d). Then, under the assumption of instantaneous neutrino decoupling, an estimate of the change to N^{SM}_eff, δN_eff ≡ N^{NLO}_eff − N^{LO}_eff, can be obtained via entropy conservation arguments.
• We may also compute δN_eff by solving directly the continuity equation (2.2) and the QKEs (2.3) in the damping approximation (2.6), neglecting neutrino oscillations.
We consider both approaches in the following. The corresponding estimates for δN_eff are summarised in table 2.
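The defining condition Γ_α(T_d) = H(T_d) can be illustrated with a minimal numerical sketch. We stress that this is our own toy model, not the paper's computation: it assumes a schematic weak rate Γ ∼ A G_F² T⁵ with an assumed O(1) prefactor A, a radiation-era Hubble rate with an ideal-gas g_* = 10.75, and the reduced Planck mass quoted in the text:

```python
import math

# Toy sketch: locate T_d from Gamma(T_d) = H(T_d) by bisection.
# Gamma ~ A * G_F^2 * T^5 with an ASSUMED O(1) prefactor A (illustrative only);
# H = sqrt(g_* pi^2 / 90) * T^2 / M_Pl in the radiation era.
G_F = 1.1663787e-11     # Fermi constant in MeV^-2
M_PL = 2.43536e21       # reduced Planck mass in MeV, as quoted in the text
G_STAR = 10.75          # photons + e± + three neutrino species (ideal gas)
A = 1.0                 # assumed O(1) prefactor, for illustration only

def hubble(T):
    return math.sqrt(G_STAR * math.pi**2 / 90.0) * T**2 / M_PL

def gamma_weak(T):
    return A * G_F**2 * T**5

def decoupling_temperature(lo=0.1, hi=10.0, tol=1e-10):
    # Gamma/H ~ T^3 grows with T, so bisect on the sign of Gamma - H.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gamma_weak(mid) > hubble(mid):
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

T_d = decoupling_temperature()
print(f"T_d ≈ {T_d:.2f} MeV")  # of order the MeV-scale decoupling quoted in the text
```

With these assumptions the sketch lands in the familiar 1-2 MeV ballpark; the paper's actual decoupling temperatures follow from the full momentum-dependent rates.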
Entropy conservation
The entropy conservation argument posits that the entropies in a comoving volume of two decoupled sectors are separately conserved, where s_{ν_α} and s ≡ s_γ + s_e + Σ_{β≠α} s_{ν_β} + ··· denote, respectively, the entropy density of a decoupled neutrino species ν_α and of the QED plasma plus any neutrino species ν_β that may still be in equilibrium with it, and the scale factors a_{1,2} represent two different times after decoupling. We take a_1 to correspond to the time immediately after ν_α decouples instantaneously from the QED plasma, i.e., a_1 = a^+_d, where the sector temperatures satisfy T(a_1) = T_{ν_α}(a_1) ≡ T_d, while a_2 > a_1 is some later time to be specified below.
The assumption of instantaneous decoupling allows the neutrinos to maintain to an excellent approximation an ideal-gas equilibrium phase space distribution at all times, so that s_{ν_α}(a) ∝ T³_{ν_α}(a). It then follows simply from (4.1) that T_{ν_α}(a_2) = (a_1/a_2) T_{ν_α}(a_1) = (a_1/a_2) T_d. In the case of the QED+ν_β plasma, we parameterise its entropy density in terms of the entropy degree of freedom parameter g_s, which is given in the ideal-gas limit with g_γ = 2, g_e = 4, and g_{ν_β} = 2. For our current purpose of estimating δN_eff due to NLO contributions to the weak rates, it suffices to use the ideal-gas g_s. We do note, however, that the QED entropy density at MeV temperatures is subject in principle to a sizeable finite-temperature correction to the QED equation of state, which needs to be included in precision N^{SM}_eff calculations; see, e.g., reference [21] for details. Then, combining the above, we find an estimate of the neutrino-to-photon temperature ratio, T_{ν_α}(a_2)/T(a_2) = [g_s(a_2)/g_s(a_1)]^{1/3}, at the later time a_2. We use this temperature ratio in the following to provide two estimates of δN_eff due to the rate corrections.
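The chain of steps in the entropy-conservation argument can be summarised as follows; this is a sketch consistent with the factors quoted in the text, with the standard ideal-gas normalisation 2π²/45 supplied by us:

```latex
% Separate comoving entropy conservation in the two decoupled sectors:
s_{\nu_\alpha}(a_1)\, a_1^3 = s_{\nu_\alpha}(a_2)\, a_2^3 , \qquad
s(a_1)\, a_1^3 = s(a_2)\, a_2^3 .
% With s_{\nu_\alpha} \propto T_{\nu_\alpha}^3 and
% s = \tfrac{2\pi^2}{45}\, g_s(a)\, T^3(a), these imply
T_{\nu_\alpha}(a_2) = \frac{a_1}{a_2}\, T_d , \qquad
\frac{T_{\nu_\alpha}(a_2)}{T(a_2)}
  = \left[ \frac{g_s(a_2)}{g_s(a_1)} \right]^{1/3} .
```

With g_s(a_2) = 2 and g_s(a_1) → 11/2 in the limit T_d ≫ m_e, the ratio reduces to the familiar (4/11)^{1/3}.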
Common decoupling temperature
In the first estimate, we assume all neutrino flavours to decouple effectively at the same time, an assumption that may to some extent be justified by the observed large mixing in the neutrino sector. We choose a_1 = a^+_d to correspond to the time immediately after ν_e decoupling, and set a_2 at a time significantly after e± annihilation, where T(a_2) ≪ m_e. The latter leads immediately to an entropy degree of freedom of g_s(a_2) = g_γ = 2, while for the former the e± contribution must also be retained. Then, using the temperature ratio (4.4) and the ideal-gas relations ρ_γ ∝ g_γ T⁴ and ρ_{ν_α} ∝ (7/8) g_{ν_α} T⁴_{ν_α}, we find the energy density ratio or, equivalently, N_eff, where the factor 4/11 corresponds to 2/g_s(T_d) evaluated in the limit T_d ≫ m_e. Using the LO and NLO electron neutrino decoupling temperatures in (3.15) and (3.16), respectively, we find a fractional shift of δN_eff/N^{LO}_eff ≃ −1.1 × 10^{−5} due to the rate corrections. Had we instead used the m_e = 0 NLO decoupling temperatures (3.17), the corresponding shift in N_eff would have been δN^{m_e=0}_eff/N^{LO}_eff ≃ −1.2 × 10^{−5}. Thus, while setting m_e = 0 leads to a ∼10% change in the estimate of δN_eff, its ultimate impact on N^{SM}_eff appears to be small, at least within the entropy conservation picture. These results are summarised in table 2.
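The arithmetic of the common-decoupling estimate can be sketched numerically. Under instantaneous decoupling, the temperature ratio (4.4) with g_s(a_2) = 2 implies N_eff = 3 [(11/2)/g_s(T_d)]^{4/3}, where g_s(T_d) = 2 + (7/2) F(m_e/T_d) carries the finite-m_e electron entropy. The function F below and the toy decoupling temperature are our own illustrative choices, not the paper's equations:

```python
import math

def electron_entropy_fraction(x, umax=40.0, n=4000):
    """F(x) = s_e(m)/s_e(0) for x = m/T, using s = (rho + p)/T.
    In u = k/T and eps = E/T: rho-term u^2*eps*f, p-term u^4/(3*eps)*f,
    with f = 1/(exp(eps) + 1). Massless value: 7*pi^4/90."""
    du = umax / n
    total = 0.0
    for i in range(1, n + 1):
        u = i * du
        eps = math.sqrt(u * u + x * x)
        f = 1.0 / (math.exp(eps) + 1.0)
        total += (u * u * eps + u**4 / (3.0 * eps)) * f * du
    return total / (7.0 * math.pi**4 / 90.0)

def neff(T_d, m_e=0.510998950):
    # g_s of the photon + e± plasma at decoupling (common decoupling of all nu).
    g_s = 2.0 + 3.5 * electron_entropy_fraction(m_e / T_d)
    return 3.0 * ((11.0 / 2.0) / g_s) ** (4.0 / 3.0)

# A slightly LATER decoupling (higher T_d) pushes g_s toward 11/2 and N_eff
# toward 3, so a small positive shift in T_d lowers N_eff, matching the
# negative sign of delta N_eff found in the text.
print(neff(1.5), neff(1.5 * 1.0004))
```

For T_d ≫ m_e the electrons are fully relativistic, g_s → 11/2, and the sketch recovers N_eff → 3 exactly as the text's 4/11 factor requires.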
Flavour-dependent decoupling
In the absence of neutrino oscillations, ν_e and ν_{µ,τ} decouple at different temperatures, T_{d(e)} and T_{d(µ,τ)} > T_{d(e)}. Then, to estimate δN_eff requires that we consider entropy conservation across four epochs: the time immediately after ν_{µ,τ} decoupling, a_1 = a^+_{d(µ,τ)}; immediately before ν_e decoupling, a_2 = a^−_{d(e)}; immediately after ν_e decoupling, a_3 = a^+_{d(e)}; and a time significantly after e± annihilation, a_4. The corresponding entropy degrees of freedom follow analogously. From table 2, we note that the flavour-dependent decoupling estimates of δN_eff/N^{LO}_eff are generally about a factor of two smaller, in both the m_e = 0 and m_e ≠ 0 cases. This difference is to be expected, as the QED corrections to the ν_{µ,τ} interaction rates are negligible compared with the corrections to the ν_e rates. We emphasise, however, that both estimates are very crude approximations: the true shift in δN_eff will probably fall somewhere in between.
Solving the neutrino Boltzmann equations and the continuity equation
Following reference [20], we introduce the comoving quantities x ≡ m_e a, y ≡ a p, and z ≡ T a, and rewrite the continuity equation (2.2) as a differential equation for the quantity z (corresponding to the photon-to-neutrino temperature ratio), where J(x/z) and Y(x/z) describe the ideal-gas behaviour of the QED plasma, while the G_{1,2}(x/z) account for deviations of the QED plasma from an ideal gas. Explicit forms for these expressions to O(e²) can be found in appendix E.
Equation (4.12) also requires as input the total time derivative of the comoving neutrino density ρ̄_ν ≡ ρ_ν a⁴, which can generally be constructed from the neutrino density matrices, where d{ϱ(x, y)}_{αα}/dx corresponds in general to the QKEs (2.3). Neglecting neutrino oscillations, the density matrices ϱ are diagonal in the flavour basis, in which case the QKEs (2.3) simplify to a set of classical Boltzmann equations, which we combine with the damping approximation (2.6) and rewrite in terms of the new variables. Equation (4.14) can be solved together with the continuity equation (4.12) for a range of momenta y covering the bulk of the neutrino population (a typical range would be y ∈ [0.01, 30]). We call this the "full-momentum" approach. Alternatively, we can simplify equation (4.14) further by adopting the ansatz Γ_α(y) = Γ_α(⟨y⟩). Identifying ⟨y⟩ with the mean momentum mode y_0 = 3.15 z(T_0), where T_0 is the photon temperature at initialisation (which is equal to the neutrino temperature), we can rewrite equation (4.13) in this approximation accordingly. We call this alternative the "mean-momentum" approach.
Irrespective of whether we use the full-momentum or the mean-momentum approach, the final N_eff can be estimated from the solutions for ρ_α and z at x → ∞ using the definition of N^{SM}_eff in equation (2.1), rewritten in terms of the rescaled variables. Table 2 shows our estimates of δN_eff/N^{LO}_eff due to QED corrections to the neutrino interaction rates using both the full-momentum and the mean-momentum approaches, with or without the electron mass in the correction.
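The factor 3.15 in the mean-momentum mode y_0 = 3.15 z(T_0) is the average momentum ⟨p⟩/T of a massless Fermi-Dirac distribution, ⟨u⟩ = ∫u³f du / ∫u²f du = (7π⁴/120)/((3/2)ζ(3)) ≈ 3.151. A quick numerical check (our own sketch):

```python
import math

# Mean momentum <p>/T of a massless Fermi-Dirac gas (zero chemical potential):
# <u> = int u^3/(e^u+1) du / int u^2/(e^u+1) du over u = p/T in [0, inf).
def fd_moment(power, umax=60.0, n=6000):
    du = umax / n
    total = 0.0
    for i in range(1, n + 1):
        u = i * du
        total += u**power / (math.exp(u) + 1.0) * du
    return total

mean_u = fd_moment(3) / fd_moment(2)
print(f"<p>/T = {mean_u:.3f}")  # close to the 3.15 used in the text
```

The same moment ratio is what justifies evaluating the rates at p = 3.15T in figures 3 and 4.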
Evidently from table 2, both the full-momentum and the mean-momentum estimates for m_e ≠ 0, δN_eff/N^{LO}_eff ≃ (−7.8 → −7.9) × 10^{−6}, are broadly consistent with their counterparts obtained in section 4.1 from entropy arguments: in fact, they sit between the common-decoupling and the flavour-dependent decoupling estimates from entropy considerations. In contrast, the estimates assuming m_e = 0 differ by ∼50% between the full-momentum (δN^{m_e=0}_eff/N^{LO}_eff ≃ −2.6 × 10^{−5}) and the mean-momentum (δN^{m_e=0}_eff/N^{LO}_eff ≃ −1.6 × 10^{−5}) approaches, and are furthermore ∼30% to a factor of four larger than those from entropy arguments. This result is consistent with expectations. As we have seen previously in figures 3 and 4, the rate correction under the m_e = 0 assumption diverges as T → 0 relative to the LO rate, whereas its m_e ≠ 0 counterpart tends to zero. Since neutrino decoupling in the early universe is not instantaneous but extends into the e±-annihilation epoch at T ∼ m_e, any estimate of δN_eff that accounts for non-instantaneous decoupling will to some extent be sensitive to exactly what we assume for m_e in the T ≲ 3m_e region. Indeed, the low-T effect of m_e on δN_eff/N^{LO}_eff is not captured by entropy conservation arguments, which are based on a single point estimate (the decoupling temperature) at T > m_e, where the m_e ≠ 0 and m_e = 0 rate corrections differ by less than 10%. In contrast, the full-momentum approach, which is sensitive to the largest range of temperatures, yields the strongest dependence of δN_eff/N^{LO}_eff on what we assume for the electron mass. Thus, while the overall effect of the rate corrections on N^{SM}_eff is small, we conclude that neglecting m_e in their computation is, strictly speaking, not a good approximation in high-precision calculations of N^{SM}_eff.
Conclusions
In this work, we have computed the QED corrections to the neutrino-electron interaction rate in the vicinity of neutrino decoupling and evaluated their impact on the standard-model value of the effective number of neutrinos, N^{SM}_eff. We have focused on diagram (d) in figure 1, because of the expectation of a t-channel enhancement and hence dominance over the other three diagrams. Similar corrections have also been recently investigated in Cielo et al. [30] and Jackson and Laine [32], of which the former analysis [30] found them to lead to a significant shift in N^{SM}_eff: from the benchmark value of 3.044 [28,29] to 3.043. Contrary to [30], our first-principles calculations show that QED corrections to the neutrino interaction rates are modest. In the temperature range T ∼ 1 → 3 MeV, the corrections to the electron neutrino interaction rate fall in the range −0.2 → +0.1% relative to the LO rate, while for the muon and tau neutrinos the effect is even more minute: −0.0005 → +0.0002%. These results are consistent with those reported by Jackson and Laine [32], despite differences between the formalisms (imaginary vs real-time) and some of the approximations (zero vs finite m_e) used. The more than two orders of magnitude difference between the relative corrections to the ν_e and the ν_{µ,τ} rates also confirms the strong flavour dependence found in [32], which was not observed in Cielo et al. [30].
Using our QED-corrected neutrino interaction rates, we proceeded to estimate the corresponding shift in N^{SM}_eff under a variety of approximations and methods: via entropy conservation arguments, which assume instantaneous decoupling, and by solving the Boltzmann equation in the damping approximation. Depending on the exact method/approximation used, we find the relative change in N^{SM}_eff to fall in the range δN_eff/N^{LO}_eff ≃ (−0.5 → −1.1) × 10^{−5}. That is, relative to the current SM benchmark of N^{SM}_eff = 3.0440 ± 0.0002 [28,29], QED corrections to the neutrino interaction rates can at best shift the number in the negative direction in the fifth decimal place, and are hence completely within the quoted uncertainties. Thus, while we can confirm the sign of δN_eff computed by Cielo et al. [30], even our most "optimistic" estimate is a factor of 30 smaller than their claimed correction.
It is also interesting to observe that setting m_e = 0 in the rate corrections can have an O(1) impact on δN_eff/N^{LO}_eff, even though the rate corrections themselves at T ∼ 1 → 3 MeV differ by less than 10%. This is because corrections assuming m_e = 0 deviate from their m_e ≠ 0 counterparts at T ≲ 3m_e, and diverge relative to the LO rates just as the m_e ≠ 0 corrections vanish in the T → 0 limit. Since neutrino decoupling in the early universe is not instantaneous, these T ≲ 3m_e effects will imprint on N^{SM}_eff despite the common understanding that neutrino decoupling occurs at T ∼ 1 MeV. Thus, while the overall contribution from QED weak rate corrections to N^{SM}_eff is ultimately small, we argue that neglecting m_e in their computation is strictly speaking not a good approximation in high-precision N^{SM}_eff calculations. In conclusion, our results strongly suggest that the SM benchmark value N^{SM}_eff = 3.0440 ± 0.0002 [28,29] is correct within the quoted uncertainties. Naturally, a full numerical solution of the QKEs by a dedicated neutrino decoupling code such as FortEPiaNO [26], with
all NLO contributions from diagrams (a) to (d) incorporated in the collision integral, would be highly desirable. However, short of a new and previously unaccounted-for effect, we believe it is unlikely that a more detailed investigation of NLO effects on the neutrino interaction rate will yield a deviation from the existing SM benchmark N^{SM}_eff that is large enough to be of any relevance for cosmological observations in the foreseeable future.
A Bosonic and fermionic propagators at finite temperatures
The fermionic propagators are given in terms of the Fermi-Dirac distribution, while the ones for bosons involve the Bose-Einstein distribution f_B(E) = 1/(e^{E/T} − 1). For the computation of the resummed photon propagator, it is useful to write down the advanced and retarded bosonic propagators, as well as the statistical propagator, whose relation to the spectral function is another way of stating the KMS relation. In terms of the scalar propagators, the tree-level photon propagator is given in terms of the polarisation tensor, which we define for a general R_ξ gauge.
B Thermal integrals
Throughout the calculations we encounter thermal integrals of the forms K^µ and K^{µν}. To evaluate them, we choose P as the reference direction with the corresponding unit vector P̂, and parameterise the integration variable k such that, together, P̂ and the two unit vectors e_1 and e_2 form a complete orthogonal basis. The contributions from e_1 and e_2 to the spatial components of K^µ vanish after the azimuthal integration, leaving integrals over ω = |k_0| = (|k|² + m²_e)^{1/2} of functions that arise from the integration over cos ϑ, together with the remaining, temporal component of K^µ. For K^{µν}, after application of the completeness relation δ^{ij} = P̂^i P̂^j + Σ_{k=1,2} e^i_k e^j_k, the spatial components can be written in terms of two auxiliary functions ℓ_1(ω, P) and ℓ_2(ω, P); the remaining components with at least one index equal to zero take the form ∫ dω (ω/2P_0) [ω ℓ_1(ω, P) + P² ℓ_2(ω, P)] f_F(ω) (B.8). In all cases, the remaining single integral over ω can, for our purposes, be evaluated numerically efficiently.
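As an illustration of the last remark, that the remaining single ω-integral is cheap to evaluate numerically, the following sketch evaluates a representative Fermi-Dirac integral of the kind encountered here, ∫_0^∞ dk k² f_F(√(k² + m_e²)), and checks its massless limit (3/2)ζ(3)T³. The specific integrand is our own illustrative choice, not one of the paper's K^µ components:

```python
import math

def thermal_integral(m, T, kmax_over_T=50.0, n=5000):
    """I(m, T) = int_0^inf dk k^2 / (exp(sqrt(k^2 + m^2)/T) + 1),
    a representative single remaining momentum integral with
    omega = sqrt(k^2 + m^2)."""
    kmax = kmax_over_T * T
    dk = kmax / n
    total = 0.0
    for i in range(1, n + 1):
        k = i * dk
        omega = math.sqrt(k * k + m * m)
        total += k * k / (math.exp(omega / T) + 1.0) * dk
    return total

m_e, T = 0.510998950, 1.0                    # MeV
massless = 1.5 * 1.2020569031595943 * T**3   # (3/2) * zeta(3) * T^3
print(thermal_integral(m_e, T) / massless)   # suppression from the finite mass
```

A few thousand grid points already give sub-permille accuracy, consistent with the remark that these single integrals pose no numerical difficulty.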
C 1PI-resummed photon propagator
A crucial ingredient in our calculation is the resummed photon propagator used in our NLO thermal matrix element (3.13). In this appendix, we provide an exact derivation of the photon self-energy at the one-loop level within the real-time formalism, assuming electrons in thermal equilibrium with zero chemical potential. From there, we extract the resummed Feynman propagator. While exact expressions for this resummed photon propagator have long been known in the simplified scenarios wherein m_e = 0 [53] or in the HTL approximation (see, e.g., [54,55]), here we remove these assumptions and derive a propagator valid for all values of the electron mass m_e and photon 4-momentum P. We note that, while reference [49] also studied the photon propagator without HTL approximations (and accounting for nonzero electron chemical potentials), no explicit formulae were provided. We decompose the self-energy into longitudinal and transverse parts, with the corresponding projectors [36,56]
P^{00}_T = P^{0i}_T = P^{i0}_T = 0, P^{ij}_T = −δ^{ij} + P̂^i P̂^j, P^{µν}_L = g^{µν} − P^µ P^ν/P² − P^{µν}_T.
(C.3)
Writing the self-energy in this form also makes manifest that Π^{µν}_{ab} fulfils the Ward-Takahashi identity P_µ Π^{µν}_{ab} = 0. The longitudinal projector can alternatively be expressed through the heat-bath four-velocity U^µ = δ^µ_0 (in its rest frame) as P^{µν}_L = Ũ^µ Ũ^ν/Ũ², with Ũ^µ = P² U^µ − (U · P) P^µ, so that the transverse projector becomes P^{µν}_T = g^{µν} − P^µ P^ν/P² − P^{µν}_L. In the following, we only highlight the derivation of Π_11 and Π_12, as the remaining self-energies can be related to the first two via the bosonic KMS relation. We split the diagonal components into a renormalised vacuum part and a finite-temperature part Π^{µν}_{11,T≠0} (Π^{µν}_{22,T≠0}). The renormalised vacuum part Π^{µν}_2(P) = (α_em/π)(P² g^{µν} − P^ν P^µ) Π_2(P²) is textbook material and, in the MS̄ scheme, can be written in terms of τ = 4m²_e/P², with a simple limit for m_e → 0 (see, e.g., appendix A of [57] for the derivation). In practice, we note that the vacuum contribution to the photon self-energy is numerically irrelevant when resumming the photon propagator, as it vanishes in the soft-photon-exchange limit, for which P² = 0. We can therefore safely neglect it in the resummed propagator.
In terms of the integrals K^µ and K^{µν} computed in appendix B, the real parts of the two diagonal components can be written down directly. After collecting all the terms according to the projectors defined in equation (C.3), we obtain the longitudinal part in terms of the thermal photon mass m_γ = eT/3. Here, the symbol "HTL≈" refers to the result obtained in the HTL approximation [58], defined through the assumption that the quantities in the integral can be separated into hard, O(πT), and soft, O(eT), scales, and that hard momenta k dominate the loop integral. The external momentum P, on the other hand, is defined to be soft and therefore also negligible compared to the loop momentum. More precisely, to go from the fully momentum- and mass-dependent expression (C.9) (or (C.11)) to the HTL result (C.10) (or (C.12)), we have expanded in P_0/|k| in the first step and then in P_0/|P| in the second step, working consistently in the same expansion scheme. We further assume that m_e ≪ T, such that the electron can be treated as effectively massless. This leaves us with an integral over ω that can be performed analytically, thereby yielding the HTL result (C.10) (or (C.12)).
The (purely imaginary) off-diagonal component Π^{µν}_{12}, on the other hand, is obtained analogously; the minus sign from the fermionic loop and the one arising from the fact that we have a type-1 and a type-2 vertex cancel out. For the evaluation of the integrals, we make use again of the decomposition of k shown in equation (B.2). The additional δ-function here (compared to the diagonal case) then fixes the polar angle ϑ. With that, T^{µν} separates into a transverse and a longitudinal part. The final result (C.18) involves a sum over positive and negative energies, k_0 = ±ω, and simplifies further in the HTL limit. As will be explained in the subsequent section, it is convenient to compute the retarded and advanced photon self-energies Π_{R/A} in order to perform the resummation of the photon propagator. Using the relation Im Π_11 = (i/2)(Π_12 + Π_21), we can write the advanced and retarded self-energies in terms of the components computed above. Our results are in agreement with [54] after carefully sending the time-ordering parameter ϵ to zero. In the longitudinal part, we differ by a term P²_0/|P|² with respect to [54], but agree with [55]. We note in passing that the self-energy (C.18) leads to the unphysical process of photon decay, γ → e⁺e⁻, at high enough temperatures, where m_γ exceeds 2m_e [49]. In practice, this could be resolved by resumming the electron propagator. In our case, however, the relevant dynamics happen at temperatures much below that threshold, such that this resummation is not necessary.
In figure 6, we display the finite-temperature contribution to the real and the imaginary parts of the retarded transverse and longitudinal propagators, for T = 2m_e and various choices of |P|/T. We compare the exact one-loop photon self-energy to (i) the equivalent self-energy in the limit m_e → 0, and (ii) the equivalent self-energy in the HTL limit. Similarly, in figure 7, we examine the impact of the temperature on the photon self-energy. We observe that the effect of the finite electron mass m_e, while small for large temperatures T/m_e ∼ 5, remains sizeable for temperatures around the electron neutrino decoupling temperature, where T/m_e ∼ 2. Beyond the photon self-energy, other quantities of interest are the residue of the transverse and longitudinal poles. (Longitudinal photons are also sometimes referred to as "plasmons" in the literature. It is understood that (C.34) should be replaced by the causality-respecting ϵ-prescription if the imaginary part of the photon self-energy vanishes for kinematic reasons at the given order in perturbation theory.)
E Functions in the continuity equation
The continuity equation (4.12), written in terms of the rescaled variables x, y, z, contains the functions J(x/z), Y(x/z), and G_{1,2}(x/z). We give the explicit forms of these functions here to O(e²) [20] (E.1).
Figure 1: The four qualitatively different QED corrections to the neutrino interaction rate corresponding to (a) modification of the electron dispersion relation, (b) virtual photon exchange, (c) thermal photon emission and absorption, and (d) corrections with a closed fermion loop. All diagrams are schematic in that all time directions are possible.
Table 1: Standard-model corrections to N^{SM}_eff and their leading-digit contributions:
• m_e/T_d correction: +0.04
• O(e²) FTQED correction to the QED EoS: +0.01
• Non-instantaneous decoupling + spectral distortion: −0.006
• O(e³) FTQED correction to the QED EoS: −0.001
• Flavour oscillations: +0.0005
• Type (a) FTQED corrections to the weak rates: ≲ 10^{−4}
Figure 2: The three-loop diagram containing the closed fermion loop. Explicitly labelled are the external (a and b) and summed (c and d) real-time contour indices, the momenta (k, l, p and q), and the particles (e and ν_α).
The self-energy (3.5)-(3.6) can be brought into a form in which Re Π^{L/T}_{T≠0} and Re D̃^{L/T}_{11} are given in equations (C.10), (C.12), and (C.32); these can be easily mapped to Re Π^{L/T}_{R,T≠0} and Re D̃_R.
Figure 3: Top: NLO contributions to the ν_e interaction rate from the closed fermion loop with a finite electron mass (solid) or m_e = 0 (dashed) at different temperatures for the mean neutrino momentum p = 3.15T. For comparison, we normalise all curves to the LO rate (without QED corrections), which we always evaluate with m_e ≠ 0 to ensure a common normalisation. The total rate correction (red) is further split into the vacuum (blue) and the thermal contribution (green). At T ≫ m_e, the green curve flattens out, as the thermal correction contains no scale besides the temperature in this region, while the vacuum correction retains a mild dependence on the renormalisation scale µ_R. The lower panel shows the LO neutrino interaction rate compared to the Hubble rate, and T_d indicates the decoupling temperature, defined via Γ_α(T_{d(e)}) = H(T_{d(e)}). Bottom: same as top, but for ν_{µ,τ}.
Figure 4: Top: the t-channel contribution (l_0 > 0) to the NLO electron neutrino interaction rate at the mean momentum p = 3.15T in different approximations, namely, using 1PI-resummed (green) and HTL (magenta) photon propagators, in each case with m_e ≠ 0 (solid lines) or m_e = 0 (dashed lines) in both electron loops of the self-energy diagram of figure 2. As in figure 3, we normalise all curves to the LO rate, which we always evaluate with m_e ≠ 0. The lower panel shows the ratio of the 1PI result for m_e ≠ 0 to the HTL result for m_e = 0 as a function of the temperature. Bottom: same as top, but for α ≠ e.
Figure 5: One-loop contribution to the photon self-energy.
Figure 6: Comparison between the finite-temperature part of the photon self-energy at the one-loop level with (continuous lines) and without (dashed lines) the HTL approximation at T = 2m_e and for various choices of |P|/T.
Table 2: Estimates of the relative correction to N^{SM}_eff due to NLO weak rate corrections, with and without the electron mass, using different methods.
Using computations of the m_e = 0 rate corrections (but with m_e ≠ 0 LO rates), we find the corresponding NLO decoupling temperatures to be shifted by 0.05% and 6 × 10^{−6}% for ν_e and ν_{µ,τ}, respectively, relative to their corresponding LO decoupling temperatures (3.15). This translates into a shift in N_eff of δN_eff/N^{LO}_eff ≃ −5.4 × 10^{−6} if we use the m_e ≠ 0 corrections. The remaining entropy degrees of freedom involve F(E_e, T_{d(e)}) (4.8) and g_s(a_4) = 2. An estimate of the ν_e-to-photon energy density ratio at a_4 follows straightforwardly from the temperature ratio (4.4) and ideal-gas temperature-energy relations. For ρ_{ν_{µ,τ}}(a_4)/ρ_γ(a_4), we note that ν_e decoupling at a_{d(e)} introduces a discontinuity in g_s, thereby leading to a more complicated energy density ratio at a_4.
Identification of an anaerobic bacterial consortium that degrades roxarsone Abstract The degradation of roxarsone, an extensively used organoarsenic feed additive, occurs quickly under anaerobic conditions, with microorganisms playing an important role in its degradation. Here, an anaerobic bacterial consortium that effectively degraded roxarsone was isolated, and its degradation efficiency and community changes along a roxarsone concentration gradient under anaerobic conditions were assessed. We used batch experiments to determine the roxarsone degradation rates, as well as the bacterial community structure and diversity, at initial roxarsone concentrations of 50, 100, 200, and 400 mg/kg. The results showed that roxarsone was degraded completely within 28, 28, 36, and 44 hr at concentrations of 50, 100, 200, and 400 mg/kg, respectively. The anaerobic bacterial consortium displayed considerable potential to degrade roxarsone, as the degradation rate increased with increasing roxarsone concentrations. Roxarsone promoted microbial growth, and in turn, the microorganisms degraded the organoarsenic compound, with the functional bacterial community varying between different roxarsone concentrations. Lysinibacillus, Alkaliphilus, and Proteiniclasticum were the main genera composing the roxarsone-degrading bacterial community. When poultry litter is used as an organic fertilizer, arsenic pollution is likely to occur (Walrod, Burriss, Blue, Beck, & Atwood, 2016); for example, in 212 samples of animal manure-based compost collected in China, 13.7% exceeded arsenic limits (Yang et al., 2017). ROX is easily biodegraded to HAPA, but HAPA persists for long periods of time in the environment, increasing the risk of arsenic contamination (Shi, Wang, Yuan, & Hu, 2014; Zhang, Wang, Yuan, & Hu, 2014).
ROX can be transferred from the diet of chickens to rice plants, and soil attributes govern the phytoavailability of ROX metabolites to rice plants (Yao et al., 2016, 2019). Seasonal stratified analyses by poultry type strongly suggest that the historical use of arsenic-based poultry drugs contributes to arsenic exposure in the population of the United States (Nigra, Nachman, Love, Grau-Perez, & Navas-Acien, 2017). Similarly, in the Chinese province of Guangdong, both ROX and inorganic arsenic are detected at elevated levels in chicken tissues from live poultry markets, particularly the liver, and the overall health risk from dietary exposure to inorganic arsenic associated with chicken consumption is rather high (Hu, Zhang, Cheng, & Tao, 2017). Therefore, litter applied to soil as organic fertilizer can release ROX and arsenic into soil and groundwater, affecting both the environment and human health (Chen, 2007; Fisher, Yonkos, & Staver, 2015; Jackson et al., 2003; Morrison, 1969; Oyewumi & Schreiber, 2017; Wang, Chen, Sun, Gao, & Yu, 2006a; Wang & Liao, 2005). Because of the potential food safety and environmental risks, in 1999, 2011, and 2013, the European Union, Canada, and the United States, respectively, announced that they were scrapping or discontinuing ROX use. However, it continues to be used in many other countries (Fu, He, Gong, Blaney, & Zhou, 2016). In 2003, China used 1,200 tons of ROX across the livestock and poultry industries (Zhu, 2013), and together, these animals produced 3.19 billion tons of manure which could contribute to the release of arsenic species into the environment (Wang, Ma, et al., 2006b). The China pharmacopoeia bureau issued a strong suggestion in June 2017 discouraging the use of ROX, but most livestock and poultry breeding companies continue to use the chemical.
Degradation is faster under anaerobic conditions than under aerobic conditions, with ROX being completely degraded after 48 hr of dark anaerobic incubation, while only 79.9% and 94.5% were degraded after 288 hr of dark aerobic and light aerobic incubation, respectively (Liu, Zhang, Li, Wen, et al., 2017a). Light and microbial action are the main factors responsible for ROX degradation, which is also controlled by environmental factors such as moisture, temperature, and the organic content of the vadose zone (Fu, Blaney, & Zhou, 2017; Katherine et al., 2017; Sun, 2012). Some ROX-related bacteria have been reported (Stolz et al., 2007); however, the ROX-degrading bacterial community under anaerobic conditions and its degradation ability at different ROX concentrations have not been completely described. In this study, we isolated a stable anaerobic bacterial consortium that effectively degraded ROX and evaluated its response to the initial ROX concentration through high-throughput sequencing. Meanwhile, the ROX degradation rates of the stable bacterial consortium at different ROX concentrations under anaerobic conditions were determined. Ultimately, the relationship between ROX and the bacteria was revealed. | Poultry litter samples The poultry litter used for experiments was taken from a chicken farm (36°07′N, 115°49′E) in Yanggu County, northern Shandong plain, China. The ROX concentration of the poultry diet (34 mg/kg) was within the recommended concentration range (50 mg/kg). There were approximately 15,000 broiler chickens on the farm, and the poultry litter was perennially stored in the open air and regularly collected to use as fertilizer. The collected litter for experiments was stored at 4°C until further analysis. The collected litter was determined to have a concentration of 11.3 mg/kg of HAPA and no ROX, likely because of the rapid degradation rate of ROX.
| Enrichment of microbial communities degrading ROX The basal medium consisted of (per L) 4.2 g of NaHCO3, 0.095 g of MgCl2, 0.5 g of yeast extract, 10 mM lactate (Stolz et al., 2007), 10 ml of trace elements, and a vitamin mix. The pH was adjusted to 7.3. To prepare the slurry used for experiments, 5 g of chicken litter was suspended in 100 ml of sterile basal medium. The mixed solution was dispensed into a 50-ml headspace bottle containing 40 ml of medium, 1 ml of litter slurry, and 200 mg/kg ROX, aerated with oxygen-free N2, sealed, and incubated at 37°C in the dark. All experiments were performed in triplicate. The solution changed from yellow to colorless after 24 hr, and 1 ml of the solution was then transferred into a new 50-ml headspace bottle containing 40 ml of medium and 200 mg/kg ROX. This solution was then incubated for 24 hr under the same culture conditions. The above steps were repeated 10 times to attain a stable microbial community that effectively degraded ROX, which was then used as the inoculum in the next experiment. | ROX degradation along a concentration gradient under anaerobic conditions To evaluate the ability of the microbial community to degrade ROX along a concentration gradient, a 150-ml headspace bottle was filled with 150 ml of basal mineral medium and 1 ml of a mixed bacteria solution, which was obtained from the above experiment. Four ROX concentrations (50, 100, 200, and 400 mg/kg) were examined in batch experiments. In addition, a blank control was prepared without the addition of ROX, which was compared with the samples containing ROX to determine its influence on the diversity and abundance of the microbial community. Each test was conducted in triplicate. The medium and headspace were aerated with oxygen-free N2, sealed, and incubated at 37°C in the dark. Aliquots (1 ml) were taken every 4 hr and filtered through a 0.22-µm millipore membrane for ROX analysis.
Aliquots (1 ml) were taken every 2 hr to assess cell growth by measuring the optical density at 600 nm. At the end of the anaerobic incubation period, the microorganisms from different ROX concentrations were harvested by filtering the solution through a 0.22-µm millipore membrane. Then, the DNA was extracted for high-throughput sequencing. | Kinetic model First-order kinetic models are often used to describe the degradation of organic compounds (Gao, Zhang, Chen, Zheng, & Liu, 2010). Therefore, the biotransformation of ROX was characterized by a first-order kinetic model. | Amplification and sequencing of bacterial 16S rRNA genes High-throughput sequencing was conducted at Shanghai Personal Biotechnology Co., Ltd. Total bacterial genomic DNA samples were extracted using Fast DNA SPIN extraction kits (MP Biomedicals), following the manufacturer's instructions, and stored at −20°C prior to further analysis. The quantity and quality of extracted DNA were measured using a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific) and agarose gel electrophoresis, respectively. PCR amplification of the V4–V5 region of the bacterial 16S rRNA gene was performed using the forward primer 515F (5′-GTGCCAGCMGCCGCGGTAA-3′) and the reverse primer 907R (5′-CCGTCAATTCMTTTRAGTTT-3′). Sample-specific 7-bp barcodes were incorporated into the primers for multiplex sequencing. PCR amplicons were purified with Agencourt AMPure Beads (Beckman Coulter) and quantified using the PicoGreen dsDNA Assay Kit (Invitrogen). After the individual quantification step, amplicons were pooled in equal amounts, and paired-end 2 × 300 bp sequencing was performed using the Illumina MiSeq platform with the MiSeq Reagent Kit v3. | High-throughput sequence analysis The Quantitative Insights Into Microbial Ecology (QIIME, v1.8.0) pipeline was employed to process the sequencing data, as previously described (Caporaso et al., 2010).
Briefly, raw sequencing reads with exact matches to the barcodes were assigned to respective samples and identified as valid sequences. Low-quality sequences were excluded. Paired-end reads were assembled using FLASH (Magoc & Salzberg, 2011). After chimera detection, the remaining high-quality sequences were clustered into operational taxonomic units (OTUs) at 97% sequence identity by UCLUST (Edgar, 2010). The most abundant sequence of each cluster was picked to be the representative sequence. OTU taxonomic classification was conducted by BLAST searching the representative sequence set against the Greengenes Database (DeSantis et al., 2006). An OTU table was further generated to record the abundance of each OTU in each sample and the taxonomy of these OTUs. The taxonomic compositions and abundances were visualized using MEGAN (Huson, Mitra, Ruscheweyh, Weber, & Schuster, 2011) and GraPhlAn (Altschul et al., 1997). A Venn diagram was generated to visualize the shared and unique OTUs among samples using the R package "VennDiagram," based on the occurrence of OTUs across samples/groups regardless of their relative abundance (Zaura, Keijser, Huse, & Crielaard, 2009). Partial least squares-path modeling (PLS-PM) was used to explore the relationships between bacterial communities and ROX concentration by using the R package (Sanchez, 2013).

Figure 1: ROX degradation and the first-order fitting curves of different ROX concentrations under anaerobic conditions.

The fitted degradation parameters are shown in Table 1. At concentrations of 50, 100, 200, and 400 mg/kg, ROX was degraded completely within 28, 28, 36, and 44 hr, respectively, and the half-lives were 8.9, 8.8, 7.3, and 11.55 hr, respectively. Although the half-life at 400 mg/kg was marginally longer than at the other concentrations, the amount of ROX degraded per unit time increased with the initial concentration, indicating that, under anaerobic conditions, ROX was degraded effectively even at high initial concentrations.
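For illustration, the first-order model can be connected to the reported half-lives with a few lines of arithmetic (a sketch, not the authors' fitting code; rate constants are derived here from the half-lives quoted above via k = ln 2 / t½):

```python
import math

# Reported half-lives (hr) for each initial ROX concentration (mg/kg).
half_lives = {50: 8.9, 100: 8.8, 200: 7.3, 400: 11.55}

for conc, t_half in half_lives.items():
    k = math.log(2) / t_half            # first-order rate constant (1/hr)
    remaining = math.exp(-k * 24)       # C(t)/C0 = exp(-k t) after 24 hr
    print(f"{conc:3d} mg/kg: k = {k:.4f} /hr, {remaining:.1%} of ROX left after 24 hr")
```

Note that a pure first-order curve never reaches exactly zero; "complete degradation within 28–44 hr" in practice means falling below the detection limit of the ROX assay.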
There were no significant differences between the degradation rates of different concentrations of ROX over time (p > .05; Figure 1), as determined by a one-way analysis of variance (ANOVA) in SPSS Statistics for Windows (Fu et al., 2017). This indicates that increasing the ROX concentration had minimal effect on ROX degradation and further illustrates that anaerobic microbes possess great potential to degrade ROX. | ROX degradation at different concentrations The microorganisms in all media grew rapidly, and a lag period was hardly observed. Cell numbers increased exponentially, reaching the stationary phase by 20 hr (Figure 2). The ROX-free medium had the lowest microbial biomass, with microorganisms growing faster as ROX concentrations increased. The cell density at 400 mg/kg ROX was nearly threefold higher than that at 0 mg/kg. These results revealed a relationship between ROX and the microorganisms. The presence of ROX did not inhibit microbial growth; instead, it promoted microbial growth, and in turn, the microorganisms degraded ROX. | Alpha diversity of the bacterial community High-throughput sequencing yielded 60,282 high-quality, nonplastid sequences for the five samples after quality control. The alpha diversity indices are shown in | Bacterial community composition and relative abundances There were seven phyla identified in the five samples, with Firmicutes being the major phylum in each sample, with relative abundances of 96.70%–99.85% (Figure 5a). The bacterial community composition and relative abundance of each sample at the genus level are shown in Figure 5b. | Discussion The results of ROX biodegradation at different concentrations illustrated that anaerobic microbes possess great potential for this process.
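The shared/unique OTU bookkeeping behind the Venn analysis described above (done in the study with the R package "VennDiagram") reduces to plain set operations on presence/absence data; here is a minimal sketch with hypothetical OTU identifiers and sample labels:

```python
# Hypothetical presence/absence OTU sets for three samples; the Venn analysis
# considers only occurrence, not relative abundance.
otus = {
    "ROX_0":   {"OTU1", "OTU2", "OTU3"},
    "ROX_200": {"OTU2", "OTU3", "OTU4"},
    "ROX_400": {"OTU3", "OTU4", "OTU5"},
}

shared = set.intersection(*otus.values())        # core OTUs present in all samples
print("shared by all samples:", sorted(shared))  # shared by all samples: ['OTU3']

for sample, members in otus.items():
    # OTUs seen in this sample but in no other sample
    others = set.union(*(v for s, v in otus.items() if s != sample))
    print(f"unique to {sample}: {sorted(members - others)}")
```

The same intersection/difference logic extends directly to the five samples of the study (blank control plus the four ROX concentrations).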
Similar results were obtained in a study of ROX degradation at different ROX concentrations (0, 50, 100, and 200 mg/kg) in soil, and it was found that the degradation rate increased with increasing concentrations, further highlighting the considerable potential of soil microbes to degrade ROX (Liu, Zhang, Li, & Fei, 2017b). The presence of ROX promoted microbial growth, and in turn, the microorganisms degraded ROX. These results are consistent with previous observations in which the addition of ROX did not inhibit, but rather stimulated, the growth of the anaerobe Shewanella oneidensis MR-1 (Chen, Ke, Liang, Liu, & Wang, 2016). A previous study also showed that an isolated aerobic bacterial consortium degraded ROX and that the growth rate of the aerobic bacterial consortium was 1.4-fold higher in the presence of ROX (Guzmán-Fierro et al., 2015). ROX has a hazardous effect on the native microbial community diversity and metabolic activity of soil and significantly affects overall microbial activity in soil (Jiang, Li, Wang, Li, & Wang, 2013; Mangalgiri, Adak, & Blaney, 2015). An analysis of fluorescein diacetate hydrolysis in soil found that ROX does not exert acute toxicity on soil microbes; however, fluorescein diacetate hydrolysis was inhibited gradually with the release of inorganic arsenic (Liang, Ke, Chen, Liu, & Chen, 2014). Measurements of the half-maximal inhibitory and half-maximal effective concentrations showed that the As(V)- and As(III)-bearing photodegradates of ROX exhibited 10-fold higher toxicity than ROX itself, which was attributed to the improved membrane permeability of the inorganic arsenicals (Zhang, Xu, Han, Sun, & Yang, 2015). The toxicity toward eukaryotic or prokaryotic cells is primarily caused by the products of ROX biodegradation by microorganisms (Mafla et al., 2015). Therefore, in the present study, the inhibition of some soil bacteria might be from the arsenic compounds released following ROX degradation.
The increasing ROX concentrations enriched the bacteria that metabolized ROX and promoted the growth of these same bacteria. At higher arsenic concentrations, however, arsenic accumulated and the efficiency of its adsorption decreased dramatically (Mohamed & Farag, 2015). Arsenic stress causes negative cell responses, such as a decrease in surface area and shrinking. The Lysinibacillus strain B1-CDA showed potential to bioremediate arsenic compounds from contaminated water, forming a long chainlike structure following arsenic exposure (Rahman et al., 2014). The carboxyl groups of the glutamic acid residues in peptidoglycan are the major sites of metal deposition (Zouboulis, Lazaridis, Karapantsios, & Matis, 2010). Alkaliphilus oremlandii sp. nov. strain OhILAs is capable of transforming ROX (Fisher et al., 2008). In previous studies, Alkaliphilus was shown to be closely correlated with the presence of arsenic and ROX (Liu, Zhang, Li, & Fei, 2017b), which aligns with our own results showing Alkaliphilus to be a predominant genus of the ROX-degrading bacterial community. However, to our knowledge, we are the first to report a relationship between Proteiniclasticum and arsenic, and further studies are needed to reveal the function of Proteiniclasticum in arsenic transformation. Second, the presence of ROX promoted the growth of functional microorganisms, and the microorganisms grew faster as ROX concentrations increased from 0 mg/kg to 400 mg/kg, resulting in a nearly threefold increase in cell density between the ROX-free samples and those with 400 mg/kg ROX. Moreover, a stable anaerobic bacterial consortium that effectively degraded ROX was identified. Oliver from CSIRO in Australia for their valuable suggestions for improving the manuscript. CONFLICT OF INTERESTS None declared. ETHICS STATEMENT None required.
DATA AVAILABILITY STATEMENT All data generated or analyzed during this study are included in this published article.
Orographically induced spontaneous imbalance within the jet causing a large-scale gravity wave event To better understand the impact of gravity waves (GWs) on the middle atmosphere in the current and future climate, it is essential to understand their excitation mechanisms and to quantify their basic properties. Here a new process for GW excitation by orography–jet interaction is discussed. In a case study, we identify the source of a GW observed over Greenland on 10 March 2016 during the POLSTRACC (POLar STRAtosphere in a Changing Climate) aircraft campaign. Measurements were taken with the Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA) instrument deployed on the High Altitude Long Range (HALO) German research aircraft. The measured infrared limb radiances are converted into a 3D observational temperature field through the use of inverse modelling and limited-angle tomography. We observe GWs along a transect through Greenland where the GW packet covers ≈ 1/3 of the Greenland mainland. GLORIA observations indicate GWs between 10 and 13 km of altitude with a horizontal wavelength of 330 km, a vertical wavelength of 2 km and a large temperature amplitude of 4.5 K. Slanted phase fronts indicate intrinsic propagation against the wind, while the ground-based propagation is with the wind. The GWs are arrested below a critical layer above the tropospheric jet. Compared to its intrinsic horizontal group velocity (25–72 m s−1) the GW packet has a slow vertical group velocity of 0.05–0.2 m s−1. This causes the GW packet to propagate long distances while spreading over a large area and remaining constrained to a narrow vertical layer. A plausible source is not only orography, but also out-of-balance winds in a jet exit region and wind shear. To identify the GW source, 3D GLORIA observations are combined with a gravity wave ray tracer, ERA5 reanalysis and high-resolution numerical experiments.
In a numerical experiment with a smoothed orography, GW activity is quite weak, indicating that the GWs in the realistic orography experiment are due to orography. However, analysis shows that these GWs are not mountain waves. A favourable area for spontaneous GW emission is identified in the jet by the cross-stream ageostrophic wind, which indicates when the flow is out of geostrophic balance. Backwards ray-tracing experiments trace into the jet and regions where the Coriolis and the pressure gradient forces are out of balance. The difference between the full and a smooth-orography experiment is investigated to reveal the missing connection between orography and the out-of-balance jet. We find that this is flow over a broad area of elevated terrain which causes compression of air above Greenland. The orography modifies the wind flow over large horizontal and vertical scales, resulting in out-of-balance geostrophic components. The out-of-balance jet then excites GWs in order to bring the flow back into balance. This is the first observational evidence of GW generation by such an orography–jet mechanism. Published by Copernicus Publications on behalf of the European Geosciences Union.
M. Geldenhuys et al.: Orographically induced spontaneous imbalance
Gravity waves can impact our lives directly through the generation of turbulence, endangering air traffic (Fritts and Alexander, 2003; Bramberger et al., 2018; Geldenhuys et al., 2019). Additionally, they are known to enhance and act as a trigger for convection (de la Torre et al., 2011), impact the movement of weather systems and affect the ozone hole (Kidston et al., 2015). Gravity waves are essential drivers of the middle atmosphere circulation (Holton, 2004) through drag deposited by their breaking and saturation (McLandress, 1998; Alexander et al., 2010). By downward coupling, these circulations in the middle atmosphere again impact the surface (e.g. Kidston et al., 2015; Polichtchouk et al., 2018a). Thus, this GW drag must not be neglected. Gravity waves are not properly resolved by most general circulation models (GCMs); hence, GW drag parameterisations are required (Kim et al., 2003; Geller et al., 2013). The few GCMs that do resolve a large spectrum of GWs are computationally too expensive for climate and chemistry runs. General circulation models use orographic GW drag (OGWD) and non-orographic GW drag (NOGWD) schemes. The OGWD scheme represents the drag exerted by mountain waves alone (Lott and Miller, 1997; Kim and Arakawa, 1995; Xie et al., 2020). The NOGWD scheme is developed to represent all other sources (e.g. Charron and Manzini, 2002; de la Camara et al., 2014a). Parameterisation schemes have several poorly constrained parameters, and one method of improving models is finding better constraints by observations (e.g. Plougonven et al., 2020). Direct observational evidence for the relative importance of different GW sources is rare.
Hence, often the effect of GWs on the large-scale circulation is used to infer properties of the GW parameterisations (e.g. Manzini et al., 1997). A good example is a recent debate on which parameterisation scheme is responsible for the missing GW drag around 60° S (McLandress et al., 2012). In their study, de la Camara et al. (2016) found that the intermittency of their NOGWD parameterisation scheme solves the 60° S problem. Garcia et al. (2017) suggested that increased orographic sources are the key to solving this missing GW drag problem in models. For this, Garcia et al. (2017) increased the orographic drag for the Southern Hemisphere only. On the other hand, the European Centre for Medium-Range Weather Forecasts (ECMWF) employed a stronger non-orographic GW drag with favourable results (Polichtchouk et al., 2018b). Garcia et al. (2017) stated that non-orographic GW drag can also be a solution. Moreover, Charron and Manzini (2002) showed that increased GW emission from fronts provides good results in the Northern Hemisphere but is less effective in the Southern Hemisphere. Later, Richter et al. (2010) confirmed this by increasing convective and frontal GW sources. Attempts to improve on the realism and to employ physical GW sources (Richter et al., 2010; Kim et al., 2013) or mimic natural GW intermittency (de la Camara et al., 2014b) are still experimental. The main concerns are that parameterisations use their own assumptions and tunable parameters, which are only weakly constrained by observations. Charron and Manzini (2002), Richter et al. (2010), and Kim et al. (2013) all agree that the trend is toward replacing non-orographic parameterisation schemes by source-specific schemes in low-resolution models. Richter et al. (2010) continue by asserting that GW observations are required to constrain these parameterisation schemes. In particular, the attribution of observed GWs to different sources requires further investigation. For instance, Hertzog et al.
(2008) associated GW momentum flux obtained from superpressure balloon measurements in the Southern Hemisphere polar vortex with orographic and non-orographic sources by regional selection. However, orographic GWs from the Andes and the Antarctic peninsula can propagate far downstream into the Drake Passage (Rapp et al., 2020). On the other hand, Preusse et al. (2014) and Krisch et al. (2020) show that GWs observed over the Scandinavian mountains may mostly originate from upstream jet sources. More sophisticated methods than simple spatial collocation are required to identify the sources of observed GWs. To contribute to this debate, a case over Greenland is discussed in the following. Greenland is an island with a high elevation and is surrounded by ocean. This has made it a good place to study wind flow above and around terrain (e.g. Doyle and Shapiro, 1999; Tollinger et al., 2019). During this case, a strong Rossby wave is breaking and a GW packet exists over a large part of Greenland. Several of the potential source processes introduced above were present in our case: orography, breaking Rossby wave, jet streak, and strong horizontal and vertical wind shear. Observations were obtained during the PGGS campaign. The PGGS campaign consisted of smaller sections, namely POLSTRACC, GWEX, GW-LCycle and SALSA (together, PGGS). One of the major aims of the campaign was the investigation of the generation and life cycle of GWs. Section 2 describes the data and methods (GLORIA measurements, ray tracing and reanalysis data) as well as the synoptic conditions. In Sect. 3, a presentation and discussion of the observations follow. Sections 3 and 4 contain a discussion on the numerical weather prediction experiment, source identification and GW evolution. The results are summarised in Sect. 5.
2 Data and methods 2.1 GLORIA - Gimballed Limb Observer for Radiance Imaging of the Atmosphere GLORIA (Riese et al., 2014) is an imaging infrared spectrometer that is mounted in the belly pod of HALO (the German High Altitude Long Range research aircraft). The instrument comprises a Michelson interferometer with a 2D infrared detector array. GLORIA looks to the right side of HALO with regards to flight direction, and its field of view can be panned from 135° to 45° with respect to carrier heading in the horizontal. The vertical field of view is 4.1°. With this, we image altitudes from ∼ 5 km to slightly above flight altitude. GLORIA measures spectra between 780 and 1400 cm−1. This allows measurement and retrieval of temperature, O3, H2O, NH3, PAN, ClONO2 and HNO3. GLORIA uses 48 × 128 pixels of the detector to provide ≈ 6000 simultaneous views. Each pixel is analysed for absorption lines of the above-mentioned gases. GLORIA can measure with a spectral sampling of up to 0.0625 cm−1. However, the finer the spectral sampling, the longer the acquisition time needed to achieve this. A longer integration time implies a worse spatial resolution, as the aircraft is constantly moving. A lower integration time allows a finer spatial resolution but impacts the number of trace species that can be retrieved (Riese et al., 2014; Ungermann et al., 2010a). Based on the integration time, three main observation modes exist: chemistry mode, dynamics mode and premier mode. Chemistry mode uses a spectral sampling of 0.0625 cm−1 for an increased number of detectable chemical species. Dynamics mode uses 0.625 cm−1 for an increased spatial resolution and to focus more on the retrieval of atmospheric temperature. Intermediate premier mode employs a value of 0.2 cm−1 as a compromise. On 10 March 2016, GLORIA was flown in dynamics mode with the aim to perform tomographic retrievals. Tomographic measurements utilise the panning ability of GLORIA.
During the flight, GLORIA was panned from 129° to 45° in steps of 4°. This provides multiple measurements of the same air mass from different angles, which allows for a tomographic retrieval. During full-angle tomography, the aircraft follows a closed (e.g. circular or hexagonal) flight path around the area of interest. During limited-angle tomography (used to obtain the data for this article) the aircraft flies in a (largely) straight line. During linear flight patterns, the area of interest is observed from fewer angles (Krisch et al., 2018, 2020). The processing is performed with the help of the GloriPy and JURASSIC2 (Juelich Rapid Spectral Simulation Code version 2; Ungermann et al., 2010b) software packages. Reconstructing the atmosphere from infrared observations is an ill-posed inverse problem. To solve this problem, an atmospheric state is iteratively adjusted by a Gauss-Newton type of trust-region method (Ungermann, 2011). This continues until the synthetic measurements generated by a forward model agree within expectation with the actual measurements. The final state of this iterative process is then used as the "retrieval" result (Krisch et al., 2017; Krasauskas et al., 2019). Fewer angles for limited-angle tomography mean more difficulty in 3D retrieval and frequently more artefacts. The resolution is also slightly worse when comparing limited-angle tomography to full-angle tomography. The resolution of our limited-angle tomography is 200 m in the vertical and ∼ 20–70 km in the horizontal direction. The retrieval data generated for this article were optimised to determine temperature, CCl4, HNO3, O3 and aerosols. The spectral windows used for the optimised retrieval are listed in Appendix A1. A total of 16 channels were used in the retrieval, each for a different purpose (3 for temperature, 5 for CCl4, 4 for HNO3, and 4 for temperature and O3 combined).
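The retrieval logic described above (iteratively adjusting an atmospheric state until a forward model reproduces the measurements, with regularisation to stabilise the ill-posed inversion) can be illustrated on a toy linear problem. This is a sketch only: the matrices, the first-derivative regularisation operator and the λ value are invented for illustration, and the real GloriPy/JURASSIC2 forward model is nonlinear and far larger. For a linear forward model F(x) = Ax, a single Gauss-Newton step amounts to solving the regularised normal equations:

```python
import numpy as np

# Toy ill-posed problem: 2 measurements, 3 unknowns (hypothetical numbers).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])   # linearised forward model
y = np.array([2.0, 3.0])         # "measurements"

# First-derivative smoothing operator; the GLORIA retrieval uses a
# Laplacian (second-derivative) analogue on an irregular grid.
L = np.array([[-1.0,  1.0, 0.0],
              [ 0.0, -1.0, 1.0]])
lam = 1e-2                       # regularisation strength (tunable)

# Minimise ||A x - y||^2 + lam * ||L x||^2 via the normal equations;
# for a linear model this is exactly one Gauss-Newton step.
x = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ y)

print("retrieved state:", x)
print("measurement residual:", A @ x - y)
```

With small λ, the retrieved state reproduces the measurements to within a residual of order λ, mimicking the "agree within expectation" stopping criterion quoted above; larger λ trades measurement fit for smoothness.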
The retrieval was conducted using Laplacian regularisation implemented with a Delaunay triangulation-based, irregular-grid-capable discretisation (Krasauskas et al., 2019). The Laplacian (second-order derivative) regularisation replaces the traditional first-spatial-derivative regularisation approach. The Delaunay technique reduces the computational cost of the tomographic retrieval. This flight was the first GLORIA limited-angle tomography retrieval with this method. To examine the robustness of our results, we tested different retrieval configurations. We found the derived temperature product to be robust within the region of high tangent-point density (the tangent point is the point along the line of sight with the largest atmospheric density, where most of the radiance signal usually comes from), whereas other parts of the volume were subject to large differences depending on the chosen a priori or regularisation. The a priori used in the retrieval is a smoothed ECMWF analysis and WACCM (Whole Atmosphere Community Climate Model) reanalysis field.

2.2 GROGRAT - Gravity-wave Regional Or Global Ray Tracer

GROGRAT is a ray-tracing tool that traces the propagation path of a GW and can be used for both forward and backward tracing (Marks and Eckermann, 1995; Eckermann and Marks, 1997). GROGRAT is based on the GW dispersion relation

ω² = [N²(k² + l²) + f²(m² + 1/(4H²))] / [k² + l² + m² + 1/(4H²)],   (1)

where ω is intrinsic frequency, N is Brunt-Väisälä frequency, f is Coriolis frequency, H is scale height, and k, l and m are the wavenumbers in the x, y and z direction. A GW packet is fully characterised by its position in space and time as well as its 3D wave vector. The ray tracer projects this state vector forward or backward according to the ray-tracing equations (Lighthill, 1978)

dx_i/dt = ∂ω/∂k_i,   (2)
dk_i/dt = −∂ω/∂x_i,   (3)

where i denotes the spatial direction (x, y or z), and d/dt is differentiation in time along the ray. In this study, the 4D version of GROGRAT is used. This means that the background (see Sect. 2.3 for how the background state was determined) temperature, wind and pressure (from ERA5 reanalysis) change with time.
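Equation (1) can be evaluated directly. Below is a minimal numerical sketch; the observed wavelengths come from Sect. 3.2, while the values for N, f and H are plausible mid-latitude assumptions that are not given numerically in the text.

```python
import numpy as np

def intrinsic_frequency(k, l, m, N, f, H):
    """GW dispersion relation, Eq. (1): returns the intrinsic frequency (rad/s)
    for wavenumbers k, l, m (rad/m), Brunt-Vaisala frequency N (1/s),
    Coriolis frequency f (1/s) and scale height H (m)."""
    kh2 = k**2 + l**2               # horizontal wavenumber squared
    a2 = 1.0 / (4.0 * H**2)         # density scale-height correction term
    omega2 = (N**2 * kh2 + f**2 * (m**2 + a2)) / (kh2 + m**2 + a2)
    return np.sqrt(omega2)

# Example with the observed wavelengths and assumed background values:
N, f, H = 0.02, 1.35e-4, 7.0e3                   # s^-1, s^-1, m (assumed)
k, l, m = 2*np.pi/330e3, 0.0, 2*np.pi/2.0e3      # lambda_h = 330 km, lambda_z = 2 km
omega_hat = intrinsic_frequency(k, l, m, N, f, H)
```

For these numbers the intrinsic frequency comes out only slightly above f, consistent with the low-frequency inertia-gravity wave character discussed in Sect. 3.1.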
For each time step of the ray integration the group velocity and state vector (x, y, z, t, ω_gb, k, l, m, where gb indicates ground-based) change. Along the ray path, wave action density A ≡ E/ω is conserved (E is the wave energy density, which depends on the relative temperature amplitude T̂/T and on ω² − f²), but GW saturation and GW dissipation by radiative damping and turbulence are taken into account (based on the scheme developed by Zhu, 1993). Gravity wave amplitudes are converted from wave action. For back tracing, it is important to keep in mind that the GW can be emitted at any point along the ray and is not necessarily emitted at the lowest point of the ray (see Preusse et al., 2014). One indication of a GW source along the ray is a violation of the Wentzel-Kramers-Brillouin (WKB) approximation (Hertzog et al., 2001). In GROGRAT this is tested via the parameter (see Eq. 5 of Marks and Eckermann, 1995)

δ = |(1/m²) ∂m/∂z|.   (4)

This implementation of the WKB approximation requires that the scale of change of the wavenumber is large compared to the wavelength of the GW. ERA5 reanalysis fields are also used directly in this study (e.g. for Fig. 1 and the calculation of the cross-stream ageostrophic wind in Sect. 4, which is calculated on a pressure grid).

2.3 Reanalysis data and model integrations

To investigate the influence of orography, in Sect. 4 two global model forecasts with the ECMWF Integrated Forecast System (IFS) are discussed: (i) CTL-run and (ii) T21-run. The forecasts are performed at TCo1279 horizontal resolution (corresponding to 9 km grid spacing on a cubic octahedral grid) with 137 vertical levels and use the operational ECMWF IFS configuration of cycle 45r1 (https://www.ecmwf.int/en/forecasts/documentation-and-support/evolution-ifs/cycles/summary-cycle-45r1, last access: 29 April 2021). The only difference between the two runs is the resolved orography field, which in CTL-run is at TCo1279 resolution and in T21-run at T21 resolution.
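The WKB test of Eq. (4) amounts to a finite-difference derivative of the vertical-wavenumber profile. A minimal sketch follows; the profiles below are purely illustrative (a constant wavenumber, and one with a sharp transition mimicking a jet edge), not GLORIA or GROGRAT data.

```python
import numpy as np

def wkb_delta(m, z):
    """WKB parameter delta = |(1/m^2) dm/dz| (Eq. 4).
    Values well below 1 indicate that the WKB approximation holds."""
    return np.abs(np.gradient(m, z) / m**2)

# Illustrative profiles on a 100 m vertical grid:
z = np.linspace(2e3, 12e3, 101)                 # altitude (m)
m0 = 2*np.pi / 2.0e3                            # wavenumber for a 2 km wavelength
m_const = np.full_like(z, m0)                   # slowly varying: WKB valid
m_sheared = m0 * (1 + 0.5*np.tanh((z - 8e3)/500.0))  # sharp change near 8 km
delta_const = wkb_delta(m_const, z)
delta_sheared = wkb_delta(m_sheared, z)
```

A constant profile gives δ = 0 everywhere, while the sharp transition produces a clear δ maximum at the transition altitude, which is exactly the signature used in Sect. 3.3 to flag possible jet sources.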
This means that the orography in T21-run is much smoother and does not resolve, for instance, fjords on the Greenland coast, and it reaches only 60 % of the TCo1279 orography field elevation. The two forecasts were initialised on 9 March 2016 at 12:00 UTC and run freely for 72 h (the GW observation takes place 30 h after initialisation). The model output and reanalysis data were separated into GWs and the large-scale background state. Zonally, the data were separated with a fast Fourier transform, assuming that zonal wavenumbers up to 12 can be attributed to the large-scale background and that higher zonal wavenumbers are attributed to GWs. In the two remaining directions, a Savitzky-Golay (Savitzky and Golay, 1964) filter was applied. A third-order polynomial was applied in the y (across-latitude) direction with a 50-point smoothing window (≈ 15° of latitude). A fourth-order polynomial was applied in the z (vertical) direction with a 15-point smoothing window (≈ 3 km). The GW field (called the residual) remains after subtracting the large-scale background from the original model field. A comparison of different background removal methods can be found in the Appendix of Krisch et al. (2020).

2.4 Jet geostrophic balance calculation

A jet can generate GWs if an imbalance exists between the Coriolis and the pressure gradient forces in the momentum equation (Zülicke and Peters, 2006). The area of imbalance normally occurs in the jet exit region and can radiate GWs spontaneously in an attempt to balance the Coriolis and pressure gradient forces. The cross-stream ageostrophic wind speed or the cross-stream Lagrangian Rossby number can be used to diagnose an out-of-balance jet (e.g. Zülicke and Peters, 2006; Mirzaei et al., 2014; Plougonven and Zhang, 2014). Similar to Mirzaei et al. (2014), we found that both diagnostics give comparable results, with less noise when using the cross-stream ageostrophic wind speed.
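The GW/background separation described above (zonal FFT cutoff plus Savitzky-Golay smoothing in latitude and altitude) can be sketched as below. The axis order, the use of odd window lengths (51 and 15 points; the paper quotes 50- and 15-point smoothing) and the boundary mode are assumptions for the sake of a runnable example.

```python
import numpy as np
from scipy.signal import savgol_filter

def separate_background(field):
    """Scale-separation sketch for a field of shape (nlon, nlat, nz).
    Returns (background, residual); the residual is the GW perturbation."""
    # 1) Zonal FFT: zonal wavenumbers 0..12 are attributed to the background.
    spec = np.fft.rfft(field, axis=0)
    spec[13:, ...] = 0.0
    bg = np.fft.irfft(spec, n=field.shape[0], axis=0)
    # 2) Savitzky-Golay smoothing: 3rd order / 51 points in latitude,
    #    4th order / 15 points in the vertical (window sizes assumed).
    bg = savgol_filter(bg, window_length=51, polyorder=3, axis=1, mode="nearest")
    bg = savgol_filter(bg, window_length=15, polyorder=4, axis=2, mode="nearest")
    return bg, field - bg

# Synthetic check: planetary wave (zonal wavenumber 3) + "GW" (wavenumber 20).
lon = np.linspace(0.0, 2*np.pi, 96, endpoint=False)
planetary = np.cos(3*lon)[:, None, None] * np.ones((96, 61, 31))
gw = 0.3*np.cos(20*lon)[:, None, None] * np.ones((96, 61, 31))
bg, residual = separate_background(planetary + gw)
```

In this synthetic case the residual recovers the high-wavenumber component, which is the quantity shown as "temperature residuals" in the figures.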
In this article, the cross-stream ageostrophic wind is used to diagnose unbalanced flow within the jet, as suggested by Mirzaei et al. (2014). Firstly, the horizontal wind (u and v components) and geopotential height fields were smoothed to remove GW signatures with a boxcar filter over 500 km in the x and y direction (similar to Mirzaei et al., 2014). Secondly, ageostrophic winds (ua and va) were calculated from pressure-level data using MetPy (May et al., 2008-2020). Thirdly, the cross-stream ageostrophic wind was calculated using the approach of Zülicke and Peters (2006). In a final step it was tested whether this quantity exceeds the threshold of 7.5 m s⁻¹, which corresponds to a critical Rossby number of 0.15 and a length scale of 500 km. Where it does, we consider the region unbalanced and a candidate for GW emission.

3 Observations and GW ray tracing

3.1 Synoptic situation

The synoptic situation for our case study is shown in Fig. 1. The meandering 300 hPa geopotential height and horizontal wind field show a cyclonically breaking Rossby wave. At flight time (10 March, 18:00 UTC; panel b), the potential vorticity lines steepen and turn back at the point of inflection, signalling Rossby wave breaking. An associated mid-tropospheric low-pressure system drifts from west to east (not shown). Accordingly, the sub-tropical jet drifts with time. However, the divergence of the winds within the jet remains above or in close vicinity of Greenland throughout the 30 h before observation. The above-mentioned synoptic conditions are favourable for the formation of jet-generated GWs (e.g. Uccellini and Koch, 1987; Plougonven and Zhang, 2014). A trapping layer inhibits GW propagation beyond the respective layer. A trapping and reflection layer is formed by an increase in wind speed and stability and is identified with the help of the Scorer parameter (e.g. Durran, 2003; Geldenhuys et al., 2019). Knowing potential GW reflection layers can therefore be important to find the sources of GWs.
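The smoothing, projection and threshold steps above can be condensed into a short diagnostic. The projection formula is our reading of the cross-stream component (the ageostrophic wind projected onto the unit vector normal to the smoothed flow), and the point values are illustrative; this is a sketch, not the MetPy-based pipeline itself.

```python
import numpy as np

def cross_stream_ageostrophic(u, v, ug, vg):
    """Cross-stream component of the ageostrophic wind.
    (u, v): smoothed total wind; (ug, vg): geostrophic wind.
    The ageostrophic wind (u - ug, v - vg) is projected onto the
    direction normal to the flow, (-v, u)/|V|."""
    ua, va = u - ug, v - vg
    speed = np.hypot(u, v)
    return (-v*ua + u*va) / speed

def unbalanced(u, v, ug, vg, threshold=7.5):
    """Flag jet regions considered able to radiate GWs spontaneously:
    cross-stream ageostrophic wind above the threshold in a jet (> 20 m/s)."""
    cs = cross_stream_ageostrophic(u, v, ug, vg)
    return (np.abs(cs) > threshold) & (np.hypot(u, v) > 20.0)

# Illustrative point values: a 30 m/s westerly whose ageostrophic part is a
# 10 m/s meridional wind exceeds the 7.5 m/s threshold.
cs = cross_stream_ageostrophic(30.0, 0.0, 30.0, -10.0)
flag = unbalanced(30.0, 0.0, 30.0, -10.0)
```

In practice u, v, ug and vg would be 2D fields on a pressure level, and the same expressions apply element-wise.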
The Scorer parameter over the southern mainland of Greenland (not shown) indicates multiple reflection layers between 7.5 and 13 km. However, further investigation reveals that, up to 30 h before observation, all horizontal wavelengths > 4 km will propagate through these reflecting layers. With no reflecting layers present for the wavelengths considered in this study, it is clear that the GW source can be located at the surface or in the free troposphere. Furthermore, this justifies the use of ray-tracing tools for freely propagating GWs (Sect. 3.3). A jet is known to release upward-propagating (above the jet) and downward-propagating (below the jet) GWs (Thomas et al., 1999; Guest et al., 2000). Hodographs can be used to distinguish between upward- and downward-propagating GWs. In the Northern Hemisphere, clockwise (anticlockwise) rotating hodographs indicate upward-propagating (downward-propagating) GWs (Andrews et al., 1987; Hertzog et al., 2001). Multiple hodographs from the ERA5 reanalysis were drawn within the 500 and 300 hPa jet region. The hodographs (e.g. Fig. 2) depicted no rotation to weak clockwise rotation with altitude below 7 km, with strong clockwise rotation above 10 km. The rotation above 10 km is nearly circular, which implies that f/ω ≈ 1. This points to an upward-propagating inertia-gravity wave with a low intrinsic frequency close to the Coriolis parameter (Hertzog et al., 2001; Fritts and Alexander, 2003). A less pronounced, nearly full anticlockwise rotation is seen between 7 and 10 km, although this altitude range should be treated with care, as the jet region (7.5 to 10 km at flight time) can produce artificial results. High-resolution ECMWF medium-range weather forecasts predicted a large-scale GW event covering most of Greenland 2 d before the flight was performed. Accordingly, a PGGS research flight was planned to measure these GWs, presumably generated by the breaking Rossby wave.
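The hodograph rotation test can be made explicit numerically: the sign of the cross product of successive wind-perturbation vectors gives the sense of rotation with height (clockwise in the Northern Hemisphere indicating upward propagation, as stated above). The profile below is synthetic, purely to illustrate the test.

```python
import numpy as np

def hodograph_rotation(u, v):
    """Sense of rotation of the hodograph with height (profiles bottom-to-top).
    The z-component of the cross product of successive vectors is
    negative for clockwise turning."""
    turn = np.sum(u[:-1]*v[1:] - v[:-1]*u[1:])
    if turn < 0:
        return "clockwise"
    if turn > 0:
        return "anticlockwise"
    return "none"

# Synthetic profile whose wind vector turns clockwise with height:
theta = np.pi/2 - 0.2*np.arange(10)   # vector angle decreases upwards
sense_up = hodograph_rotation(np.cos(theta), np.sin(theta))
# Reversing the profile reverses the apparent sense of rotation:
sense_down = hodograph_rotation(np.cos(theta[::-1]), np.sin(theta[::-1]))
```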
HALO flew from Kiruna (Sweden) to Greenland, where it crossed the mainland from south-east to north-west at 10.5 km and, on the way back, at 13.5 km. The temperature field presented in this article is retrieved from this higher leg (black line crossing Greenland in Fig. 3) from 19:00 to 21:00 UTC (Coordinated Universal Time). Throughout this article, the closest synoptic time (18:00 UTC) is referred to as flight time.

3.2 GLORIA observations

Gravity waves are seen within the tangent-point area in Fig. 3. Outside this area the retrieval does not have measurement information and falls back to the a priori. The GWs within the tangent-point area compare well to ERA5 data (more on this in Sect. 3.3), in the horizontal (panel a) as well as in the vertical (panel b). The GW characteristics are determined within the tangent-point area indicated in Fig. 3 and similar retrieved images. The characteristics of these GWs are listed in Table 1 and were used as input into GROGRAT (see next section). A horizontal wavelength between 320 and 390 km is observed in different parts of the GW packet. The vertical wavelength is between 1.6 and 2.1 km, and the GW orientation is between 130 and 140° from north. The amplitude and vertical wavelength decrease with altitude (as can be seen in Fig. 3b). This indicates that a change in propagation conditions is taking place and can point to GW dissipation (more on this in Sect. 3.3).

3.3 GROGRAT ray tracing

Tracing the backward trajectory of a GW is an established method to find its source (e.g. Marks and Eckermann, 1995; Krisch et al., 2017, 2020). According to Hertzog et al. (2001), the excitation of GWs by geostrophic adjustment from the jet is usually associated with enhanced values of the WKB parameter (δ) near the height of the wind maximum. This is attributed to the sharp upper and lower edges of the jet. Sharp changes in the jet wind speed will induce sharp changes in the vertical wavenumber.
If the scale of change of the wavenumber becomes small compared to the wavelength of the GW, the WKB approximation is violated (see Eq. 4 and the corresponding Sect. 2.2). Four main rays were back-traced, starting between 11 and 12.3 km, based on the GW parameters given in Table 1. The GW ground-based frequency for input to GROGRAT was calculated via the dispersion relation (Eq. 1) using the horizontal and vertical wavelengths in Table 1 as well as ERA5 reanalysis data.

Table 1. GW characteristics determined by eye from the retrieval (Fig. 3). The wavelengths are represented by λh and λz for the horizontal and vertical direction, respectively. Ray nos. 0 to 3 were used as input for the GROGRAT ray tracer.

Figure 4. Back traces of ray nos. 0 to 3 (rays named as in Table 1). The back tracing starts between 11 and 12.3 km and is depicted with respect to altitude, latitude and longitude.

All rays trace backward into the jet and end over the ocean, with the exception of ray no. 3 (Fig. 4). The vertical cross section of the GLORIA observation indicates that the GW is propagating intrinsically opposite to the wind. The horizontal group velocity, however, is slower than the wind velocity, which leads to a downstream drift of the GW packet. To provide further confidence in the ray-tracing study, sensitivity tests were performed. An ensemble ray trace (20 members) was conducted by perturbing the initial conditions (listed in Table 1) by ≈ 10 %: ±0.2 km for the vertical wavelength and ±30 km for the horizontal wavelength. This is the approximate error associated with the wavelength determination from Fig. 3. All ensemble members behaved similarly to the main rays (Fig. 4). The ray paths proved to be more sensitive to the launch orientation. A 10° change in orientation frequently ended with the back-traced GW being evanescent or vertically stalling. In another experiment, the four main rays were back-traced, whereby the ray orientation at the end of the ray was perturbed (again by 10°) and forward-traced.
In the forward tracing, the results were much less sensitive to a change in orientation. Ray nos. 0 to 2 (named as in Table 1) all experienced large horizontal propagation and very little vertical propagation. This is normally characteristic of trapped GWs; however, the slanted phase fronts in Fig. 3 indicate that the GWs were not trapped. Only ray no. 2 is discussed here in detail (Fig. 5). The wavelengths and phase orientation predicted by GROGRAT correlate well with the ERA5 reanalysis and lend further confidence to the experiment (the same holds for ray no. 3; Fig. 6). The GROGRAT-calculated vertical group velocity peaks at 10 km with 0.2 m s⁻¹ and has minima of 0.05 m s⁻¹ at 9 and 11 km (Fig. 7). This translates to a vertical propagation speed of 180 to 720 m h⁻¹. The intrinsic horizontal group velocity peaks at 72 m s⁻¹ and has a minimum of 25 m s⁻¹. This translates to a ground-based group velocity ranging from 6 to 17 m s⁻¹. The cross-stream ageostrophic wind (calculated as described in Sect. 2.3) indicates out-of-geostrophic-balance flow within the jet exit region at multiple locations along the ray path (Fig. 5). In this study, we use a safe value of 7.5 m s⁻¹ to indicate that the jet exit region is out of balance and can spontaneously emit GWs. Mirzaei et al. (2014) used 1 m s⁻¹ to indicate out-of-balance areas in the jet and argued theoretically that 4 m s⁻¹ is a good value. In Fig. 5, at x = −1500 km, the ray passes through a 10 m s⁻¹ cross-stream ageostrophic wind region 22 h before observation. Multiple other out-of-balance regions exist within the jet throughout the ray lifetime. From this, it is concluded that the jet is constantly emitting GWs. It is noted that ray nos. 0 to 2 did not have major WKB violations. Although the WKB parameter reached a maximum within the out-of-geostrophic-balance jet regions, the peak values reached merely 0.5. Ray no. 3 is the only GW which traces to the orography (Fig. 6) and was hence investigated for a possible mountain wave.
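The unit conversion between the group velocities quoted above in m s⁻¹ and the propagation speeds in m h⁻¹ is straightforward:

```python
def ms_to_mh(v):
    """Convert a velocity from m/s to m/h."""
    return v * 3600.0

cgz_min_mh = ms_to_mh(0.05)   # slowest vertical group velocity along the ray
cgz_max_mh = ms_to_mh(0.2)    # fastest vertical group velocity along the ray
```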
The ray traces to the plateau of Greenland and not to the precipitous coastline orography. In addition, the ray passes through a cross-stream ageostrophic wind region at the steepest part of the ray. Clear violations are observed in the WKB values at 6.5 and 8 km (consistent with the findings of Hertzog et al., 2001), supporting the idea that the GW is released by the jet. It should be noted that cross sections through ERA5 reanalysis residual data did indicate mountain wave activity localised in time and space along the coast. However, as far as the GROGRAT experiment is concerned, these mountain waves were not observed by GLORIA. Ray no. 3 was emitted by the jet initially with a longer vertical wavelength (hence the steep propagation path at x = −250 km), which was immediately reduced as it propagated out of the strong wind regime. The vertical group velocity peaked around 8 m s⁻¹ (Fig. 7), receding rapidly to 0.04 m s⁻¹ at 11 km. The vertical wavelength of ray nos. 0 to 2 similarly decreased as the GW passed above the jet region (Fig. 7a). The effect of the changing vertical wavelength is also observed in the vertical group velocity (Fig. 7b). Similarly, the observed GW amplitude and the observed vertical wavelength decrease with height (Fig. 3 and Table 1); this can imply that the GWs are approaching dissipation. A large drop in background wind speed (17 m s⁻¹ for ray no. 3 and 40 m s⁻¹ for ray nos. 0 to 2) occurs from 8-9 km up to the ray starting altitude (Fig. 8). Within the same region, the stability (∂T/∂z) of the atmosphere changes dramatically within the jet exit region (location indicated by the blue cross in Fig. 1). Stability changes from 0.00025 K per 100 m between 8 and 12 km to 0.2 K per 100 m between 12.5 and 15 km are observed. Although this greatly resembles a wave duct, the GWs in this study are not reflected (see the previous discussion in Sect. 3.1).
The strong decrease in wind speed is responsible for the decrease in the vertical wavelength and hence responsible for the GW dissipation. As the wind speed approaches the horizontal phase velocity (Fig. 8), the intrinsic frequency decreases to zero, which means the vertical wavenumber will approach infinity, representing a critical layer for the GW.

4 Numerical weather prediction experiment and GW source identification

A numerical experiment with strongly reduced topography was originally designed as an attempt to rule out topography as a source entirely. It yielded unexpected results implicating topography as a major contributor.

4.1 Numerical experiment overview and results

The unmodified ECMWF operational model was used as the control (CTL-run, Sect. 2.3) (Fig. 9a-h). CTL-run produced GWs (Fig. 9e) similar to the observations (Fig. 3) and ERA5 data (Figs. 5 and 6). The second experiment (T21-run) uses a T21 topographic field (the lowest-resolution topographical field available) to achieve a smoothed orography. Comparing CTL-run with T21-run on 10 March at 18:00 UTC (Fig. 9e and m), GWs are observed in the area of interest in CTL-run, but hardly any are seen in T21-run (the same holds for the following time steps). The very weak GWs observed in T21-run exist from the very first model time step, and no new GWs are forced in the following time steps. Clearly, the topography plays a significant role in GW generation. Are the two experiments hence an indication of direct orographic GW generation? Keeping in mind that Sect. 3.3 implicated the jet as the likely source, this hint to orography is a puzzling result. We therefore investigate the hypothesis that the orography is responsible for the GW excitation in an indirect way.

4.2 CTL-run vs. T21-run: what is the difference?

Which synoptic-scale differences then arise from the reduced orography that could induce GW excitation? As argued in Sect. 3.3, the GWs are likely excited by out-of-geostrophic-balance flow.
Therefore, we compare the cross-stream ageostrophic wind (calculated as in Sect. 2.3) for the two model runs. In Fig. 9, the cross-stream ageostrophic wind is shown where all three of the following conditions are met: the cross-stream wind exceeds 7.5 m s⁻¹, the total wind speed exceeds 20 m s⁻¹ and the latitude is below 80° N. As mentioned in Sect. 3.3, a critical value of 7.5 m s⁻¹ is used to indicate when the jet can spontaneously radiate GWs. CTL-run, in Fig. 9, has large cross-stream ageostrophic wind regions. These regions are an indication of an imbalance between the Coriolis and the pressure gradient force in the jet. Early after model initialisation (12 and 18 h; Fig. 9a and b), CTL-run indicates large out-of-balance jet regions over the ocean. Figure 9b depicts the CTL-run jet reaching cross-stream ageostrophic winds of 10 m s⁻¹, and 6 h later the CTL-run jet is unbalanced over the Greenland mainland (Fig. 9c). The greater the cross-stream ageostrophic wind, the more unbalanced the jet is and the more likely it is to spontaneously emit GWs (Zülicke and Peters, 2006; Mirzaei et al., 2014). After each imbalance in the jet, a GW response is seen 6 h later, 2 km higher and downwind of the imbalance region (comparing Fig. 9a-c with e-g). This height and area offset is understandable, as the GWs take time to propagate from 8 to 10 km while drifting horizontally. Taking the mid-range ((min + max)/2) vertical group velocity of ray no. 0 (Sect. 3.3), the GW packet will propagate 2.7 km vertically in 6 h. Throughout all shown time steps the unbalanced region is associated with a GW field downwind.

Figure 7. Vertical wavelength (a) and vertical group velocity (b) along the back trace for ray nos. 0 to 3 as calculated by GROGRAT. The leftmost plot is cut at 8 km in order to achieve readability for ray nos. 0 to 2.

Figure 8. The horizontal phase velocity (dashed) and the background wind (solid) along ray nos. 0 to 3. Where the phase velocity and the wind speed approach one another, a critical layer exists. Altitudes above the ray starting point (Table 1) represent the same starting conditions ray-traced forward.

T21-run (Fig. 9i-p) shows a totally different picture. Firstly, the cross-stream ageostrophic wind indicates a smaller region and a more balanced jet. Small regions of 7.5 m s⁻¹ (no 10 m s⁻¹ region) are seen over the ocean in Fig. 9i and j. No cross-stream ageostrophic wind is observed upstream or over the Greenland mainland 24 h after model initialisation (Fig. 9k). Matching the more balanced jet, GWs are almost nonexistent in T21-run. Only during one time step was the T21-run jet more unbalanced than CTL-run (Fig. 9d and l). At forecast hour 30 (flight time), a large area of imbalance occurs below the north-westernmost part of the flight track (Fig. 9l). This imbalance area (at 350 hPa) is larger in T21-run, with cross-stream ageostrophic wind values exceeding 10 m s⁻¹. A total of 6 h later, T21-run indicated more GWs than in the previous time step (north of the flight track; Fig. 9p), and for the first time a GW field comparable to (if not greater than) CTL-run (Fig. 9h) was observed. CTL-run, in Fig. 9, has larger and more intense cross-stream ageostrophic wind regions when compared to T21-run. Throughout all shown time steps, the greater unbalanced region has the greater GW field. Therefore, we assert that the GWs are directly caused by the increased lack of balance within the jet. In order to find direct evidence for the presence of GWs upstream of Greenland, divergence is considered. Divergence fields are frequently used to differentiate between GWs and balanced motions (Zülicke and Peters, 2006). Besides the emphasis on shorter scales by differentiation, this method removes the geostrophic modes and leaves the ageostrophic flow including GWs.
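The 6 h height offset used in the comparison of the panels follows from the mid-range vertical group velocity of ray no. 0 quoted in Sect. 3.3:

```python
# Mid-range vertical group velocity of ray no. 0 and the resulting ascent
# over the 6 h offset between the imbalance and GW-response panels.
cg_min, cg_max = 0.05, 0.2             # vertical group velocity range (m/s)
cg_mid = 0.5 * (cg_min + cg_max)       # mid-range value, (min + max)/2
ascent_km = cg_mid * 6 * 3600 / 1000   # vertical distance covered in 6 h (km)
```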
We have applied this to the potential source region of the GWs upstream of Greenland for Fig. 10. At 6 h after model initialisation (left), we see a superposition of wave phase fronts parallel to the isobars and perpendicular to the flow between the blue cross and Iceland. The latter is likely to become the waves we observe later with GLORIA. At 18 h (right), the two directions of phase fronts have separated, forming long streaks parallel to 30° W south-west of Iceland and an arc of phase fronts perpendicular to the flow between Iceland and Greenland. These GWs upstream of the Greenland coast are consistent with our hypothesis that flow instability upstream of the terrain is the source of the GW patterns. Furthermore, the source moves with the jet and drifts over Greenland in later time steps.

4.3 CTL-run vs. T21-run: what causes the difference?

If CTL-run has GWs and T21-run does not, then the difference between the two model runs must be the source of the GWs. By now we have established that the jet and its related imbalance are the cause of the GWs. We know that the orography played a role in the balance of the jet, but we are missing a puzzle piece connecting the two. Comparing the two model runs, the jet location and shape remained similar. The centre of the low pressure (being stronger in T21-run) was displaced ≈ 5° westwards (Fig. 11). To find the missing link, the difference in the basic model variables (U, V, temperature, pressure and relative vorticity) was calculated. Subtracting the wind speed and relative vorticity of T21-run from CTL-run produced an interesting dipole structure (Fig. 12). In Fig. 12e-h, green (brown) demarcates an area where the CTL-run wind speed was faster (slower) than in T21-run. In order to investigate the origin of this difference, it is convenient to calculate the relative vorticity:

ζ = ∂v/∂x − ∂u/∂y.   (6)

Figure 9. CTL-run (a-h) and T21-run (i-p) cross-stream ageostrophic wind and temperature residuals at different times. The cross-stream ageostrophic wind (a-d and i-l) was calculated from pressure levels; hence, here it is depicted at 350 hPa (≈ 8.1 km). The temperature residuals (e-h and m-p) were determined based on geopotential heights; hence, this is valid for 10 km. The temperature residuals are depicted ≈ 2 km higher than the cross-stream ageostrophic wind, as the GW structure forms a complex interference pattern with upward- and downward-propagating GWs within the jet. The temperature residual plots are offset by 6 h from the cross-stream ageostrophic wind to allow time for the GWs to propagate to 10 km (a vertical group velocity of 200-700 m h⁻¹ indicates vertical propagation of 2 km between 3 and 10 h). The overlaid wind barbs are as in Fig. 1. The flight path is indicated in grey, and thick black lines represent the coastline. The blue cross indicates the location of the stability discussion in Sect. 3.3. Times (in h) are since model initialisation (on 9 March at 12:00 UTC) + xx h ("xx" as specified in the top left corner of each panel).

Figure 10. Divergence for CTL-run at 6 and 18 h after model initialisation. The divergence is indicated at 8.2 km to correspond to the cross-stream ageostrophic wind at 350 hPa in Fig. 9. The overlaid pressure isolines (thin black lines) give an indication of the geostrophic wind direction. The thick black lines, blue cross and times in the top left corner are as in Fig. 9.

It is well known that an uplift process induces relative vorticity (Holton, 2004). It should be remembered that relative vorticity is only a different expression for the same wind field and that any process which causes changes in vorticity alters the wind velocity field. Indeed, we find the same dipole structure which we observed in the wind velocity in the relative vorticity, but offset by 90°.
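Equation (6) on a regular grid reduces to two finite differences. The sketch below checks the implementation against solid-body rotation, for which ζ = 2Ω everywhere; the grid spacing and rotation rate are illustrative.

```python
import numpy as np

def relative_vorticity(u, v, dx, dy):
    """Relative vorticity, Eq. (6): zeta = dv/dx - du/dy, via centred
    differences. u and v have shape (ny, nx); dx, dy are spacings in metres."""
    return np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)

# Solid-body rotation u = -Omega*y, v = Omega*x has zeta = 2*Omega everywhere.
Omega = 1e-4          # rotation rate (1/s), illustrative
dx = dy = 1.0e3       # 1 km grid spacing
jj, ii = np.mgrid[0:20, 0:20]
y, x = jj * dy, ii * dx
zeta = relative_vorticity(-Omega * y, Omega * x, dx, dy)
```

Because the test fields are linear in x and y, the finite differences are exact here; on real model output the same call gives the discrete vorticity shown in Fig. 12a-d.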
The dipole structure in the wind speed and vorticity represents the changes that occurred in the wind speed and consequently the changes that occurred in the jet. The role of vorticity is known from the classical formation model of a Rossby wave (Holton, 2004). The flow above the ridge is compressed by the elevated orography, changing the potential temperature gradient, which in turn changes the vorticity, deflecting the wind and the synoptic flow. For the early time steps of 12 and 18 h (Fig. 12a, b), there is a vorticity difference between T21-run and CTL-run over Greenland close to the coast. This is the same area where a change in the uplift process would be expected due to the differences between the T21-run and CTL-run topography. This illustrates the effect that the uplift by orography has on the vorticity (and the jet). Later time steps are more complex. We expect that the jet adjusted itself due to the lack of orography; for example, an adjustment of the orientation of the jet would additionally influence the relative vorticity field. Vorticity can also be introduced by dissipation processes. These include blocking (Smith, 1982), flow splitting (Siedersleben and Gohm, 2016), mountain wakes (Grubisic, 2004; Siedersleben and Gohm, 2016; Smith, 2018), breaking GWs (Siedersleben and Gohm, 2016) and wakes at the edges of mountain ranges (Grubisic, 2004). Given the location and synoptic conditions, all of the above processes most probably played a role in producing vorticity, but the dominant process (due to the altitude of the jet) is expected to be the compression of flow above Greenland. Equation (9) in Uccellini and Koch (1987) shows that when the vorticity (term 2 on the right-hand side) changes, the divergence will also change. Uccellini and Koch (1987) and references therein found that an increase in divergence is directly linked to a more out-of-balance jet. We note that the difference in orography from CTL-run to T21-run induced the different vorticity areas (Fig.
12). These changes in the wind field will change the components in the jet, bringing the Coriolis and pressure gradient forces out of balance. This will trigger the jet to spontaneously emit GWs in order to bring the forces back into balance. Trüb and Davies (1995) showed in an idealised model simulation that evanescent GWs form over broad terrain in flow with a Rossby number (Eq. 7) < 0.25. Also, upwind and downwind of the mountain a change in the wind components was observed. For our case, using a wind of 30 m s⁻¹, a Coriolis parameter of 0.00014 s⁻¹ and a mountain half-width, measured from the blue X (Fig. 1) to the northern coast of Greenland, of 1650 km, a Rossby number of 0.13 is obtained from

Ro = U / (f L),   (7)

where U is wind speed and L is mountain half-width (the width of the mountain at 0.5 × mountain height). The large evanescent GW (this should not be confused with the observations in Fig. 3, which are clearly propagating GWs), which is expected to form following Trüb and Davies (1995) over the Greenland terrain, can be one explanation for the rotation (Fig. 13) and the upstream slowdown of the wind. Wind being uplifted by orography will decelerate as kinetic energy changes into potential energy. In geostrophic balance, the Coriolis parameter is multiplied by the wind to obtain the Coriolis force; thus, a slowdown of the wind will change the Coriolis force. The Coriolis force deflects winds to the right in the Northern Hemisphere, acting on the zonal (u) component of the wind. This explains the changes in the zonal component in Fig. 13. Figure 13 is a clear indication of the large horizontal and vertical scales over which the background wind is influenced by topography.

Figure 11. Geopotential heights and winds at 500 hPa valid for 9 March at 18:00 UTC for CTL-run (a) and T21-run (b). Note that the spotted appearance (especially along the coastlines) of CTL-run is a result of the orographic GW drag parameterisation scheme. The rest of the display is the same as in Fig.
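The Rossby number of Eq. (7) with the values quoted above:

```python
# Eq. (7): Ro = U / (f * L), with the values given in the text.
U = 30.0       # wind speed (m/s)
f = 0.00014    # Coriolis parameter (1/s)
L = 1650e3     # mountain half-width (m)
Ro = U / (f * L)
```

The result is below the 0.25 threshold of Trüb and Davies (1995), consistent with the formation of a large evanescent GW over the broad Greenland terrain.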
1, but with no potential vorticity line.

5 Summary

This has been the first GLORIA limited-angle tomography retrieval using the Delaunay methods and the Laplacian regularisation discussed in Krasauskas et al. (2019). Using GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere), on 10 March 2016 we observed GWs over Greenland in an area where multiple possible GW sources exist. Possible GW sources include the jet or wind shear embedded in a breaking Rossby wave, or orography. Observations show the GWs to have a long horizontal wavelength (≈ 330 km) and a short vertical wavelength (≈ 2 km). The temperature amplitude is 4.5 K. The eastward (upwind) tilt of the observed GW phase fronts (Fig. 3) indicates a vertically propagating GW. Using ERA5 reanalysis winds, it is determined that this GW is upward-propagating. Intrinsically, the GW packet propagates horizontally against the wind. Along the back trace, between 12.3 and 7.5 km, the intrinsic phase velocity varies between 25 and 72 m s⁻¹ (ground-based velocity of 6-17 m s⁻¹). This is very fast compared to its vertical group velocity of 0.05-0.2 m s⁻¹, creating very obliquely propagating GWs. The GLORIA-observed horizontal and vertical wavelengths as well as the calculated frequency were used as input into GROGRAT (Gravity-wave Regional Or Global Ray Tracer). Back-traced rays trace into the jet, with one ray (ray no. 3 in Fig. 4) descending to the Greenland plateau. Despite the GWs drifting horizontally for hundreds of kilometres with little to no vertical propagation, these GWs are vertically propagating GWs. Our study illustrates how far vertically propagating GWs can drift horizontally from their source. This reflects the non-physical nature of the single-column parameterisation schemes currently in use for GWs (e.g. Kim et al., 2003; Sato et al., 2012; Ribstein and Achatz, 2016; Amemiya and Sato, 2016; Krisch et al., 2017; Plougonven et al., 2020).
The GROGRAT back-traced rays passed multiple regions where the jet was out of balance. Ray nos. 0 to 2 passed above (and, for ray no. 3, through) elevated values of the cross-stream ageostrophic wind over the mainland, and through elevated values over the ocean (Figs. 5 and 6). The cross-stream ageostrophic wind is an indicator of an imbalance between the Coriolis and pressure gradient forces in the jet exit region (Zülicke and Peters, 2006; Mirzaei et al., 2014). Such an imbalance in the jet exit region is normally brought back into balance by spontaneous emission of GWs. Associated WKB violations were observed for ray no. 3 around 6.5 and 8 km (Fig. 6), another piece of information suggesting the jet as the GW source. This compares well to the hodographs, which indicate downward-propagating GWs between 7 and 10 km (Sect. 3.1), another feature of jet-generated GWs. A numerical experiment was designed to investigate the effect of topography on the GWs in the region of interest. The results of this model experiment, however, were unexpected. Two model runs were compared: one with the usual operational ECMWF forecast settings (CTL-run) and one with a flattened and smoothed orography (T21 topographic field; T21-run). Both model runs produced similar meteorological fields, while T21-run produced virtually no GWs (Fig. 9). At first glance and without further analysis, this would have formed a compelling (but incorrect) argument that the likely GW source would be a typical case of mountain waves, i.e. a direct effect of orography. Figure 12. Difference between CTL-run and T21-run for relative vorticity (a-d) and background total wind velocity (e-h) at 8 km. The revealed dipole structure is closely related to the GW excitation. The dipole structures in the wind speed difference (e-h) and the relative vorticity difference (a-d) fields are offset by half a phase (90°). Isobars (thin black line, in hPa) at the respective level are overlaid to indicate the shape of the jet.
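The cross-stream ageostrophic wind diagnostic referred to above can be sketched as follows. This is a simplified f-plane version with our own function name and finite differencing; the exact procedure of Zülicke and Peters (2006) used in the paper may differ in detail:

```python
import numpy as np

def cross_stream_ageostrophic(u, v, phi, dx, dy, f):
    """Cross-stream ageostrophic wind on a regular grid (f-plane sketch).

    u, v   : total wind components (m/s), arrays indexed [y, x]
    phi    : geopotential (m^2/s^2)
    dx, dy : grid spacing (m); f : Coriolis parameter (1/s)
    """
    # geostrophic wind from the geopotential gradient
    dphi_dy, dphi_dx = np.gradient(phi, dy, dx)
    ug = -dphi_dy / f
    vg = dphi_dx / f
    # ageostrophic residual
    ua, va = u - ug, v - vg
    # unit vector normal (to the left) of the total wind
    speed = np.hypot(u, v)
    nx, ny = -v / speed, u / speed
    # projection of the ageostrophic wind onto the cross-stream direction
    return ua * nx + va * ny
```

For a flow in exact geostrophic balance the diagnostic vanishes; non-zero values flag regions where the Coriolis and pressure gradient forces are out of balance, such as the jet exit region discussed above.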
Coastline, flight path and the times indicated in the top left are the same as in Fig. 9. Further investigation, however, revealed that changing the orography caused the cross-stream ageostrophic wind to differ between the model runs (Fig. 9). For all time steps leading up to flight time, the CTL-run jet produced larger areas of imbalance and higher values of the cross-stream ageostrophic wind (except for time step 30, the flight time). The areas of greater imbalance (including time step 30, when T21-run had a stronger imbalance region) were followed ≈ 6 h later by a stronger GW field. The location of the cross-stream ageostrophic wind and the synoptic conditions observed in our case agree well with a synoptic situation likely to release spontaneous GWs, as discussed, for example, by Uccellini and Koch (1987) and Plougonven and Zhang (2014). It is concluded that the jet, which depends heavily on the orography, is responsible for the observed GWs. A jet is regarded as a localised source in the sense that it releases a spectrum of GWs. Ray-tracing experiments show that a variation of the initial conditions in the forward ray tracing converges to the observed GW field. On the other hand, backward ray tracing is highly sensitive to the launch conditions of the ray. This shows that the excited GW spectrum expands from the source and organises itself, through the propagation conditions, into GW packets of similar characteristics spread over a large area. A similar behaviour (known as frequency dispersion) is known for ocean waves (Holthuijsen, 2007). Figure 13. (Compare Fig. 11d from Trüb and Davies, 1995.) The cross section with altitude is aligned roughly along the jet axis (see Fig. 1 for the jet) at 40° W longitude from 85° N to 50° N. Black indicates the Greenland topography. Large-scale vorticity is used to illustrate the link connecting the orography to the change in jet balance. Subtracting the total wind of T21-run from CTL-run produced a dipole structure (Fig. 12).
A similar dipole structure (with a 90° phase shift) was obtained by subtracting the T21-run relative vorticity from that of CTL-run. A well-established link exists between orography and orography-induced vorticity changes. Vorticity is produced by the compression of air above orography (Holton, 2004). Moreover, vorticity can be produced by dissipation, which can include blocking, flow splitting, mountain wakes, GW breaking or the edges of mountain ranges (Smith, 1982; Doyle and Shapiro, 1999; Grubisic, 2004; Siedersleben and Gohm, 2016; Smith, 2018). All of the above-mentioned processes are expected to be present during this case, but only GW breaking and the compression of air have the capacity to directly deposit vorticity within the upper-tropospheric jet. The difference fields show that flow over broad terrain is directly responsible for large changes in the jet (Fig. 13). These changes would bring the jet out of balance, triggering the release of GWs. Based on the chain of arguments presented above, we find that the observed GWs were excited by the jet. The dynamics of the jet were heavily influenced by orography through large-scale vorticity, forced by flow over broad terrain. The connected changes in the wind field often occurred upwind of the orography. According to Plougonven and Zhang (2014), our understanding of GWs from jets is still too incomplete to explain all the dynamics. With that in mind, we acknowledge that the hypothesis presented here might not be the only feasible one. With the exception of the modelling study of Trüb and Davies (1995), we could not find literature directly connecting orography with the release of GWs which are not mountain waves. Trüb and Davies (1995) go further, saying that observational evidence of GWs linked to orography-induced ageostrophic imbalanced flow "will be difficult" to obtain. Furthermore, we could not find any studies linking topography with upwind GWs.
As we could find no observational studies that observed GWs from this orography-jet combination, we believe this to be the first documented observational evidence of this mechanism. A marginally balanced jet approaching orography is a common feature at middle and high latitudes. Therefore, it is likely that this jet-orography interaction brings the jet out of balance on a frequent basis in many regions. Gravity wave generation by this jet-orography mechanism is capable of producing spontaneous adjustment regions over Greenland, Scandinavia, Antarctica, South America, New Zealand and elsewhere. In numerical weather prediction models, most of these GWs would be resolved, but a large part of the spectrum would not be accounted for in climate models, which need to be operated at a lower resolution for long-term runs. Parameterisation schemes that could represent GWs from such excitation processes do not exist. These GWs are also difficult to diagnose. In statistical studies of observed or model-resolved GWs, GWs excited by the suggested orography-jet mechanism could frequently be misinterpreted as classical mountain waves despite having quite different characteristics. This article hence illustrates how challenging it is to disentangle the sources of GWs. Data availability. The GLORIA limited-angle tomography retrieval data are available on the HALO database (Geldenhuys and Ungermann, 2020). The ECMWF specialised runs (CTL-run and T21-run) can be obtained from Polichtchouk (2021). ERA5 fields can be obtained from the Copernicus Climate Data Store (Copernicus Climate Change Service, 2017). Author contributions. MG performed figure production, all data analysis and the write-up of the article. PP supervised the research and helped extensively with the paper. IK performed flight planning, initiated the idea of a Greenland paper and supported the initial analysis. CZ assisted in many discussions and with identifying the out-of-balance jet regions.
JU supervised the research, assisted in the GLORIA retrieval process and produced the level 1 dataset. ME contributed valuable knowledge and experience to the discussions. FF and MR obtained funding for the campaign. FF (and team) also managed the GLORIA sensor to obtain measurements and managed the data. MR also contributed to discussions on the paper. All authors contributed to the revision of the paper and its figures. Competing interests. The authors declare that they have no conflict of interest. Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Acknowledgements. The authors gratefully acknowledge the computing time granted by the JARA Vergabegremium and provided on the JARA Partition part of the supercomputer JURECA (Jülich Supercomputing Centre, 2018) at Forschungszentrum Jülich. Markus Geldenhuys would like to acknowledge everyone that contributed to the campaign, especially the FX team and the pilots. Much of the article is based on the model runs produced by Inna Polichtchouk; thank you for the model experiment and support. Markus Geldenhuys would also like to thank Andreas Dörnbrack for the many interesting discussions we had regarding the case, as well as Eshané Geldenhuys for her undying support. We would like to thank two anonymous reviewers for their valuable contribution to the article. The article processing charges for this open-access publication were covered by the Forschungszentrum Jülich. Review statement. This paper was edited by Geraint Vaughan and reviewed by two anonymous referees.
What Connects Dark Matter and Black Holes? Dark matter is a major component of the universe, about six times more abundant than ordinary visible matter. We measure the effects of its mass, but it escapes the telescopes. It has the particularity of emitting no radiation and interacting only by the action of gravity. The main purpose of this article is to try to answer what dark matter is: we conjecture that it is composed of magnetically charged neutrinos, true magnetic monopoles. But that requires a huge conceptual leap: Maxwell's laws must be inverted and the electric charge becomes a magnetic charge. Asymmetric "reversed" Maxwell's laws would provide the "dark" magnetic charge that would replace the electric charge. The very form of the Dirac equation, which imposed on ordinary matter that the particle carries an electric charge and obeys the principal properties of the electron, would impose in the dark matter that the "dark" particle obeys the main properties of a neutrino associated with a magnetic charge. The second aim of the article is to show that dark matter is derived from black holes, mainly from active supermassive black holes. This requires a second conceptual leap: the horizon of the black hole undergoes a high temperature and an intense pressure of magnetic fields which cause a blackout and a phase transition (or broken symmetry) when the matter crosses the horizon. The result is a reversal of Maxwell's laws: a magnetic charge is substituted for the electric charge, and the electric current becomes a tributary of the magnetic current. A third important conceptual leap follows: sterile magnetic neutrinos created inside the black hole would cross the horizon to the outside to constitute dark matter. Introduction The problem of dark matter is well known: observational evidence and theoreti-with micro black holes connected together by a quantum phenomenon.
Stars and galaxies are thought to have been created by jets of matter expelled by supermassive black holes. The black holes that were at first taken for the worst cosmic monsters would actually be the greatest builders [2]. The idea that black holes are dark matter has been proposed by several theorists. They first thought of massive primordial black holes formed in the first second of the universe. But it would take billions of them to explain the missing mass, and we would see their influence on the motion of the stars. In the 1990s, they then thought of micro black holes, of the order of a nanometer, but weighing one hundredth of the mass of the Moon. Except that their evaporation would have been detected by the gamma-ray satellites in the 2000s. They are currently studying the possibility of primordial black holes weighing between 20 and 100 solar masses. Gravitational wave detectors have seen the fusion of objects in this category in recent years. Our hypothesis is not that black holes are dark matter, but that dark matter would consist of substances from black holes, including sterile neutrinos associated with a magnetic charge. We say that a blackout happens at the event horizon of a black hole (due to the enormous pressure and high temperature) that reverses Maxwell's laws, transforms the electric charge into a magnetic charge, and makes the emitted particles invisible and imperceptible. Active black holes are internally filled with "dark" energy. According to the theory of Relation [3] [4], this energy would be the dark energy of the beginning (amalgamated with the negative kinetic energy, with the cosmological constant) which has dissolved to form the ordinary matter with positive energy, according to the principle of Compensation. This energy is the same as that of the polarized vacuum, except that it is all the more excited as the temperature is high.
The strong fluctuations cause the expulsion of sterile neutrinos from inside the black hole horizon at a relativistic speed close to the speed of light. The emitted particles become slower and "magnetized" with cooling. They will automatically be magnetic monopoles. This paper will address three links between dark matter and the black hole. Magnetic walls, generated by the high temperature and the intense pressure of the magnetic fields, would disturb any object meeting them. There would be a concomitant blackout with an inversion of the laws of electromagnetism and the appearance of a magnetic charge that squeezes out the electric charge. The black hole would produce a dark substance similar to that of dark matter (Sect. 3.2.1 to 3.2.5). In Section 4, "Creation of Magnetic Sterile Neutrinos inside the Active Black Holes that can Cross to the outside to Constitute Dark Matter", we posit that the black hole space is filled with dark energy. In the theory of Relation, there was at the beginning a maximum energy (identified with dark energy) which declined by transforming itself into ordinary matter. The black hole does the opposite, turning ordinary matter into energy (Sect. 4.1). Some will feel that this paper deals with the problem of dark energy in a manner that is not consistent with the standard model of particle physics and general relativity. We argue, on the contrary, that it is the problem of dark energy that is inconsistent with the standard model of particle physics and general relativity, and we explain why (Sect. 4.2). It is logical to expect that the gravitational energy density inside the black hole can easily convert into virtual pairs of particles and materialize them. This enormous energy would behave like an intense accelerator of materialization and annihilation. "Dark" particles and antiparticles could escape from the black hole. Third conceptual leap: the sterile "magnetic" neutrinos could be the dark matter (Sect. 4.3).
In Section 5, "Efforts of Four Researchers", we highlight some aspects of the work of four researchers who are contributing to the extension of knowledge about dark matter, magnetic monopoles, black holes and sterile neutrinos. In Section 6, "Heat, Entropy and Information Have Everything to Do with Black Holes", after describing the problem of entropy (Sect. 6.1) and the information paradox (Sect. 6.2), we presume that not only information escapes from the black hole, but also the destroyed matter (Sect. 6.3). In Section 7, "Comments and Conclusion", we argue that dark matter, being different from ordinary matter, generates a crisis. A major conceptual overhaul is needed. It concerns, in addition to dark matter, electromagnetism, the sterile neutrino and black holes. A final summary serves as a conclusion. Omnipresence of Dark Matter in All Regions of the Universe The suspicion of the existence of dark matter is due to the astronomer Fritz Zwicky in the 1930s. He noticed a dynamic anomaly within each cluster of galaxies whose mass he proposed to determine by measuring the speed of the galaxies constituting these clusters. The speeds of the galaxies were too great to be balanced by the gravitational pull of the cluster, which should otherwise have dispersed. He came to the conclusion that the mass of these clusters must be greater than all that was observable and that a hidden matter must be present in each cluster. He estimated that the hidden mass represents more than 90% of the mass of the cluster. Around 1960, the astrophysicist Vera Rubin, while studying the dynamic behavior of gaseous clouds orbiting the center of certain galaxies, discovered that this unknown dark matter was also distributed outside the clusters. These clouds are sometimes located at distances that far exceed the visible radius of their galaxies, and the rotational speeds of these galaxies should have decreased with distance from the center of the galaxy.
She realized that the speed of rotation of the gaseous clouds was independent of their distance from the galaxy. If all the matter, visible and dark, had been concentrated in the galaxies, the speed of rotation of these clouds should have been smaller the greater their distance from the center. All these experimental facts testified to the presence of dark matter, uniformly distributed not only within these galaxies, but also in vast external volumes. Observations on larger scales, of clusters of galaxies that cover a few million light-years, have confirmed the presence of dark matter [5]. The Candidates Theorists consider radically different paths to explain this unknown fluid, distinct from ordinary matter, that bathes the whole cosmos and whose nature remains to be explained. Over the decades they have scrolled through several candidates: wimps (neutralino, Kaluza-Klein particle, little Higgs particle), wimpzillas, axions, machos, black holes, sterile neutrinos, etc. Wimps (weakly interacting massive particles) are the preferred candidates. These hypothetical particles have in common that they are more massive than the particles known today, and they are supposed to be able to interact with the latter only via the force of gravity and the weak nuclear force. They are ideal candidates for dark matter, as it is assumed that they would have just the abundance required to explain the current structure of the universe. Several distinct theories, all supposed to correct the imperfections of the standard model that describes particle physics, predict different types of wimps. For twenty years, astrophysicists and particle physicists have given themselves the means to discover them.
Whether the detection is direct (the aim is to detect the impact of a wimp on a nucleus of ordinary matter in an underground laboratory) or indirect (the products of the collision of two wimps are tracked in galaxies, in the heart of the Sun, in cosmic rays, in the LHC at CERN), no dark matter particles have been detected to date [6]. A Variant of Electromagnetism It seems that particle physicists are living a nightmare scenario. It turns out that they have found no new particles beyond the standard model with their accelerators. They have taken a first look everywhere and found nothing. They still continue to move forward. They rely on the great diversity of detection approaches to hope to one day get their hands on the right particle(s). We believe that this crisis must be solved by a major conceptual overhaul. Without being guided by theoretical prejudices, we propose a variant of electromagnetism that will provide an explanation for dark matter. Why would something remarkable and unprecedented not have occurred at the heart of the electromagnetic theory that would make it incapable of emitting or absorbing electromagnetic radiation? There is almost complete symmetry between electrical and magnetic phenomena. The difference lies in the fact that no free magnetic pole exists (north or south), while there are free electric charges (positive or negative): the two types of poles can never be physically separated. This makes us consider magnetism as a secondary phenomenon whose existence depends on the flow of an electric current [7]. Maxwell's four equations fully describe electromagnetic behavior on a very large scale, including that of light. The electromagnetic field lives in the space between the lines of force of the electric field E and the magnetic field B, and we thus have c = 1/√(εμ) (ε: vacuum permittivity; μ: vacuum permeability). Suppose a severe astrophysical event has occurred that would cause the visible light to cease.
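The relation between the vacuum constants ε and μ mentioned above fixes the speed of light, c = 1/√(εμ); a quick numerical check with the SI values:

```python
# The speed of light follows from the two vacuum constants: c = 1 / sqrt(eps0 * mu0).
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu0 = 1.25663706212e-6    # vacuum permeability, H/m

c = (eps0 * mu0) ** -0.5
print(round(c))  # 299792458 (m/s)
```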
And to provoke this darkness, it would have been necessary that the electric charge no longer play its role, that its physical magnitude become another (the letter q, in Coulomb's formula, would no longer play the exact role played by the letter m in Newton's formula). This last possibility has already been considered by Paul Dirac, while he was wondering about the reason for the existence of the smallest electric charge. Dirac's Theory Establishes a Connection between the Smallest Electrical Charge and the Smallest Magnetic Pole Although in classical electromagnetism the existence of magnetic monopoles is not compatible with Maxwell's equations, and although special relativity allows us to demonstrate all Maxwell's laws, including that which predicts the non-existence of magnetic monopoles, Paul Dirac demonstrated in 1931 that the existence of magnetic monopoles was compatible with Maxwell's equations under the hypothesis of the quantization of the electric charge [8]. His theory establishes a connection between the elemental electric charge (that of the electron) and the hypothetical elementary magnetic charge. It showed a symmetry between electricity and magnetism which is still completely foreign to established conceptions. We know that the smallest electric charge exists experimentally. With a purely electronic quantum condition, we obtain the value of e (in the CGS system) given approximately by ħc/e² ≈ 137 (2). We now turn to the corresponding magnetic relation (3) and briefly illustrate the character of the relation [9]. Here we consider that poles exist and that one isolated plate (a capacitor plate holding a magnetic pole charge) holds a pole density of σ_μ0 poles per unit area.
By analogy with the calculation of the electric field from such a plate holding an electric charge density, and using the symmetry between the electric field E and the magnetic field B, we find that the magnetic field near the plate is B = 2πσ_μ0. The electric charge moving with a velocity v will move in a circle of radius r if the centrifugal force on the charged particle is balanced by the magnetic force, mv²/r = evB/c. From the quantization of angular momentum, mvr is equal to nħ, where n is an integer. Substituting the pole density expression for B then ties the product of charge and pole strength to nħ. Although this estimate is crude, it serves to illustrate the connection between the quantization of angular momentum and the quantization of charge and pole strength. Quantum mechanics requires a quantization of charge, if monopoles exist. Instead of finding a purely electronic quantum condition such as (2), Dirac found a reciprocity between electricity and magnetism, a connection between the magnetic pole quantum and the electronic charge: ħc/(eμ0) = 2 (3). His theory contains no arbitrary characteristic, gives no possibility to modify it, and would have the effect of creating a magnetic monopole. For that, the theory requires a quantization of the electric charge, because any charged particle moving in the field of a pole of intensity μ0 must have as its charge an integer multiple (positive or negative) of e, so that the wave functions describing the motion may exist. We might still ask why, if poles and charges are symmetric in principle, we have charges and not poles. If the universe were constructed so that there was no electric charge, but only magnetic poles with the same value of pole strength as the fundamental charge strength, we believe that this universe would be indistinguishable from ours.
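The two numbers invoked here, the experimental value 137 and the theoretical value 2, combine into the familiar statement that a unit Dirac pole couples far more strongly than the electron. A quick numeric sketch (the variable names are ours):

```python
# Dirac condition (Gaussian units): e * g = n * hbar * c / 2, so for n = 1:
#   hbar*c / e^2     ~ 137  (the "experimental value", Eq. (2))
#   hbar*c / (e * g) = 2    (the "theoretical value", Eq. (3))
# hence the pole strength exceeds the electron charge by g/e = 137/2 ~ 68.5.
alpha = 1 / 137.036                 # fine-structure constant
hbar_c_over_e2 = 1 / alpha          # ~ 137
hbar_c_over_eg = 2                  # Dirac's condition with n = 1
g_over_e = hbar_c_over_e2 / hbar_c_over_eg
print(round(hbar_c_over_e2), round(g_over_e, 1))  # 137 68.5
```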
If we could communicate with the inhabitants of that universe (who are made of protons, which have no electric charge but hold a unit magnetic pole strength, and electrons, with no charge but with an opposite magnetic pole strength), we could not determine whether they live in a magnetic universe or an electric universe as we do [9]. Our Theory for Dark Matter: Sterile Particles Associated with Magnetic Monopoles And if the universe were constructed in such a way that there is no electric charge, but only magnetic poles not having the same value of pole strength as the fundamental charge strength, so that the left-hand side of Equation (2) or (3) no longer corresponds to the experimental value 137 or the theoretical value 2, we think we would be in a total darkness that would have the appearance of a dark matter. We can imagine that the elements that make up this dark matter would be composed of elements charged magnetically, with electricity and the electric field considered as a relativistic consequence of the magnetic field, which involves reversing Maxwell's laws. Inversion of Maxwell's Laws The experimental dissymmetry of Maxwell's equations with respect to the electric-magnetic duality is related to the fact that the electric field is generated by the usual charges, which give it a non-zero divergence, while the magnetic field is always of zero divergence because of the absence of a corresponding punctual charge. Experimentally, the only source of the magnetic field comes from the existence of an electric current, that is to say a motion of electric charges. In the reversed system, the equations are still asymmetrical but no longer subject to the electric charge. Equation (14), ∇·E = 0, indicates that there is no electric charge at any point in space. Basically, moving charges are equivalent to currents. But because the reversed Maxwell's equations assume that there is no electric charge in dark matter, there is no electric current J_e on the right side of Equation (15), ∇×B = εμ ∂E/∂t.
Equations (13) and (16) carry the magnetic sources. Because the Maxwell's equations above assume that there is a magnetic charge, there is a magnetic current J_μ0 on the right side of Equation (16), ∇×E = −∂B/∂t − J_μ0, while Equation (13) reads ∇·B = ρ_μ0. Therefore, the absence of electric charge and the presence of magnetic charge reverse the asymmetry. In fact, the electric charge would become a magnetic pole, which would result in a reversal of attribution, so that electricity should be considered as a secondary phenomenon whose existence depends on the flow of a magnetic current. The overthrow, in addition to the darkness caused, would in a way mean that there would be free magnetic poles where there would be no more free electric charges. Magnetic monopoles would exchange "dark photons". Note: there is no question of continuing by presenting a critical analysis of the hypothesis of Maxwell's "inverted" laws, because these must not be considered in an absolute sense, as if the nature of dark matter had to conform precisely to these laws. It is only a simplistic schema of reality, a kind of approximation, an image. As such, it corresponds to reality, even if it does not identify with reality. Magnetoelectric Force The inversion that we have just presented shows that there would be a dark magnetoelectric force (ME) with a dark photonic wave, just as there is an electromagnetic force (EM) with a photonic wave. Dirac's theory ensures that the magnetic monopole can coexist with an electric charge in ordinary matter. In that theory, Maxwell's laws are not reversed; they are completed in order to obtain a perfect symmetry. According to our hypothesis of the inversion of the laws of Maxwell, by contrast, the magnetic pole would replace the electric charge: there would only be a magnetic pole, which has evicted the electric charge. We suggest the existence of an electric charge (the known electric monopole) in ordinary matter and a magnetic pole (an unofficial lightweight magnetic monopole) in dark matter.
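For reference, the "completed", duality-symmetric system corresponding to Dirac's picture can be written as follows (SI-style units; ρ_μ and J_μ denote the magnetic charge and current densities, a notation we adopt here since sign and unit conventions for the magnetic-source terms vary in the literature). The "inverted" laws of this section then follow by setting ρ_e = 0 and J_e = 0:

```latex
\begin{aligned}
\nabla\cdot\mathbf{E} &= \rho_e/\varepsilon, &
\nabla\cdot\mathbf{B} &= \rho_\mu, \\
\nabla\times\mathbf{E} &= -\frac{\partial\mathbf{B}}{\partial t} - \mathbf{J}_\mu, &
\nabla\times\mathbf{B} &= \mu\varepsilon\,\frac{\partial\mathbf{E}}{\partial t} + \mu\,\mathbf{J}_e .
\end{aligned}
```

With the electric sources removed, the electric field keeps zero divergence everywhere while the magnetic field acquires the point sources and currents, which is precisely the reversal of the usual asymmetry described in the text.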
With rare exceptions, there is no coexistence of the two charges in ordinary matter or in dark matter. There would be no electric monopole in dark matter, just as there would apparently be no magnetic monopole in ordinary matter. "Magnetic" Sterile Neutrinos To penetrate the mystery of dark matter, we think that what is needed is a different electromagnetism, a magnetoelectrology, with the necessity of qualifying this variant as a "new force". And also a new particle: the "magnetic" sterile neutrino. Physicists know three types of neutrinos. Since the 1970s, many researchers have assumed that there is a fourth type, a "sterile" neutrino, much heavier, but which interacts even less than the others with ordinary matter. It is a hypothetical type of neutrino that does not interact via any of the fundamental interactions of the standard model of particle physics except gravity. It is a right-chirality neutrino or a left-chirality antineutrino that can be added to the standard model and can take part in phenomena such as the mixing of neutrinos. We assume that there is a fifth type of neutrino. Our hypothesis is that there would be a sterile neutrino linked to a magnetic pole, that would belong to dark matter and that would be a magnetic monopole. The term magnetic sterile neutrino is used to distinguish it from the sterile neutrino. The mass of the neutrino in both cases is unknown and could take any value between less than 1 eV and 10^15 GeV. Dark matter would consist of invisible magnetic sterile neutrinos that swarm in the universe and exert a gravitational attraction everywhere [11]. This sterile neutrino would depend on a magnetic pole that would be undetectable since it is not an integer multiple of the conventional electric charge. Where would they come from? Our idea is that they come from active black holes, which risks upsetting our vision of black holes even more.
Black Holes Revolution outside the Event Horizon Black holes are giving birth to a new genesis of the universe at every level. Our universe would have emerged from a primordial black hole that would be the big crunch of another universe. Initially, black holes were only a mathematical singularity, a cosmic curiosity very difficult to observe. These gravity wells come from general relativity; they are an effect of the existence of a spacetime curved by the masses, and it was only in 1968 that John Wheeler coined the expression "black hole". As early as 1916, Karl Schwarzschild found the solution of general relativity that describes the gravitational field around a star. The singularity at its heart was seen as impenetrable before becoming, after the 1970s, the horizon of the black hole that swallows everything. From the 1990s, with the progress of observations, it became necessary to believe that they really exist. Astronomers built a bestiary of destructive black holes. They would be everywhere: in the primordial universe, at the center of our Milky Way and of all galaxies. Cosmologists and physicists see the concept of the black hole as a means to marry the seemingly irreconcilable theories of relativity and quantum physics. The proof of their existence came on September 14, 2015, when the LIGO experiment for the first time captured gravitational waves caused by the merger of two black holes. Today, it seems that those which were thought to be the worst cosmic gluttons have become the great architects of the universe. They would have structured the primordial universe, modeled galaxies and lit up stars [2]. Supermassive black holes have been observed everywhere in the cosmos, and astrophysicists believe that one of them sits at the center of most galaxies. They concentrate millions of times the mass of the Sun and would have created stars and galaxies.
Observations made in the late 1990s were the first to reveal three creative roles of supermassive black holes: First, a role of regulator, guardian of galaxies. Astrophysicists first realized that in nearby galaxies, the central black hole always seemed to weigh about 1/1000 of the stellar bulge that shelters it, a sign that the two are linked. Then, from 2005, it became apparent that the energy of the most energetic black holes can modulate star formation, stopping the growth of galaxies that otherwise would be enormous. Plasma winds ejected by the disks of black holes could control the growth of galaxies. These winds reach 1/10 the speed of light and carry enormous amounts of energy. In this way, today, 80% of the gas in the universe is found outside galaxies. Thus, the black holes play a regulatory role, soothing through their winds a cosmos too eager to generate stars. Second, a role of cleaner (ionizer) of the primordial universe. They could also, in other circumstances, have played the opposite role: concentrating the gas clouds and boosting star formation, they would have been among the main actors of the reionization process a few hundred million years after the big bang. Their X-rays, much more energetic than the UV of stars, could have escaped from galaxies and ionized the intergalactic medium at greater distances than the stars' UV. Even smaller black holes, the stellar black holes a few dozen times the mass of the Sun, could have participated, because in this primordial universe they were always accompanied by a star whose matter they vampirized, which would have sustained their X-ray production at a high rate over the long term. They could thus disperse the thick neutral hydrogen fog by ionization of the atoms and make the cosmos transparent. Third, a trigger role in star births through their jets.
A supermassive black hole compresses and heats the gaseous material that accumulates and revolves around it so much that this burning plasma begins to radiate and create an intense magnetic field. The radiation pressure exceeds gravity. Strong winds are generated in all directions, and some of the matter escapes from the poles in the form of two fine jets traveling at several hundred km/s. The winds blow away the galaxy's gas and regulate its star production. The jets strike distant clouds of gas, initiating their condensation into new stars. This is suggested by the observation of some active galaxies. It seems proven that, with their jets, black holes form stars. It was discovered that a surge of new stars followed the direction of the jet emitted by a galaxy's central black hole. The jet is so powerful that in a short time it can form 10% of a galaxy, like a spider spinning its web. Although there are only a handful of examples, black holes are now taken into account in the theory of galaxy formation. Black Holes Revolution inside the Event Horizon The three previous roles involve the photon sphere and the accretion disk outside the event horizon. We propose here a fourth role, a role of creator of dark matter, through their emission of "magnetic" sterile neutrinos. This role concerns the internal space of the black hole, between the horizon and the center of the black hole. The Classic Horizon of the Black Hole Roger Penrose wrote a short article in 1965 in the journal Physical Review Letters, where he described the problem of the singularities associated with star implosions and demonstrated a mathematical theorem: when a star collapses to the point where gravity becomes strong enough to form an apparent horizon around it that turns back the photons trying to emerge, nothing can prevent gravity from becoming strong enough to create a singularity. Therefore, any black hole must contain a singularity.
In the late 1960s, Penrose searched unsuccessfully for a mathematical example of a collapse that produced a naked singularity. In 1969, he issued the conjecture of cosmic censorship: no collapsing object can give birth to a naked singularity. If a singularity is formed, it is dressed with a horizon that makes it invisible from the outside world [12]. This apparent horizon (like a spherical membrane) is in fact the Schwarzschild horizon. It is not singular in the strong sense: space-time is defined there, and it is permeable to incoming particles; it is a unidirectional membrane. This membrane is a sphere formed of the light rays that define its surface. At the center of Schwarzschild's solution lies the true singularity, the heart of the black hole. The Schwarzschild sphere is an apparent singularity called the horizon (at the Schwarzschild radius r_s = 2GM/c^2). But if we take a quantum view of cosmic censorship, the collapse of the structure at the level of a singularity must not affect any physical measurement. A description of the particle in free fall should allow the particle to be driven through the horizon to the center by a path integral [14]. The most general model of black holes, according to general relativity, says that stars imploding towards the black hole state must, in passing their horizon, lose all their deviations from spherical symmetry, "all their hair", all their characteristics (except three parameters: mass, charge, angular momentum), and therefore, for example, their protuberances, their asymmetries and their magnetic field; they must, willingly or forcibly, become "bald". This lost structure must first be evacuated in the form of radiation, as an emission of gravitational waves. Power Failure at the Crossing of the Horizon It may be said that this model predicts the emission of gravitational waves, but a particle of the collapsing star does not end its life on the horizon; it finally dies in the center, on the true singularity.
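The Schwarzschild horizon discussed above sits at the radius where escape would require the speed of light. As a minimal illustrative sketch (our own, using standard constant values, not a calculation from the paper), the horizon radius r_s = 2GM/c^2 can be evaluated for one solar mass:

```python
# Illustrative sketch: Schwarzschild radius r_s = 2*G*M/c^2
# Standard constant values; not part of the paper itself.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Event-horizon radius of a non-rotating, uncharged mass."""
    return 2.0 * G * mass_kg / c**2

r_s = schwarzschild_radius(M_sun)
print(f"r_s for one solar mass: {r_s / 1000:.2f} km")  # about 3 km
```

This makes concrete how compact the horizon is: the entire Sun would have to fit inside a sphere roughly 3 km in radius to become a black hole.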
At the crossing of the horizon, it seems to simply disappear on one side to reappear on the other side. It becomes invisible to any observer left outside the sphere. Everything happens as if the sudden invisibility of the light were caused by a blackout that begins on the horizon. How can we understand what happens to a particle, whether material or luminous, immediately after it has crossed the horizon? One could perhaps understand it by considering an ordered magnetic field that has settled on the "enlarged horizon" of a black hole located at the center of a quasar. An enlarged horizon is a fictitious area just outside the horizon, while an "inner horizon" is a fictitious membrane just inside it. In fact, the lines of the field do not cross the real horizon, but wrap around it and form loops. The density of these loops and the intensity of gravity are such that they cause a symmetry breaking in the membrane. There is a charge reversal; the magnetic pole replaces the electric charge. It is the blackout. Magnetic Field of the Horizon and Energy of the Quasars To provide the energy of a quasar, a magnetic field should cross the enlarged horizon throughout the life of the quasar. There is indeed a source, outside the black hole, likely to generate such a field: the interstellar gas attracted to the black hole. The interstellar gases are the seat of magnetic fields and, when the gases heat up and ionize near a black hole, they form a plasma disk where the field lines are "frozen". The rotation and turbulence of this plasma in accretion entangle the field lines, some of which settle on the enlarged horizon during the fall of plasma fragments. In the membrane, surface currents continuously dissipate the energy of this chaotic field, leaving only "clean", ordered field lines that penetrate the membrane at the South Pole and exit at the North Pole.
After an ordered field line has been deposited on the black hole, it no longer disappears: the plasma of the accretion disk and the magnetic field make it persist as long as the disk does not explode and is not swallowed by the black hole. The black hole acquires a magnetic field more than 10,000 times more intense than the Earth's magnetic field. At the horizon, the electric charge disappears and becomes the magnetic pole. We go from light to darkness; we witness a kind of reversal of Maxwell's laws, as described above [15]. Where Electromagnetism Becomes Magnetoelectricity With the reversal of charges, electric currents become electric fields and magnetic fields become magnetic currents. The result is that on the horizon, in this region of space-time squeezed by electromagnetic tidal forces at their peak, light suddenly ceases to act. To illustrate the phase transition, consider that electromagnetism, before breaking through the horizon, is like a piece of wood impregnated with water. In this analogy, wood is magnetism and water is electricity, and both (wood and water; magnetism and electricity) are intimately intertwined, unified. Close to the horizon, according to our best understanding, the laws of quantum mechanics begin to combine with those of Einstein's general relativity and already begin to change the "rules of the game". (They will be totally changed at the singularity, and the new rules will be called quantum gravity.) The horizon, and the laws of gravity combined with those of the quantum mechanics that govern it, are like a fire into which wood swollen with water is thrown. The fire boils the water out of the wood, leaving the wood alone and master. On the horizon, the laws of quantum gravity expel electricity, leaving magnetism alone and resistant. Electricity is reduced to a current without conduction, extinguished [12]. One could also explain what happens with invariance groups that have proven themselves in quantum mechanics.
They are algebraic transformations that retain the form of the equations and reveal physical properties. However, if we imagine a particle that crosses the horizon of the black hole, it "oscillates": it has a charge that is no longer electric but magnetic; it is the magnetic monopole. The particle described by the Dirac equation thus acquires another gauge invariance: an inverted gauge invariance. The Dirac equation might have a gauge invariance that slightly changes the phase of the wave described by the equation, but the new particle does not just change "phase"; its charge is no longer an integral multiple of the charge of the electron. It no longer interacts with electromagnetism. Nevertheless, there are other "dark" invariance groups that fall under "magnetoelectrology" and that can intervene in accessible disintegration phenomena, because these monopoles are not only magnetic but endowed with weak interactions. To summarize, on the periphery of the black hole, there is a horizon: a region where light no longer works and where electromagnetism has given way to magnetoelectricity. This means that with the reversal of charges, electric currents become electric fields and magnetic fields become magnetic currents. In pictorial language, it will be said that electric fields are "frozen" in magnetic currents. Light Can Escape from the Black Hole In the context of general relativity, a black hole is defined as a gravitational singularity occulted by a horizon of events. It is a celestial object so compact that the intensity of its gravitational field prevents any form of matter or radiation from escaping [16]. According to Hawking, these space ogres, capable of devouring galaxies and making light disappear, could actually release some quantities of matter and particles. The matter and the energy could actually be held temporarily, then modified, before being released into space.
A phenomenon that would be inversely proportional to the mass of these objects: the smaller a black hole, the larger the quantities of matter it would let escape. Black holes would not be so "black" as most cosmological models portray them. Dark Energy inside the Black Hole The deep meaning of the discovery of Hawking radiation emanating from black holes is that the quantum vacuum is polarized by the very intense gravitational field prevailing in the vicinity of a mini black hole; the gravitational energy of the latter is converted spontaneously into particles. Quantum vacuum means minimal energy. According to the theory of Relation [3] [4] [17], there was at the beginning a maximum energy (identified with dark energy) which declined by transforming itself into ordinary matter. The black hole does the opposite, turning ordinary matter into energy. This enormous energy is like a vacuum energy (which has become very dense) inside the black hole, since it is no longer materialized. It is logical to expect that the enormous density of gravitational energy (which has a colossal mass) inside the black hole can easily convert into virtual pairs of particles and materialize them. Following the reversal of the charges explained above, which converts visible ordinary matter into invisible matter, let us imagine a distribution of dark matter inside a black hole whose mass increases by engulfing a whole astrophysical jumble of gas pockets, stars, etc. As the mass of the black hole increases, the dark matter sees its distribution contract, becoming more compact and denser. The black hole accumulates a colossal dark mass that is equivalent to dark energy. The latter means a gigantic density of matter-dark energy inside the horizon. This enormous energy would behave like an intense accelerator of annihilation [18]. The boost in the mass of black holes thus augments the rate of annihilation of the dark energy-matter inside the horizon.
In principle, because the density of dark matter is prodigiously high (inside the supermassive black holes, and even the intermediate-mass black holes with a mass between a hundred and a million times the mass of the Sun), the probability exists that the rate of annihilation of particles of dark matter will accelerate to the point of injecting outwardly, out of the horizon, ample energy to constitute the unknown substance known as "dark" which fills the universe. To describe the states of a magnetoelectric field, we will use an intra-horizon space: a first shell just inside the horizon, superimposed on several concentric shells leading to the central point, called a singularity. In this intra-horizon space, the magnetoelectric field contains a huge concentration of energy, allowing the creation of pairs of particles. The Idea of Dark Energy We have just seen that according to the theory of Relation there was at the beginning a maximum energy (identified with dark energy) which declined while being transformed into ordinary matter. Some will feel that this paper deals with the problem of dark energy in a way that is not consistent with the standard model of particle physics and general relativity. An important nuance: it is the problem of dark energy which is inconsistent with the standard model of particle physics and general relativity. Let's take a closer look. According to official cosmologists, 70% of the contents of the universe are made up of a mysterious, undetectable and anti-gravitational dark energy that accelerates the expansion of the universe. It was through the observation of distant supernovae, which constitute "standard candles" intended to measure the universe on a large scale, that they were able to deduce that dark energy existed. The latter was not predicted by any theory. It was introduced as a simple parameter in the equations of quantum particle physics and general relativity, which are two diametrically opposed theories.
The result is that the dark energy, which looks like the energy of the quantum vacuum, seems to be 10^120 times too strong compared to what the observations indicate. This gigantic gap is at the heart of the greatest crisis in contemporary physics. In our opinion, we reached this gigantic gap, or rather this unacceptable error, when astronomers, to measure the distance of very distant supernovae, assumed that the intrinsic luminosity of supernovae is the same for all, independent of the particular object measured. With this gratuitous hypothesis, impossible to prove, they came to the conclusion that the expansion accelerates instead of slowing down (slowing down is what it does in an honest Friedmann model, and this is what is predicted in the equation of the theory of Relation). They then thought it wise to use an engine of unknown origin to produce the desired effect: dark energy [19]. Astrophysicists have associated this dark energy with a negative pressure on a cosmological scale that would translate into a "current acceleration" in the expansion of space. It corresponds to a quantum vacuum energy whose value would be disproportionately greater. They gave this energy density value of the vacuum the same status as a repulsive cosmological constant, which pushed them to rehabilitate Einstein's cosmological constant, but about 10^120 times larger. There is a deep contradiction between the concepts of quantum field theory (according to which the energy density of vacuum is about 10^120 times the density of matter-energy of the present universe) and the ideas of general relativity (vacuum energy is a source of gravitation, hence of curvature of space-time) used to associate this estimate with astrophysical observations.
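The roughly 10^120-fold discrepancy quoted above can be reproduced with a back-of-the-envelope estimate comparing the naive Planck-scale vacuum energy density with the observed dark-energy density. This is our own illustrative sketch, not a calculation from the paper; the Hubble constant and dark-energy fraction are assumed standard input values, and the exponent one obtains (between about 120 and 123) depends on the cutoff chosen:

```python
import math

# Order-of-magnitude sketch of the "vacuum catastrophe".
# All constants are standard values; H0 and Omega_L are assumed inputs.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8              # speed of light, m/s
hbar = 1.0546e-34        # reduced Planck constant, J s
H0 = 67.7e3 / 3.086e22   # Hubble constant, s^-1 (67.7 km/s/Mpc)
Omega_L = 0.7            # assumed dark-energy fraction of the critical density

# Naive quantum-vacuum estimate: the Planck energy density.
rho_planck = c**7 / (hbar * G**2)          # J/m^3

# Observed dark-energy density: Omega_L times the critical density.
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3
rho_dark = Omega_L * rho_crit * c**2       # J/m^3

ratio = rho_planck / rho_dark
print(f"vacuum-energy discrepancy ~ 10^{math.log10(ratio):.0f}")
```

With a Planck-scale cutoff the ratio comes out near 10^123; the commonly quoted "10^120" corresponds to a slightly lower cutoff, but the point (an absurdly large mismatch) is the same.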
This dark energy in the form of a repulsive cosmological constant imposed by the omnipresent quantum vacuum would produce hallucinating cosmological effects: our universe would bend so intensely that the visibility horizon would be at centimeter distances [5]. By decreeing that the supernovae of yesteryear were the same as those of today, by affirming dogmatically that the first supernovae were necessarily of a chemical composition similar to that of the following ones [20], we arrived at the "vacuum catastrophe", or the problem of the cosmological constant. The high degree of intoxication of the scientific community was manifested by the award in 2011 of the Nobel Prize in physics to three astrophysicists belonging to two different teams for their discovery of the acceleration of the expansion of the universe. This discovery, based on the unconfirmed hypothesis of the uniformity of supernovae and on uncertain distance measurements, endorsed by the passivity of official cosmology's judgment, is as aberrant as Ptolemy's epicycles. More and more, specialists advance the hypothesis that the acceleration of the expansion of the universe, which motivated the creation of the concept of dark energy, could in fact result from an observational bias [21]. What will astrophysical publications (whose content has been sadly uniform) say the day they have to announce the non-uniformity of the supernovae concerned? Creation and Emission of Sterile Neutrinos in Black Holes Sterile neutrinos have often been proposed as dark matter candidates. This is also our preference. They would interact only by gravity with ordinary matter, with the exception of a small ability to mix with the familiar neutrinos of the standard model.
Sterile neutrinos associated with a magnetic charge would be one of the only by-products of annihilations that would successfully leak from the inside of the black hole to the outside, just as solar neutrinos, associated with an electric charge (the electron), are the only ones that manage to escape from the heart of the Sun. We will therefore limit ourselves here to the creation of sterile neutrinos dependent on a magnetic charge. The wave-corpuscle duality of photons, extended by de Broglie to the waves of matter, led to the quantum concept of the matter field. This quantum field of matter is a set of creation and annihilation operators for fermions, including the neutrino: the operator ν_k^+ creates a neutrino of momentum k, and the operator ν_k annihilates a neutrino of momentum k. In this intra-horizon space of the active (hot and dense) black hole, which is part of Dirac's restless ocean, virtual pairs are constantly being created and destroyed. For a brief moment, a particle and its antiparticle separate. There are four possibilities. Process 1: The two partners meet and annihilate each other. Process 2: The antineutrino remains in the black hole and the neutrino materializes in the outside world. Process 3: The neutrino remains in the black hole and its antineutrino escapes into the outside world. Process 4: Both partners stay in the black hole. Particles escaping to the outside would be fermionic monopoles (refusing, as fermions, to put themselves in the same state). They would leave with a relativistic speed. Could there be several types of sterile magnetic neutrinos that can oscillate among themselves? Could the magnetic-charge neutrino acquire the flavor of an electron neutrino? Certainly not by an oscillation process, since all the neutrinos involved should be associated with the same type of charge.
It is however possible to envisage that the sterile magnetic neutrino can decay into gamma rays (photons), into standard neutrinos (electric charge), into weaker sterile magnetic neutrinos (magnetic charge), and other particles. To return to the neutrinos escaped from the black hole: as the universe cooled, they would tend to slow down and regroup to form the dark cosmos, whose rules do not reflect our bright world. They would obey other, non-symmetric laws, Maxwell's inverse laws, and be provided with magnetoelectric and weak interactions (very small compared to nuclear forces). The mass of these neutrinos affiliated with the magnetic charge would be rather small instead of being huge or zero, but sufficient to fill the missing-mass gap. Scientists think that there may be more than just one type of dark matter. A possibility is that several classes of dark matter particles exist, as well as a variety of forces that act only on them. One idea is that particles of dark matter interact with each other by a force that ordinary matter cannot feel. These particles could carry a "dark charge" that attracts or repels them even if they are electrically neutral, and could emit "dark photons". Dark atoms would emit dark photons at a different rate than ordinary matter emits ordinary photons. We know by observing the shapes of galaxies that this rate must be very weak. Efforts of Four Researchers In line with our paper, we highlight certain aspects of the work of four tenacious researchers who contribute to the extension of knowledge on dark matter, magnetic monopoles, black holes and sterile neutrinos: Georges Lochak, president of the Louis-de-Broglie Foundation, is known for his work on magnetic monopoles: the magnetic monopole is a fermion endowed with weak interactions. He found an equation, analogous to that of Dirac, which no longer represents an electron but a magnetic monopole, which is, in a way, the other side of the electron.
His equation recovers Dirac's formula, which shows that the charge of a magnetic monopole is equal to an integer multiple of the charge of the electron multiplied by 68.5. For him, this indicates that if the multiple is equal to zero, so that the monopole has no charge and is neutral, his equations coincide with those of the neutrino [22]. Georges Lochak worked for ten years with Leonid Urutskoiev of the Kurchatov Institute, who had headed a team that was looking for the origin of the Chernobyl disaster. Urutskoiev had hypothesized a flood of monopoles resulting from an electrical explosion that occurred in the engine room. Some clues made him lean towards the hypothesis of a light magnetic monopole that corresponded to the Lochak monopole. Dozens of physicists contributed to a joint research work. The experiments were counted in the hundreds. The main theoretical center was the Louis de Broglie Foundation. André Michaud explored the foundations of an electromagnetic mechanics of elementary particles whose laws apply at all levels. He described a space-time geometry that represents the mutual induction of electrical energy and magnetic energy within moving elementary particles in accordance with Maxwell's equations [23]. He details an experiment he performed that, he argues, proves beyond any doubt the inverse-cube relation with distance between the magnetic fields of a magnet whose north and south poles physically coincide, and that, by similarity, the same inverse-cube interaction law also applies to elementary electromagnetic particles colliding with quasi-punctual behavior. This experiment also demonstrates that the magnet used behaves like a magnetic monopole [24]. Eue Jin Jeong basically demonstrated that the black hole jets and the dark matter problem are essentially one integrated physical phenomenon caused by dipole gravity.
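The factor 68.5 mentioned above for the monopole charge is half the inverse fine-structure constant: Dirac's quantization condition eg = nħc/2 (Gaussian units) gives g = n·e/(2α) ≈ 68.5·n·e. A quick numeric check of that factor (our own sketch, not code from any of the cited works):

```python
# Dirac charge quantization: e*g = n*hbar*c/2 (Gaussian units),
# hence g = n * e / (2*alpha), with alpha the fine-structure constant.
alpha = 7.2973525693e-3        # fine-structure constant (CODATA value)

factor = 1.0 / (2.0 * alpha)   # monopole charge in units of e, for n = 1
print(f"g = {factor:.1f} e for n = 1")  # about 68.5 e
```

Since 1/α ≈ 137.0, the unit monopole charge is about 137/2 ≈ 68.5 electron charges, which is why a monopole couples so strongly to magnetic fields.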
He presents his discovery of long-range dipole gravity as a fulfillment of Einstein's general relativity in the simplicity of the equivalence principle. He explains the dark matter problem in his book by invoking the fact that the jets from both the south and the north poles of the rotating black hole constitute a point source of continuous outgoing matter following the dipole gravity force lines [25]. Jeong started early, in 1982, when he was a graduate student, wondering why general relativity does not explain the jet phenomena from black hole accretion discs. He was perplexed by the dismissal of dipole gravity in the weak-field limit of general relativity. His quest for the solution to the problem led him to realize in 1995 that a rotating hemisphere has a relativistic shift of the center of mass that depends on its rotation frequency. By investigating further, he derived the Lense-Thirring force from the dipole gravity potential generated by the two hemispheres oppositely superposed inside the rotating sphere. The result is described with detailed mathematical derivation in an article published in 1999 [26]. Kevork Abazajian, an American physicist who works at the University of California, has demonstrated in an article the mechanism by which 7 keV sterile neutrinos can be produced and be the source of the unexplained X-ray emission observed at 3.5 keV from the centers of galaxy clusters [27] [28]. Several teams of astrophysicists have observed an X-ray spectral line with an energy of about 3.5 keV, which corresponds to nothing known and seems very real, that is to say statistically significant. The only remaining hypothesis to explain the existence of these photons, seeming to come from where there is the most dark matter, is that they would come from the disintegration of sterile neutrinos. As they are a little heavy, they would disintegrate producing "normal" neutrinos and photons whose energy would be half their mass.
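The "photon energy equal to half the mass" statement above follows from simple two-body kinematics: a sterile neutrino at rest decays into an active neutrino and a photon, which share its rest energy equally (a standard textbook result, sketched here, neglecting the tiny active-neutrino mass):

```latex
% Two-body radiative decay of a sterile neutrino nearly at rest:
\nu_s \;\to\; \nu + \gamma,
\qquad
E_\gamma \simeq \frac{m_s c^2}{2}
       = \frac{7\ \mathrm{keV}}{2}
       = 3.5\ \mathrm{keV}.
```

This is why a 7 keV sterile neutrino is the natural candidate for the 3.5 keV line: momentum conservation forces the photon to carry exactly half the parent's rest energy.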
Abazajian considers that all dark matter consists of such 7 keV sterile neutrinos. Heat, Entropy and Information Have Everything to Do with Black Holes The Entropy Problem Entropy is forbidden to black holes by general relativity, because the theory requires them to be completely smooth, without substructure. The Information Paradox In agreement with the standard picture of quantum mechanics, information can never be destroyed. Even when you burn a letter, for example, the original information encoded in the atoms of the letter is preserved in the ashes. In quantum mechanics, every system is described by a formula called the wave function, which encodes the chances that the system will be in any particular state. In keeping with Hawking's first calculation, the particles that escape from a black hole do not depend at all on the properties of the material that went into the hole. We could send a note with a message into the black hole, and there would then be no process to reconstruct the message from the final particles that would emerge. Hawking radiation implies that black holes destroy the information about the matter that falls into them. In Hawking's thought experiment, the loss of information means that we have no method to predict the wave function of Hawking radiation based on the properties of the mass that went into the black hole. Information loss is forbidden by quantum mechanics, so Hawking concluded that the laws of quantum physics had to be modified to allow for such loss in black holes. In an effort to resolve these puzzles (this information paradox), physicists looked for a new approach to combine general relativity and quantum mechanics into a coherent theory that could describe black holes. In 1997, Juan Maldacena came up with an idea to get around the information loss problem, a solution sometimes called the Maldacena duality. This duality is an equivalence between quantum mechanics and gravity, a quantum theory of gravity.
It means that the quantum physics of a black hole is equivalent to that of an ordinary gas of hot nuclear particles. It also means that spacetime is fundamentally different from what we perceive, more like a three-dimensional hologram projected from the two-dimensional surface of a sphere. If Maldacena's assumptions are true, then ordinary quantum laws would apply to the gravity of black holes as well, and information cannot be lost [30]. Hawking had proposed that general relativity works for black holes but that quantum mechanics must be modified. Maldacena concludes that spacetime is holographic. In 2004 Hawking announced that he had changed his mind about the need for black holes to lose information. Entropy, Heat and Information of Black Hole According to the Theory of Relation We consider that quantum physics inside a black hole is equivalent to that of a magma of concentrated energy, or that of a gas of hot nuclear particles. According to the theory of the Relation, energy is "dark" for a double reason: it undergoes a change of energy (principle of Compensation) [31], and because a blackout accompanied by a charge reversal occurs at the passage of the event horizon. We conjecture that energy within active black holes, surrounded by an accretion disk whose matter feeds them, is subject to high thermal quantum fluctuations (kinetic energy of particle motion). The temperature, that is to say the energy absorption capacity, is very high. Not only is the quantity of energy huge but also its availability. According to quantum mechanics, pairs of particles and their antimatter counterparts are born incessantly, then disappear a few moments later, throughout the universe. Pairs of real thermal particles that can be leptonic as well as bosonic. A huge amount of radiation and particles escapes from the inside of the black holes. Under general relativity, no signal of any kind can come back from beyond the horizon, because that would require exceeding the speed of light.
But if we rely on the equation that Hawking derived for the temperature of a black hole [32], T = ħc³/(8πGMk_B), the temperature is inversely proportional to the mass. The link between energy, entropy and temperature refers to the second law of thermodynamics, which says that entropy always rises. The law of entropy implies irreversibility. The principle of irreversibility is that if you leave things to themselves at different temperatures, with the passage of time their temperatures get closer and closer, and the availability of energy continually decreases. This one-way evolution always leads to a loss of energy availability. The drop in temperature, and therefore the decrease in the energy absorption capacity, goes hand in hand with an increase in entropy (which is a degraded energy) [33]. In the case of active black holes, there is a very high temperature around and beyond the horizon. The rise in temperature, and therefore the growth in energy, should go with a drop in entropy. But one concludes that the energy-mass increment goes hand in hand with a gain in entropy. In this case, the second law of thermodynamics, which states that entropy rises, presents a serious problem with temperature and energy [34]. Energy is a subtle concept, hard to grasp. It can be said that energy stops motion as much as it provokes it. Bekenstein first conjectured that black holes have entropy. Entropy always goes hand in hand with energy. In itself, the existence of entropy does not imply that a system has a temperature. For Hawking, the key was temperature, not entropy. He anticipated that black holes also have a temperature. They are not cold, dead objects. They radiate thanks to an internal heat, but, in the end, it is this heat that causes their destruction. On the subject of information, we think that the laws of general relativity are inapplicable beyond the horizon and that quantum mechanics must not be modified: information loss is forbidden.
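Hawking's temperature equation referred to above, T = ħc³/(8πGMk_B), makes the inverse dependence on mass concrete. As a small numeric sketch (our own illustration with standard constant values, not a calculation from the cited references), the temperature of a solar-mass black hole comes out far below even the cosmic microwave background:

```python
import math

# Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B):
# inversely proportional to the black hole's mass.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
hbar = 1.0546e-34   # reduced Planck constant, J s
k_B = 1.381e-23     # Boltzmann constant, J/K
M_sun = 1.989e30    # solar mass, kg

def hawking_temperature(mass_kg: float) -> float:
    """Blackbody temperature of a non-rotating black hole's horizon."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

T = hawking_temperature(M_sun)
print(f"T for a solar-mass black hole: {T:.2e} K")  # ~6e-8 K
```

A stellar-mass black hole is thus colder than the 2.7 K microwave background and absorbs more than it radiates; only very small black holes would be hot enough to evaporate noticeably.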
Imagine the particles falling into a black hole, each with its particular frequency that is its message. Very quickly, the sharp frequencies begin to dissolve; the message becomes almost impossible to discern in this magma of dark energy. The message becomes hopelessly scrambled in this inextricable mix of quantum fluctuations. The principles of quantum mechanics ensure that the message is always present within the deformed particles moving in a chaotic manner. Although scrambled, not a single bit of information was eradicated. Each bit of information ends up being transferred to photons and other particles that evacuate energy from the black hole. The information is stored among the particles that form the Hawking radiation. The latter calculated that the disturbance of vacuum fluctuations due to black holes caused the emission of photons, as if the horizon of a black hole were a blackbody [32]. Hawking believed that a particle in a virtual pair escapes from the black hole but carries no information. Many theorists concluded that Hawking was wrong, that he had mistaken the scrambling of information for actual information loss. Our opinion is that not only information escapes from the black hole, but also the destroyed matter. Perhaps the truth is somewhere in a hologram. A hologram is a two-dimensional image that makes it possible to reconstruct three-dimensional images. The holographic principle is a speculative conjecture in the framework of quantum gravity theory, proposed by Gerard 't Hooft in 1993 and then improved by Leonard Susskind in 1995. This conjecture proposes that all the information contained in a volume of space can be described by a theory that lies on the boundary of this region. The horizon of the universe is mathematically similar to the horizon of a black hole. The difference is that in the first case we are inside and we look outward, and in the other we look at it from the outside.
We can assume that the photons of the cosmic microwave background radiation that surround us are the messengers of the cosmic horizon that would carry the coded images of the megaverse. Just as one can surmise that the physical events that take place behind the horizon of the black hole would be telegraphed to the outside, in a scrambled telegraph code, in the form of Hawking radiation [35]. The idea that the universe is a kind of holographic image is surprising. Comments and Conclusions More than 80 percent of the mass of the universe is invisible. The presence of this dark matter is detected thanks to its gravitational signature [36]. Its nature remains one of the great enigmas of cosmology. But we at least know that it is, for the most part, of a different nature from the ordinary matter that composes planets and stars [37]. In practice, the observations show that we cannot explain the distribution of matter by supposing that it is, on the one hand, solely baryonic and, on the other hand, governed only by the laws of gravitation. To reconcile theory and observation, scientists considered either changing the material content of the universe or changing the laws of gravitation. The hypothesis of an unknown form of matter remains the most accepted. A plethora of scenarios of high-energy physics postulates new forms of matter that are difficult to detect [38]. Whether in direct or indirect detection experiments, the tracks, especially the supersymmetric particle track, look very much like a dead end. We will agree that in this moment of crisis, we must leave no track aside. In this paper, we have just presented a radically different track to explain the enigma of dark matter: a major conceptual overhaul that concerns, in addition to dark matter, electromagnetism, the sterile neutrino and black holes.
Our model predicts that dark matter may be accompanied by a hidden and reworked version of electromagnetism (and possibly also a hidden weak force), implying that dark matter may emit and reflect hidden light. This "light" is invisible to us, and so the dark matter remains unseen. Nevertheless, these new forces could have very significant effects. For example, they could distort interacting clouds of dark particles. Astronomers have sought this effect in the famous Bullet cluster, also called 1E 0657-56, which consists of two clusters of galaxies that have passed through each other. Observations show that the co-mingling of the clusters left the dark matter largely unperturbed, indicating that any dark forces are weak [39]. The new variant of electromagnetism, which we call magnetoelectricity, would also allow dark particles to exchange energy and momentum, a process that would tend to homogenize them and make the halos spherical. We can draw some conclusions about the strength of the dark electromagnetic force, and thus how often dark matter annihilation occurs, by considering how this force would affect galaxies. The reason galaxies have a flattened structure is that electromagnetism allows ordinary matter to lose energy and settle into disks. Clouds of gas inside galaxies radiate electromagnetic energy through the emission of photons. That radiation results in the spinning matter inside the clouds clumping together and eventually relaxing into a disk-like structure. Because we know that dark matter is primarily distributed spherically around most galaxies and does not collapse to a disk, we can conclude that it cannot lose energy via dark photon emission at the same rate that ordinary matter does [40]. We have seen above that if we apply the reversed laws of Maxwell, and if we repeat the same steps as those that led to the equation of the electron, we find another particle, no longer an electron but a magnetic monopole.
A modification of the laws of gravitation in a somewhat ad hoc way constitutes an alternative to dark matter. Maxwell's reversed laws, on the contrary, justify the existence of this dark substance that would come from black holes. Supermassive black holes are emerging as the most prolific creators. Far from being passive, they spit and blow. They emit large amounts of energy accumulated around them with unparalleled power. Their jets of matter would have fertilized the cosmos over vertiginous distances, triggered bursts of star formation, created galaxies [2]. And why should supermassive black holes not also have engendered dark matter? Why would they not also play the role of dark matter producer? Certainly not by condensed gas jets. Dark matter is different from ordinary matter, as is the matter inside the black hole. Our hypothesis of the inversion of Maxwell's laws, as well as that of the black hole producing dark matter, may seem as strange as it is absurd. But let us say it: the very existence of dark matter seems absurd. Likewise the idea of black holes, which was originally a mathematical "catastrophe" shunned by theorists, including Einstein, whose theory had predicted them. Our view concerning the links between dark matter and black holes can be summarized as follows: Dark matter is intimately related to black holes. The darkness of dark matter and black holes is caused by the reversal of Maxwell's laws. This inversion is triggered near the horizon of the black hole, when the magnetic currents, combined with gigantic pressure and high temperature, cause a phase transition which results in a reversal of Maxwell's laws. This means that a magnetic charge is substituted for the electric charge, and that the magnetic current subdues the electric current. It can be said that in the space of the black hole a magnetoelectric force is created. The substance of dark matter comes from black holes.
The latter emit particles from the process of creation of pairs of particles triggered by the metamorphosis of high-energy photons. The growth of the energy-mass of black holes increases the rate of materialization and annihilation of the dark energy inside. Dark radiation will materialize by creating a neutrino and an antineutrino, particles associated with the magnetic charge. If they are not annihilated, some will cross the black hole with a relativistic speed close to that of light before slowing down to constitute the dark matter. But it can also happen that two opposite particles meet within the black hole. They dematerialize; they turn into two rays of the same energy directed in opposite directions. One of these dark rays, if not both, can cross the black hole without hitting an ordinary particle, and on the outside this dark radiation can materialize by creating a sterile magnetic neutrino and antineutrino accompanied by particles and by normal high-energy photons. There is more than thermal evaporation; it is the spontaneous emission of particles. Black holes do not constitute dark matter, as we are led to believe. On the other hand, black holes produce and emit the substance that constitutes the dark matter, in this case the sterile neutrino with magnetic charge. The black holes come undone, producing a dark matter that gradually disintegrates. Crises in science are often the most creative. This redesign should have profound implications for theoretical physics and astrophysics. Conflicts of Interest The author declares no conflicts of interest regarding the publication of this paper.
Active vacuum brazing of CNT films to metal substrates for superior electron field emission performance The joining of macroscopic films of vertically aligned multiwalled carbon nanotubes (CNTs) to titanium substrates is demonstrated by active vacuum brazing at 820 °C with a Ag–Cu–Ti alloy and at 880 °C with a Cu–Sn–Ti–Zr alloy. The brazing methodology was elaborated in order to enable the production of highly electrically and thermally conductive CNT/metal substrate contacts. The interfacial electrical resistances of the joints were measured to be as low as 0.35 Ω. The improved interfacial transport properties in the brazed films lead to superior electron field-emission properties when compared to the as-grown films. An emission current of 150 μA was drawn from the brazed nanotubes at an applied electric field of 0.6 V μm⁻¹. The improvement in electron field-emission is mainly attributed to the reduction of the contact resistance between the nanotubes and the substrate. The joints have high re-melting temperatures, up to the solidus temperatures of the alloys, far greater than what is achievable with standard solders, thus expanding the application potential of CNT films to high-current and high-power applications where substantial frictional or resistive heating is expected. Introduction Most of the emerging and long-term potential carbon nanotube (CNT) applications [1] such as field emitters, high-current electrical interconnects, power transmission cables and thermal management in high-power applications require the availability of an appropriate joining methodology that allows the CNTs to be permanently transferred to relevant substrates leading to highly conductive, high-temperature resistant and mechanically robust contacts. Various methods of joining CNTs to each other or to other substrate materials have been attempted in the past, as outlined in the review paper of Seth Roberts and Singjai [2].
In particular, macroscopic CNT films are of interest for field emitters, e.g. for application in cold x-ray cathodes [3, 4]. Brazing and soldering are the preferred joining methods for such applications. CNT film soldering was previously demonstrated with solder alloys such as Bi-Sn-Pb [5, 6], Sn-Pb [7], Sn-Ag and Au-Sn [8]. These alloys have low melting temperatures (<280 °C), making them suitable for joining materials to electronic circuits. (Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.) However, from a chemical point of view, they are not appropriate for joining carbon materials. It has been known since the 1960s that Cu, Ag, Au, In, Sn, Bi and Pb do not wet the surface of carbon materials like diamond and graphite [9]. Likewise, it was experimentally shown that Pb does not wet single-walled [10] and multiwalled nanotubes [11], and that Au and Cu form discontinuous coatings on suspended CNTs [12, 13]. Alloy wetting, a necessary condition for soldering and brazing, is directly related to the strength of the interaction between the metal and carbon atoms. Reactivity to carbon is greatest for those elements having the most electron vacancies in d- and f-orbitals, which rules out Au, Ag and Sn. Therefore, joining CNT films with these elements limits the joining mechanism to mechanical interlocking (nanotube entrapment) unless the CNTs are appropriately metalized with a carbide-forming element. From a technical point of view, solder alloys based on Sn, Pb and In are ductile and provide limited mechanical strength and thermal stability to the joint, which further discourages their use in situations where substantial heating is expected.
A well-established methodology for joining carbon-based materials [14, 15] to metals is vacuum brazing with active filler alloys that contain carbide-forming elements such as Ti, Zr and Cr. Diffusion of the carbide-forming element towards the carbon material and the subsequent formation of an interfacial carbide, referred to as an interphase, leads to improved wetting and strong chemical bonding at the CNT/metal interface. Active brazes offer superior mechanical properties when compared to lead and lead-free solders, yet have substantially higher melting temperatures, limiting the type of substrate with which they can be used. Brazing in vacuum has the advantage of preserving the excellent physical properties of the CNTs while permitting their bonding to reactive substrates such as copper and titanium by limiting both CNT and substrate oxidation. The feasibility of vacuum brazing of double-wall CNT bundles with a Ti-doped Ag-Cu braze alloy was first demonstrated by Wu et al [16]. They confirmed the formation of strong Ti-C bonds at the CNT-braze alloy interface. However, they neither joined the CNTs to metallic substrates nor tested their electrical properties. Experimental investigations on the possibility of brazing CNT films to metals were motivated by the fact that conventional soldering cannot provide mechanically robust, conductive and high-temperature resistant contacts with substrates for applications, beyond microelectronics, aiming to exploit the excellent thermal and electrical transport properties of CNTs. We demonstrate in this study how such films of vertically aligned multiwall CNTs can be transferred and joined to titanium substrates by active vacuum brazing. Brazing at 820 and 880 °C is demonstrated with the Ag-Cu-Ti and Cu-Sn-Ti-Zr braze alloys, respectively.
The excellent wetting and spreading of the metal alloys inside the CNT film leads to strong chemical bonding and superior CNT/substrate contacts with low electrical and thermal resistances. In particular, the electron field-emission performance of the brazed CNT films is excellent and is directly related to improved interfacial electron and heat transport. CNT film synthesis Films of vertically aligned multiwalled CNTs were synthesized from C2H2 and H2 by low-pressure chemical vapor deposition in a commercial reactor (Black Magic 2″, AIXTRON) at 695 °C for 20 min with a sputtered 2 nm Fe catalyst film on a 10 nm Al2O3 support layer on a high-resistivity boron-doped 〈100〉 silicon substrate diced into 4 × 4 × 0.75 mm³ pieces. Active vacuum brazing The as-grown nanotube films were brazed face-down to 4 × 4 × 0.6 mm³ Ni-metalized grade 2 titanium (Ti/Ni 2 μm) and to 4 × 4 × 0.95 mm³ grade 2 titanium substrates in a vacuum furnace (Cambridge Vacuum Engineering) at 10⁻⁶ mbar. The heating rate was 10 °C min⁻¹ and the dwell time was 5 min; the dwell temperature was 820 °C when using 100 μm thick foils with a composition of Ag 63.25-Cu 35-Ti 1.75 wt% (Wesgo Metals, Hayward, USA) and 880 °C with 60 μm thick foils with a composition of Cu 73.9-Sn 14.4-Ti 10.2-Zr 1.5 wt% (Sulzer Metco, Germany). The solidus and liquidus temperatures for the silver alloy are 780 and 815 °C, respectively. The copper alloy has a solidus temperature of 868 °C and a liquidus temperature of 925 °C [17]. The brazing foils were made by mixing a metal alloy powder (325 mesh: particle size <44 μm) with an organic binder. The resulting paste was manually printed on a flat surface, dried in air and compressed into a foil of the desired thickness. The braze foil, substrate and inverted CNT film were assembled in a jig and held in place with an adjustable screw during brazing. Once the brazing step was completed, the Si substrate was removed with tweezers.
For inspection, the joints were manually cleaved transversely and longitudinally with a steel blade. The different stages of the process are sketched in figure 1. Characterization methods A FEI ESEM-FEG XL-30 scanning electron microscope operated at 20 kV was used to examine the CNT joints. A Carl Zeiss Orion Plus helium ion microscope (HeIM) was used for high-resolution imaging. HeIM allows imaging of samples with a surface resolution of 0.3 nm and has a different contrast mechanism than electron microscopes [18]. This improved resolution is necessary to reveal structural details at the nanometer scale over larger, more representative areas in a non-destructive way, as opposed to transmission electron microscopy. The typical parameters for image acquisition were: 30 kV acceleration voltage, beam currents of 0.5-1 pA, and a dwell time of 2 μs with a line averaging of 16-32. A CRM200 WiTec confocal Raman microscope equipped with a 10× (0.25 NA) objective with a 532.3 nm laser set at 5 mW, in combination with a 600 grooves/mm grating, was used to track changes in nanotube graphitization after brazing. Contact pads (2 nm Cr/200 nm Au) were deposited on one side of the joints, via shadow masking in a Plassys II electron beam evaporator system, with the following geometry: 100 μm in width, 1 mm in length, with a spacing of 100 μm within the same material and 500 μm across the joint. A Keithley 2001 electrical characterization instrument in combination with an enclosed four-probe station was used to obtain current-voltage curves across the joints in the dark and at room temperature. The field-emission properties over a 4 × 4 mm² area of the brazed films were measured at base pressures of 10⁻⁷ mbar with a scanning anode field-emission microscope (SAFEM) [19]. The joints were mounted on a horizontal x-y translation stage. The anode consisted of a spherical tip 1 mm in radius mounted on a cantilever that was moved in the z-direction in 100 nm steps.
The emission current was measured with a Keithley 237 source-measure unit at a fixed anode-to-sample distance of 500 μm. Structure and morphology of brazed CNT-substrate joints A typical CNT film with a density of 10¹⁰-10¹¹ CNTs cm⁻² grown on silicon is shown in figure 2(a). The vermicular nanotube diameters range from 2 to 20 nm as seen by HeIM in figure 2(b). Two representative CNT films brazed to Ti and Ti/Ni substrates with the Cu-Sn-Ti-Zr alloy at 880 °C are shown in figures 2(c) and (d), respectively. In both cases, the braze alloy has formed a fillet along the film's edge, which is indicative of a chemical reaction leading to wetting. Raman spectra of the top surface of the as-grown and brazed films after silicon lift-off are shown in figure 2(e). The Raman spectra indicate a slight increase in graphitization [20] after brazing. The G peak width decreased from 77 to 58 cm⁻¹ and the 2D peak width decreased from 123 to 114 cm⁻¹. The intensity ratio of the D to G peaks (I_D/I_G) also decreased, from 0.91 to 0.89. A similar decrease in this ratio was reported when annealing multiwalled CNTs in vacuum above 800 °C [21]. The side-view SEM image of the Cu-Sn-Ti-Zr fillet, after Si lift-off, reveals three distinct regions as shown in figure 3(a). Region 1 at the top of the film consists of CNTs that have more or less retained their vertical alignment after brazing. Region 2 contains metal-coated CNT bundles, while the region closest to the brazing foil is characterized by larger bundles completely encased in metal, hereafter referred to as the metal matrix CNT composite region. The partially melted braze foil is seen below this region and above the substrate. Brazing is usually carried out above the liquidus temperature of the filler alloy at 925 °C; however, preliminary experiments have shown that this alloy, when fully liquid, excessively penetrates the CNT film and reacts with the Si substrate, preventing lift-off.
At 880 °C, 90% of the alloy is liquid, which is sufficient for joining while limiting the infiltration to the first ∼100 μm. The top CNT layer (region 1) was mechanically removed with a blade as shown in figure 3(b). This image reveals how the molten alloy infiltrated the lower portion of the CNT film by capillarity. The bundling pattern observed in region 2 and shown in figure 3(c) is consistent with the so-called nanocarpet effect, which is caused by lateral capillary forces during the invasive spreading of a liquid inside an ordered array of high-aspect-ratio structures [22, 23]. The combination of shear and bending forces during the removal of the CNTs in region 1 led to two fracture planes: at the bundle waist and between regions 1 and 2 (figure 3(c)). A high-magnification HeIM image of the protruding CNTs in region 2 is shown in figure 3(d). Individual metal-coated CNTs can be resolved here. The fractured metal matrix composite bundles are shown in figure 3(e) and a HeIM image of the fracture surface is shown in figure 3(f). Individual CNTs can no longer be resolved here, even at high magnification. Rather, flat crystals embedded in a matrix of irregular particles are seen in figure 3(f). High-magnification HeIM images of the different regions along the joint's transverse cross-section, obtained by mechanical cleaving, are also shown in figure 4(a). These images confirm that the different regions observed along the fillet are also distinguishable in the interior of the film. Nanoparticles are seen on the aligned CNTs in region 1 far from the joint line. Individual CNTs and small bundles thereof are coated with metal at the top of region 2. Partially encased bundles are identified in the lower part of region 2. The fracture here is due to shear forces during cleaving. The metal matrix composite containing flat hexagonal crystals is seen in region 3.
The qualitative results of an EDX elemental mapping of a selected area between regions 1, 2 and 3 are shown in figure 4(b). While Cu and Ti are clearly enriched in the lower part (i.e. in the composite region), a slight Ti enrichment can also be seen in the CNT region in the upper part. This indicates the strong tendency of Ti to interact with the CNTs. Figures 2 and 3 reveal that the joint microstructure is anisotropic with a complex metallurgy. It arises from the interaction of a quaternary alloy with a porous carbon material at high temperatures. It is difficult to characterize the metallurgy of the joint in detail and at the nanoscale, since differences between solid-state atomic surface diffusion, on the CNTs' outer graphene walls, and liquid-state diffusion lead to elemental segregation. Chemical reactions away from equilibrium conditions will occur locally and over short time scales, leading to the formation of various compounds such as stoichiometric and sub-stoichiometric carbides as well as intermetallic phases, based on the Cu-Sn, Cu-Ti and Sn-Ti binary systems, as were experimentally identified [24] and predicted by thermodynamic assessments of the Cu-Sn-Ti system [25]. A detailed characterization of the microstructure is beyond the scope of this work, yet it is evident that the improved wetting of the CNTs in region 2 is due to the formation of a carbide interphase between the alloy and the outer CNT walls. Indeed, a thin reaction layer of TiC was experimentally observed at the CNT/Ag-Cu-Ti interface after brazing at 1000 °C [16]. Likewise, Chen et al observed the formation of a TiₓC layer on single-wall CNTs ultrasonically bonded to Ti electrodes [26]. Similarly, the presence of a 5 nm SiC interphase on CNTs was confirmed experimentally and was credited with the improved wetting of an Al alloy containing 23 wt% Si [27].
Concerning region 3, the solubility of C in Cu is extremely low, in the parts-per-million range [28], making it unlikely that the CNTs were completely dissolved as atomic carbon in the melt. It is possible that the CNTs were fully converted to carbide particles, since the thickness of the TiC layer that forms when brazing diamond under similar conditions is larger than the diameter of the CNTs. A second alloy, Ag-Cu-Ti, containing only 1.75 wt% of Ti, was used to join CNT films to Ti and Ti/Ni substrates at 820 °C, that is, above the liquidus temperature of this alloy. A typical CNT film brazed to Ti after silicon lift-off is shown in figure 5(a). A fillet is seen on the edge of the CNT film, similarly to what was observed for the Cu-Sn-Ti-Zr braze; however, the metal matrix composite region is now separated from the top CNT region by a thin diffusion zone, as shown in figure 5(b). Cu, and Ag especially, are known to be highly mobile on graphene. Again, the bare CNTs in region 1 were removed mechanically and revealed extensive bundling leading to a porosity of ∼48%, as shown in figure 5(c). A high-magnification HeIM image of the top of one of the metal matrix bundles reveals individual metal-sheathed CNTs protruding from the matrix (figure 5(d)). Evidently, the CNTs were not fully converted to TiC here. This is due to the reduced Ti content and lower brazing temperature. Slight microstructural differences are observed when brazing CNTs on Ti/Ni. The fillet height is reduced and bundling is less pronounced with the metalized substrate (figure 5(e)). Furthermore, a region a few micrometers in length with metal-coated bundles is now seen below the diffusion zone (figure 5(f)). Additional EDX elemental mappings led to very similar results as in the case of brazing with the Cu-Sn-Ti-Zr alloy.
Overall, the CNT brazing process with the Cu-Sn-Ti-Zr and Ag-Cu-Ti alloys can be described as follows: as the temperature is progressively raised above the solidus temperature, the brazing alloys begin to melt and the Ti starts reacting with the CNTs to form a TiC interlayer. The resulting liquid spreads along the CNTs on this interlayer, as well as laterally into the film, leading to bundling. Solidification close to the substrate leads to the formation of a metal matrix composite. The metal atoms that have diffused from the braze foil along the surface of the CNT walls into region 1 eventually coalesce into nanoparticles. No significant difference, apart from fillet height, was remarked when brazing CNTs to the bare and metalized substrates with this alloy. Electrical and field emission properties It was demonstrated that both alloys can be used to join CNT films to titanium substrates. The joint properties were measured to confirm the applicability of such assemblies. The electrical resistances across the joints were determined by four-probe electrical measurements. Two gold contact pads were produced on the side of the CNT film (region 1) while the other two were on the substrate. Two probes were used to supply current while the other two measured the voltage drop across the joint. The results are shown in figure 6 with schematic representations of each measurement. The current versus voltage (I-V) curve across the Si/CNT interface for the as-grown film is provided in figure 6(a). The nonlinearity of the I-V curve, in combination with the polarity of the applied bias, is consistent with a Schottky-diode-like junction consisting of a p-doped Si substrate and metallic CNTs. Fitting the linear portion of the curve yields a resistance of 40 Ω with a positive voltage and 125 Ω with a negative voltage. The I-V curves for the brazed films are shown in figure 6(b).
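The resistance extraction described above can be sketched in a few lines: the contact resistance follows from the slope of the linear portion of the measured I-V curve. The data below are synthetic (an assumed 40 Ω ohmic junction stands in for a real Keithley measurement), so this is an illustration of the fitting step, not the paper's data.

```python
# Four-probe resistance extraction sketch. Synthetic I-V data for an
# assumed 40-ohm ohmic junction; real data would come from the
# source-measure unit described in the text.
R_true = 40.0                          # ohm, assumed for the demo data
V = [0.02 * k for k in range(51)]      # applied bias, 0 .. 1 V
I = [v / R_true for v in V]            # ideal ohmic response, A

# Least-squares slope of V versus I gives the resistance directly.
n = len(V)
mI, mV = sum(I) / n, sum(V) / n
R_fit = sum((i - mI) * (v - mV) for i, v in zip(I, V)) \
        / sum((i - mI) ** 2 for i in I)
print(f"fitted resistance: {R_fit:.1f} ohm")   # ~40.0 ohm
```

For the nonlinear as-grown Si/CNT junction, the same fit would be restricted to the linear portion of the curve on each polarity, giving the two different values quoted above.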
The linearity indicates an ohmic contact with the substrate across both joints. The Ag-Cu-Ti joint shows a slightly lower resistance, 0.35 Ω, than the Cu-Sn-Ti-Zr joint, with 0.86 Ω. The electrical conductivity of the Ag-Cu-Ti alloy is 23×10⁶ Ω⁻¹ m⁻¹ according to the supplier, while conductivity values of ~7×10⁶ Ω⁻¹ m⁻¹ are typical for bronzes with 11 wt% Sn [29]. It is clear that the presence of the braze alloy significantly reduces the contact resistance between the nanotubes and the substrate when compared to when they are grown on Si. Again, the presence of the braze layers improves the interfacial transport properties by reducing the thermal contact resistance when compared to CNTs grown on Si. The nonlinear temperature profile in the CNT film is indicative of an anisotropic solid with varying physical properties. This is consistent with the anisotropic microstructure observed. So far, the joints were shown to possess superior interfacial transport properties when compared to the as-grown CNT films on Si. One application that would clearly benefit from low electrical and low thermal resistance contacts is CNT cold electron sources. It was recently demonstrated how thermionic electron sources in commercial x-ray tubes can be replaced by CNT-based cathodes to produce x-rays without requiring any further modification to the device design [3]. In spite of this demonstration, several challenges remain and limit the widespread use of CNTs as cold electron sources. The maximum current that can be drawn per emitter and the contact resistance between the CNTs and the substrate were identified as the most crucial parameters affecting macroscopic emission behavior [30]. It is possible to reduce the contact resistance by employing metallization layers between the nanotube growth catalyst and the Si substrate and by carrying out post-treatments on the emitters [30]. Brazing is another approach to reduce emitter contact resistance, as demonstrated in this work.
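A rough cross-check puts the quoted conductivities in context: under the simplifying assumptions of uniform current flow through a 100 μm thick Ag-Cu-Ti layer over the full 4 × 4 mm² joint area, the bulk resistance of the braze layer itself is negligible, so the measured 0.35 Ω must be dominated by the interfaces.

```python
# Back-of-envelope bulk resistance of the braze layer (assumptions:
# uniform through-thickness current, full nominal joint area).
sigma = 23e6                 # ohm^-1 m^-1, Ag-Cu-Ti conductivity (supplier)
t = 100e-6                   # m, braze foil thickness
A = 4e-3 * 4e-3              # m^2, nominal 4 x 4 mm joint area
R_bulk = t / (sigma * A)     # ohm, R = L / (sigma * A)
print(f"bulk braze-layer resistance: {R_bulk:.2e} ohm")
# ~3e-7 ohm: many orders of magnitude below the measured 0.35 ohm,
# so the measured joint resistance is interfacial, not bulk.
```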
The field-emission behavior of the brazed CNT films on Ti/Ni was measured with a SAFEM and compared to the emission of a CNT film grown on Si. The instrument allows an accurate determination of the CNT apex height by means of the voltage versus anode-CNT distance plots, which are shown in figure 7(a). From the resulting linear plot, the location of the emitter apex can be extrapolated as the height for V = 0. This is a very important aspect, since the real anode-CNT apex distance can be accurately determined for every measurement, giving a direct measurement of the applied electric field. In addition to the CNT height determination, the slope of the curve gives information related to the so-called field-enhancement factor (β) caused by the accumulation of the electric field lines at the CNT apex due to their high aspect ratio (see inset in figure 7(b)). The β value for an individual CNT is uniquely related to the geometry of the emitter and can be calculated in first approximation (i.e. the floating sphere model) from the equation β = h/r, with h and r the height and radius of the CNT, respectively. However, dense CNT forest samples present drastically reduced β values due to screening from neighboring tubes (inset in figure 7(b)); their emission is usually limited by randomly distributed tubes that stick out from the sample. β can be determined from the slope of the voltage versus anode-CNT distance curves, assuming that the electric field needed at the CNT apex to achieve an emission current of 50 nA is around 4000 V μm⁻¹ [31]. It is remarkable that the slopes obtained from the V versus anode-CNT distance curves are very low (between 0.38 and 0.2 V μm⁻¹), giving rise to extremely high β values ranging from around 10 000 to 20 000. Such high values are obtained for both brazed and as-grown CNTs with a radius of around 10 nm, as determined from the SEM images in figure 2.
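The β estimate described above can be sketched numerically. The assumption, taken from the text, is that an apex field of about 4000 V μm⁻¹ is required to draw 50 nA; the measured slope of the voltage-versus-distance curve is the applied macroscopic field, so β is simply their ratio.

```python
# Sketch of the beta estimate: beta = E_apex / E_applied, with the
# assumed apex field and the two measured slopes quoted in the text.
E_APEX = 4000.0                      # V/um, assumed apex field at 50 nA
for slope in (0.38, 0.20):           # V/um, measured V-vs-distance slopes
    beta = E_APEX / slope
    print(f"slope {slope:.2f} V/um -> beta ~ {beta:.0f}")
```

This reproduces the quoted range of roughly 10 000 to 20 000.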
The calculated β indicates that the height of tubes which stick out from the forest surface is around 100-200 μm, which is in good agreement with the SEM images. The emission current (I) versus applied electric field (E) plots are presented in figure 7(b) and were recorded after several cycles applying a maximum field of 0.6 V μm⁻¹. After several measurements, stable and reproducible curves were obtained. For the case of the as-grown CNTs, a significant and continuous degradation was observed after every measurement. Because of this, the applied field was increased to a maximum of 1 V μm⁻¹ to reach significant emission currents. The emission behavior of the CNTs on Si is consistent with the well-known Fowler-Nordheim (FN) model (I(E) = f_FN(E)) that describes the ideal emitter behavior up to currents of around 1 μA. The deviation from the FN model can be explained by considering the presence of a voltage drop along the nanotube, representing a resistance at the nanotube/substrate interface which is in series with the emitter. The data can be fitted by numerically solving the FN relation with the applied field reduced by the voltage drop over this series resistance, where R is the contact resistance parameter [32]. This is referred to as the resistor-limited FN fit. An equivalent resistance of 4 MΩ is obtained from the curve in figure 7(b) for the CNTs grown on Si, which is consistent with the value of 5 MΩ previously reported [32]. A much lower contact resistance of 10 kΩ is obtained for the CNTs brazed with the Cu-Sn-Ti-Zr alloy, and 100 kΩ is obtained for the Ag-Cu-Ti joint. It should be noted that the resistance values extracted from the correction to the FN characteristic cannot be directly compared to the measured electrical resistances, since the modified FN relation expresses the link between a voltage drop and a change in field enhancement [21]. The turn-on field for a detectable emission of 0.5 pA is reduced from 0.4 V μm⁻¹ for the CNTs on Si to 0.2 V μm⁻¹ for the brazed films.
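The idea behind a resistor-limited fit can be sketched with a toy model. All constants below (the FN prefactors, the field scale, the lumped resistance coefficient `k`) are illustrative assumptions, not the paper's fit parameters: the FN current is evaluated at a field reduced in proportion to the drawn current, and the resulting implicit equation is solved by bisection.

```python
import math

def f_fn(E, a=1e3, b=50.0):
    """Toy Fowler-Nordheim current (uA) vs local field E (V/um).
    a and b are illustrative constants, not fitted values from the paper."""
    return a * E * E * math.exp(-b / E) if E > 0 else 0.0

def resistor_limited_current(E0, k):
    """Solve I = f_fn(E0 - k*I) by bisection. k lumps the series (contact)
    resistance into an effective field drop per unit current."""
    lo, hi = 0.0, f_fn(E0)              # residual I - f_fn(E0 - k*I) changes sign here
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid - f_fn(E0 - k * mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

I_ideal = f_fn(10.0)                    # ideal FN emitter, no series resistance
I_limited = resistor_limited_current(10.0, k=0.01)
print(I_limited < I_ideal)              # series resistance suppresses the current
```

In an actual fit, `k` (equivalently R) would be the free parameter adjusted until the solved I-E curve matches the measured deviation from the ideal FN characteristic.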
The field-enhancement factors, which can be estimated from the emitters' height-to-radius aspect ratio, can also be extracted from the FN fits [21] and are around 5800 for the as-grown film and 11 000 for the brazed CNT films. The initial β values extrapolated from the voltage versus anode-CNT distance are higher than the ones calculated from the FN fit. This is likely caused by a partial degradation of the tubes due to the high currents reached during the measurements. A maximum current of 150 μA at 0.6 V μm⁻¹ was drawn from the Cu-Sn-Ti-Zr brazed nanotubes and 42 μA for the Ag-Cu-Ti braze. Only 0.1 μA was drawn from the as-grown sample at this field, while 30 μA was obtained at 1 V μm⁻¹. Individual CNT emitters typically provide a maximum of 10-100 μA [30] and can be pushed to yield up to 120 μA when annealed in vacuum [21]. We thus conclude, on the basis of the measured current, that only a limited number of high field-enhancement emitters, randomly distributed over the cathode area, contribute to the measured currents in figure 7(b). An accurate determination of the current density would require knowledge of the exact location of the dominant field emitters. Although the current density provided by the CNTs cannot be calculated exactly, the area measured with a 1 mm diameter spherical tip is around 0.0016 cm² [3]. This indicates that the minimum current density provided is around 93 and 26 mA cm⁻² for the Cu-Sn-Ti-Zr and the Ag-Cu-Ti brazed samples, respectively. Figure 7(c) shows some representative current density versus applied electric field curves obtained from the literature and from the Cu-Sn-Ti-Zr brazed sample [3, 6, 33-36]. From the comparison with the literature it can be concluded that the brazed samples studied here present outstanding field-emission properties, among which the following can be highlighted: (i) extremely high field enhancement, evidenced by the low turn-on field (ca. 0.2 V μm⁻¹).
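The minimum current-density figures follow directly from the quoted maximum currents and the tip contact area; this is a back-of-the-envelope check, not the paper's analysis.

```python
# Minimum current density J = I / A, using the ~0.0016 cm^2 area probed by
# the 1 mm diameter spherical tip [3] and the quoted maximum currents.
AREA_CM2 = 0.0016

currents_uA = {"Cu-Sn-Ti-Zr": 150.0, "Ag-Cu-Ti": 42.0}
density_mA_cm2 = {name: i_uA * 1e-3 / AREA_CM2       # uA -> mA, then / area
                  for name, i_uA in currents_uA.items()}
for name, j in density_mA_cm2.items():
    print(f"{name}: {j:.2f} mA/cm^2")  # → 93.75 and 26.25, i.e. the ~93 and ~26 quoted
```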
The improvement in emitted current results mainly from the improved contact with the substrate, which reduces the electrical resistance and promotes heat dissipation away from the CNT/substrate interface. This allows the nanotube emitters to be operated at higher currents before the onset of degradation. The power dissipated at the base of the nanotube at a mere 1 μA can reach several hundreds of W cm⁻². Resistive heating can lead to substrate melting and explosive damage of CNT bundles, as was experimentally observed [30]. The slight increase in nanotube graphitization during brazing may also have contributed to the improved emission current by reducing the intrinsic resistance of the nanotubes. Although out of the scope of this work, we are convinced that the combination of the brazing technique developed here with catalyst structuring will be an ideal candidate for field-emission applications. As a final remark, the fact that the nanotubes are brazed rather than soldered leads to joints with high re-melting temperatures. The joints will retain their integrity at temperatures at least up to the solidus temperatures of the braze alloys used. This directly translates into the possibility of using the brazed films as components in devices that require harsh downstream processing steps such as vacuum sealing for commercial x-ray source manufacturing carried out at 780°C [3]. More importantly, braze alloy contacts allow CNT devices to operate at performance levels previously unachievable due to the inability of low melting point solder contacts, especially indium alloy contacts, to cope with the heat generated during device operation. Conclusions The joining of macroscopic films of vertically aligned multi-wall CNTs to bare Ti and Ni-metalized Ti substrates was demonstrated by active vacuum brazing at 820°C with the Ag-Cu-Ti braze and at 880°C with the Cu-Sn-Ti-Zr braze.
The formation of a TiC interphase on the nanotubes is credited for the wetting and spreading of the filler alloy inside the porous nanotube film, leading to a mechanically strong bond. The resulting joint microstructures are anisotropic, with complex metallurgies involving the formation of carbides, intermetallic phases and solid solutions. Brazing leads to a slight increase in nanotube graphitization and to low electrical and thermal resistance contacts with the substrate, which greatly improve the electron field-emission properties. The described brazing methodology is applicable for joining macroscopic CNT films to several other substrate materials such as steel, copper and nickel. Moreover, it preserves the vertically aligned CNT structure, which is important for the field-emission properties. This greatly expands the application potential of CNTs beyond vias and electrical interconnects. The brazed CNT films could make excellent cold electron cathodes for x-ray sources or could be alternative materials to graphitic foams and carbon-carbon composites for thermal management in various land, space and aerospace applications. The joints have high remelting temperatures, at least up to the solidus temperatures of the respective filler alloys, which means that they can survive most processing steps required for e.g. encapsulation or vacuum-tight sealing of x-ray sources.
6,851.6
2015-02-01T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Black holes in 4D Einstein-Maxwell-Gauss-Bonnet gravity coupled with scalar fields Einstein-Maxwell-Gauss-Bonnet-axion theory in 4-dimensional spacetime is studied in this paper with a "Kaluza-Klein-like" process. The dyonic black hole solution coupled with higher derivative terms is obtained. The behaviour of the shear viscosity to entropy density ratio of uncharged black holes is found to be similar to that in 5-dimensional spacetime, violating the bound as well. In addition, the main features of this ratio remain almost unchanged in 4 dimensions, being characterised by $(T/\Delta)^2$ at low temperature $T$, with $\Delta$ proportional to the coefficient from the scalar fields. Introduction Lovelock's theory suggests that Einstein gravity can be modified with higher derivative terms, with second order equations of motion [1,2]. One example of such a theory is the well-known Einstein-Gauss-Bonnet (EGB) gravity. Increasing interest has been put on this sort of gravity in 4-dimensional spacetime. Recent research [3] gives a method to realise it through rescaling the Gauss-Bonnet coupling constant $\alpha \to \alpha/(D-4)$, and taking $D \to 4$ to obtain spherically symmetric 4D black hole solutions with a non-vanishing Gauss-Bonnet term. The strategy is quite straightforward. Considering an action with the contribution from the Gauss-Bonnet term after the rescaling of the Gauss-Bonnet constant $\alpha$, one obtains $$S = \int d^D x \sqrt{-g}\,\Big[ R - 2\Lambda + \frac{\alpha}{D-4}\,\mathcal{G} \Big], \qquad \mathcal{G} = R^2 - 4 R_{\mu\nu} R^{\mu\nu} + R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}.$$ The action then yields the equations of motion $$G_{\mu\nu} + \Lambda g_{\mu\nu} + \frac{\alpha}{D-4}\, H_{\mu\nu} = 0,$$ where $H_{\mu\nu}$ is the Gauss-Bonnet tensor $$H_{\mu\nu} = 2\big( R R_{\mu\nu} - 2 R_{\mu\alpha} R^{\alpha}{}_{\nu} - 2 R_{\mu\alpha\nu\beta} R^{\alpha\beta} + R_{\mu}{}^{\alpha\beta\gamma} R_{\nu\alpha\beta\gamma} \big) - \tfrac{1}{2}\, g_{\mu\nu}\, \mathcal{G}.$$ As we know, $H_{\mu\nu}$ vanishes identically in 4 dimensions and the theory reduces to Einstein's gravity. After the rescaling, however, the $(D-4)$ factor seems to leave a non-vanishing contribution. So it was suggested that taking the $D \to 4$ limit gives Gauss-Bonnet gravity in 4 dimensions. This work sheds light on the investigation of higher derivative gravity in four-dimensional spacetime.
However, arguments have also been raised claiming that the method of [3] cannot actually provide Gauss-Bonnet gravity in four dimensions [4][5][6][7], i.e. it is not a "novel Einstein-Gauss-Bonnet gravity". Since the strategy proposed above is unable to give topologically non-trivial solutions, another method has been proposed with a "Kaluza-Klein-like" procedure [8,9], compactifying D-dimensional EGB gravity on a maximally symmetric (D − 4)-dimensional space. With a similar rescaling and the D → 4 limit, one obtains a purely 4-dimensional EGB theory [8,10]. The resulting theory can also be viewed as a special Horndeski gravity or generalised Galileons [8,[11][12][13]]. Meanwhile, as black holes have good thermodynamical properties [14], great attention has been paid to this area. During the past decades, the AdS/CFT dictionary has provided a powerful tool to investigate strongly coupled gauge theories, which are dual to black holes in AdS space. For such systems, there exists a universal bound, known as the Kovtun-Starinets-Son (KSS) bound, for the shear viscosity to entropy density ratio [15][16][17][18][19]: $$\frac{\eta}{s} \geq \frac{1}{4\pi}.$$ The KSS bound has been found to be violated when small corrections are added to Einstein's gravity. Coupled with the Gauss-Bonnet term, a modified version of this bound reads [20] $$\frac{\eta}{s} \geq \frac{1 - 4\alpha_{GB}}{4\pi},$$ where $\alpha_{GB} = -\Lambda\alpha/3$, with Λ the cosmological constant. It has been shown that when considering non-vanishing electric charge, the bound (1.6) will be violated for 4D Gauss-Bonnet gravity, and a constraint on the Gauss-Bonnet coupling could be obtained by analysing the causal structure in the bulk [21]. It is of research interest to check whether the violation happens and whether there is a new constraint on the coupling constant if we include two scalar fields linear in the spatial directions. In this paper, the 4D black hole solution coupled with higher derivative terms, with both electric and magnetic charges as well as axions, will be given.
The "Kaluza-Klein-like" procedure introduced in [8] will be used to find the solution. Then, the shear viscosity to entropy density ratio for a neutral black hole with scalar fields is studied, which violates the KSS bound. This paper is organised as follows. In section 2, the Einstein-Maxwell-Gauss-Bonnet-axion gravity is obtained through the "Kaluza-Klein-like" method, and a dyonic black hole solution coupled with scalar fields and higher derivative terms is found. The thermodynamics of this black hole is briefly discussed as well. Removing the electric and magnetic charges, in section 3 we calculate the shear viscosity to entropy density ratio for neutral black holes in 4D Gauss-Bonnet gravity with axions. It will be shown that the KSS bound is violated because of the presence of the scalar fields. Section 4 focuses on the difficulties one meets when trying to constrain the Gauss-Bonnet constant through a causality analysis when the black hole is uncharged. Finally, conclusions and outlook are presented in section 5. 2 Dyonic black holes in four dimensions with Gauss-Bonnet coupling Reduced action Working towards four dimensions, we first introduce two scalar fields. The general action in D-dimensional spacetime for Einstein-Maxwell-Gauss-Bonnet-axion (EMGBA) gravity reads $$S = \int d^D x \sqrt{-g}\, \Big[ R - 2\Lambda - \frac{1}{2}\sum_i (\partial\varphi_i)^2 - \frac{1}{4} F_{\mu\nu} F^{\mu\nu} + \tilde{\alpha}\, \mathcal{G} \Big],$$ where the $\varphi_i$ are the scalar fields, and the Gauss-Bonnet term takes the form $$\mathcal{G} = R^2 - 4 R_{\mu\nu} R^{\mu\nu} + R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}.$$ Now we parameterise a D-dimensional metric as $$ds_D^2 = ds_p^2 + e^{2\phi}\, d\Sigma^2_{D-p,\lambda},$$ where φ is called the "breathing scalar", depending only on the external p-dimensional coordinates. The line element $d\Sigma^2_{D-p,\lambda}$ describes the internal maximally symmetric space, and λ gives the curvature of the internal spacetime. This is a diagonal reduction along $\Sigma_{D-p,\lambda}$. When λ = 0, the "internal" space is flat. Here we only consider an Abelian isometry group for the "internal" space, so the massive modes can be truncated. Now let us rescale the Gauss-Bonnet coupling constant, $\tilde{\alpha} \to \alpha/(D-p)$. The next step is to consider the cases where p < 5, and take the limit D → p.
The resulting reduced p-dimensional action then reads as in [8,9], where G_μν are the Einstein tensors. This action is claimed to be the EMGBA theory in 4 dimensions, and it also works for D < 4. Constructed in a mathematically more rigorous way, (2.5) does not suffer from defects of the naïve D → 4 limit such as being ill-defined [4,5,10]. Dyonic black hole solution The next step is to find the solution in spherically symmetric spacetime. First of all, one can obtain the equations of motion from the action (2.5); in four dimensions (p = 4) they can be written in a form where E_μν is the equation of motion from the metric variation, and E_φ is that for the "breathing scalar". In this way, the dependence on φ can be removed. We assume that φ = φ(r), and apply a planar symmetric ansatz for the metric. Also, we choose the scalar fields to be linearly dependent on the spatial coordinates, $\varphi_i = \beta x_i$, where β is a constant and $x_i \in \{x, y\}$. Electric and magnetic charges are added to the black hole through the gauge potentials $A_t = -q/r$ and $A_y = h\,x$, where q and h are the electric and magnetic charges respectively. More details on the equations of motion can be found in the appendix. Now combining (2.6) together with (2.7), one obtains a first equation for the metric functions. This equation is, however, not enough for us to find the solution for f(r). Substituting the metric (2.7) into eqn. (2.5), and removing total derivative terms, one finds the effective Lagrangian of [8], where the flat "internal" space is considered, which means λ = 0 and the theory is invariant under a constant shift of φ. Varying (2.12) with respect to f(r), and setting χ(r) to zero, one finds (2.14); inserting (2.14) into the δχ equation then yields (2.15). Take r_H to be the black brane horizon, that is, f(r_H) = 0.
One is now able to give the exact solution from (2.11) and (2.15). Planar black brane in AdS space Now rescale the parameters according to (2.20). The line element for the planar black brane in AdS space can then be written with l the AdS radius, using the relations that hold when D = 4. To find the value of the constant N², one needs to note that the geometry should reduce conformally to the flat Minkowski metric at the boundary, i.e. as r → ∞. As a result, N² is fixed, and this solution is a dyonic black hole in 4D EMGB gravity with linear axions. Through the "Kaluza-Klein-like" method, this sort of black hole now contains the contribution from higher derivative terms. Thermodynamics Since the Gauss-Bonnet term only contains curvature terms, the black hole thermodynamics is the same as that of Schwarzschild-AdS. This property is preserved by our reduction scheme. The Hawking temperature at the event horizon is given by (2.26), and the black brane approaches extremality as T → 0. The entropy density of the horizon is given by [24,25] with $V_2 = \int dx\, dy$ the spatial 2-volume (or area), and the free energy F follows. 3 Shear viscosity for neutral black holes Weaker horizon formula Before the shear viscosity is investigated, let us change coordinates to u = r_H/r as in (3.1). From now on, for simplicity, only black holes without electric or magnetic charge are considered, i.e. H = Q = 0. The tensor type perturbation is considered. Usually the Kubo formula gives the shear viscosity in the general case, $$\eta = -\lim_{\omega \to 0} \frac{1}{\omega}\, \mathrm{Im}\, G^{R}_{xy,xy}(\omega, \vec{k}=0),$$ where $G^{R}_{xy,xy}$ is the retarded Green's function of the momentum-stress tensor component $T_{xy}$. If translation invariance is not broken by any sources, then the momentum $T_{ty}$ is conserved, and its current corresponds to $T_{xy}$. The KSS bound holds in the momentum-conserving situation. It is weakened and becomes (1.6) when higher derivative terms are non-vanishing. Now, with scalar fields, the momentum is no longer conserved, and translation invariance is broken.
As a consequence, the shear viscosity η no longer has a hydrodynamic interpretation, but η/s is still closely associated with entropy production. In this situation, one cannot simply apply the Kubo formula to compute the shear viscosity η, and it does not work to find it directly via horizon data only. Instead, a "weaker horizon formula" has been suggested to study the shear viscosity to entropy density ratio η/s in such cases [22]. Since the perturbation is massive, one has (3.6). Substituting (3.4) into (2.6) and taking ω → 0, one obtains (3.7) with the boundary conditions (3.8). We rewrite the Hawking temperature (2.26) in terms of β̄². As (3.7) is too complicated to be solved directly, either analytically or numerically, it needs to be treated at high and low temperatures separately. High temperature expansion At high temperature, β̄² → 0. Therefore, (3.7) can be perturbatively expanded around β̄² ∼ 0, and h(u) is expanded accordingly. At 0th order, h₀(u) turns out to be a constant function, fixed by the boundary condition (3.8). At second order, one finds equation (3.14). The numerical solution of (3.14) is found, and the ratio η/s as a function of ξ/T is illustrated in Fig. (1) as a log-log plot. One finds that within the high-temperature regime, the bound (1.6) is violated as the temperature decreases. Low temperature expansion With (3.10), one has β̄² → 3 at low temperature. As the equations of motion are rather complicated, and only the value of h(1) is needed, we will not give their explicit form in this paper. However, one could solve them at 0th and at first order. According to formula (3.6), to first order, the shear viscosity to entropy density ratio at low temperatures can be written down, and the problem reduces to finding the value of h₀(1).
Though the equation at 0th order is too cumbersome to be solved directly, one could follow steps similar to previous work, solving h₀(u) near u = 0 and u = 1 respectively, and matching the solutions to get h₀(1) [27]. The strategy is: 1) solve for h₀(u) near u = 0 and u = 1, labelling these two solutions h₀₀(u) and h₀₁(u) respectively; at this stage, the integration constant in h₀₁(u) is not yet fixed; 2) match the two solutions to fix this constant and read off h₀(1). Following these steps, one finds the solution at u = 0, and similarly at the horizon where u = 1. The next thing to do is exactly the same as in the high-temperature case: write η/s as a function of ξ/T according to (3.10). As β̄² → 3, one arrives, at low temperature, at (3.31). One finds from Fig. (2) that the KSS bound is violated as well. In spite of the fact that the "Kaluza-Klein-like" process gives equations of motion really different from those in 5-dimensional spacetime, the behaviour of η/s is quite similar to what has been found in five dimensions [26]. In our case, η/s ∼ (T/ξ)² when T/ξ → 0, which satisfies the conjecture [23] that η/s ∼ (T/∆)² as T/∆ → 0, with ∆ some scale to be chosen. Here we take ξ to be ∆. Discussion It is obvious that at high temperature, the KSS bound in higher derivative gravity, η/s ≥ (1 − 4α_GB)/(4π), is hardly violated. This is because high temperatures correspond to very small β̄², where the contribution from the axions is nearly negligible. The low-temperature behaviour of the ratio, by contrast, violates the bound dramatically, since in this situation β̄² is large compared with the high-temperature case. Thus, for a fixed Gauss-Bonnet constant, the larger the mass of the graviton, the bolder the violation, which confirms one's intuitive expectation. One may find that (3.31) looks rather different from its 5-dimensional counterpart [27], which contains confluent hypergeometric limit functions related to Bessel functions.
This results from the fact that our theory in four-dimensional spacetime is rather different from those in higher dimensions, since it also includes the "breathing scalar" φ, which leads to completely different Einstein equations. All calculations are based on these equations of motion, so it is natural that one obtains a quite different form of the ratio in this paper. Nevertheless, the critical characteristics of η/s are almost the same in 4- and 5-dimensional spacetime. Both violate the KSS bound markedly. Furthermore, in both cases, η/s ∼ (T/∆)² with ∆ ∼ β, differing only by a coefficient that is influenced by the dimensionality. Although governed by different actions and therefore different equations, the main features characterising the shear viscosity to entropy density ratios in four and higher dimensions are actually very similar to each other. Inability of the "Kaluza-Klein-like" process in causality analysis When introducing Gauss-Bonnet terms, one finds that causality can be violated, and the charge as well as the scalar fields have effects on such violation [23][24][25][26][27]. Through an analysis of the causal structure, one is capable of finding restrictions on α_GB. For example, in 5 dimensions, causality is violated if α_GB > 0.09 [23,24]. We would like to study the causal structure of the bulk, and here we continue the research on neutral black holes. According to the AdS/CFT correspondence, the 4D AdS gravity we study here is dual to a 3D quantum field theory on the boundary. Usually, the procedure to study causality in dimension D is: 1) Start with a D-dimensional metric and write the perturbation (which is the wave function of the transverse graviton) as a plane wave. 2) Then take the large momentum limit, where k_μ → ∞. The x_m x_n-component of the equation of motion reduces to $k_\mu k_\nu\, g^{\mathrm{eff}}_{\mu\nu} \simeq 0$, where $g^{\mathrm{eff}}_{\mu\nu}$ is the effective metric. 3) Find c_g and the constraint by requiring c_g² − 1 ≤ 0.
It is important to mention that c_g can be interpreted as the local speed of the graviton on a constant-r hypersurface. Its dependence on the dimensionality is given by the expression for c_g²(r) in [26]. Before doing so, let us come back to the "Kaluza-Klein-like" procedure [8,10] performed in this paper. It is expected that one could make a perturbation similar to (4.2), and directly get the momentum term from (2.6), or just from E_μν. Since we only have 4 dimensions, one should take x_i in (4.2) to be either x or y. As a consequence, the momentum k will be in the x (or y) direction. More precisely, for instance, one has h(x, u) = e^{−iωt+ik_u u+ikx}. However, if one tries to substitute (4.6) into (2.6), no momentum term can be found, neither in (2.6) nor in E_μν. That is, as far as we can currently tell, it is impossible to analyse the causal structure through the "Kaluza-Klein-like" procedure. Vanishing momentum terms in four dimensions Turning back to (4.5), on the other hand, one seems at first sight able to study causality with this formula. But one has to be careful when performing such a limit. There is a rather simple way to express (3.3) more generally, which also tells why things get more complicated when we deal with 4 dimensions. It is useful to rewrite the metric (4.1) with a parameter z, where z = 1 when D ≥ 5, and z = 0 for D ≤ 4. With (4.7), one obtains (4.8). From (4.8), one finds that if one works in dimensions of at least 5, then z = 1, and one definitely recovers (4.5). But if we take D = 4, we must also set z = 0 at the same time, and (4.5) no longer works: what one obtains is an infinite graviton velocity. Moreover, the momentum term in the D-dimensional equation of motion reads c²_{g,z} k²/(N² f(r)). Obviously, z appears in the denominator. When D is 4, k has to vanish in order that the momentum term does not diverge. One could turn back to the very beginning to see why this happens, or what makes the 4D cases so special.
The answer is quite simple: the momentum term containing k vanishes if x_i = x_m or x_i = x_n, regardless of how many dimensions one has. As a result, at least three spatial coordinates, i.e. five dimensions in total, are required to construct a perturbation of (4.1) that leads to a non-zero momentum term. For example, in 5-dimensional spacetime, one often chooses x_m x_n to be xy, and thus x_i is z. However, this is just the case for neutral black holes with axions. One may have another story when adding the magnetic and electric charges back, where H ≠ 0 and Q ≠ 0. For charged black holes in 4D EGB gravity, there is a constraint such that α_GB < 0 [21,28]. Causality analysis is not the only way to obtain this result [21]; an investigation into the completeness of the spacetime also yields a similar conclusion [28]. Thus, it is reasonable to expect that, based on the "Kaluza-Klein-like" method, one could get a similar constraint on α_GB. Conclusion In this paper, we obtained the dyonic black hole solution with linear axions in 4-dimensional higher derivative gravity through the "Kaluza-Klein-like" process. The shear viscosity to entropy density ratio η/s was investigated after the electric and magnetic charges were removed. It turns out that the violation still happens when D = 4. The behaviour of η/s is rather similar to that in 5 dimensions, such that η/s ∼ (T/ξ)² when T/ξ is very small. One important outcome is that the main feature of the ratio is almost the same as what has been found in 5 dimensions [27]. The only difference comes from the different equations of motion brought in by the "breathing scalar", which is inevitable if one applies the "Kaluza-Klein-like" process to get the four-dimensional theory. When the bulk causal structure of uncharged black holes is studied, it is found that the momentum term vanishes in the equations of motion, while the velocity formula in D dimensions is only valid when D > 4.
Therefore, neither the "Kaluza-Klein-like" process nor the naïve D → 4 limit can help in the causality analysis, since the construction itself is only well-defined in dimensions no lower than 5. As mentioned, it has been shown that a non-vanishing electric or magnetic charge may lead to a different result. Since we only consider neutral black holes with tensor type perturbations in this paper, our next task may focus on charged black holes and on different types of perturbations as well. Previous research implies the possibility that the momentum term will exist in these cases, which means that causality could be studied. We expect a constraint similar to α_GB < 0 for charged black holes obtained from the "Kaluza-Klein-like" procedure. The dyonic black hole solution derived here may be used as a tool to study the transport properties of the normal state of high-temperature superconductors. It is also of further interest to explore more of its properties, such as transport behaviour like the electric and thermal conductivities. Interesting results are expected, since this is the first time a 4D dyonic black hole contains contributions from higher-derivative terms, which may bring insight into the study of high-temperature superconductivity. A Equations of motion from reduced action Starting from (2.5), there are four sets of equations. The first is the Klein-Gordon equation, which is naturally satisfied by choosing ϕ_i = βx_i. Then come the Maxwell equations, implying A_t = −q/r and A_y = hx. The variation with respect to the "breathing scalar" φ yields the equation [10] in which G_μν are the Einstein tensors. Finally there is the Einstein equation. Combining the last two equations via g^{μν}E_{μν} + αE_φ/2, one obtains an equation independent of φ.
4,937.8
2020-11-17T00:00:00.000
[ "Physics" ]
The Leray-Gårding method for finite difference schemes Leray and Gårding have developed a multiplier technique for deriving a priori estimates for solutions to scalar hyperbolic equations in either the whole space or the torus. In particular, the arguments in Leray and Gårding's work provide at least one local multiplier and one local energy functional that is controlled along the evolution. The existence of such a local multiplier is the starting point of the argument by Rauch for the derivation of semigroup estimates for hyperbolic initial boundary value problems. In this article, we explain how this multiplier technique can be adapted to the framework of finite difference approximations of transport equations. The technique applies to numerical schemes with arbitrarily many time levels, and encompasses a somehow magical trick that has been known for a long time for the leapfrog scheme. More importantly, the existence and properties of the local multiplier enable us to derive optimal semigroup estimates for fully discrete hyperbolic initial boundary value problems, which answers a problem raised by Trefethen, Kreiss and Wu. Throughout this article, we use the following notation. We let M_n(K) denote the set of n × n matrices with entries in K = R or C. If M ∈ M_n(C), M* denotes the conjugate transpose of M. We let I denote the identity matrix, or the identity operator when it acts on an infinite dimensional space. We use the same notation x* y for the Hermitian product of two vectors x, y ∈ C^n and for the Euclidean product of two vectors x, y ∈ R^n. The norm of a vector x ∈ C^n is |x| := (x* x)^{1/2}. The induced matrix norm on M_n(C) is denoted ‖·‖. The letter C denotes a constant that may vary from line to line or within the same line. The dependence of the constants on the various parameters is made precise throughout the text.
In what follows, we let d ≥ 1 denote a fixed integer, which will stand for the dimension of the space domain we are considering. We shall also use the space ℓ² of square integrable sequences. Sequences may be valued in C^k for some integer k. Some sequences will be indexed by Z^{d−1} while some will be indexed by Z^d or a subset of Z^d. We thus introduce some specific notation for the norms. Let Δx_i > 0 for i = 1, …, d be d space steps. We shall make use of the ℓ²(Z^{d−1})-norm, defined for all v ∈ ℓ²(Z^{d−1}); the corresponding scalar product is denoted ⟨·,·⟩_{ℓ²(Z^{d−1})}. Then for all integers m₁ ≤ m₂, we define the analogous norm on sequences indexed by [m₁, m₂] × Z^{d−1}. Some motivations and a brief reminder The ultimate goal of this article is to derive semigroup estimates for finite difference approximations of hyperbolic initial boundary value problems. Up to now, the only available general stability theory for such numerical schemes is due to Gustafsson, Kreiss and Sundström [GKS72]. It relies on a Laplace transform with respect to the time variable, and the corresponding stability estimates are thereby restricted to zero initial data. A long standing problem in this line of research is, starting from the GKS stability estimates, which are resolvent type estimates, to incorporate nonzero initial data and to derive semigroup estimates, see, e.g., the discussion in [Tre84, section 4]. This problem is delicate for the following reason: the validity of the GKS stability estimate is known to be equivalent to a slightly stronger version of the resolvent estimate $$\sup_{z \in \mathscr{U}} \, (|z|-1)\, \big\| (z\,I - T)^{-1} \big\| < +\infty, \qquad \mathscr{U} := \{ z \in \mathbb{C},\ |z| > 1 \}, \tag{1}$$ where T is some bounded operator on ℓ²(N) that incorporates both the discretization of the hyperbolic equation and the numerical boundary conditions. Deriving an optimal semigroup estimate amounts to showing that T is power bounded. In finite dimension, the equivalence between power boundedness of T and the resolvent condition (1) is known as the Kreiss matrix Theorem, but the analogous equivalence is known to fail in general in infinite dimension. Worse,
even the strong resolvent condition does not imply in general that T is power bounded, see, e.g., the review [SW97] or [TE05] for details and historical comments. Optimal semigroup estimates have nevertheless been derived for some discretized hyperbolic initial boundary value problems. More specifically, the first general derivation of semigroup estimates is due to Wu [Wu95], whose analysis deals with numerical schemes with two time levels and scalar equations. The results in [Wu95] were extended by Gloria and the author in [CG11] to systems in arbitrary space dimension, but the arguments in [CG11] are still restricted to numerical schemes with two time levels. The present article gives, as far as we are aware, the first systematic derivation of semigroup estimates for fully discrete hyperbolic initial boundary value problems in the case of numerical schemes with arbitrarily many time levels. It generalizes the arguments of [Wu95, CG11] and provides new insight for the construction of "dissipative" numerical boundary conditions for discretized evolution equations. Let us observe that the leapfrog scheme, with some specific boundary conditions, has been dealt with by Thomas [Tho72] by using a multiplier technique. It is precisely this technique which we aim at developing in a systematic fashion for numerical schemes with arbitrarily many time levels. In particular, we shall explain why the somehow magical multiplier u^{n+2}_j + u^n_j for the leapfrog scheme, see, e.g., [RM67], follows from a general theory that is the analogue of the Leray-Gårding method for partial differential equations, which we briefly recall now.
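The "magical" leapfrog multiplier can be illustrated numerically. In the toy example below (our own sketch, for the transport equation u_t + u_x = 0 on a periodic grid, so no boundary conditions enter), multiplying the scheme u^{n+1}_j = u^{n-1}_j − λ(u^n_{j+1} − u^n_{j-1}) by u^{n+1}_j + u^{n-1}_j (shifted time levels, this is the multiplier u^{n+2}_j + u^n_j) and summing by parts yields the discrete energy E^n = Σ_j [(u^{n+1}_j)² + (u^n_j)²] + λ Σ_j u^{n+1}_j (u^n_{j+1} − u^n_{j-1}), which the scheme conserves exactly:

```python
import numpy as np

def leapfrog_step(u_prev, u_curr, lam):
    """One leapfrog step for u_t + u_x = 0, lam = dt/dx, periodic in space."""
    return u_prev - lam * (np.roll(u_curr, -1) - np.roll(u_curr, 1))

def energy(u_curr, u_next, lam):
    """Quadratic energy obtained from the multiplier u^{n+1} + u^{n-1}."""
    return (np.sum(u_next**2 + u_curr**2)
            + lam * np.sum(u_next * (np.roll(u_curr, -1) - np.roll(u_curr, 1))))

J, lam = 64, 0.5                                  # grid points, CFL number < 1
x = np.arange(J) / J
u_prev = np.sin(2 * np.pi * x)                    # u(x, 0)
u_curr = np.sin(2 * np.pi * (x - lam / J))        # u(x, dt), exact transport

E0 = energy(u_prev, u_curr, lam)
for _ in range(200):
    u_prev, u_curr = u_curr, leapfrog_step(u_prev, u_curr, lam)
E_final = energy(u_prev, u_curr, lam)
print(abs(E_final - E0) < 1e-8)                   # conserved to roundoff
```

The conservation is a purely algebraic consequence of the multiplier, which is the discrete analogue of the role played by the Leray-Gårding multiplier in the continuous setting.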
The method by Leray and Gårding [Ler53,Går56] provides with suitable multipliers for scalar hyperbolic operators of arbitrary order.Namely, given an integer m ≥ 0, we consider a partial differential operator of the form where t ∈ R stands for the time variable, x ∈ R d stands for the space variable1 , and each operator P k (∂ x ) is a linear combination of spatial partial derivatives of order k: In the above formula, the p k,α 's are real numbers2 .Well-posedness of the Cauchy problem in Sobolev spaces is known to be linked with hyperbolicity of L. Namely, if L is strictly hyperbolic, meaning that for all ξ ∈ R d \ {0}, the (homogeneous) polynomial has m + 1 simple purely imaginary roots with respect to τ , then the Cauchy problem (2) is well-posed in In particular, there exists a constant C > 0, that is independent of the solution u and the initial data u 0 , u 1 , . . ., u m , such that there holds: The method by Leray and Gårding gives a quick and elegant way to derive the estimate (4) assuming that the solution u to (2) is sufficiently smooth.By standard duality arguments, the validity of the a priori estimate (4) yields well-posedness -meaning existence, uniqueness and continuous dependence on the data-for (2).Hence the main point is to prove (4) assuming that u is sufficiently smooth and decaying at infinity so that all integration by parts arising in the computations are legitimate.The main idea is to find a suitable quantity M u, which we call a multiplier and that will be linear with respect to u, such that when integrating the quantity 0 = (M u) (L u) on the slab [0, T ] × R d , one gets the estimate (4) for free (negative times are obtained by changing t → −t).Following [Ler53, Chapter VI] and [Går56, Section 3], one possible choice of a multiplier is given by L u where L stands for the partial differential operator of order m whose symbol is ∂ τ P , with P given in (3).Why L u is a good multiplier is justified in [Ler53,Går56].A well-known particular 
case is the choice of 2 ∂_t u as a multiplier for the wave equation. Here P(τ, ξ) = τ² + |ξ|² and therefore ∂_τ P = 2 τ, hence the choice 2 ∂_t u. The latter quantity is indeed a suitable multiplier for the wave operator because of the formula:

2 ∂_t u (∂²_t u − ∆_x u) = ∂_t ((∂_t u)² + |∇_x u|²) − 2 ∇_x · (∂_t u ∇_x u).

The important fact here is that the energy:

E(t) := ∫_{R^d} (∂_t u(t, x))² + |∇_x u(t, x)|² dx

is a positive definite quadratic form of the first order partial derivatives of u. Let us observe that the multiplier L u is local, meaning that its pointwise value at (t, x) only depends on u in a neighborhood of (t, x). This is important in view of using this multiplier in the study of initial boundary value problems.

Another important remark is that the above energy is also local, and the arguments in [Ler53, Går56] show that this property is not specific to the wave operator. The fact that both the multiplier and the energy are local is crucial in the arguments of [Rau72, Lemma 1]. In our framework of discretized equations, the multiplier will be local but the energy will not necessarily be so. We shall not exactly follow the arguments of [Rau72], which use time reversibility, but rather construct dissipative boundary conditions which will yield the optimal semigroup estimate we are aiming at.

The main result

We first set a few notations. We let ∆x₁, . . ., ∆x_d, ∆t > 0 denote space and time steps, where the ratios, the so-called Courant-Friedrichs-Lewy parameters, λ_i := ∆t/∆x_i, i = 1, . . ., d, are fixed positive constants. We keep ∆t ∈ (0, 1] as a small parameter and let the space steps ∆x₁, . . ., ∆x_d vary accordingly. The ℓ²-norms with respect to the space variables have been previously defined and thus depend on ∆t and the CFL parameters through the mesh volume ∆x₁ · · · ∆x_d. We always identify a sequence w indexed by either N (for time), Z^{d−1} or Z^d (for space), with the corresponding step function. In particular, we shall feel free to take Fourier or Laplace transforms of such sequences. For all j ∈ Z^d, we set j = (j₁, j′) with j′ := (j₂, . .
., j d ) ∈ Z d−1 .We let p, q, r ∈ N d denote some fixed multi-integers, and define p 1 , q 1 , r 1 , p , q , r according to the above notation.We also let s ∈ N denote some fixed integer.We consider a recurrence relation of the form: where the operators Q σ and B j 1 ,σ are given by: In ( 6), the a ,σ , b ,j 1 ,σ are real numbers and are independent of the small parameter ∆t (they may depend on the CFL parameters though), while S denotes the shift operator on the space grid: (S v) j := v j+ for j, ∈ Z d .We have also used the short notation The numerical scheme (5) is understood as follows: one starts with 2 initial data (f 0 j ), ..., (f s j ) defined for j 1 ≥ 1 − r 1 .Assuming that the solution has been defined up to some time index n + s, n ≥ 0, then the first and second equations in (5) should uniquely determine u n+s+1 j for j 1 ≥ 1 − r 1 , j ∈ Z d−1 .The meshes associated with j 1 ≥ 1 correspond to the interior domain while those associated with j 1 = 1 − r 1 , . . ., 0 represent the discrete boundary.We wish to deal here simultaneously with explicit and implicit schemes and therefore make the following solvability assumption. The first and second equations in (5) therefore uniquely determine u n+s+1 j for j 1 ≥ 1 − r 1 , and one then proceeds to the following time index n + s + 2. 
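For an implicit scheme, advancing one time index thus amounts to solving a linear system governed by the operator Q_{s+1}. The following sketch is illustrative only and is not the paper's setting: it works on a periodic one-dimensional grid with a hypothetical scalar operator whose symbol is bounded away from zero, so that invertibility (the analogue of the solvability assumption) is immediate and each step is a single linear solve.

```python
import numpy as np

# Hypothetical implicit operator Q_{s+1} = (3/2) I + (lam*a/2) (S - S^{-1}) on a
# periodic 1-D grid; its symbol 3/2 + i*lam*a*sin(xi) never vanishes, so the
# operator is invertible and one time step reduces to a single linear solve.
J, lam, a = 64, 0.5, -1.0
Q = 1.5 * np.eye(J)
for j in range(J):
    Q[j, (j + 1) % J] += lam * a / 2.0   # coefficient of the shift S
    Q[j, (j - 1) % J] -= lam * a / 2.0   # coefficient of S^{-1}

rhs = np.random.default_rng(1).standard_normal(J)  # right-hand side built from known levels
u_new = np.linalg.solve(Q, rhs)                    # the new time level
print(np.linalg.norm(Q @ u_new - rhs))             # residual at round-off level
```

In the actual scheme (5), the solve would in addition incorporate the boundary rows; the point of the solvability assumption is precisely that this enlarged system uniquely determines the new time level in ℓ².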
Existence and uniqueness of a solution (u n j ) to (5) follows from Assumption 1, so the last requirement for well-posedness is continuous dependence of the solution on the three possible source terms (F n j ), (g n j ), (f n j ).This is a stability problem for which several definitions can be chosen according to the functional framework.The following one dates back to [GKS72] in one space dimension and was also considered by Michelson [Mic83] in several space dimensions.It is specifically relevant when the boundary conditions are non-homogeneous ((g n j ) ≡ 0): Definition 1 (Strong stability).The finite difference approximation (5) is said to be "strongly stable" if there exists a constant C such that for all γ > 0 and all ∆t ∈ (0, 1], the solution (u n j ) to (5) with The main contributions in [GKS72,Mic83] are to show that strong stability can be characterized by a certain algebraic condition, which is usually referred to as the Uniform Kreiss-Lopatinskii Condition, see [Cou13] for an overview of such results.We do not pursue such arguments here but rather assume from the start that (5) is strongly stable.We can thus control, with zero initial data, 2 type norms of the solution to (5).Our goal is to understand which kind of stability estimate holds for the solution to (5) when one now considers nonzero initial data (f 0 j ), . . ., (f s j ) in 2 .Our main assumption is the following. Assumption 2 (Stability for the discrete Cauchy problem).For all ξ ∈ R d , the dispersion relation has s + 1 simple roots in D. (The von Neumann condition is said to hold when the roots are located in D.) In (8), we have used the classical notation From Assumption 1, we know that Q s+1 is an isomorphism on 2 , which implies by Fourier analysis that Q s+1 (e i ξ 1 , . . 
., e^{i ξ_d}) does not vanish for any ξ ∈ R^d. In particular, the dispersion relation (8) is a polynomial equation of degree s + 1 in z for any ξ ∈ R^d. We now make the following assumption, which already appeared in [GKS72, Mic83] and several other works on the same topic.

Assumption 3. For all η ∈ R^{d−1}, the functions a_{−r₁}(·, η) and a_{p₁}(·, η), viewed as polynomials in the variable z, are not constant and their roots belong to D.

Then a_{−r₁} and a_{p₁} do not vanish on U × R^{d−1}, and they have nonzero degree with respect to z for all η ∈ R^{d−1}.

Our main result is comparable with [Wu95, Theorem 3.3] and [CG11, Theorems 2.4 and 3.5] and shows that strong stability (or "GKS stability") is a sufficient condition for incorporating ℓ² initial conditions in (5) and proving optimal semigroup estimates. The main price to pay in Assumption 2 is that the roots of the dispersion relation (8), which are nothing but the eigenvalues of the so-called amplification matrix for the Cauchy problem, need to be simple. This property is satisfied for instance by the leap-frog and modified leap-frog schemes in several space dimensions, under an appropriate CFL condition, see Paragraph 1.3. Our main result reads as follows.

Theorem 1. Let Assumptions 1, 2 and 3 be satisfied, and assume that the scheme (5) is strongly stable in the sense of Definition 1.
Then there exists a constant C such that for all γ > 0 and all ∆t ∈ (0, 1], the solution to (5) satisfies the estimate: In particular, the scheme (5) is "semigroup stable" in the sense that there exists a constant C such that for all ∆t ∈ (0, 1], the solution (u n j ) to (5) with (F n j ) = (g n j ) = 0 satisfies the estimate The scheme (5) is also 2 -stable with respect to boundary data, see [Tre84,Definition 4.5], in the sense that there exists a constant C such that for all ∆t ∈ (0, 1], the solution Theorem 1 gives the optimal semigroup estimate (11), and is therefore an improvement with respect to our earlier work [Cou14] where in one space dimension, and under an appropriate non-glancing condition4 , we were able to derive the estimate (here r 1 = r, p 1 = p since d = 1): The latter estimate does not incorporate on the left hand side the quantity: and was unfortunately still not sufficient for deriving the semigroup estimate (11).Our main contribution in this article is to exhibit a suitable multiplier for the multistep recurrence relation in (5).With this multiplier, we can readily show that, for zero initial data, the (discrete) derivative of an energy can be controlled, as in [Rau72], by the trace estimate of (u n j ) and this is where strong stability comes into play. 
This first argument gives Theorem 1 for zero initial data (and even for nonzero initial data if the nonglancing condition of [Cou14] is satisfied). By linearity we can then reduce to the case of zero forcing terms in the interior and on the boundary. The next arguments in [Rau72] use time reversibility, which basically always fails for numerical schemes. Hence we must find another argument for dealing with nonzero initial data. Fortunately, the properties of our multiplier enable us to construct an auxiliary problem, where we modify the boundary conditions of (5), and for which we can prove optimal semigroup and trace estimates by "hand-made" calculations. In other words, we exhibit an alternative set of boundary conditions that yields strict dissipativity. Using these auxiliary numerical boundary conditions, the proof of Theorem 1 follows from a standard superposition argument, see, e.g., [BGS07, Section 4.5] for partial differential equations or [Wu95, CG11] for numerical schemes.

Remark 1. Assumption 3 excludes the case of explicit two-level schemes, for which s = 0 and Q₁ = I, since in that case a_{−r₁} and/or a_{p₁} do not depend on z. However, this case has already been dealt with in [Wu95, CG11], and we shall see in Section 3 where the assumption that a_{−r₁} and a_{p₁} are not constant is involved, and why the proof is actually simpler in the case s = 0 and Q₁ = I.

One space dimension

Our goal is to approximate the outgoing transport equation (d = 1 here):

∂_t u + a ∂_x u = 0,    (12)

with t, x > 0 and a < 0.
The latter transport equation does not require any boundary condition at x = 0. However, discretizing (12) usually requires prescribing numerical boundary conditions, unless one considers an upwind type scheme with a space stencil "on the right" (meaning r₁ = 0 in (5)). We now detail two possible multistep schemes for discretizing (12). Both are obtained by the so-called method of lines, which amounts to first discretizing the space derivative ∂_x u and then choosing an integration technique for discretizing the time evolution, see [GKO95].

The leap-frog scheme. It is obtained by approximating the space derivative ∂_x u by the centered difference (u_{j+1} − u_{j−1})/(2 ∆x), and by then applying the so-called Nyström method of order 2, see [HNW93, Chapter III.1]. The resulting approximation reads

u^{n+2}_j = u^n_j − λ a (u^{n+1}_{j+1} − u^{n+1}_{j−1}).

Recall that λ > 0 denotes the fixed ratio ∆t/∆x. Even though (12) does not require any boundary condition at x = 0, the leap-frog scheme stencil includes one point to the left, and we therefore need to prescribe some numerical boundary condition at j = 0. One possibility is to prescribe the homogeneous or inhomogeneous Dirichlet boundary condition. With general source terms, the corresponding scheme reads

u^{n+2}_j = u^n_j − λ a (u^{n+1}_{j+1} − u^{n+1}_{j−1}) + ∆t F^n_j,  j ≥ 1, n ≥ 0,
u^{n+2}_0 = g^{n+2},  n ≥ 0,    (13)
u^0_j = f^0_j, u^1_j = f^1_j,  j ≥ 0.

Assumption 1 is trivially satisfied because (13) is explicit. The leap-frog scheme satisfies Assumption 2 provided that λ |a| < 1. In that case, the two roots to the dispersion relation are simple and have modulus 1 for all ξ ∈ R. Assumption 3 is satisfied as long as the velocity a is nonzero, for in that case a₁(z) = −a_{−1}(z) = λ a z. The scheme (13) is known to be strongly stable, see [GT81].
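As a quick numerical illustration of the leap-frog scheme (13) with homogeneous Dirichlet boundary condition and zero source terms (the grid size, the bump profile and the crude right-edge closure below are our own choices, not part of the paper), one can check that the ℓ²-norm of a reflected bump stays of order one, in line with semigroup boundedness:

```python
import numpy as np

a, lam = -1.0, 0.5                 # transport speed and CFL ratio (lam*|a| < 1)
J = 400
dx = 1.0 / J
dt = lam * dx
x = dx * np.arange(J + 1)

def bump(y):
    # smooth, compactly supported initial datum centered at x = 0.5
    return np.where(np.abs(y - 0.5) < 0.1, np.cos(5.0*np.pi*(y - 0.5))**2, 0.0)

u_prev = bump(x)                   # u^0_j
u_curr = bump(x - a*dt)            # u^1_j from the exact solution u(t,x) = u_0(x - a t)
u_curr[0] = 0.0

l2_start = np.sqrt(dx) * np.linalg.norm(u_prev)
for n in range(800):               # u^{n+2}_j = u^n_j - lam*a*(u^{n+1}_{j+1} - u^{n+1}_{j-1})
    u_next = np.empty_like(u_curr)
    u_next[1:-1] = u_prev[1:-1] - lam*a*(u_curr[2:] - u_curr[:-2])
    u_next[0] = 0.0                # homogeneous Dirichlet at j = 0
    u_next[-1] = 0.0               # crude closure at the artificial right edge
    u_prev, u_curr = u_curr, u_next

l2_end = np.sqrt(dx) * np.linalg.norm(u_curr)
print(l2_end / l2_start)           # stays of order one
```

The bump travels left, hits the boundary and reflects as an oscillatory wave packet, yet the discrete norm remains uniformly bounded, which is exactly the content of the semigroup estimate.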
In particular, Theorem 1 shows that (13) is semigroup stable.An illustration of this stability property is given in the numerical simulation of a bump function, propagating at speed a = −1 towards the left.Homogeneous Dirichlet boundary conditions are enforced at j = 0.The reflection of the bump generates a highly oscillatory wave packet that propagates with velocity +1 towards the right.The envelope of this wave packet coincides with the profile of the initial condition, which indicates that the 2 -norm is roughly preserved by the evolution.This numerical observation is in agreement with semigroup boundedness.Other choices of numerical boundary conditions for the leap-frog scheme or its fourth order extension are discussed, e.g., in [Oli74,Slo83,Tho72,Tre84]. The main discussion in [Oli74,Slo83,Tre84] is to verify strong stability for a wide choice of numerical boundary conditions, and if strong stability holds, then Theorem 1 automatically gives semigroup boundedness, which was not achieved in these earlier works.A scheme based on the backwards differentiation rule.We still start from the transport equation (12), approximate the space derivative ∂ x u by the centered finite difference (u j+1 − u j−1 )/(2 ∆x), and then apply the backwards differentiation formula of order 2, see [HNW93, Chapter III.1].The resulting scheme reads: u n j = 0 .This corresponds to s = 1 and The operator Q 2 is an isomorphism on 2 (Z) since Q 2 is an isomorphism for any small λ a (as a perturbation of 3/2 I), Q 2 depends continuously on λ a, and there holds (uniformly with respect to λ a): The operator Q 2 is therefore an isomorphism on 2 (Z) for any λ a > 0 (see, e.g., [Cou09, Lemma 4.3]). Let us now study the dispersion relation (8), which reads here It is clear that the latter equation has two simple roots in z for any ξ ∈ R.Moreover, if sin ξ = 0, the roots are 1 and 1/3 which belong to D. 
In the case sin ξ = 0, none of the roots belongs to S 1 and examining the case λ a sin ξ = 1, we find that for sin ξ = 0, both roots belong to D (which is consistent with the shape of the stability region for the backwards differentiation formula of order 2, see [HW96, Chapter V.1]).Assumption 2 is therefore satisfied.Assumption 3 is satisfied as long as a is nonzero since there holds p = r = 1 and a 1 (z) = a −1 (z) = λ a z 2 /2.Theorem 1 therefore yields semigroup boundedness as long as one uses numerical boundary conditions for which the numerical scheme is well-defined (this is at least the case for λ a small enough) and strong stability holds. Two space dimensions Here we wish to approximate the two-dimensional transport equation (d = 2): in the space domain {x 1 > 0 , x 2 ∈ R}.When a 1 is negative, the latter problem does not necessitate any boundary condition at x 1 = 0. Following [AG76], we use one of the following two-dimensional versions of the leap-frog scheme, either or so Assumption 3 is valid as long as a 1 = 0.For the scheme (15), we have again r 1 = p 1 = 1, and so Assumption 3 is valid as long as both a 1 and a 2 are nonzero.We refer to [AG79] for the verification of strong stability depending on the choice of some numerical boundary conditions for ( 14) or (15).Once again, if strong stability holds, then Theorem 1 yields semigroup boundedness and 2 -stability with respect to boundary data. The Leray-Gårding method for fully discrete Cauchy problems This section is devoted to proving stability estimates for discretized Cauchy problems, which is the first step before considering the discretized initial boundary value problem (5).More precisely, we consider the simpler case of the whole space j ∈ Z d , and the recurrence relation: where the operators Q σ are given by (6).We recall that in (6), the a ,σ are real numbers and are independent of the small parameter ∆t (they may depend on the CFL parameters λ 1 , . . 
., λ d ), while S denotes the shift operator on the space grid: (S v) j := v j+ for j, ∈ Z d .Stability of ( 16) is defined as follows. Definition 2 (Stability for the discrete Cauchy problem).The numerical scheme defined by ( 16) is ( 2 -) stable if Q s+1 is an isomorphism from 2 (Z d ) onto itself, and if furthermore there exists a constant C 0 > 0 such that for all ∆t ∈ (0, 1], for all initial conditions (f 0 j ) j∈Z d , . . ., (f s j ) j∈Z d in 2 (Z d ), there holds Let us quickly recall that stability in the sense of Definition 2 is in fact independent of ∆t ∈ (0, 1] (because (16) does not involve ∆t and (17) can be simplified on either side by i ∆x i ), and can be characterized in terms of the uniform power boundedness of the so-called amplification matrix where the Q σ (κ)'s are defined in (8) and where it is understood that A is defined on the largest open set of C d on which Q s+1 does not vanish.Let us also recall that if Q s+1 is an isomorphism from 2 (Z d ) onto itself, then Q s+1 does not vanish on (S 1 ) d , and therefore does not vanish on an open neighborhood of (S 1 ) d .With the above definition (18) for A , the following well-known result holds: Proposition 1 (Characterization of stability for the fully discrete Cauchy problem).Assume that Q s+1 is an isomorphism from 2 (Z d ) onto itself.Then the scheme (16) is stable in the sense of Definition 2 if and only if there exists a constant C 1 > 0 such that the amplification matrix A in (18) satisfies In particular, the spectral radius of A (e i ξ 1 , . . ., e i ξ d ) should not be larger than 1 (the so-called von Neumann condition). The eigenvalues of A (e i ξ 1 , . . ., e i ξ d ) are the roots to the dispersion relation (8).When these roots are simple for all ξ ∈ R d , the von Neumann condition is both necessary and sufficient for stability of (16), see, e. 
g., [Cou13, Proposition 3].Assumption 2 is therefore a way to assume that ( 16) is stable for the discrete Cauchy problem.Our goal is to derive the semigroup estimate (17) not by applying Fourier transform to (16) and using uniform power boundedness of A , but rather by multiplying the first equation in ( 16) by a suitable local multiplier.The analysis relies first on the simpler case where one only considers the time evolution and no additional space variable. Stable recurrence relations In this Paragraph, we consider sequences (v n ) n∈N with values in C. The index n should be thought of as the discrete time variable, and we therefore introduce the new notation T for the shift operator on the time grid: (T m v) n := v n+m for all m, n ∈ N. We start with the following elementary but crucial Lemma, which is the analogue of [Går56, Lemme 1.1]. Lemma 1 (The energy-dissipation balance law).Let P ∈ C[X] be a polynomial of degree s + 1 whose roots are simple and located in D. Then there exists a positive definite Hermitian form q e on C s+1 , and a nonnegative Hermitian form q d on C s+1 , that both depend in a C ∞ way on P , such that for any sequence (v n ) n∈N with values in C, there holds In particular, for all sequence (v n ) n∈N that satisfies the recurrence relation The fact that there exists a Hermitian norm on C s+1 that is nonincreasing along solutions to the recurrence relation is not new.In fact, it is easily seen to be a consequence of the Kreiss matrix Theorem, see [SW97].However, the important point here is that we can construct a multiplier that yields directly the "energy boundedness" (or decay).The fact that the coefficients of this multiplier are integer multiples of the coefficients of P will be crucial in the analysis of Section 3, see also Proposition 2 below. Proof.We borrow some ideas from [Går56, Lemme 1.1] and introduce the interpolation polynomials: where x 1 , . . 
., x s+1 denote the roots of P , and a = 0 its dominant coefficient.Since the roots of P are pairwise distinct, the P k 's form a basis of C s [X] and they depend in a C ∞ way on the coefficients of P .We have We then consider a sequence (v n ) n∈N with values in C and compute The conclusion follows by defining: ∀ (w 0 , . . ., w s ) ∈ C s+1 , q e (w 0 , . . ., w s ) := q d (w 0 , . . ., w s ) := The form q e is positive definite because the P k 's form a basis of C s [X].The form q d is nonnegative because the roots of P are located in D. Both forms depend in a C ∞ way on the coefficients of P because the roots of P are simple. Lemma 1 shows that the polynomial P yields the good multiplier T P (T) v n for the recurrence relation P (T) v n = 0. Of course, P is not the only possible choice, though it will be our favorite one in what follows.As in [Går56, Lemme 1.1], any polynomial of the form7 provides with an energy balance of the form with suitable Hermitian forms q e , q d that have the same properties as stated in Lemma 1. The energy-dissipation balance for finite difference schemes In this Paragraph, we consider the numerical scheme (16).We introduce the following notation: Thanks to Fourier analysis, Lemma 1 easily gives the following result: Proposition 2 (The energy-dissipation balance law).Let Assumptions 1 and 2 be satisfied.Then there exist a continuous coercive quadratic form E 0 and a continuous nonnegative quadratic form D 0 on 2 (Z d ; R) s+1 such that for all sequences (v n ) n∈N with values in 2 (Z d ; R) and for all n ∈ N, there holds In particular, for all initial data f 0 , . . ., f s ∈ 2 (Z d ; R), the solution to (16) satisfies and ( 16) is ( 2 -)stable. 
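The stability characterization of Proposition 1 is easy to test numerically. As a sanity check for the one-dimensional leap-frog example introduced above (the companion-matrix normalization below is our own), one can verify uniform power boundedness of the amplification matrix under the CFL condition λ|a| < 1:

```python
import numpy as np

lam, a = 0.5, -1.0                     # CFL condition: lam*|a| < 1
sup_norm = 0.0
for xi in np.linspace(0.0, 2.0*np.pi, 64):
    # leap-frog in Fourier: u^{n+2} = u^n - 2i*lam*a*sin(xi)*u^{n+1},
    # written as a one-step recurrence on the pair (u^{n+1}, u^n)
    A = np.array([[-2j*lam*a*np.sin(xi), 1.0],
                  [1.0,                  0.0]], dtype=complex)
    M = np.eye(2, dtype=complex)
    for n in range(200):
        M = A @ M
        sup_norm = max(sup_norm, np.linalg.norm(M, 2))
print(sup_norm)                        # remains bounded uniformly in xi and n
```

Since the two eigenvalues of A lie on the unit circle and stay well separated for λ|a| < 1, the eigenvector matrix has uniformly bounded condition number, which is why the supremum above stays of moderate size.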
Proof. We use the same notation v^n for the sequence (v^n_j)_{j∈Z^d} and the corresponding step function on R^d whose value on the cell [j₁ ∆x₁, (j₁ + 1) ∆x₁) × · · · × [j_d ∆x_d, (j_d + 1) ∆x_d) equals v^n_j. Applying Lemma 1 in Fourier variables, where v̂^n denotes the Fourier transform of v^n, yields Hermitian forms q_{e,ζ}, q_{d,ζ} that depend in a C^∞ way on ζ ∈ R^d and are 2π-periodic in each ζ_j. Furthermore, q_{e,ζ} is positive definite and q_{d,ζ} is nonnegative. The conclusion of Proposition 2 follows by a standard compactness argument for showing coercivity of E₀.

Examples

The first basic example corresponds to the case s = 0, for which the multiplier provided by Proposition 2 is Q₁ v^{n+1}_j. In that case, the energy E₀ reads |||Q₁ v|||²_{−∞,+∞} (recall that Q₁ is an isomorphism), the energy-dissipation balance law reduces to a trivial algebraic identity, and ℓ²-stability for the Cauchy problem amounts to assuming that the operator norm of Q₁^{−1} Q₀ is not larger than 1.

Let us now consider the leap-frog scheme in one space dimension, for which we have s = 1 and Q₂ = I. The corresponding dispersion relation (8) reduces to

z² + 2 i λ a sin(ξ) z − 1 = 0.

For λ |a| < 1, the latter equation has two simple roots x₁(ξ), x₂(ξ) of modulus 1. Following the previous analysis, see (20)-(21), the dissipation form q_{d,ζ} is zero and q_{e,ζ} is positive definite. The associated forms in Proposition 2 are D₀ ≡ 0 and (recall here d = 1):

E₀(v⁰, v¹) := Σ_{j∈Z} ∆x ((v¹_j)² + (v⁰_j)² + λ a v¹_j (v⁰_{j+1} − v⁰_{j−1})).

The latter energy functional E₀ is coercive under the condition λ |a| < 1, which is the necessary and sufficient condition of stability for the leap-frog scheme, and E₀ is conserved for solutions to the leap-frog scheme. The conservation of E₀ is usually proved by starting from the recurrence relation

u^{n+2}_j − u^n_j + λ a (u^{n+1}_{j+1} − u^{n+1}_{j−1}) = 0,

using the multiplier u^{n+2}_j + u^n_j, and summing with respect to j. This is equivalent, for solutions to the leap-frog scheme, to what we propose here, since our multiplier reads

M u^n_j = 2 u^{n+2}_j + λ a (u^{n+1}_{j+1} − u^{n+1}_{j−1}).

However, it will appear more clearly in Section 3 why our choice for M u^n_j has a major advantage when considering initial boundary value problems.
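The conservation of the leap-frog energy can also be checked numerically. The sketch below (a periodic grid with random data, entirely our own setup, using the sign conventions of the recurrence u^{n+2}_j = u^n_j − λa(u^{n+1}_{j+1} − u^{n+1}_{j−1})) verifies that the discrete quantity ‖u^{n+1}‖² + ‖u^n‖² + λa Σ_j u^{n+1}_j (u^n_{j+1} − u^n_{j−1}) is conserved up to round-off:

```python
import numpy as np

lam, a, J = 0.5, -1.0, 128         # CFL condition lam*|a| < 1 ensures coercivity
rng = np.random.default_rng(0)
u0 = rng.standard_normal(J)        # u^0
u1 = rng.standard_normal(J)        # u^1

def energy(v, w):
    # discrete leap-frog energy with (v, w) = (u^n, u^{n+1}):
    # ||w||^2 + ||v||^2 + lam*a * sum_j w_j (v_{j+1} - v_{j-1})
    return w @ w + v @ v + lam * a * (w @ (np.roll(v, -1) - np.roll(v, 1)))

E_start = energy(u0, u1)
v, w = u0, u1
for _ in range(200):               # periodic leap-frog steps
    v, w = w, v - lam * a * (np.roll(w, -1) - np.roll(w, 1))
print(abs(energy(v, w) - E_start))  # ~ machine precision: the energy is conserved
```

The cross term is controlled by λ|a| (‖u^{n+1}‖² + ‖u^n‖²), so conservation of this quantity indeed yields a uniform ℓ² bound on the two stored time levels when λ|a| < 1.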
Let us observe here that the energy functional E 0 is associated with a local energy density . This is very specific to the leap-frog scheme.In general, the coefficients of the Hermitian forms q e,ζ , q d,ζ are not trigonometric polynomials of ζ and therefore E 0 , D 0 do not necessarily admit local densities.This is one main difference with [Ler53,Går56]. Semigroup estimates for fully discrete initial boundary value problems We now turn to the proof of Theorem 1 for which we shall use the results of Section 2 as a toolbox.By linearity of (5), it is sufficient to prove Theorem 1 separately in the case (f 0 j ) = • • • = (f s j ) = 0, and in the case (F n j ) = 0, (g n j ) = 0.The latter case is the most difficult and requires the introduction of an auxiliary set of "dissipative" boundary conditions.Solutions to (5) are always assumed to be real valued, which means that the data are real valued.For complex valued initial data and/or forcing terms, one just uses the linearity of (5). The case with zero initial data We first assume (f 0 j ) = • • • = (f s j ) = 0.By strong stability, we already know that (7) holds with a constant C that is independent of γ > 0 and ∆t ∈ (0, 1].Therefore, proving Theorem 1 amounts to showing the existence of a constant C, that is independent of γ > 0 and ∆t ∈ (0, 1] such that the solution to (5) with We thus consider a parameter γ > 0 and a time step ∆t ∈ (0, 1], and focus on the numerical scheme (5) with zero initial data (that is, (f 0 j ) = • • • = (f s j ) = 0).For all n ∈ N, we extend the sequence (u n j ) by zero for j 1 ≤ −r 1 : We use Proposition 2 and compute: Due to the form of the operator L, see ( 22), and the fact that v n j vanishes for j 1 ≤ −r 1 , there holds: and we thus get We multiply the latter equality by exp(−2 γ (n + s + 1) ∆t), sum with respect to n from 0 to some N and use the fact that D 0 is nonnegative.Recalling that the initial data in (5) vanish, we get with and Let us now estimate the two source terms S 
1,N , S 2,N in (24).We begin with the term S 2,N defined in (26).Let us recall that the ratio ∆t/∆x 1 is fixed.Furthermore, the form of the operators L and M in (22) gives the estimate (recall that v n j vanishes for j 1 ≤ −r 1 ): for a constant C that does not depend on N , γ nor on ∆t.We thus have, uniformly with respect to N ∈ N, γ > 0 and ∆t ∈ (0, 1]: where we have used the trace estimate (7) that follows from the strong stability assumption. Let us now focus on the term S 1,N in (24), see the defining equation ( 25).We use the Cauchy-Schwarz inequality and derive (using now the interior estimate in (7) that follows from the strong stability assumption): Ignoring the nonnegative term on the left hand-side of (24) and using the coercivity of E 0 , we have proved that there exists a constant C > 0 that is uniform with respect to N, γ, ∆t such that: which yields (23) and therefore the validity of Theorem 1 in the case of zero initial data. Construction of dissipative boundary conditions In this paragraph, we consider an auxiliary problem for which we shall be able to prove simultaneously an optimal semigroup estimate and a trace estimate for the solution.More precisely, we shall prove the following result. Theorem 2. Let Assumptions 1, 2 and 3 be satisfied.Then for all P 1 ∈ N, there exists a constant C P 1 > 0 such that, for all initial data (f 0 j ), . . ., (f s j ) ∈ 2 (Z d ) and for all source term (g n j ) j 1 ≤0,n≥s+1 that satisfies there exists a unique sequence (u n j ) j∈Z d ,n∈N solution to Moreover for all γ > 0 and ∆t ∈ (0, 1], this solution satisfies Theorem 2 justifies why we advocate the choice M u n j = 2 u n+2 j + λ a (u n+1 j+1 − u n+1 j−1 ) rather than the more standard u n+2 j + u n j as a multiplier for the leap-frog scheme.Despite repeated efforts, we have not been able to prove the estimate (28) when using the numerical boundary condition u n+2 j + u n j on j 1 ≤ 0, in conjunction with the leap-frog scheme on j 1 ≥ 1. 
Proof.Let us first quickly observe that the solution to ( 27) is well-defined since, as long as we have determined the solution up to a time index n + s, n ≥ 0, then u n+s+1 is sought as a solution to an equation of the form where F belongs to 2 (Z d ) (this is due to the form of L and M , see ( 22)).Hence u n is uniquely defined and belongs to 2 (Z d ) for all n ∈ N. The proof of Theorem 2 starts again with the application of Proposition 2. Using the nonnegativity of the dissipation form D 0 , we get8 By the Young inequality, we get We multiply the latter inequality by exp(−2 γ (n + s + 1) ∆t), sum from n = 0 to some arbitrary N and already derive the estimate (here we use again the fact that ∆t/∆x 1 is a fixed positive constant): Using the coercivity of E 0 and the inequality we have therefore derived the estimate where the constant C is independent of γ, ∆t and on the solution (u n j ).In order to prove (28), the main remaining task is to derive the trace estimate for (u n j ).This is done by first dealing with the case where γ ∆t is large. • From the definition of the operator L, see ( 22), there exists a constant C > 0 and an integer J such that Since Q s+1 is an isomorphism, there exists a constant c > 0 such that Multiplying by exp(−2 γ (n + s + 1) ∆t) and summing with respect to n ∈ N, we get n≥s+1 ∆t e −2 γ n ∆t Choosing γ ∆t large enough, that is γ ∆t ≥ ln R 0 for some numerical constant R 0 > 1 that depends only on the (fixed) coefficients of the operator L, we have derived the estimate . It remains to use (29) and we get an even better estimate than (28) which we were originally aiming at: This gives a control of infinitely many traces and not only finitely many (this restriction to finitely many traces will appear in the regime where γ ∆t can be small). 
• From now on, we have fixed a constant R 0 > 1 such that (28) holds for γ ∆t ≥ ln R 0 and we thus assume γ ∆t ∈ (0, ln R 0 ].We also know that the estimate (29) holds, independently of the value of γ ∆t, and we now wish to estimate the traces of the solution (u n j ) for finitely many values of j 1 .We first observe from (29) that for all γ > 0 and ∆t ∈ (0, 1], there exists a constant C γ,∆t such that In particular, for any j 1 ∈ Z, the Laplace-Fourier transforms u j 1 of the step functions The dual variables are denoted τ = γ + i θ, γ > 0, and η = (η 2 , . . ., η d ) ∈ R d−1 .It will also be convenient to introduce the notation η ∆ := (η 2 ∆x 2 , . . ., η d ∆x d ). We now recall that γ ∆t is restricted to the interval (0, ln R 0 ], and we use (29) to derive Similarly, we have which we can again uniformly estimate by the right hand side of (30). Going back to the right hand side terms in (31) and (32), we find that there only remains for proving (30) to estimate the integral (here there are finitely many values of σ and σ ): where we have applied Fubini Theorem.Applying first Plancherel Theorem with respect to the d − 1 last space variables, we get The conclusion then follows by computing and by recalling that γ ∆t belongs to (0, ln R 0 ].We can eventually bound the integrals on the left hand side of (30) by estimating separately the integrals of each term on the right hand side of (31) and (32). The conclusion now relies on the following crucial result. Lemma 3 (The trace estimate).Let Assumptions 1, 2 and 3 be satisfied.Let R 0 > 1 be fixed as above and let P 1 ∈ N. Then there exists a constant C P 1 > 0 such that for all z ∈ U with |z| ≤ R 0 , for all η ∈ R d−1 and for all sequence (w j 1 ) j 1 ∈Z ∈ 2 (Z; C), there holds Recall that the functions a 1 , 1 = −r 1 , . . ., p 1 , are defined in (9). The proof of Lemma 3 is rather long.Before giving it in full details, we indicate how Lemma 3 yields the result of Theorem 2. 
We apply Lemma 3 to z = exp(τ ∆t), τ = γ + i θ with γ ∆t ∈ (0, ln R 0 ], η ∆ ∈ R d−1 and the sequence ( u j 1 (τ, η)) j 1 ∈Z .We then integrate (33) with respect to (θ, η) and use Lemma 2 to derive It remains to apply Plancherel Theorem and we get Recalling that γ ∆t is restricted to the interval (0, ln R 0 ], we have thus derived the trace estimate n∈N ∆t e −2 γ n ∆t Combined with the semigroup and interior estimate (29), this gives the estimate (28) of Theorem 2 for γ ∆t ∈ (0, ln R 0 ]. Proof of Lemma 3. Let us recall that the functions a 1 are 2 π-periodic with respect to each coordinate of η.We can therefore restrict to η ∈ [0, 2 π] d−1 rather than considering η ∈ R d−1 .We argue by contradiction and assume that the conclusion to Lemma 3 does not hold.This means the following, up to normalizing and extracting subsequences; there exist three sequences (indexed by k ∈ N): • a sequence (w k ) k∈N with values in 2 (Z; C) such that (w k −r 1 −p 1 , . . ., w k P 1 ) belongs to the unit sphere of C P 1 +r 1 +p 1 +1 for all k, and (w k −r 1 −p 1 , . . ., w k P 1 ) converges towards (w −r 1 −p 1 , . . ., w P 1 ) as k tends to infinity, and these sequences satisfy: We are going to show that (34) implies that (w −r 1 −p 1 , . . ., w P 1 ) must be zero, which will yield a contradiction since this vector must have norm 1. • Let us first show that each component (w k j 1 ) k∈N , j 1 ∈ Z, has a limit as k tends to infinity.This is already clear for j 1 = −r 1 − p 1 , . . ., P 1 .For j 1 > P 1 , we argue by induction.From (34), we have and by Assumption 3, we know that a p 1 (z, η) is nonzero.Hence (w k P 1 +1 ) k∈N converges towards which we define as w P 1 +1 .We can argue by induction in the same way for all indices j 1 > P 1 + 1, but also for indices j 1 < −r 1 − p 1 because the function a −r 1 also does not vanish on U × R d−1 . 
Using (34), we have thus shown that for each j1 ∈ Z, (w^k_{j1})_{k∈N} tends towards some limit w_{j1} as k tends to infinity, and the sequence w, which does not necessarily belong to ℓ2(Z; C), satisfies the induction relations: ∀ j1 ≤ 0, • The induction relation (35) is the one that arises in [GKS72, Mic83] and all the works that deal with strong stability. The main novelty here is to use simultaneously (35) for controlling the unstable components of (w_{−r1−p1}, . . ., w_{−1}) and (36) for controlling the stable components of (w_{−r1−p1}, . . ., w_{−1}). The fact that w satisfies simultaneously (35) and (36) for j1 ≤ 0 automatically annihilates the central components. This sketch of proof is made precise below. We define the source terms: which, according to (34), satisfy We also introduce the vectors (here T denotes transposition) and the matrices: The matrix L is well-defined on U × R^{d−1} according to Assumption 3. The matrix M is also well-defined on U × R^{d−1} because for any η ∈ R^{d−1}, Assumption 3 asserts that a_{p1}(·, η) is a nonconstant polynomial whose roots lie in D.
From the Gauss-Lucas Theorem, the roots of ∂z a_{p1}(·, η) lie in the convex hull of those of a_{p1}(·, η). Therefore ∂z a_{p1}(·, η) does not vanish on U. In the same way, ∂z a_{−r1}(·, η) does not vanish on U. With our above notation, the vectors W^k_{j1}, W_{j1}, satisfy the one step induction relations: • From Assumption 3 and the above application of the Gauss-Lucas Theorem, we already know that both matrices L(z, η) and M(z, η) are invertible for (z, η) ∈ U × R^{d−1}. Furthermore, Assumption 2 shows that L(z, η) has no eigenvalue on S1 for (z, η) ∈ U × R^{d−1}. This property dates back at least to [Kre68]. However, central eigenvalues on S1 may occur for L when z belongs to S1. The crucial point for proving Lemma 3 is that Assumption 2 precludes central eigenvalues of M for all z ∈ U. Namely, for all z ∈ U and all η ∈ R^{d−1}, M(z, η) has no eigenvalue on S1. This property holds because otherwise, for some (z, η) ∈ U × R^{d−1}, there would exist a solution κ1 ∈ S1 to the dispersion relation For convenience, the coordinates of η are denoted (η2, . . ., ηd). Using the definition (9) of a_ℓ, and defining κ := (κ1, e^{iη2}, . . ., e^{iηd}), we would have found a root z ∈ U to the relation but this is not possible because the s + 1 roots (in z) of the dispersion relation (8) are simple and belong to D.
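The Gauss-Lucas argument used here can be illustrated numerically: if every root of a polynomial lies in the closed unit disk D, then every root of its derivative lies in the convex hull of the roots, hence in D as well. The polynomial below is randomly generated for illustration only.

```python
import numpy as np

# Gauss-Lucas illustration: the critical points of a polynomial lie in the
# convex hull of its roots. In particular, if every root lies in a disk of
# radius R centered at the origin, so does every root of the derivative.
rng = np.random.default_rng(1)
moduli = 0.9 * rng.random(6)                      # magnitudes inside D
phases = np.exp(2j * np.pi * rng.random(6))
roots = moduli * phases                            # six roots in the unit disk

p = np.poly(roots)        # polynomial coefficients (highest degree first)
dp = np.polyder(p)        # coefficients of the derivative
crit = np.roots(dp)       # critical points

# The disk of radius max|roots| is convex and contains all roots,
# hence it contains their convex hull and therefore all critical points.
assert np.max(np.abs(crit)) <= np.max(np.abs(roots)) + 1e-10
```

The small tolerance only absorbs floating-point error in the root finding; the containment itself is exact.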
The Gauss-Lucas Theorem thus shows that the roots of the relation (42) belong to D (and therefore not to U). At this stage, we know that the eigenvalues of M(z, η), (z, η) ∈ U × R^{d−1}, split into two groups: those in U, which we call the unstable ones, and those in D, which we call the stable ones. For (z, η) ∈ U × R^{d−1}, for P1 = max(p1, q1 + 1) gives (recall the definition (45) of g^{n+s+1}) We can apply Theorem 1 to the solution (w^n_j) of (44) because the initial data in (44) vanish. We get:

Conclusion and perspectives

Let us first observe that in [Wad90], Wade has constructed symmetrizers for deriving stability estimates for multistep schemes, even in the case of variable coefficients. His conditions for constructing a symmetrizer are less restrictive than Assumption 2. However, the symmetrizer in [Wad90] is genuinely nonlocal, and it is therefore not clear that it may be useful for boundary value problems. The main novelty here is to construct a local multiplier whose properties allow for the design of an auxiliary dissipative boundary value problem. This is our key to Theorem 1. In this article we have always discarded the dissipation term provided by the nonnegative form D0. For the approximation of parabolic equations, this term may give some extra dissipation, but a crucial point to keep in mind is that the coefficients of the numerical scheme are assumed to be constant (which may in turn yield rather severe CFL conditions for implicit approximations of parabolic equations). Hence it does not seem very clear that our approach will yield stability estimates with "optimal" CFL conditions when approximating parabolic equations. This extension is left to future study.
The main possible improvement of Theorem 1 would consist of assuming that only the roots of (8) that lie on S1 are simple. Here we have assumed that all the roots, including those in D, are simple. If we could manage to deal with multiple roots in D, then Theorem 1 would be applicable to numerical approximations of the transport equation (12) that are based on Adams-Bashforth methods of order 3 or higher (such methods have 0 as a root of multiplicity 2 or more at the zero frequency). The results in this paper achieve the proof of a "weak form" of the conjecture in [KW93] that strong stability, in the sense of Definition 1, implies semigroup stability. However, an even stronger assumption was made in [KW93], namely that the sole fulfillment of the interior estimate, when both the initial and boundary data for (5) vanish, does imply semigroup stability. The analogous conjecture for partial differential equations seems to be still open so far, but we do hope that our multiplier technique may yield some insight for dealing with the strong form of the conjecture in [KW93].

Figure 1: Reflection of a bump by the leap-frog scheme with homogeneous Dirichlet condition at four successive times.
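The experiment of Figure 1 can be sketched in a few lines; the grid, bump, and CFL number below are our own illustrative choices, not the paper's. The leap-frog scheme advects a bump towards the boundary, where the homogeneous Dirichlet condition produces a reflected wave while the computation stays bounded.

```python
import numpy as np

# Minimal sketch of the leap-frog scheme for u_t + a u_x = 0 on [0, 1]
# with homogeneous Dirichlet boundary conditions, as in Figure 1.
a, J, lam = 1.0, 200, 0.5        # speed, number of cells, CFL number a*dt/dx
dx = 1.0 / J
dt = lam * dx / a
x = np.linspace(0.0, 1.0, J + 1)

u_prev = np.exp(-300.0 * (x - 0.3) ** 2)            # initial bump
u_curr = np.exp(-300.0 * (x - 0.3 - a * dt) ** 2)   # exact shift for step one

for _ in range(300):
    u_next = np.empty_like(u_curr)
    # interior update: u^{n+1}_j = u^{n-1}_j - lam (u^n_{j+1} - u^n_{j-1})
    u_next[1:-1] = u_prev[1:-1] - lam * (u_curr[2:] - u_curr[:-2])
    u_next[0] = 0.0              # homogeneous Dirichlet boundary conditions
    u_next[-1] = 0.0
    u_prev, u_curr = u_curr, u_next

# For lam < 1 the solution remains bounded; the Dirichlet condition at the
# outflow boundary over-specifies the problem and generates a reflected wave.
assert np.isfinite(u_curr).all() and np.max(np.abs(u_curr)) < 10.0
```

Plotting `u_curr` at successive times reproduces the qualitative picture of Figure 1: an incident bump followed by an oscillatory reflected wave travelling back into the domain.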
Non-Wilson-Fisher kinks of O(N) numerical bootstrap: from the deconfined phase transition to a putative new family of CFTs It is well established that the O(N) Wilson-Fisher (WF) CFT sits at a kink of the numerical bounds from bootstrapping the four point function of the O(N) vector. Moving away from the WF kinks, there indeed exists another family of kinks (dubbed non-WF kinks) on the curve of O(N) numerical bounds. Different from the O(N) WF kinks that exist for arbitrary N in 2 < d < 4 dimensions, the non-WF kinks exist in arbitrary dimensions but only for a large enough N > Nc(d) in a given dimension d. In this paper we have achieved a thorough understanding of a few special cases of these non-WF kinks, which already hints at interesting physics. The first case is the O(4) bootstrap in 2d, where the non-WF kink turns out to be the SU(2)1 Wess-Zumino-Witten (WZW) model, and all the SU(2)k≥2 WZW models saturate the numerical bound on the left side of the kink. We further carry out a dimensional continuation of the 2d SU(2)1 kink towards the 3d SO(5) deconfined phase transition. We find the kink disappears at around d = 2.7 dimensions, indicating the SO(5) deconfined phase transition is weakly first order. The second interesting observation is that the O(2) bootstrap bound does not show any kink in 2d (Nc = 2), but is surprisingly saturated by the 2d free boson CFT (also called Luttinger liquid) all the way along the numerical curve. The last case is the N = ∞ limit, where the non-WF kink sits at (∆φ, ∆T) = (d − 1, 2d) in d dimensions. We manage to write down its analytical four point function in arbitrary dimensions, which equals the subtraction of correlation functions of a free fermion theory and a generalized free theory. An important feature of this solution is the existence of a full tower of conserved higher spin currents. We speculate that a new family of CFTs will emerge at the non-WF kinks for finite N, in a similar fashion as O(N) WF CFTs originating from the free boson at N = ∞.
arXiv:2005.04250v1 [hep-th] 8 May 2020

Introduction

Conformal field theory (CFT) is of fundamental importance and has applications in various fields of physics, ranging from AdS/CFT in string theory to phase transitions in condensed matter physics. Bootstrap [1,2], a technique utilizing intrinsic consistencies and constraints from the conformal symmetry, is one of the most powerful tools in the study of conformal field theories. In two dimensions, thanks to the special Virasoro symmetry and Kac-Moody symmetry, bootstrap provided exact solutions of many CFTs, including the 2d Ising CFT and the minimal models, in the 1980s [3]. However, for decades there was little progress in applying bootstrap to higher dimensional (d > 2) CFTs until the seminal work [4], which initiated the modern revival of the bootstrap method aiming at solving known CFTs (e.g. Wilson-Fisher (WF), QED, QCD, etc.) in higher dimensions, as well as exploring the uncharted territory of CFTs. In certain examples, the bootstrap method was used to extract the world's most precise predictions of critical exponents [5][6][7][8][9][10][11][12] of known CFTs. Many other successful applications were summarised in a recent review [13]. It is also possible that the bootstrap method can help us make progress on another frontier, namely discovering new CFTs. Interesting CFTs usually sit at "kinks" of the bootstrap curve, such as the Ising model [14], the three dimensional O(N) vector models [15] and many Wilson-Fisher CFTs with flavor symmetry groups that are subgroups of O(N) [16][17][18][19][20]. Sometimes bootstrap curves show more than one kink [19][20][21][22][23]. For example, on the O(N) bootstrap curve there are at least two kinks: the first one was successfully identified as the O(N) WF CFT, while the nature of the second kink (which we dub the non-WF kink) remains an open question. For given space-time dimensions d, typically the non-WF kinks only appear when N is larger than a critical Nc [22].
In this paper, we focus on the study of the physics of non-WF kinks, and in some special cases we have achieved a thorough understanding, analytically and numerically. These include the O(4) bootstrap kink in two space-time dimensions, and the N → ∞ limit in arbitrary dimensions. Even though the O(2) bootstrap curve in two dimensions does not develop a kink, we find that the numerical bound is saturated by the free boson theory, which is also called Luttinger liquid in the condensed matter literature. The 2d O(4) non-WF kink turns out to be the SU(2)1 Wess-Zumino-Witten (WZW) theory [24], and we find its dimensional continuation shows an interesting connection to the deconfined quantum critical point (DQCP) [25,26]. The DQCP was originally proposed to describe a phase transition between two different symmetry breaking phases, namely the Neel magnetic ordered state and the valence bond state. Its critical theory has many dual descriptions [27], one of which is the 3d SO(5) non-linear sigma model (NLσM) with a level-1 WZW term. There is a long debate on whether the DQCP is continuous or weakly first order [28][29][30][31][32][33]. Monte Carlo simulations are consistent with a continuous phase transition, but also show abnormal finite size scaling behaviors [32,33]. More importantly, the critical exponent η from Monte Carlo violates the rigorous bound from conformal bootstrap [13,34], which dashes the hope of a continuous phase transition if the SO(5) symmetry is emergent. An interesting proposal to reconcile these inconsistencies is that the DQCP is slightly complex (non-unitary) [27,35,36], and hence shows pseudo-critical (weakly first order) behaviors. More concretely, a way to study the pseudo-critical behaviors is through dimensional continuation from 2d to 3d [37,38].
The scheme of this dimensional continuation is motivated by the connection between the DQCP and the SU(2)1 WZW theory: the former can be described by a 3-dimensional SO(5) NLσM with a level-1 WZW term, while the latter is a 2-dimensional SO(4) NLσM with a level-1 WZW term. The action in integer dimensions can be written as The scalar field n has d + 2 components, and satisfies the constraint n · n = 1. Here ΓWZW is the standard Wess-Zumino-Witten term. Since π2+1(S3) = π3+1(S4) = Z, the level k takes integer values. Naively, a physically plausible (though perhaps not mathematically rigorous) way of dimensional continuation is to consider a d = 2 + ε dimensional SO(4 + ε) NLσM with a level-1 WZW term. This may seem impossible at the level of the action; it is, however, not hard to study this scheme using numerical bootstrap. We study O(4 + ε) bootstrap in d = 2 + ε, and observe that the kink disappears at around d* = 2.7. This agrees reasonably with the one-loop value d* = 2.77 [37] and supports the scenario that the SO(5) DQCP is weakly first order (pseudo-critical). The solution of the O(N = ∞) non-WF kink is more exotic. It turns out to be equal to the superposition of two physical four point functions, for example, in d = 3 dimensions, where ψi are N free Majorana fermions carrying the O(N) vector index, η is another Majorana fermion that is neutral under O(N) transformations, and φi is a scalar operator with scaling dimension ∆φ = 2. The bracket 〈. . .〉GFF means the four point function of the generalised free field (GFF) theory, or in other words, the four point function is calculated using Wick contraction. The exotic structure of subtracting two four point functions in the N = ∞ limit makes it difficult to interpret finite-N non-WF kinks as known CFTs. An important property of the solution in the N = ∞ limit is that there exists a full tower of conserved higher spin currents, a feature reminiscent of the free fermion theory.
Therefore, it is possible that the non-WF kinks at finite N become a new family of CFTs, in a similar manner to O(N) WF CFTs originating from the free boson theory. The paper is organised in the following way. In Sec. 2 we discuss the general features of the non-WF kinks. In Sec. 3, we discuss the 2d O(4) non-WF kink, which corresponds to the SU(2)1 WZW model, and its dimensional continuation. In the subsequent section, we discuss the O(2) bootstrap bounds in two dimensions and the infinite-N limit of the O(N) bootstrap. The plots in the paper are all calculated with Λ = 27 (the number of derivatives included in the numerics). For the definition of Λ and other bootstrap parameters, we refer to [39]. Note added. After the completion of this work, we became aware of a parallel paper [40] which has some overlap with ours. We start by considering the 4-point correlation function Here S, T and A refer to the operators in the O(N) singlet, symmetric rank-2 tensor, and antisymmetric rank-2 tensor channels. The superscript "±" denotes the spin selection: the S and T sectors contain even spin operators, while the A sector contains only odd spin operators. The 4-point function from the s-channel decomposition is [41,42], Similarly, by considering the four point function in the crossed channel one can get another conformal block decomposition of the 4-point correlation function, which is Eq. (4) with i ↔ k and x1 ↔ x3. Equating the two different channels one obtains a non-trivial crossing symmetric equation [41,42], with By demanding the OPE coefficients λφφO to be real, from the bootstrap equation one can obtain numerical bounds on the scaling dimensions of operators in the φ × φ OPE, in terms of φ's scaling dimension ∆φ [4]. Typically one will bound the lowest scaling dimensions (e.g. ∆S, ∆T) of scalar operators in different channels of group representations.
It is well known that the O(N) WF CFT appears at kinks on the curve of numerical bounds of ∆S and ∆T in d = 3 dimensions [15]. The result can be easily generalized to 2 < d < 4. Besides the O(N) WF kinks there are also other kinks (i.e. non-WF kinks), which, for example, are shown in Fig. 1 and Fig. 2. These non-WF kinks exist on both the ∆φ − ∆S and ∆φ − ∆T curves, and it seems that the kinks on the two curves have identical ∆φ. We find that the bounds on ∆T converge faster than those on ∆S. Also, as will be clear later, in most cases it is more physically meaningful to study the ∆φ − ∆T curve rather than the ∆φ − ∆S curve. Different from the O(N) WF kinks, which only occur in 2 < d < 4 dimensions, the non-WF kinks seem to exist in arbitrary dimensions (2 ≤ d ≤ 6 at least). Also, in 2 < d < 4 dimensions, the positions of the non-WF kinks are quite far away from the WF kinks, for example in d = 3 dimensions (see Fig. 2). Another crucial feature is that for a given space-time dimension d the non-WF kinks only appear when N is larger than a critical Nc [22]. In d = 2 dimensions Nc = 2, and Nc seems to increase with d. Also, the kink becomes sharper as N increases (see Fig. 1 and Fig. 2), and in the N → ∞ limit the kink evolves into a sudden jump at (∆φ, ∆T) = (d − 1, 2d). In general, except for a few cases, it is unclear if these non-WF kinks as well as the numerical bounds have any relation to CFTs or any physical theories. The rest of the paper will discuss several special cases where we have a good understanding, through which we hope to inspire the understanding of non-WF kinks in general cases. [Figure 3 caption fragment: the SU(2)k WZW theories are denoted as blue dots connected by a solid line; the k = 1 theory, located at (∆φ, ∆T, ∆S) = (0.5, 2, 4), is denoted as the red dot.] The SU(2)k WZW theory has an SO(4) global symmetry, and a special parity which flips one space direction and the two SU(2) groups simultaneously.
It turns out that a subset of the crossing equations which equals (5) at N = 4 is already sufficient for detecting the SU(2)k WZW models (see Appendix A for more discussion on this). Fig. 3 shows the numerical bound for the leading singlet (S) and rank-2 tensor (T), which has a kink at (∆φ, ∆S) = (0.5, 4) and (∆φ, ∆T) = (0.5, 2), respectively. They match the theoretical values of the SU(2)k=1 WZW theory. More interestingly, the SU(2)k≥2 WZW theories seem to saturate the numerical bound of ∆T on the left side of the SU(2)k=1 WZW theory. This phenomenon is a mirror version of the well-known observation in the Z2 bootstrap in 2d, in which the 2d Ising CFT appears at the kink and all the minimal models saturate the numerical bound on the right hand side of the Ising CFT [4,43]. The reason that SU(2)1 WZW appears as a kink is that the leading operator in the T-channel of SU(2)k WZW decouples from the theory at k = 1. On the other hand, the numerical bound of ∆S seems to be larger than the SU(2)k≥2 WZW values. It is unclear whether this is a convergence issue, although we do not see a visible improvement from Λ = 19 to Λ = 27. One can further read out the spectrum of S, T and A channel operators from the extremal functional method [44]. It is also possible to numerically study the OPEs of the leading operators of each channel. We found that the spectrum and OPE coefficients of the solution at the kink agree with the SU(2)1 WZW theory. The dimensional continuation of the WF kinks has been explored before [45], and it was found that the scaling dimensions at the WF kinks in fractional dimensions 2 < d < 4 are in agreement with the ε-expansion calculation. In this section we will study an exotic way of dimensionally continuing the non-WF kink, motivated by recent papers [37,38] that studied the deconfined quantum critical point (DQCP) [25,26].
Dimensional continuation to SO(5) DQCP

As shown in the previous section, the SU(2)1 WZW theory appears as a kink on the curve of O(4) bootstrap bounds in d = 2 dimensions, so we can further bootstrap O(4 + ε) symmetry in d = 2 + ε dimensions. As shown in Fig. 4, for small ε the kink still exists, but it becomes weaker and weaker as ε increases, and finally disappears around ε* ≈ 0.7. This reasonably agrees with the one-loop value ε* = 0.77 [37]. Both η = 2∆φ − d + 2 and ∆S − d decrease with ε, which is also consistent with the expectation of pseudo-critical behavior. Theoretically, the CFT can become complex when the lowest singlet operator becomes relevant [27,35,36]. In our numerical data, however, ∆S seems to be larger than d when ε = 0.7. This might be an artifact of numerical convergence; it is also hard to locate the precise critical ε* as the kink becomes very weak. The critical ε* is read out from the ∆φ − ∆S curve, as the kink is sharper there. Notice that a subtlety when we apply the bootstrap method to study conformal field theories in fractional dimensions is that these theories are intrinsically non-unitary, due to negative norm states [46,47]. Such non-unitary states have high scaling dimensions and the bootstrap results are insensitive to them. The disappearance of the kink at ε* ≈ 0.7, on the other hand, should be explained by the fixed point annihilation mechanism proposed in [37,38]. The numerical bounds from the O(2) bootstrap do not show any kink (in 2d the non-WF kink appears only when N > 2), but they indeed detect a 2d CFT, namely the 2d free boson (also called Luttinger liquid in the condensed matter literature). It is well known that the 2d free boson is a CFT with an exact marginal operator. Its global symmetry is U(1)L × U(1)R, but we can just consider its diagonal U(1), i.e. the charge conservation symmetry. The charge creation operator (i.e. vertex operator), e^{iαΦ}, can be written as an O(2) vector (φ1, φ2) = (Re(e^{iαΦ}), Im(e^{iαΦ})).
Its scaling dimension ∆φ can be continuously tuned from 0 to ∞ by deforming the compactification radius of the boson. The lowest scaling dimension in the T-channel is ∆T = 4∆φ, while in the S-channel one has ∆S = 2, independent of ∆φ. The four point function is, Fig. 5 shows the numerical bounds of ∆T and ∆S in terms of ∆φ. In the ∆φ − ∆T curve it is clear that the 2d free boson saturates the numerical bounds. For large ∆φ there is a small discrepancy due to the numerical error of finite Λ. The ∆φ − ∆S curve, on the other hand, is only saturated by the 2d free boson at small ∆φ. At large ∆φ the numerical bounds approach the point (∆φ, ∆S) = (1, 4), which corresponds to the four point function (7) to be discussed later. This result again suggests that the ∆φ − ∆T curve is more intrinsic for understanding the non-WF physics in the O(N) bootstrap calculation.

Infinite-N limit

The infinite-N limit can be studied directly by taking 1/N = 0 in the bootstrap equation. In d dimensions the kink sits at (∆φ, ∆T) = (d − 1, 2d), and on the left of the kink the GFF saturates the numerical bounds, ∆T = 2∆φ. The S-sector spectrum of the φ × φ OPE is very exotic: the scalar channel (l = 0) is totally empty with no operator present (except the identity operator), while in the other spin (l > 0) channels only one operator, i.e. the higher spin conserved current (∆S,l = l + d − 2), is present for each l. The 4-point correlation function at the kink turns out to be, This four point function is unitary. Surprisingly, the above four point function equals the subtraction of correlation functions of two different theories, namely a free fermion theory (FFT) and a GFF theory: in d = 3 dimensions, (7) equals The FFT contains N free Majorana fermions ψi and a single free Majorana fermion η, so the fermion bilinear ψi η is an O(N) vector.
Using Wick contraction, the above expression reduces to products of two point functions. (Footnote: this four point function can also be viewed as a solution to the O(N) bootstrap equations even at finite N; the point (∆φ, ∆T) = (d − 1, 2d) almost saturates the finite-N bootstrap bound. Footnote: in two and four dimensions, by expanding the four point function in conformal blocks, we have proven that for each channel the operators have positive squared OPE coefficients.) With a few lines of algebra one can show that (7) and (8) are identical. The solution (7) has quite a few exotic features. First of all, if we bound the λ²φφTµν OPE coefficient numerically, we will find that the central charge c = cf. Here cf is the central charge of a single Majorana fermion. Its spectrum also contains conserved higher spin currents. This poses a puzzle: the theory seemingly contradicts a theorem [48] saying that CFTs with conserved higher spin currents are free theories, which have a central charge proportional to N. From (8), the solution to this puzzle is clear. The theory contains more than one conserved spin-2 current, while the theorem in [48] assumed a single spin-2 current. Notice that only the contribution of λ²φφT2 survives in the large N limit. We can also think about what kind of 1/N corrections will turn (7) into a "good" CFT. By "good" we mean a CFT with a single conserved current and order-N central charge. This is possible if T2,µν acquires an anomalous dimension. The second exotic feature is the minus sign in front of the GFF four point function in (7); this makes it really difficult to interpret as a known CFT. Another exotic feature is that if we decompose the four point function (7) into conformal blocks, we find that there is no spin-0 block in the S-channel. This is also observed numerically. It turns out that the OPE coefficients of S-channel scalars of both FFT and GFF scale as O(1/N), and therefore disappear in the strict N = ∞ limit.
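The statement that a GFF four point function is computed by Wick contraction, i.e. by summing products of two point functions, can be checked in a few lines. This is a sketch for a single scalar of dimension ∆ (the O(N) index structure is dropped, and the insertion points are our own random choices); crossing symmetry, i.e. invariance under any permutation of the four points, is then automatic.

```python
import itertools

import numpy as np

# GFF four point function as a sum of Wick contractions:
#   <1 2 3 4> = G(12)G(34) + G(13)G(24) + G(14)G(23),
# with two point function G(x, y) = 1 / |x - y|^(2*Delta).
Delta = 2.0  # the scaling dimension used in the text for the N = infinity solution

def G(x, y):
    return 1.0 / np.linalg.norm(x - y) ** (2 * Delta)

def four_point(p):
    x1, x2, x3, x4 = p
    return G(x1, x2) * G(x3, x4) + G(x1, x3) * G(x2, x4) + G(x1, x4) * G(x2, x3)

rng = np.random.default_rng(2)
pts = [rng.standard_normal(3) for _ in range(4)]   # four points in d = 3

# Crossing symmetry: the sum over pairings is invariant under any
# permutation of the four insertion points.
base = four_point(pts)
for perm in itertools.permutations(range(4)):
    assert np.isclose(four_point([pts[i] for i in perm]), base)
```

The same three-pairing structure underlies Eq. (8) of the text, where each leg additionally carries O(N) delta functions.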
We also observe that the spectrum of the GFF is a subset of the spectrum of the FFT. Consequently, a four point correlation function c1〈4pt〉FFT − c2〈4pt〉GFF is consistent with bootstrap as long as the OPE coefficients c1λ²FFT − c2λ²GFF are positive for all the operators in the GFF. More importantly, by choosing c1, c2 properly (c1 = 1/2, c2 = 1), many operators disappear in the block expansion. These include the (∆, l) = (2d − 2, 0) operator in the T-channel and many other operators. After this superposition, the leading scalar operator in the T-channel has scaling dimension (∆, l) = (2d, 0). This explains why the numerical bound follows ∆T = 2∆φ (GFF) for small ∆φ, and has a sudden jump at ∆φ = d − 1 from ∆T = 2d − 2 to ∆T = 2d. Since the FFT and GFF are present in arbitrary dimensions, we expect the non-WF kink in the infinite-N limit to also exist in arbitrary dimensions, and we have numerically verified it for 2 ≤ d ≤ 6 dimensions. This teaches us an important lesson: a kink on the bootstrap curve can correspond to the subtraction of the four point functions of two different theories. The key requirement for this to happen is that the spectrum of one theory is a subset of the spectrum of the other theory. This requirement is apparently very stringent in d > 2 dimensions; namely, except for (generalized) free theories there is no known pair of theories satisfying it. On the other hand, the non-WF kinks at finite N obviously do not correspond to free theories. Therefore, it would be interesting and exotic if the appearance of the non-WF kinks at finite N is also due to the subtraction of the four point functions of two theories. The other possibility is that the non-WF kinks detect a single theory rather than the subtraction of two theories. The four point function in the infinite-N limit, on the other hand, just happens to be identical to the subtraction of the FFT and GFF.
The previous identification of the 2d SU(2)1 WZW theory as the O(4) non-WF kink seems to favor this scenario.

Conclusion

We study the non-WF kinks on the O(N) bootstrap curves. This family of kinks, different from the WF kinks, exists in arbitrary dimensions. In a given dimension, there exists a critical Nc below which the kink disappears. In general, we do not understand the physics of this new family of kinks except for a few cases. In the infinite-N limit, the kink sits at (∆φ, ∆T) = (d − 1, 2d), with d being the space-time dimension. The four point function at the kink equals the subtraction of correlation functions of a free fermion theory and a generalized free theory. One lesson from this example is that subtracting two theories (whose spectra are similar) can also generate a kink on the curve of bootstrap bounds. However, it seems that the kinks at finite N cannot be interpreted in this way. For example, the O(4) kink in 2d corresponds to the SU(2)1 WZW theory. We further study the dimensional continuation of the SU(2)1 WZW kink to 3d and discuss its relation with deconfined phase transitions. Besides the kinks, the numerical bounds in 2d also have a few intriguing properties. The O(2) curve does not have a kink, but is saturated by the free boson theory (∆T = 4∆φ), a CFT with continuously tunable scaling dimensions due to an exact marginal operator. On the O(4) curve, the SU(2)1 WZW theory appears at the kink and the SU(2)k>1 WZW theories (∆T = (8/3)∆φ) saturate the numerical bounds on the left side of the kink. For a general N, the numerical bounds on the left side of the kink seem to obey a simple algebraic relation, ∆T = 2N/(N − 1) ∆φ. It will be interesting to know if there exists an analytical four point function giving this relation for general N. Except for a few cases it is rather unclear which physical theories the non-WF kinks correspond to.
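As a consistency check (our own arithmetic, not a claim from the paper), the conjectured slope ∆T = 2N/(N − 1) ∆φ reproduces the two special cases discussed in the text:

```latex
\[
\Delta_T \;=\; \frac{2N}{N-1}\,\Delta_\phi
\qquad\Longrightarrow\qquad
\begin{cases}
N = 4: & \Delta_T = \tfrac{8}{3}\,\Delta_\phi
\quad \text{(the } SU(2)_{k>1} \text{ WZW line)},\\[4pt]
N \to \infty: & \Delta_T \to 2\,\Delta_\phi
\quad \text{(the GFF line saturating the bound left of the kink)}.
\end{cases}
\]
```

Both limits match the saturating families identified numerically, which is a nontrivial point in favor of the conjectured relation.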
A major challenge is that there is no known CFT whose symmetry and operator content are similar to what we observed numerically at the non-WF kinks. There was one proposal that the intrinsic symmetry of the 3d non-WF kinks is SU(N*) rather than O(N) (with N ∼ (N*)² − 1), and that one should bootstrap the four point function of SU(N*) adjoint operators instead of O(N) vector operators [22]. In the 3d SU(N*) adjoint bootstrap, there appear two adjacent kinks on the bound of the leading SU(N*) singlet operator, and they were interpreted as the QED3-Gross-Neveu and QED3 CFTs respectively, while the SU(N*) adjoint scalar field φ is interpreted as the fermion bilinear operator. This proposal is interesting; however, one should be particularly careful about the following. Firstly, the scaling dimension of the SU(N*) singlet at the kink is much larger than that of QED3 (e.g. the kink of SU(15) has ∆S ∼ 10 but the Nf = 15 QED3 has ∆S < 4). A plausible but unsettling possibility is that the numerical convergence is extremely slow because the OPE coefficient is small. Secondly, at large enough N, QED3-Gross-Neveu has a relevant singlet (i.e. the mass term of the Yukawa field, φ², with ∆S = 2 + O(1/N*)), while the leading S-channel scalar operator of QED3 is irrelevant (with ∆S = 4 + O(1/N*)). Their fermion bilinear operators have similar scaling dimensions; it is hard to imagine that they both saturate the bootstrap bound. It would be interesting to study the large N limit so as to improve our understanding. Although a thorough understanding of the non-WF kinks remains elusive, we think many of these kinks would make contact with physical theories, given the presented results of O(2) and O(4) at 2d and O(∞) in arbitrary dimensions. An exciting possibility is that they correspond to a new family of CFTs that were unknown before.
To make progress it is necessary to obtain precise spectra of the putative CFTs, which might be achieved by studying the mixed correlator bootstrap of the O(N) vector V and the symmetric rank-2 tensor T. The SO(4) projectors are defined as The conformal blocks are Notice that the parity symmetry (12) does the following interchanges and therefore fixes the last term in (14). Let us rewrite (14) into the following form, where G_{ijkl}(z, z̄) = (1/4) δij δkl Σ_{O∈(0,0)+} λ²φφO [g∆,l(z, z̄) + g∆,−l(z, z̄)], and Gijkl(z, z̄) is invariant under the usual parity transformation, while G(t)ijkl(z, z̄) is only invariant under the twisted parity (12). As is clear from the invariant tensors, they satisfy the crossing equation independently. The parity even combination of the blocks can be dimensionally continued to d > 2, while the parity odd combination cannot. This can be shown by solving the Casimir equation directly. Another way to understand this is that in d = 2, the rotation group is SO(2), and the spin l state and spin −l state are two independent irreducible representations of the conformal group. Their blocks g∆,l(z, z̄) and g∆,−l(z, z̄) (hence (22) and (23)) appear independently in the four point function. In higher dimensions, however, they belong to the same irreducible representation, and there is a unique block. We can derive the crossing equation from (14). The last row of the crossing equation (24) comes from the twisted parity invariant part G(t)ijkl. Since we do not know how to dimensionally continue it to higher dimensions, we will discard this line when doing numerical bootstrap. This truncation can also be viewed as originating from the fact that the invariant tensor εi1...iN of the SO(N) group cannot appear in the four point function 〈φi1(x1)φi2(x2)φi3(x3)φi4(x4)〉 as long as N ≠ 4. (The εi1...iN tensor appears in the N-point function.) After rescaling the S-channel OPE, the first three lines of the above crossing equation become exactly the N = 4 case of (5).
As we show in the main text, the constraints from the first three lines of the crossing equation already allow us to detect the two-dimensional SU(2)_k WZW model. As a final remark, in the literature [49] the four-point function of the SU(2)_1 WZW model is often written in terms of the four-point function of SU(2) group elements, with g being an SU(2) group element and a, b = 1, 2.
A numerical approach to model chemistry of complex organic molecules in a protoplanetary disk

Multiphase astrochemical modeling presents a numerical challenge, especially for the simulation of objects with a wide range of physical parameters, such as protoplanetary disks. We demonstrate an implementation of the analytical Jacobian for the numerical integration of the system of differential rate equations that govern chemical evolution in star-forming regions. The analytical Jacobian allowed us to greatly improve the stability of the code under protoplanetary disk conditions. We utilize the MONACO code to study the evolution of abundances of chemical species in protoplanetary disks. The chemical model includes 670 species and 6,015 reactions in the gas phase and on interstellar grains. The specific feature of the utilized chemical model is the inclusion of low-temperature chemical processes leading to the formation of complex organic molecules (COMs), previously included in models of COM chemistry in prestellar clouds. To test the impact of the analytical Jacobian on the stability of numerical simulations of chemical evolution in protoplanetary disks, we calculated the chemical composition of the disk using a two-phase model and four variants of the chemical reaction network, three values of the surface diffusion rates, and two types of initial chemical composition. We also show a preliminary implementation of the analytical Jacobian in a three-phase model.

Introduction

Studies of the chemical composition of objects in the interstellar medium, especially of their content of complex organic molecules (COMs), are an important prerequisite for understanding the origin of life in the Universe. Protoplanetary disks are dust- and gas-rich objects that can form planetary systems. They are ubiquitous around young low-mass stars (e.g., Manara et al. 2016; Kim et al. 2017). 
A study of the chemical composition of the disks around Sun-like stars will provide an idea of the origin of organic molecules in the early Solar System, which in turn can serve as a key to understanding the early chemical composition of the Earth and other planets. To date, several complex organic molecules (which are defined to have six or more atoms, including carbon; Herbst and van Dishoeck 2009) have been detected in disks (e.g., Favre et al. 2018). Other molecules representative of the COM content in the ISM, such as methyl formate HCOOCH3 and dimethyl ether CH3OCH3, have still not been found in Class II protoplanetary disks, but are widely observed in star-forming regions representing earlier stages of low-mass protostellar evolution, such as prestellar cores (Jiménez-Serra et al. 2016), hot cores/corinos (Jorgensen and PILS Team 2020; Manigand 2020), and FU Ori type young stars (Lee et al. 2019). Aikawa et al. (1997) were among the first to study the evolution of the molecular composition of protoplanetary disks. They considered a stationary minimum mass solar nebula (MMSN) without radial mixing; density and temperature did not change over time. Their chemical model included gas-phase reactions, adsorption onto dust grains, and thermal desorption from dust particles. Ionization and dissociation by interstellar and stellar ultraviolet radiation were neglected. The chemical network of reactions was based on the UMIST94 database (Millar et al. 1991). Over the next 20 years, protoplanetary disk models became more sophisticated (e.g., see the review by Henning and Semenov 2013). However, the applied chemical models remained mostly two-phase, that is, only gas-grain interactions were considered. Ruaud and Gorti (2019) were able to apply a three-phase chemical model to protoplanetary disks for the first time. 
In this article, for the first time, we apply a scenario of the formation of complex organic molecules in the cold gas of prestellar cores, proposed by Vasyunin and Herbst (2013) and further developed by Vasyunin et al. (2017), to a protoplanetary disk around a Sun-like star. The evolution of the chemical composition was calculated for 1 Myr, assuming that the disk structure is in a quasi-stationary mode for the given time period (Akimkin et al. 2013). To numerically solve the system of differential equations that determine chemical evolution, the three-phase MONACO code uses the DVODE integrator (Brown et al. 1989). In its current state, the application of the MONACO code to protoplanetary disks is challenging due to the wide range of physical conditions typical for disks. Numerical integration of a system of ordinary differential equations requires the calculation of the Jacobian matrix of the system. DVODE can work in two regimes: with an internally generated numerical Jacobian or with a user-supplied analytical Jacobian. The latter option typically results in much higher numerical stability of the integration. On the other hand, it requires additional effort from the researcher to derive and implement the analytical expressions for the Jacobian matrix in the numerical code. To solve this problem, we added to the code an implementation for supplying the analytical Jacobian of the system of differential equations. In this study, we set the following goal: by supplying the analytical Jacobian, to increase the stability of the MONACO code for efficiently calculating the evolution of the chemical composition of the protoplanetary disk over the wide range of physical parameters and conditions typical of protoplanetary disks. We also aim to study the formation of COMs in the disk, especially in the midplane, using the model suggested by Vasyunin and Herbst (2013), which was tested on prestellar cores, whose conditions are close to those in the disk midplane. 
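The benefit of a user-supplied analytical Jacobian can be illustrated with a minimal implicit (backward-Euler) integration of a toy two-species network A + A → B, where the Jacobian enters the Newton iteration at every step. This is only a sketch of the idea behind DVODE's user-Jacobian mode, not code from MONACO; the species, rate coefficient, and step size are made up.

```python
# Toy network A + A -> B: dA/dt = -2*k*A^2, dB/dt = k*A^2.

def rhs(y, k):
    a, _ = y
    return [-2.0 * k * a * a, k * a * a]

def jac(y, k):
    # Analytical Jacobian d(rhs)/dy. DVODE can difference this numerically,
    # but a user-supplied analytical form is far more stable for stiff systems.
    a, _ = y
    return [[-4.0 * k * a, 0.0],
            [ 2.0 * k * a, 0.0]]

def backward_euler(y0, k, dt, nsteps):
    y = list(y0)
    for _ in range(nsteps):
        guess = list(y)
        for _ in range(20):  # Newton iterations on g(x) = x - y - dt*f(x)
            f = rhs(guess, k)
            J = jac(guess, k)
            # Newton matrix M = I - dt*J (2x2, solved by Cramer's rule)
            m00 = 1.0 - dt * J[0][0]; m01 = -dt * J[0][1]
            m10 = -dt * J[1][0];      m11 = 1.0 - dt * J[1][1]
            g0 = guess[0] - y[0] - dt * f[0]
            g1 = guess[1] - y[1] - dt * f[1]
            det = m00 * m11 - m01 * m10
            dx0 = (g0 * m11 - m01 * g1) / det
            dx1 = (m00 * g1 - g0 * m10) / det
            guess[0] -= dx0; guess[1] -= dx1
            if abs(dx0) + abs(dx1) < 1e-12:
                break
        y = guess
    return y

a, b = backward_euler([1.0, 0.0], k=1.0, dt=0.1, nsteps=100)
```

For a stiff network with hundreds of species, DVODE performs the analogous Newton solves internally; a numerically differenced Jacobian at that point is both slower and less robust, which is the motivation for supplying it analytically.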
Models

2.1 Physical model of the protoplanetary disk

As the physical model of a protoplanetary disk (PPD), we used the model presented by Molyarova et al. (2017). It is a PPD model around a T Tauri type star with a mass of 1 M⊙ and utilizes the 1+1D approach to calculate disk density and temperature. The disk is considered quasi-stationary, axisymmetric, and hydrostatic in the vertical direction. The protoplanetary disk model used in this article is a grid of 4,400 points (55 radial and 80 vertical points). The model parameters are presented in Table 1. The dust temperature in the upper disk is calculated using a multifrequency ray tracing (RT) procedure for the stellar and background radiation, similar to the study by Molyarova et al. (2018). RT is done in 2D in the (r, φ)-plane and includes four directions (to and from the central star, upward and downward relative to the disk midplane). The corresponding angle-averaged intensity is used to calculate the radiation field strength and the rates of photoreactions. We assume 1 M⊙ for the stellar mass; the stellar effective temperature (T*) and radius (R*) are taken from the evolutionary tracks in the study by Baraffe et al. (2015). In Figure 1, the distributions of disk grid points (top left), the strength of UV radiation (top right), gas temperature (bottom left), and gas density (bottom right) in the disk (at each grid point) are presented as functions of the radial distance from the central star (r) and the relative height above the disk midplane (z/r).

Chemical model

In this study, we utilized a chemical model with the network of gas-phase and surface chemical reactions that was used in the study by Vasyunin and Herbst (2013). The chemical reaction network used in this model contains 670 species and 6,015 gas-phase and surface reactions, as well as 198 species and 880 reactions in the ice mantle of dust particles, depending on the simulation mode (see details in Section 2.2.2). Following Vasyunin et al. 
(2017), we utilize five types of desorption in the model: thermal evaporation, photodesorption, desorption by cosmic-ray particles (CRP), CRP-driven photodesorption, and reactive desorption. We do not consider CRP attenuation inside the disk. The cosmic-ray ionization rate in our model is ζ_CR = 1.3 × 10⁻¹⁷ s⁻¹ (Glassgold and Langer 1974). Ionization by short-lived radionuclides is ζ_RN = 6.5 × 10⁻¹⁹ s⁻¹ (Umebayashi and Nakano 2009). We also consider thermal hopping across the grain surface for species, and quantum tunneling through potential barriers for light species (atomic and molecular hydrogen), depending on the simulation mode (see details in Section 2.2.2). In Table 2, the initial atomic fractional abundances of elements with respect to the total number of hydrogen nuclei used in the model are presented (according to Wakelam and Herbst 2008). The initial molecular composition is calculated as the result of the chemical evolution of a cold dark cloud over 10⁶ years by MONACO with the following parameters: density of gas n 10 cm. We modified the chemical model to take into account the radiation fields from the central star and the interstellar radiation, according to the protoplanetary disk model. The rate constants of photoionization reactions K_i are calculated as K_i = α G_UV exp(−γ A_V), where α is the photoreaction rate in the field of unshielded ultraviolet radiation; G_UV and A_V are factors characterizing the radiation field and the attenuation of the radiation field, respectively (they are parameters of the protoplanetary disk model; G_UV is given in units of the Draine field, Draine 1978); and γ is a parameter that takes into account the increased attenuation of the field in the ultraviolet range compared to the visible one (McElroy et al. 2013). 
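The photorate expression above can be written compactly; a minimal sketch follows (the functional form K = α·G_UV·exp(−γ·A_V) matches the UMIST convention cited in the text, while the numerical values in the example are purely illustrative):

```python
import math

def photorate(alpha, g_uv, a_v, gamma):
    """Photoreaction rate constant [s^-1].

    alpha: rate in the unshielded UV field [s^-1]
    g_uv:  local radiation field strength in Draine units
    a_v:   visual extinction toward the UV source [mag]
    gamma: extra UV-vs-visual dust attenuation (McElroy et al. 2013)
    """
    return alpha * g_uv * math.exp(-gamma * a_v)

# Illustrative values only: an unattenuated disk-surface point vs. a deeper
# point with A_V = 5 mag, whose photorate is suppressed by exp(-gamma * 5).
surface = photorate(alpha=1.0e-10, g_uv=1.0, a_v=0.0, gamma=2.0)
shielded = photorate(alpha=1.0e-10, g_uv=1.0, a_v=5.0, gamma=2.0)
```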
The grid points of the protoplanetary disk model are not independent in terms of calculating the evolution of the chemical composition, because the self-shielding of the molecules H2 and CO has to be taken into account when calculating their photodissociation. In our case of a 1+1D physical disk model, the chemistry in each radial column of the model grid must be calculated in a specific order, namely, starting from the outermost point in the disk atmosphere and moving toward the disk midplane. This is needed because one has to calculate self-shielding factors for CO and H2 along the radiation propagation path (the 1+1D model is vertical at each radial column), and for that it is necessary to know the corresponding vertical column densities of the molecules. H2 and CO self-shielding are calculated based on the study by Visser et al. (2009). For the top points in each radial column (points with the maximum height above the disk midplane), the H2 and CO column densities (N_col^80) are assumed to be zero. The column densities at the other points, N_col^i, are calculated by accumulating the contributions of the layers above. In the case of a three-phase model, the evolution of the molecular composition is determined by a system of differential equations describing the change in the abundances in the gas phase, on the surface of dust grains, and in the mantle of dust particles, respectively. In the case of a two-phase model, this system of equations is noticeably simplified: the terms of the equations describing abundances in the mantle and the transitions of molecules between the mantle and the surface are not considered. For the numerical integration of the system of differential rate equations (Eq. (3)), the Adams method is used, as implemented in the DVODE integrator. The integration time is 1 Myr. 
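The top-down ordering described above can be sketched as a cumulative integration down each vertical column; this is an illustrative reimplementation (trapezoid rule, hypothetical variable names), not the MONACO routine itself:

```python
def column_densities(z, n):
    """Vertical column density above each grid point of one radial column.

    z: heights above the midplane [cm], ordered from the disk atmosphere
       (largest z) down toward the midplane, as the text prescribes.
    n: number densities of the species (e.g. H2 or CO) at those heights [cm^-3].
    Returns N_col [cm^-2]; the topmost point gets N_col = 0.
    """
    ncol = [0.0]
    for i in range(1, len(z)):
        dz = z[i - 1] - z[i]                                  # layer thickness
        ncol.append(ncol[-1] + 0.5 * (n[i - 1] + n[i]) * dz)  # trapezoid rule
    return ncol
```

These N_col values would then feed the Visser et al. (2009) self-shielding factors for CO and H2.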
Supplying the analytical Jacobian

When using a two-phase model (gas-grain) and a chemical network containing ~660 reactants (including ~200 species on the surface) and ~6,000 chemical reactions, the Jacobian contains ~435,000 elements, among which ~6,000 are nonzero, i.e., about 1.4%. In the case of a three-phase model (gas-surface-mantle), the Jacobian contains ~739,600 elements, and the number of nonzero terms is rather difficult to determine in advance, but it is very large, much larger than in the case of a two-phase model. The two-phase Jacobian is well described analytically, with the exception of reactive desorption (the R_i^des term in Eq. (3)), which violates this harmony of the analytical Jacobian. In different models, we have considered three types of reactive desorption. The first type is based on the probability that a molecule formed as a result of a reaction on the grain surface has energy exceeding its binding energy, and on the assumption of a rapid loss of this energy, for example, via the transfer of energy to a dust particle more massive than the molecule itself (Garrod et al. 2007). This type of desorption is used for reactions with one product, which adds ~700 more nonzero elements to the Jacobian (+0.16%). In the second type of reactive desorption, a fixed value of the desorption efficiency is assumed, but for all products of surface reactions (Vasyunin and Herbst 2013), which adds ~1,000 nonzero Jacobian elements (+0.23%). The third type of reactive desorption takes into account both the features of the reaction and the properties of the dust grain surface, namely, the effective mass of the surface element from which the molecule is desorbed into the gas phase (Minissale et al. 2016). This most complex type of reactive desorption considered in our model adds ~25,000 nonzero elements to the Jacobian (+5.7%). 
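The sizes quoted above are straightforward to check: the Jacobian of an N-species rate-equation system has N² entries, so ~660 species give ~435,600 entries in the two-phase case, and ~860 species (consistent with adding mantle copies of the ~200 ice species) give 739,600 in the three-phase case. A quick check of the quoted numbers:

```python
def jacobian_size(n_species):
    """A dense Jacobian of an N-species rate-equation system has N^2 entries."""
    return n_species * n_species

two_phase = jacobian_size(660)          # gas + surface species
three_phase = jacobian_size(660 + 200)  # mantle copies of the ~200 ice species
sparsity = 100.0 * 6000 / two_phase     # ~6,000 nonzero entries
```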
The two-phase Jacobian, modified to take reactive desorption into account, is still sparse, but not as simple. To obtain the symbolic Jacobian of such a system of ordinary differential equations, the SymPy symbolic computation package for the Python language was chosen. The process of numerically solving the system of equations is as follows. On the basis of the chemical reaction network, a system of differential equations is generated in Fortran, since the numerical solution is performed in this language. A system of differential equations is also generated in Python, in symbolic form. Then symbolic partial derivatives are calculated using SymPy, and the symbolic expressions are simplified. After that, the Python code generates the Fortran source code, and the system of differential equations is solved numerically.

Simulation setups

So far we have applied the following simulation setups:
- two-phase (gas-grain) with reactive desorption taken into account, without supplying the Jacobian;
- two-phase with reactive desorption taken into account, with the Jacobian supplied;
- three-phase (gas-surface-mantle) with reactive desorption taken into account, without supplying the Jacobian;
- "incomplete" three-phase with reactive desorption taken into account, with the Jacobian supplied.

In two-phase approaches, no distinction is made between the surface and the ice mantle of dust particles, which is a noticeable simplification in comparison with the structure of real multilayer ice. In three-phase approaches, we allow surface molecules to be buried deep in the ice mantle; such molecules become inaccessible for direct desorption into the gas phase. In the three-phase approach, the change of the mantle abundances occurs due to adsorption to the surface (the "tran" terms in Eq. (3)), chemical reactions (the "chem" terms), and physical diffusion between the inert bulk and the surface (the "diff" terms). 
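The symbolic-derivative step described above (SymPy emitting Fortran in the actual pipeline) can be mimicked in plain Python for a mass-action network: for a rate k·y_r1·y_r2…, the partial derivative with respect to each reactant is analytic. The encoding below is illustrative only and not the MONACO format:

```python
# Build an analytical Jacobian directly from a mass-action reaction list.
def make_jacobian(reactions, n_species):
    """reactions: list of (reactant_indices, product_indices, rate_coeff)."""
    def jac(y):
        J = [[0.0] * n_species for _ in range(n_species)]
        for reac, prod, k in reactions:
            for j in reac:  # d(rate)/dy_j = k * product of the other reactants
                term = k
                others = list(reac)
                others.remove(j)
                for r in others:
                    term *= y[r]
                for s in reac:
                    J[s][j] -= term  # each reactant is consumed
                for s in prod:
                    J[s][j] += term  # each product is formed
        return J
    return jac

# Example: A + B -> C with k = 2.0, evaluated at y = (3, 5, 0).
jac = make_jacobian([([0, 1], [2], 2.0)], 3)
J = jac([3.0, 5.0, 0.0])
```

The repeated-reactant case (A + A → B) falls out correctly as well, since each occurrence of the reactant contributes to the derivative.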
In the "incomplete" three-phase approach, physical diffusion is not implemented yet. In all setups, we used the type of reactive desorption according to Minissale et al. (2016). We tested each simulation approach using four different chemical network options: the network used in the study by Vasyunin and Herbst (2013); a network with refined binding energies for some species; a network that additionally includes the reactant CH3OCH2 and chemical reactions with its participation; and the updated network used in the study by Vasyunin et al. (2017). Also, for each simulation approach, we used three values of the surface diffusion/desorption ratio: one with tunneling for light species, and 0.5 and 0.8 (with no tunneling), as well as two variants of the initial chemical composition: atomic and molecular.

Two-phase setups

In total, in each two-phase approach (with and without the Jacobian), we calculated 24 models of protoplanetary disks (4 variants of chemical networks, 3 different values of the surface chemistry parameter, and 2 variants of the initial chemical composition). Thus, 105,600 runs of the numerical integration of the differential equation system were performed in each of the two-phase modes under different physical conditions and different chemistry parameters. When using the Jacobian, the calculation speed increased by four to five times on average. The number of unsuccessfully completed calculations decreased significantly, namely, by a factor of 150: now only eight calculations out of 105,600 runs fail. Thus, we consider it reasonable to use the symbolic Jacobian in two-phase models with reactive desorption.

Distribution of organic molecules in the disk

In this section, we present two-dimensional distributions of the fractional abundances of selected organic molecules and some chemically related species, with respect to the total number of hydrogen nuclei, in the gas phase and on the surface of dust particles at the end of the simulation (1 Myr). 
In all figures in this section, the abscissa shows the radial distance on a logarithmic scale, and the ordinate shows the ratio of the vertical and radial distances from the midplane and the central star, respectively. In these coordinates, the grid of model points is spatially uniform (see Figure 1(a)). The color scale displays the decimal logarithm of the fractional abundance of the molecule. The "g" prefix in front of the name of a molecule denotes a molecule on the grain surface; species without "g", in contrast, reside in the gas phase. To plot the two-dimensional distributions, only fractional abundances greater than 10⁻¹⁵ were taken into account. As shown in Figure 2(a) and (b), the fractional abundance of the hydroxyl radical OH reaches ~10⁻⁷ in the gas phase and ~10⁻⁹ on the grain surfaces in the outer region of the disk. Figures 2(c)-3(f) show that organic molecules such as formaldehyde H2CO, methanol CH3OH, methyl formate HCOOCH3, dimethyl ether CH3OCH3, and formic acid HCOOH are more abundant on grain surfaces than in the gas phase in the midplane. The maximum fractional abundances of these molecules on the grain surfaces lie in the midplane in the intermediate and outer regions of the disk and reach values from ~10⁻⁸ to ~10⁻⁵, whereas the maximum fractional abundances of these molecules in the gas phase are several orders of magnitude lower (from ~10⁻¹³ to ~10⁻⁷). The maximum fractional abundance of H2CO on the grain surfaces is reached within radial distances from ~60 to ~200 au (see Figure 2). In our model, COMs are more efficiently formed in the midplane on grains. At 5-30 K and in a weak UV field, owing to attenuation deep in the disk, CO can freeze out onto grains in sufficiently large quantities. At high altitudes, CO is destroyed by the stronger UV field (CO + photon → C + O), and COMs are no longer formed in such quantities as in the midplane. 
However, as shown in Figure 2(d) and (f), in the midplane at 300-400 au there is a region with reduced abundances of gH2CO and gCH3OH on the grain surfaces. We attribute this to a slight increase in the UV field in this region in the physical model of the protoplanetary disk (see Figure 1(b)).

Total column densities of organic molecules

We also present the column density profiles for selected molecules as a function of the disk radius (the total number of molecules in the column, in cm⁻²). Figure 4 shows the total column densities for molecules in the gas phase. Figure 4(c) shows that in our model the gas-phase methanol column density reaches the same value between ~0.6 and ~1.5 au for the atomic initial composition. In the work by Walsh et al. (2014), the model column densities of H2CO and CH3OH were estimated as 10¹²-10¹³ cm⁻² (see Figure 8 in the study by Walsh et al. 2014). We understand that our column density profiles do not reproduce the observational data and previous models completely; however, we believe that there are sections of the profiles that satisfy the existing data. We do not pay detailed attention to the reasons for the peaks or troughs of our profiles, since the priority for us is the modification of the three-phase model for stable calculations of the protoplanetary disk. We started with our two-phase model to work out the technicalities of the Jacobian assignment, and we aim to apply the Jacobian to our full three-phase model. Note that among the possible reasons for such results may indeed be a low value of the disk mass: in our model, we use a value of 0.01 M⊙, while estimates of the disk mass of, for example, TW Hydra, referred to by Walsh et al. (2016), have an upper limit of 0.1 M⊙ and a best fit of 0.023 M⊙. As shown in Figure 4, the use of different variants of the initial chemical composition has a significant impact on the simulation results, especially with regard to complex organic molecules. 
The approach in which the protoplanetary disk inherits its initial composition from the previous evolutionary stages of the protostar is the more realistic one.

Three-phase setups

Work on adding the Jacobian to the three-phase model is ongoing. Currently, we have implemented the supplying of the analytical Jacobian for the processes of instantaneous surface redefinition due to the adsorption/desorption of molecules from/to the gas phase. This added another 230,104 (31%) nonzero Jacobian elements; such a Jacobian obviously ceases to be sparse. The use of the Jacobian of the differential equation system significantly affected the operation of the three-phase mode. In the three-phase mode without the Jacobian, the calculations for most of the grid points of the protoplanetary disk physical model do not complete successfully in a reasonable CPU time. The addition of the Jacobian to the "incomplete" three-phase regime significantly improved the stability and speed of the calculations, but still does not allow us to calculate the chemical evolution in the inner disk. At the current moment, in this mode we were able to calculate the chemical composition only for the outer regions of the disk, at radial distances from the central star of more than 25 au. In the three-phase model, molecules from deep within the ice mantle cannot be directly desorbed into the gas phase. The "incomplete" three-phase mode is an intermediate step toward the "full" three-phase mode: in the "incomplete" mode, there is no physical transfer of molecules between the mantle and the surface, and mantle molecules can become surface molecules only through the emptying of the upper surface layers. Therefore, the results obtained in this mode must be treated with caution. 
Nevertheless, using the example of methanol (Figure 5), it can be seen that the abundances in the gas phase and on the grain surfaces in the accessible area (especially in the midplane) have become several orders of magnitude lower compared to the two-phase regime (Figure 2(e) and (f)). Obviously, this is caused by the settling of species deep into the bulk. Hence, the presence of the third phase (the ice mantle) in the chemical model has a significant effect on the abundances of molecules in the gas phase.

Summary

We have implemented an option to supply the analytical Jacobian in the MONACO code for the chemical evolution of interstellar objects, previously applied to cold dark clouds, and applied the code for the first time to the physical model of a protoplanetary disk around a Sun-like star. Supplying the Jacobian for the two-phase (gas-grain) model made it possible to reduce the time of numerical integration of the differential equation system, as well as to increase the stability and accuracy of the calculations as applied to the protoplanetary disk. We also present preliminary results on adding the Jacobian to the three-phase (gas-surface-bulk) model; the intermediate results also demonstrate the justification for, and the necessity of, using the Jacobian in modeling the formation of complex organic molecules in protoplanetary disks with the MONACO code. This update is crucial for numerically efficient modeling of three-phase chemistry under protoplanetary disk conditions.

Funding information: MYK and AIV acknowledge the support of the Ministry of Science and Higher Education of the Russian Federation via the State Assignment contract FEUZ-2020-0038. VVA is grateful to the Foundation for the Advancement of Theoretical Physics and Mathematics "BASIS" for financial support (20-1-2-20-1).

Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission. 
Conflict of interest: The authors state no conflict of interest. Data availability statement: The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Process Kinetics of the Carbonation of Fly Ashes: A Research Study

The aim of the paper is to present the results of research on the kinetics of the carbonation process of coal combustion ashes originating from fluidized bed boilers used in power plants. Based on thermogravimetric analysis (TGA), the hypothesis that carbon dioxide is bound by the mineral substances (calcium compounds) in the fly ashes was confirmed. Determining the kinetic parameters of the carbonation of fly ashes requires simultaneously taking into consideration the kinetics of the drying of the sample, because drying masks the effect of the reaction of CO2 with the calcium compounds. Unlike the ashes generated in pulverized fuel boilers, these fly ashes contain irregular amorphic mineral components or poorly crystallized products of complete or partial dehydroxylation of the claystone substance present in shale formations constituting the gangue, as well as anhydrite (CaSO4), a desulfurization product. The content of free calcium oxide (CaO) in such ashes ranges from a few to several percent, which is a significant obstacle to their use in cement and concrete production as type II admixtures, understood as inorganic grained materials of pozzolanic or latent hydraulic properties. The paper presents effective mechanisms which reduce the content of free CaO in ashes from Fluidized Bed Combustion (FBC) boilers to a level that allows their commercial utilization in the cement industry.

Introduction

The interest in circulating fluidized bed (CFB) boilers, including atmospheric circulating fluidized bed (ACFB) and pressurized circulating fluidized bed (PCFB) boilers, has been growing steadily due to their increasing combustion efficiency and considerably lower emissions of NOx and SO2. In order to limit the emission of SO2, an admixture of limestone is required. 
Most commonly, a two- or three-fold surplus of CaCO3 in relation to the stoichiometric content of sulfur in the coal is applied, enabling over 90% reduction of SO2 emission [1,2]. At the same time, part of the CO2 generated in the combustion process can be utilized [3]. Thanks to a sufficient content of silica, alumina, calcium, and iron oxides, coal fly ash (CFA) is characterized by pozzolanic properties; therefore, CFA has a wide range of applications in cement and concrete production [4,5]. In contrast, the CFA generated in fluidized bed boilers is characterized by irregular amorphic mineral components or poorly crystallized products of complete or partial dehydroxylation of the claystone substance present in shale formations constituting the gangue, in addition to anhydrite (CaSO4), which is a desulfurization product. It also contains unreacted calcite (CaCO3), free CaO and Ca(OH)2 (the product of its hydration), as well as unreacted coal. While the management of CFA coming from pulverized fuel boilers does not pose a challenge, the utilization of CFA from fluidized bed boilers is problematic due to the high content of free CaO. The content of free calcium oxide in such CFAs ranges from a few to several percent, which results from the sorbent surplus. The application of these CFAs in cement and concrete production as type II admixtures, understood as inorganic grained materials of pozzolanic or latent hydraulic properties, must comply with the European Standard EN 450-1:2005 [6], which specifies that they should not contain more than 2.5% of free CaO [6,7]. Moreover, calcium oxide is classified as a substance with irritating properties and has been labeled with the GHS hazard statements H315 and H318 [8]. Waste is considered hazardous due to its irritating properties if the content of free CaO exceeds 1.0%; waste from fluidized bed boilers containing above 1.0 wt.% of free CaO may therefore be classified as hazardous waste. 
It is also well known that the disposal of this waste at dumping facilities poses the risk of self-heating in the case of contact with water (a strongly exothermic reaction) and, as a consequence, health and environmental hazards [9-12]. The content of free CaO in FBC CFAs constitutes one of the most significant characteristics limiting their use as cement and concrete admixtures on a large scale. The relatively high content of free CaO as well as calcium sulfates (two components which have a considerable impact on the properties of cement mortar), coupled with the lack of sufficient amounts of silica, alumina, calcium, and iron oxides, is the reason why they are not classified as ASTM admixtures of class F or C [13,14]. The occurrence of free CaO in FBC coal fly ashes is connected with the mechanism of the calcination and desulfurization processes. It can occur in the form of free grains, or it may be built into the grain structure in connection with calcium sulfate [15]. There are two major procedures applied to remove CaO from FBC coal fly ashes: one is the hydration reaction to calcium hydroxide [16], and the other is carbon dioxide sequestration, including solid phases and aqueous solutions [17-24]. In recent years, there has been a growing interest in the application of limestone as a filler [25]. Calcium carbonate is considered to be a key component influencing the processes of cement mortar binding. Treated as a filler, calcite (CaCO3) above 5% performs the function of an active reagent. The mechanisms of the reactions are thoroughly discussed in previous research [26-28]. Łączny suggested a new approach to the carbonation process [29]: the process of carbonation was used to achieve a product with a controlled content of free CaO. As a result, a three-component micro-aggregate was obtained, containing CaO_w-CaCO3-CaSO4. 
The research confirmed that the aggregate may be used as an active component of cement mortars [30]. In addition, the study demonstrated a positive impact of carbonated FBC coal fly ashes on the strength of cement mortar. Łączny also demonstrated that the compressive strength of mortar made of carbonated FBC coal fly ashes is improved in comparison to non-carbonated ashes, which probably results from the positive impact of the carbonate on the ettringite (Ca6Al2(SO4)3(OH)12·26H2O) transformation [31]. The aim of this paper is to present the results of research on the kinetics of the carbonation process of coal combustion ashes from fluidized bed boilers used in power plants.

Materials and Methods

During the course of the study, the content of free calcium oxide in CFB coal fly ash was determined, and the kinetics of the carbonation process occurring in the fluidized bed reactor was examined. The thermogravimetric analysis (TGA) was conducted by means of a TA Instruments differential scanning calorimeter (TA Instruments, New Castle, DE, USA). Grain-size and mineralogical analyses of the waste were carried out using a Malvern Morphologi G3S-ID analyzer. The determined sample volume was transferred with a spatula into a ceramic container and then placed in the chamber of the analyzer. The TGA measurements were conducted under the following conditions: heating rate of 5 °C/min; gas: 60% CO2 in air; the temperature was kept constant for 15 min. The analyses were made at temperatures of 40, 60, 80 and 100 °C. Within the framework of the research, the chemical composition of the CFA coming from fluidized bed boilers used in power plants was analyzed (see Table 1). In order to determine free CaO_w, the standard glycol method was applied. Next, grain-size (granulometric) analysis of the waste samples was conducted using the Malvern Morphologi G3S-ID analyzer with Raman chemical identification, in order to examine the particle number, size, and shape distributions. 
The dispersion parameters were as follows: dispersion pressure of 0.8 bar, dispersion time of 20.0 ms, and settling time of 60 s. Table 2 shows the content of free CaO_w in the analyzed waste samples. The content of free calcium oxide in CFA was determined as a function of time. On the day of delivery of the analyzed waste, the content was 4.18% wt., whereas after 36 days of storage it had decreased by 0.5% wt., which led to the conclusion that the content of free calcium oxide changes during storage, most probably as a consequence of the reaction of CaO with carbon dioxide present in the air. The results of the grain-size analysis of the examined CFA samples demonstrated that the content of coarse grains (100.0-1000.0 µm) in the sample was 0.011%, while the maximum grain diameter was 136.24 µm. The CFA sample contained a much greater amount of fine grains (1-10 µm), at 88.4%, whereas the population of the finest grains (0.1-1 µm) was 5.38%. The content of 10.0-100.0 µm grains in the CFA samples was 6.18% (see Table 3). Figure 1 presents the results of the thermogravimetric analyses. A few stages can be noticed in the thermograms; the first one lasts a few seconds and encompasses the initial phase of the experiment. It is connected with the streaming of the gas of a predetermined composition into the TGA chamber. This area of the thermogram does not include any data essential for the analysis of the discussed phenomena. The data gathered after the first maximum following the abrupt fluctuations of the sample mass should be taken into account. In the thermograms below, these values are taken as reference values which serve to determine the sample mass loss during the second stage. Depending on the sample, the loss ranged from about 0.06% (for the sample examined at 40 °C) to 0.17% (for the sample examined at 100 °C). This stage should be interpreted as the evaporation of surplus moisture from the sample.
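As a quick consistency check on the reported grain-size fractions (values as given for Table 3; the bin labels below are our own shorthand), the four size classes should account for essentially the whole particle population:

```python
# Grain-size fractions of the CFA sample as reported (percent of particles per bin).
fractions = {
    "0.1-1 um": 5.38,
    "1-10 um": 88.4,
    "10-100 um": 6.18,
    "100-1000 um": 0.011,
}

total = sum(fractions.values())
print(f"sum of fractions: {total:.3f}%")  # close to 100%; the small remainder is rounding
```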
After this stage, for the samples examined in the lower range of temperatures (40, 60 and 80 °C, respectively), an increase in sample mass is observed, although measurable values were recorded only for the first two cases. The increase may be regarded as the binding of CO2 by calcium compounds present in the structure of the analyzed material. In the case of the sample examined at 100 °C, no mass increase is observed, yet a significant decrease in the rate of mass loss is noticeable. Such a course of the experiment suggests that, at lower temperatures, the rate of water evaporation is so low that it is possible to observe an increase in the sample mass resulting from the binding of CO2 in mineral structures. This interpretation is confirmed by the course of the process observed at the highest temperature: because evaporation proceeds at an increasingly higher rate as the temperature rises, at a certain point the evaporative mass loss outweighs the mass gain resulting from CO2 binding, and the binding then manifests only as a slower net loss. In addition, poorly bound moisture is removed from the sample at the beginning of the process. As the process progresses, the evaporation of further water requires additional energy to overcome the resistance of mass transport and the forces binding water molecules to the mineral matter. The conducted thermogravimetric analysis confirms the hypothesis that carbon dioxide is bound by the mineral substance (calcium compounds) contained in the analyzed waste samples. However, determining the kinetic parameters of the process from TGA requires simultaneously taking into consideration the kinetics of the drying of the sample, because drying may mask the reaction of CO2 with calcium compounds.
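The masking effect described above can be made explicit with a simple two-term superposition: the observed sample mass is a first-order drying loss plus a first-order carbonation gain. This is a sketch under our own assumptions (the paper does not give a combined drying model, and all parameter values below are illustrative, not measured):

```python
import numpy as np

def observed_mass(t, m0, dm_dry, k_dry, dm_carb, k_carb):
    """Sample mass vs. time as a superposition of first-order drying (mass loss)
    and first-order carbonation (mass gain). All parameters are illustrative."""
    drying = dm_dry * (1.0 - np.exp(-k_dry * t))          # water evaporated so far
    carbonation = dm_carb * (1.0 - np.exp(-k_carb * t))   # CO2 bound so far
    return m0 - drying + carbonation

t = np.linspace(0.0, 60.0, 121)  # minutes
# Low temperature: slow drying, so the CO2-binding mass gain is visible.
m_low = observed_mass(t, m0=100.0, dm_dry=0.06, k_dry=0.05, dm_carb=0.15, k_carb=0.2)
# High temperature: fast drying masks the gain; only a slowed net loss remains.
m_high = observed_mass(t, m0=100.0, dm_dry=0.17, k_dry=0.5, dm_carb=0.15, k_carb=0.6)
```

With these placeholder parameters, the low-temperature curve eventually rises above the initial mass, while the high-temperature curve never does, reproducing the qualitative behavior of the thermograms.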
The Use of a Dynamic Model Describing the Carbonation Process The results shown in Figure 1 enable the carbonation process to be described mathematically. Each thermogram gives the sample mass as a function of time. The curves show a few consecutive phases, during which mass changes may be interpreted as the result of the physico-chemical transformations taking place in the sample. The particular phases are clearly noticeable at 40 and 60 °C, to a lesser extent at 80 °C and only very slightly at 100 °C. The first phase of rapid mass changes at the beginning of the experiment may be interpreted as an artefact indicating a transient state which stabilizes after a short time. In the next phase, a loss of mass occurs, which is explained by the evaporation of the sample moisture. The following phase, a growth phase, is a manifestation of CO2 absorption. This is the key phase for the discussed process, and the identification of the model parameters has been limited to this very phase. Given the obtained results, this phase is practically impossible to observe in the data set for 100 °C; therefore, this data set has been excluded from further considerations. The lack of a distinctive mass growth phase may be connected with a considerable acceleration of the carbonation process resulting from the increased temperature: the complete transformation of CaO into CaCO3 takes place at the same time as the evaporation of water, so the chart shows only the cumulative effect of both phenomena. A section corresponding to the growth of sample mass was isolated from the charts for 40, 60 and 80 °C.
The borders of the isolated areas were selected so as not to include the transient phases, i.e., the gradual flattening of the curve approaching the peak. Figures 2-4 show the obtained charts. The following carbonation reaction was used to formulate dynamic equations describing the loss of CaO during the growth phase of sample mass resulting from the reaction with CO2:

CaO + CO2 = CaCO3 (1)

It was also assumed that the process takes place at the molecular level according to the accepted stoichiometry, and that there is a significant surplus of CO2 (its concentration does not change as a result of the reaction; it is constant in time). Based on this assumption, it is possible to formulate the following differential equation [32]:

dN_CaO/dt = -k N_CaO (2)

where k is the reaction rate constant, t denotes time, and N_CaO is the number of CaO moles. This equation can easily be solved analytically to obtain:

N_CaO(t) = A exp(-kt) (3)

Constant A may be determined from the initial condition:

A = N_CaO(0) = N0_CaO (4)

In the course of the carbonation reaction, a CO2 molecule is bound to CaO. The resulting change of the sample mass may be described by the relation:

dm = (M_CaCO3 - M_CaO)(N0_CaO - N_CaO) (5)

where M_CaCO3 denotes the molar mass of CaCO3, M_CaO stands for the molar mass of CaO, and dm is the mass increase resulting from the ongoing carbonation process. Having applied Equation (3), we obtain:

dm(t) = (M_CaCO3 - M_CaO) N0_CaO (1 - exp(-kt)) (6)

While deriving the above equation, it was assumed that at time t = 0 the mass increase dm is zero. The number of CaO moles at the initial time can be associated with the initial CaO mass, N0_CaO = m0_CaO / M_CaO; thus, we obtain the final form of the equation:

dm(t) = m0_CaO (M_CaCO3 - M_CaO) / M_CaO (1 - exp(-kt)) (7)

The unknown quantities occurring in the equation, i.e., k and m0_CaO, may be determined from the experimental data. To determine k and m0_CaO, the following quality indicator was minimized:

J(k, m0_CaO) = sum_i [dm_exp(t_i) - dm(t_i)]^2 (8)

The fitting of the experimental data to the model was performed by means of the Levenberg-Marquardt algorithm.
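The final form of the model, dm(t) = m0_CaO (M_CaCO3 - M_CaO) / M_CaO (1 - exp(-kt)), can be fitted to the measured mass-gain curve exactly as described, e.g. with SciPy's Levenberg-Marquardt implementation. A minimal sketch with synthetic data (the rate constant and initial CaO mass below are invented for illustration, not the paper's values):

```python
import numpy as np
from scipy.optimize import least_squares

M_CAO, M_CACO3 = 56.08, 100.09  # molar masses, g/mol

def dm_model(t, k, m0_cao):
    """Mass gain from carbonation: dm(t) = m0_CaO*(M_CaCO3-M_CaO)/M_CaO*(1-exp(-k*t))."""
    return m0_cao * (M_CACO3 - M_CAO) / M_CAO * (1.0 - np.exp(-k * t))

# Synthetic "experimental" data (illustrative values only).
t = np.linspace(0.0, 15.0, 30)  # minutes
rng = np.random.default_rng(0)
dm_exp = dm_model(t, k=0.3, m0_cao=0.04) + rng.normal(0.0, 1e-4, t.size)

# Quality indicator: sum of squared residuals, minimized with Levenberg-Marquardt.
res = least_squares(lambda p: dm_model(t, *p) - dm_exp, x0=[0.1, 0.01], method="lm")
k_fit, m0_fit = res.x
```

The fitted pair (k, m0_CaO) is recovered directly from the residual minimization; with real TGA data, dm_exp would be the isolated growth-phase section of the thermogram.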
The results are presented in Figures 5-7, while the values of the determined parameters are compiled in Table 4. With regard to the temperature of 80 °C, the outliers may be attributed to the small number of measurement points in the third phase of the discussed process. The process of carbonation is a two-stage process which runs according to the following two exothermic reactions:

CaO + H2O = Ca(OH)2

Ca(OH)2 + CO2 = CaCO3 + H2O

Adopting the assumption that the process takes place at the solid/gaseous phase contact seems justified given that the occurrence of free calcium oxide is associated with the mechanisms of calcination and desulfurization; additionally, calcium oxide may assume the form of free grains or it may occur with calcium sulfate within the grain structure [33]. This is the reason why it is practically impossible to apply classic kinetic equations in a way similar to the approaches discussed in the literature on the hydration and carbonation of pure sorbents [34][35][36][37]. The direct application of the approach according to which the process takes place in an aqueous environment is also impossible due to the prevalence of physical dissolution phenomena and ionic reactions in the liquid phase [38]. Bauer studied the carbonation process of lignite coal fly ashes under semi-dry conditions at low temperatures and pressures [18]. His conclusion was that, under such conditions, increasing the mixing intensity or the amount of CO2 also increases the carbonation rate. The reaction with CO2 depends on three factors, namely the dispersion of CO2 in the solid phase, the rate of Ca and Mg release from the mineral surface, and the rate of precipitation of carbonate solids; moreover, the reaction times in the semi-dry process route are considerably shorter. These observations confirm the results presented in this study.
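With rate constants determined at several temperatures (as compiled in Table 4), an apparent activation energy can be estimated from an Arrhenius plot. The k values below are invented placeholders standing in for the fitted constants at 40, 60 and 80 °C, not the values from Table 4:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)
# Hypothetical rate constants at the three usable temperatures (placeholders).
T = np.array([40.0, 60.0, 80.0]) + 273.15   # K
k = np.array([0.10, 0.22, 0.45])            # 1/min, illustrative only

# Arrhenius: ln k = ln A - Ea/(R T)  ->  regress ln k against 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R            # apparent activation energy, J/mol
A = np.exp(intercept)      # pre-exponential factor, 1/min
print(f"Ea ~ {Ea / 1000:.1f} kJ/mol")  # ~34.6 kJ/mol for these placeholder values
```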
The presence of water in the sequence of follow-up reactions may be considered one of the most significant factors influencing the rate of the process. In terms of practical applications and chemical engineering, it may facilitate the design of more efficient reactors in the future. A similar effect may be expected with reference to the impact of the bound CO2 on cement mortar hardening [39]. Our study confirmed that the application of a fluidized bed reactor filled with ceramic balls has a positive effect on the course of the carbonation process: as a result of abrasion, the surface of the solid/gaseous phase contact enlarges, additionally promoting the removal of products blocking access to the reactants. The same effect was achieved in studies using sonochemical treatment for the carbonation of circulating fluidized bed combustion (CFBC) coal fly ashes. Finally, CFA reactivity demonstrates a strong dependence on temperature, coupled with almost complete saturation with carbon dioxide at higher temperature ranges. Conclusions The conducted research confirmed the possibility of reducing the content of free calcium oxide in CFA from FBC boilers by treating it with carbon dioxide in the presence of water. In such a case, carbon dioxide reacts with free calcium oxide to form calcium carbonate, and the progress of the reaction may be controlled so as to achieve a particular degree of conversion of calcium oxide to calcium carbonate. CFA reactivity demonstrates a strong dependence on temperature, coupled with almost complete saturation with carbon dioxide at higher temperature ranges. To determine the kinetic parameters of the carbonation of fly ashes, the drying process of the ash sample must be analyzed simultaneously, because drying masks the effect of the reaction of CO2 with calcium compounds.
Moreover, the optimal conditions for the conversion of CFB waste to a product of controlled free CaO content were achieved using a fluidized bed reactor. With a relatively short reaction time (30 min), a 2.7% wt. reduction of the calcium oxide content was obtained in comparison with the initial value.
THE EFFECTS OF MOLECULAR AND PROCESSING PARAMETERS ON ENERGY HARVESTING CAPABILITY OF PVDF-BASED NANOGENERATORS In the research on alternative vibration sensing systems, the major potential is attributed to piezoelectric materials that can generate an electrical output from the waste vibrations of industrial machines. Poly(vinylidene fluoride) (PVDF), mostly in the electroactive β-phase, is a great option due to its excellent piezoelectric properties and good flexibility. The β-phase PVDF can be obtained by simple stretching of α-phase PVDF films, and the conditions of this process are well documented. Surprisingly, the implications of the molecular parameters of the PVDF have not been addressed yet. This study investigates the effect of the molecular weight (MW) of the PVDF on the β-phase development and the consequential vibration sensing capabilities after uniaxial stretching. The successful phase transformation was confirmed using FTIR and XRD. In the FTIR spectra, a typical α-phase peak at 762 cm-1 diminished after the stretching, giving rise to the β-phase peak at 840 cm-1. The results also showed a remarkable impact of the MW on the d33 coefficient, making MW an important parameter that should not be overlooked in designing PVDF-based sensing elements. The obtained data are highly important for the optimization of PVDF-based vibration sensors applicable in efficient structural and health monitoring nanosystems. INTRODUCTION At the present time, the increasing demand for electrical energy is one of the most serious challenges that the world faces. To cover this demand, fossil fuels, such as petroleum, coal and natural gas, are used as the main primary-energy source. The use of these traditional fuels is, however, connected to many environmental problems including air pollution, water pollution, and accidental oil spills [1,2]. For these reasons, there is an urgent need for alternative energy sources.
Probably the most fascinating alternative concept for obtaining energy is represented by energy harvesting systems. Such devices utilize waste mechanical energy, such as vibrations, air/fluid movement, body motion etc., and convert it into electrical form [3]. To construct energy harvesting nanogenerators, various inorganic semiconducting materials (ZnO, InN, GaN, CdS) or lead-free ceramics (NaNbO3, KNbO3) have been studied [4]. Although these materials demonstrated high efficiency, they are brittle, heavy and difficult to process, which limits the breadth of applications [3,5]. To eliminate these drawbacks, the materials of choice can be found among ferroelectric polymers, mainly poly(vinylidene fluoride) (PVDF) and its copolymers [6]. PVDF is a polymorphic material existing in five phases (α, β, γ, δ and ε) depending on the processing methods. The α and ε-phases are non-polar due to the antiparallel packing of dipoles. On the other hand, the β, γ and δ-phases are polar, and hence electroactive [7]. The γ-phase of PVDF is referred to as a mixture of the α and β-phases, while the δ-phase is regarded as a polar version of the α-phase. Among all conformations, the β-phase PVDF is the most desirable one for the construction of energy harvesting and vibration sensing devices, since it exhibits the best piezoelectric properties [8,9]. Significant research efforts have therefore been targeted at facilitating the formation of the β-phase content in PVDF. The β-phase can be obtained directly from the polymer melt; however, the process requires high temperature, high pressure and other specific conditions [9]. Conversely, the α-phase can be obtained very easily, because it is the most stable primary phase of PVDF [10]. Subsequently, it can be transformed into its β-analogue by simple stretching (uniaxial or biaxial) at elevated temperatures [11].
Another method to increase the β-phase content is through poling of the PVDF films in high external electric fields [12]. The fabrication of films through electrospinning, which is also a form of mechanical elongation, has also been successfully applied [13]. Certain improvements in the α-to-β-phase transformation were observed after the addition of graphene oxide or cellulose particles [14,15]. The β-phase is also achievable from the γ and δ-phases using a plethora of other processing techniques [9,16]. However, from the technological point of view, uniaxial stretching is the most convenient method to convert the α-phase into its electroactive β-counterpart. The stretching of PVDF has been performed within a wide temperature range (50-150 °C), with different stretching speeds (1-1000 µm/s) and various stretching ratios (1-5) [10]. Although the effects of these parameters on the formation of β-phase PVDF have been described, the implications of its molecular weight (MW) on the β-phase development and energy harvesting and vibration sensing capabilities have, surprisingly, not been addressed yet. Moreover, the MW of the applied PVDF is hardly ever mentioned in the literature. This paper therefore aims to clarify the correlations between the MW of the PVDF and the formation of the electroactive β-phase and the consequential d33 coefficient after stretching at various uniaxial ratios. Fabrication and stretching process The PVDF films with a thickness of 0.8 mm were prepared using compression molding. In a typical procedure, the calculated amount of PVDF beads was placed into a mold, pre-heated (5 minutes) and compressed at a pressure of 10 MPa (for an additional 5 minutes), while the temperature was set to 210 °C. Afterwards, the mold was cooled down in a controlled manner to ensure repeatability of the process. The characterization/testing of the samples was performed at least 24 h after their fabrication.
Rectangular strips (dimensions of 10 mm × 100 mm) were cut out from the PVDF films. Stretching of the as-prepared PVDF elements was performed using a tensile device M350-5 CT (Testometric, Lancashire, UK) coupled with a heat chamber (Omron) operating at a temperature of 65 °C. The cross-head speed of the clamps was set to 10 mm/min, with relative elongations of 50, 100, 200, 300 and 500%. In the case of M71, the elongation of 500% could not be achieved, since it was above its breaking point; therefore, for this sample, we selected a maximum elongation of 400%. General characterization The crystalline phase development of the PVDF films was studied via X-ray diffraction (XRD) using a Miniflex 600 (Rigaku, Japan) diffractometer with a Co-Kα radiation source (λ = 1.789 Å) operating within a 2θ range of 10-95° with a scan speed of 3°/min. Fourier transform infrared spectroscopy (FTIR) was performed on a Nicolet 6700 (Thermo-Scientific, USA) spectrometer equipped with an ATR accessory and a germanium crystal. The spectra were recorded in a wavenumber range of 4000-500 cm-1 with a spectral increment of 2 cm-1. Both analyses were performed at laboratory conditions. Energy harvesting setup Firstly, poling of the stretched PVDF strips was performed using a custom-built apparatus at an electric field strength of 7 kV/mm and a temperature of 110 °C. Secondly, a thin conductive (silver) layer was deposited on the samples and their piezoelectric charge coefficient, d33, was measured within one hour after the poling process. The d33 was analyzed in the transversal mode using a 6517B electrometer (Keithley, USA). Each sample was placed between two copper electrodes; the lower electrode had a diameter of 20 mm, while the upper one had a diameter of 10 mm. A mechanical force of 0.49 N was applied to the upper electrode and the electric charge generated by the sample was recorded.
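In this setup, the d33 coefficient follows directly from the generated charge and the applied force, d33 = Q/F. A minimal sketch (the charge reading below is an illustrative value, not a measurement from the paper):

```python
def d33_from_charge(charge_pC, force_N=0.49):
    """Piezoelectric charge coefficient d33 = Q / F, returned in pC/N.
    The default force matches the 0.49 N load used in the measurement."""
    return charge_pC / force_N

# Illustrative: a sample generating 4.46 pC under the 0.49 N load.
print(d33_from_charge(4.46))  # ~9.1 pC/N, the order of the best value reported
```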
Each sample was analyzed at ten positions; the presented d33 values represent the averages of the obtained data. RESULTS AND DISCUSSION The PVDF strips of different MW were stretched at a temperature of 65 °C to generate the piezoelectric β-phase, with the subsequent poling process performed at 110 °C. The efficiency of the phase transformation was determined by means of FTIR spectroscopy. Figure 1 compares the FTIR spectra recorded for the M71, M107 and M530 samples. As clearly seen, the mechanical stretching significantly altered the FTIR profile, indicating changes in the polymer chain conformations. In detail, the α-phase PVDF can be identified through a large number of characteristic peaks. In the studied wavenumber region, the relevant peaks of the α-phase were centered at 762, 795, 855, 974 and 1210 cm-1 [2,17]. After stretching, these peaks diminished, giving rise to their electroactive β-phase counterparts centered at 840 and 1276 cm-1 [11]. Figure 1 Comparison of the FTIR spectra for the PVDFs of different MW. The interpretation of the peak at 840 cm-1 is not always straightforward; in some papers this peak is classified as pure β-phase [17], while in others it is considered a β-γ-phase mixture [18]. However, the signature of the γ-phase in terms of peaks at 776, 812, 833 and 1233 cm-1 was not identified. Also, the peak at 840 cm-1 was sharp, without any distinct shoulder that would come from the γ-phase. Based on these findings, we tend to believe that the content of the γ-phase in the PVDF was rather minimal. XRD is usually used as a complementary technique to determine the crystal phase of PVDF. As shown in Figure 2, all samples exhibited typical XRD diffractograms (Kα1, λ = 1.790 Å). Prior to stretching, the α-phase was the most prevalent and exhibited a typical double peak corresponding to diffractions in the (100) and (020) planes, respectively.
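A common way to quantify such a transformation from FTIR data (a supplementary sketch; the paper itself does not state that it used this relation) is the Gregorio-type estimate of the relative β-fraction from the absorbances of the α and β bands at 762 and 840 cm-1:

```python
def beta_fraction(A_alpha, A_beta, k_ratio=1.26):
    """Relative beta-phase fraction F(beta) = A_beta / (k_ratio * A_alpha + A_beta),
    where A_alpha and A_beta are the absorbances at 762 and 840 cm^-1 and k_ratio
    is the ratio of the absorption coefficients (~1.26 in the literature)."""
    return A_beta / (k_ratio * A_alpha + A_beta)

# Illustrative absorbance readings before and after stretching (invented numbers).
print(f"unstretched: {beta_fraction(0.80, 0.10):.2f}")
print(f"stretched:   {beta_fraction(0.10, 0.75):.2f}")
```

The ratio of the two results mirrors the qualitative picture in the spectra: the β-band dominates after stretching, so the estimated fraction rises sharply.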
After the stretching, the β-phase was successfully developed, which was reflected in the single sharp peak assigned to the combined diffraction in the (110) and (200) planes [13]. The phase transformation was remarkably affected by the MW of the samples. For M71, the manifestation of the β-peak changed without any trend in peak intensity. In the samples M107 and M530, the intensity of the β-peak generally increased with the relative elongation, which implies a greater efficiency of the α-to-β-phase transformation. These results indicate that the MW of the PVDF has a significant influence on the structural changes during the stretching process. The electromechanical performance of the PVDF mainly depends on the β-phase content [12]. In our case, the formation of the β-phase was facilitated by stretching in combination with the poling process. Table 1 summarizes the d33 data for the investigated samples. It can be claimed that the applied stretching positively affected the d33 values, since they increased by a factor of 4-5 for M71 and M530 when stretched from 50% to their maximal relative elongation. The highest d33 value was, however, detected for M107, which exhibited a d33 of 9.10 pC/N at maximal relative elongation. This value is below the theoretical d33 maximum, which is over 25 pC/N for pure β-phase PVDF. Such a difference indicates the persistent co-existence of the α-phase and the presence of trapped charges induced during the processing [19]. It should be mentioned that samples M107 and M530 have potential for further improvement, since they are capable of achieving even higher relative elongations, and thus better conditions for the β-phase development. To conclude, we have shown that the MW of the PVDF is a highly important parameter affecting the d33 performance, and thus it is of high relevance for the fabrication of vibration sensing and energy harvesting elements.
CONCLUSION In this study, PVDF strips of different MW were stretched and subjected to poling in order to facilitate formation of the piezoelectric β-phase. The phase transformation was monitored using FTIR spectroscopy, demonstrating the changeover between the dominating peaks of the α and β-phases at 762 cm-1 and 840 cm-1, respectively. The XRD data also showed the successful β-phase development. In particular, samples M107 and M530 exhibited increasing peak intensity with the relative elongation. The implications of the different MW of the samples were reflected in the d33 values, which exhibited significant variations. Samples M71 and M530 experienced a several-fold increase in electromechanical performance; however, the highest d33 value was 9.10 pC/N, generated by M107. Based on these preliminary findings, further research is clearly required to better understand the interrelations among the MW, relative elongation, crystallinity, poling conditions and d33 value of PVDF-based nanosystems.
Isolation of Three Novel Senecavirus A Strains and Recombination Analysis Among Senecaviruses in China Senecavirus A (SVA), an emerging picornavirus of swine, is one of the causative agents of vesicular disease, which is clinically indistinguishable from foot-and-mouth disease in pigs. Here, 3 cases of vesicular disease caused by SVA in November 2018 in Henan, China are reported. Three new SVA strains were identified and subjected to a genetic evolutionary analysis. The isolates shared 98.1-99.0% genomic pairwise identity with each other and had the highest similarity, of 98.3-98.7%, with the American strain KS15-01. Phylogenetic analysis indicated that the Chinese prevalent strains can be clearly divided into cluster 1, cluster 2, and cluster 3. Furthermore, one isolate (HeNNY-1/2018) and two previously reported strains (HB-CH-2016 and SVA/CHN/10/2017) were identified as recombinants using several algorithms. This reveals that recombination among SVA strains has occurred in China since 2016 or earlier. The findings of this study update the prevalence status of SVA in China; the genetic evolution and recombination events of SVA should attract more attention in the future. INTRODUCTION Senecavirus A (SVA), also known as Seneca Valley virus (SVV), is the only member of the genus Senecavirus in the family Picornaviridae (1). SVA is a non-enveloped, single-stranded, positive-sense RNA virus. The genome is about 7.3 kb, consisting of a single open reading frame (ORF) encoding a polyprotein that is flanked by 5′ and 3′ untranslated regions (UTRs). The polyprotein is subsequently cleaved in a typical picornavirus L-4-3-4 layout, namely Leader (Lpro), P1 region (VP1 to VP4), P2 region (2A to 2C), and P3 region (3A to 3D) (2). SVA was first isolated as a contaminant of the PER.C6 cell line in 2002 and was infrequently associated with porcine vesicular disease (1).
However, beginning in late 2014, multiple cases of porcine vesicular disease in which SVA was detected were reported in Brazil and the United States (3)(4)(5). Since then, SVA has been considered one of the causative agents of vesicular disease in pigs (5)(6)(7)(8)(9). The vesicular disease caused by SVA is clinically indistinguishable from that caused by foot-and-mouth disease virus (FMDV), vesicular stomatitis virus (VSV), and swine vesicular disease virus (SVDV) (2,8). Currently, this virus has been reported in Canada, China, Colombia, Thailand, Viet Nam, and elsewhere, suggesting that SVA-induced disease has already become a worldwide problem (7,(10)(11)(12). In China, the vesicular disease caused by SVA was first reported in Guangdong province in 2015 (12). Since then, increasing cases of SVA infection have been reported in other provinces, including Heilongjiang, Hubei, Henan, Fujian, Hebei, and Anhui (13)(14)(15)(16)(17)(18). However, the genomic information is still very limited in these regions except Guangdong province, which accounts for over 70% of Chinese isolates (19). Here, we report 3 apparently unrelated cases of vesicular disease in November 2018 in Henan province, China. Three novel SVA strains were genetically characterized and phylogenetically analyzed. Further, one of the isolates and two strains reported before were all identified as recombinants with unique recombination patterns. MATERIALS AND METHODS In November 2018, typical vesicular disease outbreaks were reported on three apparently unrelated pig farms (Farms A, B, and C) in Henan province, China, despite the fact that all pigs had previously been compulsorily vaccinated 2 or 3 times with a commercial FMDV vaccine. The geographical distribution of the farms and the details of the swine herd status are shown in Table 1. Diseased pigs exhibited similar clinical symptoms including lameness, vesicles, and ulcerative lesions on the hooves and snouts.
The outbreak on Farm A was observed in gilts with >125 kg body weight. Pigs on Farms B and C were commercial pigs with a body weight of about 110-120 kg which were ready for market. Morbidity was 20.0% on Farm A, 45.6% on Farm B, and 18.8% on Farm C, with no mortality observed on any farm (Table 1). The infected pigs took about 10 days to recover. Vesicular lesion swabs, vesicular fluids or tissues were sampled for differential diagnosis using specific primers for the detection of SVA, FMDV, VSV, and SVDV (15). For virus isolation, the vesicular fluid was diluted with sterile phosphate-buffered saline (PBS) and clarified at 12,000 rpm for 2 min. The supernatant was filtered through 0.45 µm filters and then incubated with PK-15 cells. Typical cytopathic effects (CPE) could be observed after 2 or 3 blind passages. Furthermore, an immunofluorescence assay (IFA) was performed with porcine SVA-positive serum, as described previously (a kind gift from Dr. Haixue Zheng) (20). The 5th-passage virus was used for the plaque assay and one-step growth curve as described previously (15,20). Genome sequences were determined using primers reported before (15,21). Nucleotide sequence alignments for the three isolates and 73 other strains (up to December 2018) available from GenBank were performed using the Multiple Alignment using Fast Fourier Transform (MAFFT) program [Table S1; (22)]. The DNAstar package (DNASTAR, Inc., Madison, WI, USA) was used to conduct the homology analysis. The phylogenetic tree was constructed with MEGA 6.0 software using the neighbor-joining method and the Kimura 2-parameter nucleotide substitution model with 1,000 bootstrap replicates (23). For the recombination analyses, the multiple genome alignment was submitted to the Recombination Detection Program 4 (RDP4) to screen for potential recombination events. Seven different methods, including RDP, GENECONV, BootScan, MaxChi, Chimaera, SiScan, and 3Seq, were employed (24).
The recombinant strains were further confirmed with SimPlot version 3.5.1 (25). RESULTS AND DISCUSSION SVA was diagnosed as the causative agent, and FMDV, VSV, and SVDV were ruled out by RT-PCR tests. Viral isolation was performed after propagation in PK-15 cells. Three representative SVA strains were isolated and designated HeNZMD-1/2018, HeNNY-1/2018, and HeNKF-1/2018 (GenBank nos. MK357115, MK357116, and MK357117). Typical cytopathic effects (CPE), IFA with specific porcine positive serum against SVA, and obvious plaques were observed after infection with SVA at the indicated time points (Figures 1A,B and Figure S1). One-step growth curves were obtained after virus infection of PK-15 cells at a multiplicity of infection (MOI) of 0.1. The infected cells were collected at 3, 6, 9, 12, 24, and 36 h post-infection (hpi). The viral loads were titrated by the 50% tissue culture infective dose (TCID50) assay, with maximum viral titers of about 10^6.21 (Figure 1C). The genome size of these isolates is 7,285 nucleotides (nt), consisting of a long 5′ UTR of 668 nt, an ORF encoding a 2,181 amino acid polyprotein, and a short 3′ UTR of 71 nt, a genome organization similar to that of other SVA strains. Genome sequence alignment showed that the isolates shared 98.1-99.0% nucleotide identity with each other, but diverged by 3.6-3.9% from the first reported strain in China, CH-01-2015 (GenBank no. KT321458), and by 6.4-6.6% from the prototype strain SVV-001 (GenBank no. NC_011349). Surprisingly, the 3 isolates showed the highest similarity, of 98.3-98.7%, with the 2015 American strain KS15-01 (GenBank no. KX019804). Sequences of the three isolates described here and 73 other GenBank strains were compared using MEGA 6.0 software [Table S1; (23)]. As shown in Figure 2A, the prevalent strains in China can be clearly divided into 3 clusters (clusters 1-3), revealing a high genetic diversity and sequence complexity among SVA strains prevalent in China.
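Pairwise identities like the 98.1-99.0% reported here are computed over an alignment as matching positions divided by compared positions. A minimal sketch of the calculation (toy aligned fragments, not the actual genomes):

```python
def pairwise_identity(seq1, seq2):
    """Percent identity between two aligned sequences of equal length;
    alignment columns where either sequence has a gap ('-') are skipped."""
    assert len(seq1) == len(seq2)
    compared = matches = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == "-" or b == "-":
            continue
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared

# Toy aligned fragments (illustrative only): 8 of 9 ungapped columns match.
print(pairwise_identity("ATGCCGTA-A", "ATGCTGTACA"))
```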
The new isolates described here were grouped into cluster 2; they are closely related to the strains CH-FJ-2017 and AH02-CH-2017 (GenBank no. KY747510, MF460449) and distant from the early reported strains CH-01-2015, CH-02-2015, and CH-03-2015 (GenBank no. KT321458, KX173339, and KX173338) in China. It is still unknown how SVA was introduced into China. One hypothesis is international trade (breeding stock, feed ingredients, pork, and pork products, etc.) or international exchange among swine industry practitioners. Further research and retrospective studies may reveal when the virus started to circulate in pig herds in China. Genomic recombination is a genetic feature of picornaviruses and has been reported in enteroviruses, aphthoviruses, parechoviruses, and cardioviruses (26). However, recombination events among Senecaviruses are still poorly understood. Here, the three new isolates and another 43 Chinese prevalent strains previously submitted to GenBank (up to December 2018) were screened by RDP4.0 using several algorithms (24). The statistical results strongly supported that HeNNY-1/2018 (isolated in this study), HB-CH-2016 (GenBank no. KX377924, isolated in Hubei in 2016), and SVA/CHN/10/2017 (GenBank no. MG765559, isolated in China in 2017) are three recombinants exhibiting unique genetic recombination patterns (P < 0.001, recombinant score > 0.7) (Table S2). Meanwhile, the recombination events were further confirmed using SimPlot 3.5.1 software (25). The detailed beginning and ending breakpoints and parental strains are shown in Figure 2B. The minor fragments of HeNNY-1/2018 (region 4,190-5,808 nt) and SVA/CHN/10/2017 (region 4,145-5,620 nt) showed a similar recombination pattern (crossing over the P2 and P3 regions of the genome), including the partial 2C, 3A, 3B, and partial 3C genes.
In contrast, the recombination within HB-CH-2016 (region 1-1,563 nt) mainly occurred within the 5′ genomic region containing the complete 5′ UTR, Lpro, and partial P1 region (VP4 and partial VP2 genes). Recently, Wang et al. also described a mosaic strain, HeN-1/2018, that exhibited a recombination region (960-2,354 nt) within the P1 genome region containing the VP4 (partial), VP2, and VP3 (partial) genes (27). Combined with our studies, the recombination breakpoints map to the P1, P2, and P3 regions. However, the lack of SVA sequence data prevents estimating recombination frequencies. More work needs to be done to map the recombination hotspots across the SVA genome. In China, the first SVA infection was reported in 2015 in Guangdong Province (12). Since then, other cases have been sporadically reported in several regions, with a significant increase in numbers and geographical distribution (15,16,18). The high density of pig farms and the frequent movement of live pigs through different regions will contribute to the spread of SVA in China. Moreover, the key role that recombination plays in the microevolution of picornaviruses and in the emergence of novel variants is of great concern, especially since it sometimes leads to severe pathogenicity (26). Therefore, SVA recombination events should be monitored carefully. In conclusion, we reported 3 cases of vesicular disease caused by SVA in November 2018 in China. Three new SVA strains were identified, and a genetic evolutionary analysis was conducted. Our studies demonstrated high levels of genetic diversity among SVA strains in China. Furthermore, one isolate and two previously reported strains were identified as recombinants with unique recombination patterns. These results suggest that SVA recombination events have been occurring in China since as early as 2016.
Frequent recombination events will lead to the emergence of novel variants and increase the complexity of SVA transmission, which poses a challenge to the prevention and control of SVA infection in the future. DATA AVAILABILITY STATEMENT All data generated or analyzed during this study are included in this published article/Supplementary Material. ETHICS STATEMENT This animal study was reviewed and approved by the Institutional Animal Care and Use Committee of Henan Academy of Agricultural Sciences. Written informed consent was obtained from the owners for the participation of their animals in this study. AUTHOR CONTRIBUTIONS ZG performed the experiments. ZG and XC wrote the manuscript. HR, SQ, and RD analyzed the data. GZ designed and supervised the experiments. All authors read and approved the final manuscript. ACKNOWLEDGMENTS We are grateful to Dr. Haixue Zheng (Lanzhou Veterinary Research Institute, Chinese Academy of Agricultural Sciences) for the porcine SVA-positive serum.
A New Approximate Method for Earthquake Behaviour of Worship Buildings Turkey is located in a seismically active region, and many earthquakes have occurred in the country in recent decades. Like many historical buildings, ancient worship buildings are vulnerable to seismic activity, so it is important to understand the behavior of such buildings under seismic actions. In this paper, fifteen masonry worship buildings, located and built in different regions of Antalya, have been selected. The main aim of the paper is to assess the seismic vulnerability of worship buildings using a new approximate method. The method proposed in this paper aims at a simple and fast procedure, based on a simplified geometric approach, for the immediate screening of masonry buildings at risk. Introduction Historical structures play a very important role in carrying the cultural heritage of a country and are among the most valuable pieces of its cultural accumulation [1]. There are many historical buildings, religious monuments, and ruins left by our ancestors [2]. Many historical buildings are quite vulnerable because they were built with low-resistance materials. Moreover, these buildings often have insufficient connections between the various construction parts: masonry walls, floors, etc. [3]. These problems of historical masonry buildings can lead to an overturning collapse of the perimeter walls under seismic horizontal acceleration. For this reason, seismic vulnerability assessments are very important and essential for the care of historic masonry structures [4]. Turkey is located on one of the most active tectonic regions, the Alpine-Himalayan earthquake belt. This belt is still active, and many earthquakes occur each month. The city center of Antalya lies in the second seismic zone of Turkey.
When the province is considered as a whole, the western part of Antalya is located in the 1st- and 2nd-degree seismic zones, and the eastern part within the 3rd- and 4th-degree seismic zones [5][6]. Antalya is the fifth largest city in Turkey by population, with approximately 1.2 million inhabitants. In addition, Antalya ranks first in Turkey in population growth rate, so it is very important to know the seismicity of Antalya [7,8]. The Turkey Earthquake Regions Map and the seismic zones map of Antalya are shown in Figure 1. Figure 1. Turkey Earthquake Regions Map and seismic zones map of Antalya [9] The approach suggested here, following [10], is simple and fast, being based on a simplified geometric approach for the immediate screening of a large number of historical buildings at seismic risk. The aim of the approximate method is to evaluate historical buildings at possible seismic risk through structural characterization and immediate screening of a large number of historical worship buildings under risk. The approximate method is applied to historical worship buildings in Antalya, providing lower-bound formulas for 10 different simplified geometric indexes. In this paper, worship buildings from Antalya have been selected and analyzed considering the ten indexes of the approximate method. Approximate Method of Worship Buildings The approximate method, based on the study of Lourenço and Roque (2006), aims at a much faster and simpler procedure for the immediate screening of worship buildings [11]. The analysis and preservation of historical worship buildings are complex tasks involving many studies, because various information is lacking, such as geometry data, information about the inner core of the structural elements, existing damage, and regulations and codes for masonry buildings.
Moreover, the materials used in the construction of masonry buildings exhibit large variability due to workmanship and the use of natural materials. Therefore, more complex and accurate methods do not necessarily yield more reliable and better results [4,12,13]. This approximate method is based on a simplified analysis of the structural characteristics of the worship buildings. Each building is inspected individually for its geometry, and data that can be used for the analysis are collected. The usage of the approximate method usually requires that the worship building is regular and symmetric, that the floors act as rigid diaphragms, and that the dominant collapse mode is an in-plane shear failure of the walls. Generally, these last two conditions are not verified by ancient masonry structures [4]. The proposed method consists of ten parameters, and every index has a limit value. Besides the ten parameters, a total parameter value is defined in this proposed method. The parameters of the approximate method are given below. • Parameter 1 (γ1)-In-plan area ratio This parameter relates the area of the earthquake-resistant walls in each main direction (transversal y and longitudinal x, with respect to the central axis of the worship building) to the total in-plan area of the building. Parameter 1 is non-dimensional and the simplest among the parameters. The first parameter is γ1,i = Awi / S, where Awi is the in-plan area of earthquake-resistant walls in direction "i" and S is the total in-plan area of the building. • Parameter 2 (γ2)-Area to weight ratio This parameter is defined as the ratio between the in-plan area of earthquake-resistant walls in each main direction (again, Y and X) and the total weight of the construction: γ2,i = Awi / G, where Awi is the in-plan area of earthquake-resistant walls in direction "i" and G is the quasi-permanent vertical action.
• Parameter 3 (γ3)-Base shear ratio The total design base shear for rigid structures in a given direction is determined as a function of Z, the seismic zone coefficient for the building site, I, the structure importance coefficient, and W, the total seismic dead load. The total base shear for seismic loading (VSd,base = FE) can be obtained from an analysis with horizontal static loading equivalent to the seismic action (FE = βG), where β is an equivalent seismic static coefficient related to the peak ground acceleration. The shear strength of the structure (VRd,base = FRd) can be obtained from the contribution of all earthquake-resistant walls, FRd,i = Σ Awi fvk, with fvk = fvk0 + 0.4σd. Here, fvk0 is the cohesion, which can be assumed equal to a low value or zero in the absence of more information, σd is the design value of the normal stress, and 0.4 is the tangent of a constant friction angle φ, equal to 22°. The index is γ3,i = VRd,base / VSd,base. If zero cohesion is assumed (fvk0 = 0), γ3,i = (Awi / Aw) (tan φ / β). For a non-zero cohesion, which is most relevant for low-height buildings, γ3,i = (Awi / Aw) (tan φ + fvk0 / (γ h)) / β. Here Awi is the in-plan area of earthquake-resistant walls in direction "i," Aw is the total in-plan area of earthquake-resistant walls, h is the (average) height of the building, γ is the volumetric masonry weight, φ is the friction angle of the masonry walls, and β is an equivalent static seismic coefficient. It is assumed that the normal stress in the walls is due only to their self-weight, i.e. σd = γ × h, which is on the safe side and is a very reasonable approximation for historical masonry buildings, usually made of very thick walls. It was also assumed that all the masonry materials were similar, with a volumetric masonry weight of 20 kN/m³. • Parameter 4 (γ4)-Slenderness ratio of columns Parameter 4 is related to the geometric ratio of the columns and main walls.
The slenderness of the columns is γ4,i = hcol / √(I/A), where hcol is the free height of the columns, and I and A are the inertia and the cross-section area of the columns. • Parameter 5 (γ5)-Thickness to height ratio of columns Parameter 5 is related to the thickness-to-height ratio of the columns and is defined from dcol and tcol, the (equivalent) diameter and thickness of the columns, respectively. • Parameter 6 (γ6)-Thickness to height ratio of perimeter walls Parameter 6 is γ6,i = twall / hwall, where twall and hwall are the thickness and the (average) height of the perimeter walls, respectively. • Parameter 7 (γ7)-Dome area to structure area Parameter 7 is γ7,i = Ka / S, where Ka is the area of the dome and S is the in-plan area of the worship building. • Parameter 8 (γ8)-Dome diameter to dome height Parameter 8 is γ8,i = Kc / hk, where Kc is the diameter of the dome and hk is the height of the dome. • Parameter 9 (γ9)-Cavity wall area to full wall area Parameter 9 relates the in-plan area of earthquake-resistant cavity walls in direction "i" (Awi) to the in-plan area of the full walls in that direction (Awi,full). • Parameter 10 (γ10)-Ratio of external load to base shear force capacity of the building (dynamic analysis) Parameter 10 is γ10,i = VSd,base / VRd,base, where VSd,base (FE) is the total base shear for seismic loading and VRd,base (FRd) is the shear strength of the structure. • Total Parameter The total parameter is used to determine whether a historical building is risky or not. It combines γ1 (parameter 1), γ2 (parameter 2), γ3 (parameter 3), γ7 (parameter 7), γ8 (parameter 8), and γ10 (parameter 10).
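The purely geometric indexes above are straightforward to compute once the survey data are available. A minimal sketch follows (the building dimensions used here are hypothetical, not values from the paper's tables):

```python
def in_plan_area_ratio(wall_area_dir, total_plan_area):
    """gamma_1: in-plan area of earthquake-resistant walls in one
    direction divided by the total in-plan area of the building."""
    return wall_area_dir / total_plan_area

def wall_thickness_to_height(t_wall, h_wall):
    """gamma_6: thickness to (average) height ratio of perimeter walls."""
    return t_wall / h_wall

def dome_area_to_plan_area(dome_area, total_plan_area):
    """gamma_7: dome area over the in-plan area of the building."""
    return dome_area / total_plan_area

def dome_diameter_to_height(d_dome, h_dome):
    """gamma_8: dome diameter over dome height."""
    return d_dome / h_dome

# Hypothetical single-dome mosque (lengths in m, areas in m^2):
S = 400.0                  # total in-plan area
Aw_x, Aw_y = 48.0, 44.0    # wall areas resisting the X and Y directions
g1_x = in_plan_area_ratio(Aw_x, S)        # 0.12
g1_y = in_plan_area_ratio(Aw_y, S)        # 0.11
g6 = wall_thickness_to_height(1.1, 8.0)   # 0.1375
g7 = dome_area_to_plan_area(95.0, S)      # 0.2375
g8 = dome_diameter_to_height(11.0, 5.5)   # 2.0
```

Each value would then be compared against its direction-specific threshold, as done in the tables of the paper.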
The total parameter formula given above was used to calculate the risk levels of the worship buildings. The risk levels are classified as "no risk" and "risk". Worship Buildings in Antalya, Turkey In this paper, historical worship buildings located in Antalya have been selected; they are described below. Suleymaniye Mosque The Suleymaniye Mosque is located in Alanya, Antalya. The mosque is also called the Alâeddin, Alaüddin, Kale, Orta Hisar, and Sultan Suleyman Mosque. The mosque was restored by the General Directorate of Foundations in 1960, 1964, 1973 and 1989. The Suleymaniye Mosque sits on an octagonal platform and has one main dome. There is a minaret on the northwest corner of the mosque and a five-eyed last community place on the north [14]. A photo and the plan of the Suleymaniye Mosque are shown in Figure 2. Bali Bey Mosque The Bali Bey Mosque is located in Muratpasa, Antalya. The mosque was constructed by Bali Bey according to some sources, but its construction date is unknown. The mosque was restored by the General Directorate of Foundations in 1905, 1963 and 1980. The Bali Bey Mosque has a rectangular plan, close to square, covered with a single dome, and there is a last community room extending along the northern front of the mosque. A photo and the plan of the Bali Bey Mosque are shown in Figure 3. Murat Pasha Mosque The Murat Pasha Mosque is located in Muratpasa, Antalya. Although it was built in the Ottoman period in 1570, the mosque also bears traces of Seljuk calligraphy art. According to its inscription, the mosque was constructed by Murat Pasha. The mosque has a rectangular plan, close to square, covered with a single dome. The Murat Pasha Mosque stands in a spacious courtyard measuring 95×98 m. A photo and the plan of the Murat Pasha Mosque are shown in Figure 4. Omer Pasha Mosque The Omer Pasha Mosque was constructed by Ketendji Omer Pasha in 1602.
The mosque is located in Elmalı, Antalya Province, Turkey. It reflects classical Ottoman architecture and is the biggest Ottoman mosque in the Antalya area. The mosque has a square plan and is covered with a central dome. A five-eyed congregation place, a fountain, and a madrasah are located to the north of the mosque. In the northwest corner rises the minaret, which is built adjacent to the harim wall. A photo and the plan of the Omer Pasha Mosque are shown in Figure 6. Nasreddin Mosque The Nasreddin Mosque is 22 km from the Kas district of Antalya province, in the village of Kasaba. According to the mosque inscription, the mosque was constructed in 1776 by Yusuf Aga. The mosque has a square plan and is covered with a single dome. The mosque has a three-eyed congregation place in the north and a minaret in the northwestern part. A photo and the plan of the Nasreddin Mosque are shown in Figure 6. Musellim Mosque The Musellim Mosque, located in Kısla, Antalya, is also known as the Teklioglu Mosque. According to the mosque inscription, it was built in 1796 by Mehmet Aga. The Musellim Mosque has a square plan and is covered with a single dome. The mosque was restored by the General Directorate of Foundations in 1952, 1955, 1985, 1989 and 1991. A photo and the plan of the Musellim Mosque are shown in Figure 7. Agalar Onu Mosque The Agalar Onu Mosque is located in Aksu, Antalya. According to the mosque inscription, the mosque was constructed in 1776 by Yusuf Aga. The mosque has a square plan and is covered with a single dome. The mosque, which is still in use today, was restored by the General Directorate of Foundations in 2011. A photo and the plan of the Agalar Onu Mosque are shown in Figure 8. Haskoy Mosque The mosque is located in Haskoy, 12 km from the Finike district of Antalya. There is no information about the construction date or architect of the mosque.
The mosque has a square plan covered with a central dome, which sits on an octagonal drum. The mosque, which is now closed for worship, was restored in 1983 by the General Directorate of Foundations. A photo and the plan of the Haskoy Mosque are shown in Figure 9. Takkaci Mustafa Mosque The mosque is located in Muratpaşa, Antalya. There is no exact information about the construction date or architect of the mosque. The mosque has a square plan covered with a central dome, which sits on an octagonal drum. A photo and the plan of the Takkaci Mustafa Mosque are shown in Figure 10. Haci Hasan Mosque The mosque is located in Serik, Antalya. According to the mosque inscription, the mosque was constructed in 1820 by Hacı Hasan Ağa. The mosque has a square plan and a main single dome. A photo and the plan of the Haci Hasan Mosque are shown in Figure 11. Yesilkaraman Mosque The mosque is located in Yeşilkaraman village, 34 km from Antalya. According to the mosque inscription, the mosque was constructed in 1912, but there is no information about who built it. The mosque has a square plan and a main single dome. A photo and the plan of the Yesilkaraman Mosque are shown in Figure 12. Kizilli Mosque The mosque is located in Kızıllı village, 12.5 km from Varsak, Antalya. According to the mosque inscription, the mosque was constructed in 1912, but there is no information about who built it. The mosque has a square plan and a dome which sits on an octagonal drum. A photo and the plan of the Kizilli Mosque are shown in Figure 13. Alacami Mosque The mosque is located in Serik, Antalya. There is no exact information about the construction date or architect of the mosque. The mosque has a square plan covered with a central dome. A photo and the plan of the Alacami Mosque are shown in Figure 14. Results and Discussions In this approximate method, it was assumed that the materials of all worship buildings were similar, that the volumetric weight of masonry was 20 kN/m³, and that the β coefficient was equal to 0.037.
The values were computed separately for the X (longitudinal) and Y (transversal) directions, respectively. The values which exceed the threshold were highlighted with shaded cells. Zone A was taken into account for parameter 10, and soil type A was used for the application of the approximate method for parameter 10. Each parameter has a separate threshold value, which is computed in accordance with its properties. The average values along the X and Y directions are generally close to each other. In terms of parameter 1 (see Table 1), all values of parameter 1 exceed the threshold value except for the Tekeli Mehmet Pasha Mosque, and the same holds for parameters 2, 3, 4, 5, 6, 7 and 9 (see Tables 2 to 11), respectively. In terms of parameter 8, only the Tekeli Mehmet Pasha Mosque fell below the threshold value. In terms of the values along the X and Y directions of parameter 10 (soil A), all mosque values are below the threshold. When the total parameter calculation results are compared, the Suleymaniye Mosque, Murat Pasha Mosque, and Kurus Koyu Mosque carry more risk than the other mosques. The risk level of the worship buildings is presented in Figure 30. The risk situation was determined by whether a mosque exceeds the parameter values or not. Conclusion This paper presents an application of an approximate method for the assessment of worship buildings in Antalya. The database includes 15 mosques. These mosques were selected according to the availability of information and a plan with one single dome. Ten parameters and thresholds are used. The first six parameters and threshold values are based on Lourenço and Oliveira (2004), so in this study the threshold values of the first six parameters were assumed to be equally applicable to the worship buildings in Antalya. Generally, the values of the worship buildings in the X and Y directions are close to each other.
It is thought that the reason for these similar values is the buildings' plans, which are square and symmetrical. In terms of the average results, all parameters give acceptable results; according to the total parameter formula, the Murat Pasha Mosque, Kurus Koyu Mosque, and Suleymaniye Mosque are at higher risk than the other worship buildings, so it can be said that the other mosques are more reliable under seismic risk. The method and parameters serve as indicators for fast screening, for deciding which historical masonry buildings to prioritize for deeper studies, and for assessing vulnerability to seismicity. In general, the values in the longitudinal (x) and transversal (y) directions are close to each other. The analysis of the parameters shows that a logical common trend can be established. It is very difficult to determine how a masonry building responds to seismic loads. In this regard, seismic analyses should be carried out using analytical and experimental methods. Many historical masonry buildings are protected by the General Directorate of Foundations because of their cultural value. Therefore, the examination of such buildings in many aspects involves a challenging process, and the seismic assessment of the structures should not cause any damage. In this process, it is thought that the parametric seismic evaluation method, which considers the geometric and some structural features of the structures and gives approximate results, will meet the need in the first stage. Conflicts of Interest The authors declare no conflict of interest.
Demeter high resolution observations of the ionospheric thermal plasma response to magnetospheric energy input during the magnetic storm of November 2004 High resolution Demeter plasma and wave observations were available during one of the geomagnetic storms of November 2004, when the ionospheric footprint of the plasmasphere was pushed below 64 degrees in the midnight sector. We report here onboard observations of thermal/suprathermal plasma and HF electric field variations with a temporal resolution of 0.4 s, which corresponds to a spatial resolution of 3 km. Local perturbations of the plasma parameters at the altitude of 730 km are analysed with respect to the variation of the field-aligned currents, electron and proton precipitation, and large-scale electric fields, measured in situ by Demeter and by remote optical methods from the IMAGE/Polar satellites. Flow monitoring in the 21:00 and 24:00 MLT sectors during storm conditions reveals two distinct regions of O+ outflow, i.e. the region of the field-aligned currents, which often comprises a few layers of opposite currents, and the region of velocity reversal toward dusk at sub-auroral latitudes. Average upward O+ velocities are identical in both local time sectors and vary between 200 and 450 m s−1, with the exception of a few cases of higher-speed (∼1000 m s−1) outflow observed in the midnight sector. The individual outflow events do not indicate any heating process of the thermal O+ population. On the contrary, the temperature of the O+ outflowing from auroral latitudes is found to be even colder than that of the ambient ion plasma. The only ion population which is observed to be involved in the heating is the O+ with energies a few times higher than the thermal energy. Such a population was detected at sub-auroral latitudes in the region of the duskward flow reversal. Its temperature rises up to a few eV inside the layer of sheared velocity.
A deep decrease in the H+ density at heights and latitudes where, according to the IRI model, these ions are expected to comprise ∼50% of the positive charge indicates that the thermospheric balance between atomic oxygen and hydrogen was re-established in favour of oxygen. As a consequence, the charge exchange between oxygen and hydrogen does not effectively limit the O+ production in the regions of the electron precipitation. According to the Demeter observations, the O+ concentration is doubled inside the layers with upward currents (downward electrons). Such a density excess creates a pressure gradient which drives the plasma away from the overdense regions, i.e. first from the layers of precipitating electrons and then upward along the layers of downward current. In addition, the downward currents are identified to be the source regions of hiss emissions, i.e. the electron acoustic mode excited via the Landau resonance in the multi-component electron plasma. Such instabilities, which are often observed in the auroral region at 2-5 Earth radii, but rarely at ionospheric altitudes, are believed to be generated by an electron beam which moves through the background plasma with a velocity higher than its thermal velocity.
Introduction The main interest of this paper is the plasma modification in the auroral and sub-auroral regions of the Earth's nightside ionosphere during a strong magnetic storm. Tail stretching and associated processes, like magnetic reconnection, current sheet instabilities, etc., which develop in the tail due to magnetic storms, have a direct impact on the nightside ionosphere (Hultqvist et al., 1999). The energy, which is carried by intense, large-scale electric fields and also by energetic particles, is transformed at ionospheric altitudes into plasma acceleration, heating, ionisation of the neutral population, photoemissions, waves, etc. This transformation, which is height and time dependent, modifies the global balance of the ionosphere. Part of the energy is carried back to the magnetosphere by the charged particles that escape from the Earth's gravity due to their energization along the magnetic field lines. The ionospheric plasma is estimated to be a dominant source of the near-Earth magnetospheric plasma. The ion outflow from the ionosphere is strongly correlated (Yau et al., 1988) with the solar and magnetic activity. The total outflow from both hemispheres reaches ∼10^26 ions s−1 during magnetic storms. A comprehensive review of the large variety of physical processes which are responsible for the plasma exchange between the ionosphere and magnetosphere is presented in the book by Hultqvist et al. (1999) and the paper by André and Yau (1997). However, some ambiguities remain in the mechanisms of the energization of thermal ionospheric plasma and its transport into the magnetosphere. A few attempts were made to correlate the ion outflow with the precipitation of the auroral electrons and of the ring current heavy ions. Yamamoto et al. (1993) and Hirahara et al.
(1998) concluded that the ion beams observed above 5000 km are often associated with bright, discrete auroral signatures, contrary to the ion conics, which were often seen outside the regions of intense electron precipitation or not even tied to any UV emission (Wilson et al., 2001). Wahlund and Opgenoorth (1989) emphasized that intense ion outflows are not correlated with an ion temperature enhancement, but rather with an increased ion production and electron heating. At sub-auroral latitudes, the enhanced O+ production and related upward ion acceleration at heights between 600 and 800 km are suggested (Yeh and Foster, 1990; Torr et al., 1974) to be associated with the ring current energization. According to Torr et al. (1974), the intense ring current O+ ions, which precipitate into the ionosphere, can produce large upward fluxes of energetic atomic oxygen in the top ionosphere. While moving upward these atoms are ionised through collisions and transfer their energy to O+ ions. In the present paper, we analyse the plasma disturbances observed in the regions of auroral precipitation and the subauroral polarisation stream (SAPS). In the literature a westward reversal of plasma flow at sub-auroral latitudes is referred to as "polarisation jets" (Galperin, 1973; Yeh and Foster, 1990), Subauroral Ion Drift (SAID) (Southwood and Wolf, 1978; Anderson et al., 2001) or Subauroral Polarisation Stream (SAPS) (Oksavik et al., 2006). Such streams are formed during magnetically disturbed conditions, have a latitudinal width of a few degrees and a speed of about 500 m s−1, and are suggested to be driven by the polarization electric field created at the outer boundary of the ring current due to the different depths of injection of the plasma sheet ions and electrons. The polarisation field, mapped along the geomagnetic field lines to the subauroral ionosphere, has a poleward direction and therefore drives duskward plasma convection.
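The ∼500 m s−1 SAPS speed quoted above follows directly from the E×B drift, v = E/B, for the mapped polarisation field. A minimal sketch (the 25 mV/m field and 50,000 nT ionospheric field strength used below are assumed, representative values, not measurements from this study):

```python
def exb_drift_speed(E_mV_per_m, B_nT):
    """Magnitude of the E x B drift, v = E/B, for perpendicular E and B."""
    E = E_mV_per_m * 1e-3   # convert mV/m -> V/m
    B = B_nT * 1e-9         # convert nT -> T
    return E / B            # m/s

# A 25 mV/m poleward polarisation field in a 50,000 nT ionospheric
# magnetic field drives a duskward drift of ~500 m/s, matching the
# typical SAPS speed cited in the text.
v_saps = exb_drift_speed(25.0, 50_000.0)
```

The same relation is what turns the SuperDARN line-of-sight Doppler measurements into convection velocities.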
We discuss the current structuring associated with the large-scale electric field modulations, the variation of the O+ density correlated with the intensity of the precipitating electrons, upward electron acceleration and the generated hiss, the escape of the H+ from the top ionosphere and the outflow of the cold O+ population, and plasma acceleration and heating of the suprathermal ions in the sub-auroral regions. We start in Sect. 2 with a brief description of the onboard instruments. An overview of the magnetospheric and then ionospheric modifications during the exceptionally long period of the magnetic storm is made in Sect. 3.1. A thorough analysis of the ionospheric plasma response to the energy input during the substorm development is performed in Sect. 3.2. The main results are summarised in the Conclusions. Instrumentation In this paper we use the observations made by the UV cameras on board the IMAGE and Polar satellites and by the plasma instruments on board the Demeter satellite. The advantage of such a data set is that it provides quasi-simultaneous information about the energy input into the ionosphere and the resulting plasma perturbations at different ionospheric layers, i.e. at the heights of ∼150 km (UV imagers) and 730 km (Demeter). The onboard imagers are optical cameras (Torr et al., 1995; Mende et al., 2000) that detect photoemissions in the ultra-violet frequency range. The IMAGE WIC and Polar LBHS are centred at 150 nm and the Polar LBHL at 170 nm. The Polar imagers both have about a 20 nm bandpass. The bandpass of the IMAGE FUV WIC is much larger, 40 to 50 nm. All these cameras detect the emissions of N2 due to the impact of keV electrons. The Spectrographic Imager (SI/Image) is sensitive to the Doppler-shifted Lyman H-alpha line emission centred at 121.82 nm that is generated by energetic protons.
The Demeter onboard instruments whose measurements are used in the present paper are the retarding (APR) and directional (ADV) ion analysers, the electric field antenna (ICE) and the fluxgate magnetometer.

The APR retarding analyser is designed to measure atomic and molecular ions with densities down to a few particles per cm−3 and relative masses up to ∼56. The entrance of the charged species into the analyser is controlled by polarisation grids. One of them is polarised at −12 V and does not allow electrons with energies lower than 12 eV to reach the collector. The potential applied to another grid is positive and gradually varies between 0 and ∼20 V during each sequence of measurements. The variation of the current collected by the analyser versus the grid potential is used to deduce the ion plasma composition, the concentration of each species, the temperature and the velocity parallel to the analyser axis.

The ADV, like the APR, measures the current due to positive ions. But the grid potentials are fixed and the collector comprises four identical parts. Thus, the combination of the currents measured by each sub-collector allows one to estimate the component of the flow velocity in the plane perpendicular to the analyser axis (which points along the direction of the satellite motion). A detailed description of both instruments and of the models used to deduce the plasma parameters can be found in the papers by Séran (2003) and Berthelier et al. (2006a).

The electric field antenna (ICE) comprises four spherical monopoles. A combination of the dc/ac potentials measured by each sphere allows one to deduce the electric field variations up to a frequency of 3.3 MHz (Berthelier et al., 2006b).

For overview purposes we use magnetic observations in the solar wind by the ACE satellite at the Lagrangian point L1, the ionospheric flow survey by the polar ground-based network of SuperDARN HF radars and the magnetic records of sub-auroral ground-based magnetometers.
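The retarding-analyser principle described above (sweep the grid potential, read the collected current, invert for composition and temperature) can be illustrated with a toy model. The sketch below is a generic retarding-potential-analyser response for a single drifting Maxwellian O+ population; the density, temperature and ram speed are illustrative assumptions, not Demeter values, and the actual APR inversion described by Séran (2003) is considerably more elaborate.

```python
from math import erfc, exp, pi, sqrt

# Toy retarding-potential-analyser (RPA) response: collected current
# versus retarding-grid potential for ONE drifting Maxwellian O+
# population. All parameter values below are illustrative assumptions.
Q = 1.602e-19          # elementary charge, C
K_B = 1.381e-23        # Boltzmann constant, J/K
M_O = 16 * 1.661e-27   # O+ mass, kg

def rpa_current(v_grid, n=1e11, temp=2000.0, v_ram=7.5e3, area=1e-4):
    """Current (A) collected for grid potential v_grid (V).

    Only ions whose ram-direction energy exceeds Q*v_grid pass the
    grid; integrating v*f(v) over those ions for a 1-D drifting
    Maxwellian gives the closed form below.
    """
    vt = sqrt(2 * K_B * temp / M_O)               # thermal speed
    v_min = sqrt(max(2 * Q * v_grid / M_O, 0.0))  # slowest passing ion
    x = (v_min - v_ram) / vt
    flux = n * (vt * exp(-x * x) / (2 * sqrt(pi)) + 0.5 * v_ram * erfc(x))
    return Q * area * flux

# The current rolls off near the O+ ram energy (~5 eV at a 7.5 km/s
# spacecraft speed), which is why a 0-20 V sweep spans the thermal O+.
for v in (0.0, 5.0, 10.0, 20.0):
    print(f"{v:5.1f} V -> {rpa_current(v):.3e} A")
```

The steepness of the roll-off around the ram energy is what encodes the ion temperature in the measured current-voltage curve.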
Magnetospheric/ionospheric response to the magnetic storm

The observations presented in this paper were made on 7 and 8 November 2004 during a huge magnetic storm which commenced at ∼10:00 UT on 7 November and developed over ∼28 h. The magnetic induction in the solar wind measured on board the ACE spacecraft reached 50 nT and stayed at this level for almost 10 h. Under such conditions one can expect an intense mass-loading of the solar wind plasma and consequent plasma transport between ionosphere and magnetosphere. In order to give an overview of the magnetospheric response during the storm development, we present in Fig. 1 the variations of (i) the transpolar potential, ϕ, deduced from the E×B flows measured by the SuperDARN coherent radars in the polar and auroral ionosphere of the Northern Hemisphere, (ii) the Kp index, calculated as the mean value of the horizontal magnetic field disturbances recorded by ground-based magnetometers, together with (iii) the variation of the south-north component, Bz, of the interplanetary magnetic field, monitored by the ACE spacecraft at the L1 point. For the purposes of comparison, the last parameter is shifted by 1 h, which is approximately the propagation time of the solar wind plasma from ACE to Earth. A good correlation between the solar wind dynamics and the magnetospheric response is clearly seen from the presented plots. Each rotation of the interplanetary magnetic field toward the south (Bz < 0) results in an enhancement of the transpolar potential, the amplitude of which is mainly determined by the duration of the southward orientation of the magnetic field. The level of magnetospheric activity represented by Kp follows the variation of the magnetic pressure in the solar wind whenever Bz < 0. The Kp index reaches extremely high levels, i.e.
8-9, at the end of 7 November, and stays high for ∼12 h. This results in the expansion of the polar cap and an associated displacement of the auroral oval to lower latitudes. The UV images recorded by the WIC camera on board the IMAGE satellite at ∼06:30 UT perfectly illustrate this extension (Fig. 2). The emissions that indicate the auroral oval position are localised below 60° magnetic latitude in the nightside sector. During the whole period of storm development, the ADV analyser on board Demeter detects the flow reversal towards dusk which occurs at sub-auroral latitudes in both the 21:00 and 24:00 MLT sectors of the Northern and Southern Hemispheres. The SAPS boundaries are observed to be displaced toward the equator by almost 15° (see Fig. 3). This motion is directly related to the storm intensification (i.e. Kp surpasses 7).

Another component deduced from the ADV measurements is the vertical velocity. Bursts of upgoing O+ (here we call a "burst" a vertical ion motion with a mean velocity higher than 200 m s−1) were observed over a large latitudinal range, i.e. from 30° to 60°. The distribution of such events versus time is represented in Fig. 3 by dots. The difference between their locations with respect to the SAPS boundaries in the two local time sectors is remarkable. In contrast to the broad extension of outflow events over auroral/sub-auroral latitudes in the midnight sector, the mid-evening outflow is limited to higher latitudes. We see at least two likely reasons for the outflow observations in the sub-auroral regions, i.e. (i) the duskward electric field, which drives the upgoing O+ equatorward from the auroral source regions, and/or (ii) a sub-auroral source of the O+ outflow. The average vertical speed of the detected O+ bursts is estimated to vary between 200 and 450 m s−1 in both magnetic local time sectors, i.e. 21:00 MLT and 24:00 MLT, with the exception of a few events with higher speed, i.e.
∼1000 m s−1. Each of the high-speed events is mapped to the region of the field-aligned currents and corresponds to currents with densities of 5-9 µA m−2, i.e. about 2 times higher than in the majority of the observed events.

Midnight observations during conditions with Kp = 8

In order to place the observations in a global context, let us come back to Fig. 2, which shows two consecutive WIC images made at 06:25 and 06:31 UT on 8 November. The images, which cover almost entirely the high latitudes of the night part of the Southern Hemisphere, illustrate the growth phase of a substorm development. A narrow spot of precipitating electrons, which appears in the first image between 21:00 and 22:00 MLT, then expands towards the pole and dusk. This expansion is accompanied by an intensification of the emission in the post-midnight sector. The relative intensity of the emissions at the two wavelengths, i.e. 150 and 170 nm, recorded by the UVI camera on board the Polar satellite, which covered the same region as the IMAGE cameras and made its observations at the same time, allows one to estimate (using the method developed by Germany et al., 1998) the associated energies of the precipitating electrons. We found 9 and 5 keV in the 21:00 and 24:00 MLT sectors, respectively. During this time Demeter moves from higher latitudes toward the equator and makes observations in the magnetic sector between 00:02 and 00:22 MLT, i.e. it crosses the part of the auroral oval which is typically associated with the downward currents that connect the tail and the ionosphere.
In order to compare the two data sets, the latitudinal distributions of the emissions detected by the WIC and SI12 cameras inside the magnetic local time sector of the Demeter path are drawn in Figs. 4a and 4b, respectively, together with the density distribution of the field-aligned currents deduced from the magnetic perturbations measured on board Demeter (blue line). The currents of both polarities (upward and downward, represented, respectively, by the negative and positive density) co-exist and have approximately the same magnitude, i.e. 2-3 µA m−2. The H-line emission (red line in Fig. 4b) closely follows the latitudinal distribution of the field-aligned current, i.e. the emission intensity is increased inside the layers of downward current and decreased in the regions of upward current. These observations give evidence that the precipitating protons, which produce the emissions, follow the currents that circulate between the tail and the ionosphere and probably contribute to them. The N2 emissions (red line in Fig. 4a) encompass the magnetic latitudes between −56° and −48°. With respect to these emissions, the field-aligned currents are observed in a broader region extending toward the pole. We note that the emission intensification does not exactly correlate with the upward current distribution. Such a discrepancy can have different reasons. We mention just a few of them:

- field-perpendicular diffusion of the energetic electrons due to collisions in the lower ionosphere;

- integration of emissions over the large interaction layer along the line of sight, which significantly deviates from the local magnetic field direction;

- spatial and/or temporal variations in the distribution of thermospheric/ionospheric populations.

Under similar conditions Peterson et al.
(1994) have shown that there is a significant increase in molecular ions, such as NO+, O2+ and N2+. Unfortunately, the APR analyser on board Demeter cannot distinguish these species in the auroral regions, which are populated with suprathermal electrons. Designed to measure the (positive) ion currents, the analyser is protected from electron collection by a negatively polarised grid. However, the grid potential, which is fixed at −12 V, is not sufficient to stop electrons with energies higher than 12 eV. This results in the cancellation of the positive current produced by minor positively charged species whenever the flow of suprathermal electrons exceeds the flow of the minor ions. For example, in the considered case the suprathermal electrons are estimated to comprise up to 20% of the total density. Therefore, ions with densities below 4000 cm−3 are not resolved.

Contrary to a basic model of the current system, which predicts a couple of anti-parallel field-aligned currents flowing between the tail and ionosphere in the midnight sector, with the downward current poleward with respect to the upward current, several current layers are observed in the region between −62° and −48° magnetic latitude. Each layer, with a width of ∼400 km, comprises a pair of anti-parallel currents, i.e. downward poleward and upward equatorward. Such multi-structuring observed at ionospheric heights most likely reflects the spatial configuration of the source region and/or its temporal evolution.

How does the ionosphere respond to the energy/mass input? With the intention of finding an answer, let us analyse the variation of the thermal ion parameters inside the auroral and sub-auroral regions, taking advantage of the Demeter observations.

Ion plasma convection

Two components of the ion velocity, i.e. horizontal (in red) and vertical (in blue), measured by the ADV analyser in the plane perpendicular to the satellite orbital motion, are presented in Fig.
5. The velocities are positive/negative when the ions move dawnward/duskward and downward/upward, respectively. At latitudes poleward of −45°, the ionospheric plasma flows dawnward. This motion is associated with the plasma convection across the polar cap driven by the solar wind and the subsequent plasma return along the magnetospheric boundaries. The observed component of the convection speed (red line) stays high, i.e. about 1000 m s−1, over the whole region that encompasses the field-aligned currents. This convection is associated with an equatorward electric field of ∼40 mV m−1. The large-scale changes in the azimuthal velocity with an amplitude of ∼200 m s−1 correlate with the current variation (green line in the same figure). Note that the maximum current intensity, in general, corresponds to the maximum velocity gradient, but the magnetic induction, |B|, does not. Thus, these waves are non-compressional and are presumably driven by the field-aligned currents.

The flow reversal toward dusk is observed at latitudes equatorward of −45° and has a latitudinal width of about 800 km, with a high-speed channel which extends over 250 km. The observed duskward flows are suggested to be driven by the polarisation electric field established at the outer edge of the ring current and caused by a deeper injection of the plasma sheet ions into the ring current with respect to the electrons. Thus, the typical scale of such charge separation is of the order of the O+ gyrodiameter, i.e. about 500-1000 km. Mapped to ionospheric altitudes, this field has a poleward direction, which is opposite to that which drives the post-midnight convection.

Thermal balance

Presented in Fig.
6a is the O+ temperature deduced from the APR data, together with the ion, electron and neutral temperatures given by the IRI model. The unusually high (with respect to the model prediction) average O+ temperature is most likely caused by the unusually long and intense period of the magnetic storm. We can cite a few mechanisms which might contribute to the heating of the nightside high-latitude ionosphere:

- electron precipitation with consecutive heating of thermal electrons in the lower ionosphere;

- frictional heating in the regions of strong convection;

- heat convection from the day- to the nightside ionosphere, driven by the plasma convection across the polar cap;

- solar wind entry along the open field lines.

But there are other sources which also contribute to the thermal state of the ionosphere. In the example shown, significant temperature variations are observed in the regions that envelope the layers of the field-aligned currents. A deep temperature decrease around ∼6.435 and 6.46 corresponds to the moments of the O+ outflow. At the Demeter heights, the thermal exchange between the outflowing and ambient plasma will likely result in a cooling of the local plasma. Two thermal regimes are clearly distinguished from both curves, which present the results of modelling and observations. The ion temperature is significantly higher in the regions of the polar cap and jumps down across the current layers. A remarkable feature observed at the sub-auroral latitudes and associated with the SAPS boundaries is the heating of the suprathermal population. Typical observations of such heating are shown in Fig. 7a, b. The distribution of the current collected by the APR detector versus the potential applied to the retarding grid illustrates the instrument response in a plasma with two co-existent O+ species, i.e.
cold, with a temperature of fractions of an eV, and warm, with a temperature of a few eV. The fact that the heated O+ comprises ∼2% (i.e. ∼400 cm−3) of the thermal ions and that the main part of the distribution function stays at the same temperature demonstrates that only suprathermal particles with thermal velocities higher than 2V_T are involved in the heating process. The two examples presented in Fig. 7 correspond to the moments indicated by the arrows in Fig. 5, i.e. outside (a) and inside (b) the SAPS convection. Suprathermal heating commences at about 6.49, i.e. at the edge of the layer with strongly sheared velocity. The temperature of the suprathermal O+ is gradually amplified and reaches ∼5 eV. Of course, the observed 5 eV is a lower limit of the energy required to overcome the Earth's gravity. Nevertheless, this example indicates a potential source of the O+ outflow, i.e. heating associated with the flow reversal at sub-auroral latitudes.

Plasma composition and density

According to the IRI model, two major positive species, i.e. O+ and H+, are expected to populate the considered latitudes and heights of the midnight ionosphere. The density of O+, which is the dominant ion below 700 km, drops with height, and the charge neutrality at higher altitudes is mainly maintained by the H+ species. At 700 km, H+ is expected to comprise ∼50% of the positive charge. However, this ion is not distinguished in the APR measurements and therefore its density does not exceed ∼1% of O+. A possible displacement of H+ to higher altitudes will result in a re-establishment of the ionospheric height scales and of the thermospheric balance between oxygen and hydrogen.

In such a situation the charge exchange between O+ and hydrogen does not effectively limit the O+ production in the regions of electron precipitation. Thus, one can expect an increase in the O+ concentration in the layers of upward current. Indeed, as shown in Fig.
6b, the variation of the O+ density (in red) together with the field-aligned current density (in blue) confirms this conclusion. In order to make a straightforward comparison of the two plots, the sign of the current was changed to the opposite with respect to that in Figs. 4 and 5. Thus, a positive value means an upward current.

As explained before, the APR analyser, which was designed to measure positively charged species, is not protected from the collection of suprathermal electrons with energies higher than 12 eV. In the auroral regions, where this population is significant, the high-energy electrons that reach the analyser create a negative current which compensates the current carried by the minor ion populations. The negative current amounts to up to 20% of the positive ion current. Plotting the electron current densities deduced from the APR data (black line in Fig. 6c) and those estimated from the magnetometer measurements (blue line), we note a similarity in their variations. Each enhancement of the electron current deduced from the retarding analyser is associated with an upward current seen by the magnetometer. The observed differences in amplitude and variations could be caused by at least two reasons. First, the magnetic perturbations which are used to deduce the total current are not necessarily induced uniquely by the crossed current structure, but reflect an ensemble of the nearby currents. In contrast, the negative current collected by the ion analyser is due to the local electron flow that reaches the collector. Second, the angle between the analyser axis and the magnetic field varies between 85° and 75° during the considered period. Having an acceptance angle of 106°, the analyser will collect the electrons with a velocity vector which makes an angle larger than 22° with respect to the magnetic field direction. Therefore, the collected negative current contains two components, i.e.
the first is due to the field-aligned electron flow and the second is due to the field-perpendicular motion. The second component does not contribute to the total current, since in the case of an isotropic field-perpendicular particle distribution it is compensated by the opposite electron motion. As noted above, the field-aligned current region comprises a few layers of opposite currents. The O+, driven away from the overdense layers of upward current, is then confined in the layer of downward current and pushed upward. The fact that, in spite of a strong convection, the upgoing O+ is observed in the regions of the field-aligned currents indicates that the plasma mainly drifts along the current layers and/or that the upward acceleration takes place not far from the heights of the Demeter observations (Yeh and Foster, 1990).

Waves

Supplementary information about the plasma processes developed in the auroral region can be obtained from the HF electric field measurements. Remarkable wide-band emissions with an upper cutoff at 500 kHz are observed (Fig. 8) at the moments when the satellite crosses the current layers. The excited frequencies lie well below the local electron cyclotron and plasma frequencies and well above the ion frequencies. The spectral width of the emissions is about 100 kHz at the flanks of the current region and grows up to 400 kHz in the vicinity of the layers with downward current. Such waves have properties similar to those observed in the auroral regions at 2-5 R_E, referred to as hiss (see, for example, Gurnett et al., 1983; Lin et al., 1984), and are believed to be generated locally by the upgoing electron beams. The distinctive funnel-shaped frequency-time structure of hiss is explained (Lin et al., 1984) as a propagation effect, i.e.
the group velocity of the low-frequency waves points along the magnetic field lines, while that of the high-frequency waves is directed at larger angles with respect to the field lines, in association with the beam motion. The large spectral width of the observed emissions is probably due to the large extension of the source along the field lines. In a multi-component electron plasma the electron acoustic instability is excited via the Landau resonance and has an upper frequency cutoff which approximately corresponds to the plasma frequency of the secondary electron population (Tokar and Gary, 1984). The observed upper frequency cutoff of 500 kHz gives a density estimate of ∼3×10^3 cm−3, which represents ∼15% of the local plasma density at 700 km. The growth of the hiss mode requires that the resonance velocities lie in the region with a sufficiently positive derivative of the distribution function, i.e. above the thermal electron speed.

Conclusions

The principal purpose of this paper is to take advantage of simultaneous IMAGE-Polar-Demeter observations and to perform a detailed analysis of the ionospheric modifications due to intense energy/mass exchange during substorm development. Remote observations made by the UV cameras on board IMAGE and Polar give quasi-instantaneous images of the photoemissions produced by the collisions and charge exchanges between energetic particles of magnetospheric origin and the species that populate the low thermosphere. Even if the emission intensities are highly representative, since they carry indirect information related to the energy and intensity of the precipitating electrons and protons (or, more precisely, their energetic tails), only a small part of the input energy goes into photon emission. The main part of the energy is released into collisions, plasma acceleration and heating, which subsequently result in a change of the entire conducting layer and in the return of energy/mass back to the magnetosphere. In-situ observations on board the Demeter
satellite in the upper ionosphere allow one to study the plasma variations in the regions of the field-aligned currents and the sub-auroral polarisation stream. Simultaneous measurements by Demeter and IMAGE demonstrate that the observed current variations are caused by the spatial structuring of the source region and are not temporal or propagation effects. Thus, the field-aligned current region is revealed to comprise a few layers of anti-parallel currents. Enhanced O+ production inside the regions of electron precipitation creates plasma pressure gradients that cause the ions to move away from the overdense layers, i.e. first perpendicular to the current sheet boundaries and then along them and upward. This scenario is illustrated by a schematic drawing presented in Fig. 9. The temperature of the outflowing O+ is found to be colder than that of the ambient plasma.

Hiss emissions, which are regularly observed in the regions of the intense field-aligned currents with the source located inside the layers of downward current, give indications of the upward motion of the secondary electron population, with velocities higher than the electron thermal speed, i.e. ∼300 km s−1. The generation of these emissions in the ionosphere is a distinguishing signature of intense electron exchange between the magnetosphere and ionosphere.

Observed in the region of the sub-auroral polarisation stream, the suprathermal O+ is produced inside the SAPS boundary layer, which is characterised by strongly sheared velocities. The suprathermal population is heated up to a few eV, i.e. temperatures which are typical for plasmaspheric ions. Thus, we suggest that the reversal of plasma convection driven at sub-auroral latitudes by the ring current polarisation field can be considered a potential source region of the O+ outflow from the ionosphere.

Fig. 6. Plasma parameters measured by the APR analyser during the same period as in Fig.
5, i.e. (a) O+ temperature (in black), together with the electron (in violet), ion (in red) and neutral (in green) temperatures deduced from the IRI model; (b) O+ density (in red), together with the current density (in blue), as deduced from the magnetic perturbations; (c) current density due to the electrons with energy higher than 12 eV. Regions of O+ outflow are indicated by horizontal arrows. The layers of anti-parallel currents are highlighted by vertical dashed lines and the current direction is indicated by arrows.

Fig. 7. Current distribution versus potential applied to the retarding grid measured by the APR shows two populations of the O+ species, i.e. cold and warm. The moments of the observations are indicated by arrows in Fig. 5. The temperature of the warm population is gradually increased inside the SAPS boundary layer.
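The density estimate quoted in the Waves section (∼3×10^3 cm−3 from the 500 kHz hiss upper cutoff) follows directly from equating the cutoff to the plasma frequency of the secondary electron population, as argued via Tokar and Gary (1984). A minimal numerical check, using standard physical constants:

```python
from math import pi

# Invert the electron plasma frequency for density:
# f_pe = (1/2pi) * sqrt(n e^2 / (eps0 m_e))  ->  n = (2pi f)^2 eps0 m_e / e^2
E_CHARGE = 1.602e-19   # C
M_E = 9.109e-31        # kg
EPS0 = 8.854e-12       # F/m

f_cut = 500e3          # Hz, observed hiss upper cutoff

n_m3 = (2 * pi * f_cut) ** 2 * EPS0 * M_E / E_CHARGE ** 2
n_cm3 = n_m3 * 1e-6
print(f"n ≈ {n_cm3:.0f} cm^-3")   # ≈ 3.1e3 cm^-3, matching the quoted ~3x10^3
```

At ∼15% of the ambient density, this implies a local plasma density of ∼2×10^4 cm−3 at 700 km, consistent with the topside-ionosphere values discussed in the text.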
Bilayer ventilated labyrinthine metasurfaces with high sound absorption and tunable bandwidth

The recent advent of acoustic metamaterials offers unprecedented opportunities for sound control in various situations, whereas it remains a challenge to attain broadband high sound absorption and free air flow simultaneously. Here, we demonstrated, both theoretically and experimentally, that this problem can be overcome by using a bilayer ventilated labyrinthine metasurface. By altering the spacing between two constituent single-layer metasurfaces and adopting asymmetric losses in them, near-perfect (98.6%) absorption is achieved at the resonant frequency for sound waves incident from the front. The relative bandwidth of the absorption peak can be tuned in a wide range (from 12% to 80%) by adjusting the open area ratio of the structure. For sound waves from the back, the bilayer metasurface still serves as a sound barrier with low transmission. Our results present a strategy to realize high sound absorption and free air flow simultaneously, and could find applications in building acoustics and noise remediation.

Results

Structures. The designed structure consists of two ventilated labyrinthine metasurfaces with a spacing d, as shown in Fig. 1a,b. Both metasurfaces are made of a rigid body and immersed in air. The building block of each metasurface has a size a in both the x and y directions, and a thickness b in the z direction. Inside the building block, there exist a ventilation duct along the z direction and a one-end-closed channel curled in the x-z plane. The ventilation duct has a rectangular cross section with area S_o = (a − t)(a − h), with t being the wall thickness. The curled channel has an effective length L ≈ Nb, a width w = (h − Nt − t)/N, a folding number N = 2, a rectangular cross section with area S_c = (a − t)w, and an aperture near the surfaces of the bilayer metasurface.
Such a curled channel constitutes a labyrinthine resonator with a fundamental resonant frequency f_R = c/λ_R, where c is the sound speed in air and λ_R ≈ 4L is the fundamental resonant wavelength.

Theory. Consider the bilayer metasurface impinged by a plane sound wave with frequency f at incident angle θ. For wavelengths longer than 2a (λ = c/f > 2a), only fundamental modes exist in the ventilation ducts and curled channels, and no diffracted propagating waves are generated by the structure (Fig. 1b). At the left side of the bilayer metasurface, the left-ward (right-ward) propagating wave has a complex amplitude C_1 (D_1) in sound pressure. At the right side of the bilayer structure, the complex amplitude of sound pressure is C_2 (D_2) for the left-ward (right-ward) propagating wave. Since the sound pressure and particle flow need to be continuous at each structural interface, the field (C_2, D_2) at the back (i.e. right) can be related to the field (C_1, D_1) at the front (i.e. left) by a 2 × 2 transfer matrix M. In this factorisation, one factor is the transfer matrix for the region between the two metasurfaces, with k = 2π/λ the wavenumber in air, and P_1 U_1 and U_2 P_1 are the transfer matrices for the front and back metasurfaces. The matrix P_1 is given by an equation (not reproduced in this excerpt) where n is a positive integer [18].

(Fig. 1 caption fragment: a unit cell of the bilayer metasurface, consisting of two curled channels with sound losses β_1 and β_2 inside, is placed in an acoustic impedance tube. (d) Simplified models for cases with perfect absorption (I) and incomplete absorption (II).)

We note that when the back metasurface has no absorption loss (β_2 = 0), it can present complete reflection at the resonant frequency f_R. When the openings of the two labyrinthine resonators in a unit cell are separated by a distance of (2n − 1)λ_R/4, the back resonator can cause zero particle velocity at the opening of the front resonator at f_R. To further obtain complete absorption at f_R, a critical sound loss (β_1 = β_1m) is required in the front curled channels.
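The quarter-wavelength relation above can be checked against the fabricated sample reported later in the paper (b = 107 mm, N = 2, measured f_R = 368 Hz). The sketch below assumes a room-temperature sound speed of 343 m/s; the ~10% offset from the measured value is expected, since L ≈ Nb is only approximate and end and thermoviscous corrections are ignored.

```python
# Quarter-wavelength estimate of the labyrinthine resonance:
# f_R = c / lambda_R, with lambda_R ≈ 4L and L ≈ N*b.
c = 343.0    # m/s, assumed sound speed in air at room temperature
b = 0.107    # m, block thickness from the Experiments section
N = 2        # folding number of the curled channel

L = N * b              # effective channel length, 0.214 m
f_R = c / (4 * L)
print(f"f_R ≈ {f_R:.0f} Hz")   # ≈ 401 Hz vs. 368 Hz measured
```

The overestimate illustrates why the effective length of a folded channel is usually taken slightly larger than Nb in practice.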
Simulations. To verify the above theory, we perform simulations for bilayer ventilated labyrinthine metasurfaces which have h = 0.5a, t ≪ a, and thus an open area ratio p_o = 0.5. Sound waves are incident normally from the front (θ = 0). We first consider the case with zero spacing between the two metasurfaces (d = 0); the corresponding results are shown in Fig. 2b. We see that the absorption A_R approaches a maximum A_Rm (99.5%) at β_1 = β_1m = 0.161 and β_2 = β_2m = 0, agreeing well with the analytic results from Eqs. (6) and (7) (β_1m = 0.159 and β_2m = 0). We note that A_R remains high around the optimal losses (β_1m, β_2m). If the front and back metasurfaces possess the same optimal losses (β_1 = β_2 = 0.148) in the curled channels, the maximal absorption can still be as high as 95% (see Fig. 3). But if only the front metasurface exists, the maximal absorption decreases to 71% even with an optimal loss (β_1 = 0.108) (see Fig. 3). More results are shown in Fig. 2c for bilayer metasurfaces with different spacings d. It is found that the maximal absorption A_Rm varies with increasing spacing d. Near-perfect absorption (A_Rm > 99%) can be achieved in some ranges, including 0 < d < 0.24L and 1.94L < d < 2.24L. But if the spacing is not in such optimized regions, low absorption will occur. For instance, at d = 0.66L, even with optimal losses (β_1m = 0.321, β_2m = 0.115), the absorption A_Rm is only 59.3%, indicating the importance of the spacing d for achieving perfect absorption. The corresponding absorption spectrum is plotted as curve II in Fig. 2a, where two wide resonant peaks are visible at frequencies of 0.208c/L and 0.281c/L. It should also be mentioned that a very sharp absorption peak can occur at a critical spacing (d = 0.7L) with tiny optimal losses of β_1m = 0.0073 and β_2m = 0 (see curve III in Fig. 2c).
We note that such a high-Q resonant mode can be viewed as an acoustic quasi-bound state in the continuum (BIC), which can exist in various acoustic systems [43]. To clarify the above results, the distribution of particle velocity in a unit cell of the bilayer metasurface is simulated at the resonant frequency using a finite-element method (COMSOL Multiphysics), as shown in Fig. 2d. When d = 0, the openings of the two labyrinthine resonators in the unit cell are separated by about a quarter of the resonant wavelength (∼λ_R/4). Since a large particle velocity occurs at the opening of the back resonator, zero velocity can be obtained at the opening of the front resonator (see case I in Figs. 1d and 2d). Therefore, a single absorption peak is visible (curve I in Fig. 2a) and its strength can be 100% with an appropriate loss in the front resonator. In contrast, if the spacing between the two metasurfaces is not appropriately chosen, the ventilation duct can also serve as a resonator, but with zero loss (β = 0) (see case II in Figs. 1d and 2d). Hence, two resonant peaks occur with imperfect strengths (see curve II in Fig. 2a).

Experiments. Based on the above theoretical results, we fabricated bilayer ventilated labyrinthine metasurfaces from polylactic acid (PLA) by means of 3D-printing technology (see Fig. 4a,b and Methods). The structural parameters are a = 43 mm, b = 107 mm, t = 1 mm, d = 0, and p_o = 0.5. By using an impedance tube with a square cross section (see Figs. 1c, 4c, and Methods), the reflection, transmission, and absorption spectra were measured for a unit cell, as shown in Fig. 5a-c. For the unit cell, the back resonator is hollow whereas the front one contains an appropriate amount of porous media (with a mass of 0.1498 g; see Methods). It is found that for sound waves incident from the front, the unit cell exhibits a low reflection (17%) in a wide frequency range (100-800 Hz).
Very low reflection (R = 0.2%), low transmission (T = 1.2%), and near-perfect absorption (A = 98.6%) can be seen at the resonant frequency (f_R = 368 Hz). For sounds incident from the back, a lower absorption (67%) is found at the resonant frequency due to the asymmetric loss of the unit cell (β_1 > β_2). It should also be mentioned that almost the same transmission is observed for sounds from the front and back, agreeing with the theoretical expectation. Numerical calculations were conducted for the above sample based on the transfer-matrix method (Fig. 5d-f). In the calculations, a thin layer of air (thickness = 0.6 mm) is considered to be static at the surface of each wall. We calculated absorption spectra for different losses (β_1, β_2). It is found that when β_1 = 0.11 and β_2 = 0.03 are used in the front and back curled channels, the strength and width of the calculated absorption peak match the experimental values. Since a slight sound loss (β_2 = 0.03) exists in the back channel, the back resonator cannot provide complete reflection. Hence, for sound incident from the back, a considerable absorption (A_back = 67%) is observed in experiments. If an ideal asymmetric absorption (i.e., A_back = 0 and A_front = 100%) is desired, the back channel should exhibit zero sound loss (β_2 = 0).

[Figure 2 caption, panels (b)-(d): (b) A_R as a function of β_1 and β_2 for bilayer metasurfaces with d = 0; A_R reaches a maximum A_Rm at the optimal losses (β_1 = β_1m, β_2 = β_2m). (c) β_1m, β_2m, and A_Rm as a function of the layer spacing d. (d) Simulated distributions of |v|/|v_0| in the x-z plane for structures I and II at f_R in (a), where v represents the particle velocity and v_0 is the particle velocity of the incident sound wave.]

Besides the absorption strength, the absorption bandwidth is also important for a sound absorber.
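As a hedged illustration of the transfer-matrix method used for such calculations, the one-dimensional sketch below models each labyrinthine layer as a lossy quarter-wave side channel (a shunt admittance on a ventilation duct). The channel length, spacing, open-area fraction, and loss values are hypothetical, and the model is far simpler than the actual geometry; it only reproduces the qualitative mechanism described in the text.

```python
import numpy as np

RHO, C = 1.204, 343.0                 # air density (kg/m^3) and sound speed (m/s)
Z0 = RHO * C                          # characteristic impedance of the duct

def shunt_channel(f, L, beta, phi):
    """Transfer matrix of a side channel of length L with loss factor beta,
    occupying the area fraction phi (admittance form, hence no 1/Z blow-up)."""
    k = 2 * np.pi * f / C
    Y = 1j * (phi / Z0) * np.tan(k * L * (1 - 1j * beta))
    return np.array([[1, 0], [Y, 1]], dtype=complex)

def air_gap(f, d):
    """Transfer matrix of a plane-wave duct segment of length d."""
    k = 2 * np.pi * f / C
    return np.array([[np.cos(k * d), 1j * Z0 * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z0, np.cos(k * d)]], dtype=complex)

def rta(f, L, d, beta1, beta2, phi=0.5):
    """Reflectance, transmittance, and absorption of the bilayer sketch."""
    M = shunt_channel(f, L, beta1, phi) @ air_gap(f, d) @ shunt_channel(f, L, beta2, phi)
    (A_, B_), (C_, D_) = M
    t = 2.0 / (A_ + B_ / Z0 + C_ * Z0 + D_)   # anechoic termination
    r = A_ * t + B_ * t / Z0 - 1.0
    R, T = abs(r) ** 2, abs(t) ** 2
    return R, T, 1.0 - R - T
```

With the channel resonance at f_0 = c/(4L) and a lossless back channel placed a quarter wavelength behind a suitably lossy front channel, the back layer blocks transmission while the front layer absorbs nearly all incident energy, mirroring the mechanism discussed above.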
For the above sample, the measured absorption is higher than 50% for frequencies ranging from 305 Hz to 435 Hz, corresponding to a relative bandwidth Δf/f_R = 36% with a full width at half maximum of Δf = 129.4 Hz. We note that the thickness of the bilayer metasurface is on a subwavelength scale (∼λ/5) compared with the lower limit (305 Hz) of the absorption band. In addition, the relative absorption bandwidth here is much larger than (7 times) that of recent results using a pair of Helmholtz resonators in the unit cell 37. If the same wide absorption …

The ventilation performance is further characterized for the bilayer labyrinthine metasurfaces. Here, a unit cell of the metasurface is placed in an aluminum tube with a 44 mm square cross section. One end of the tube is seamlessly connected to the outlet pipe of an electric blower (9028E, Anjieshun, China). An anemometer (DP-1000-IIIB, Yiou, China) is used to monitor the air-flow velocity at a fixed position in the tube. When the tube is hollow, the wind velocity is v_wo (8 and 15 m/s were tested in our experiments). When the unit cell is placed in the tube, the wind velocity becomes v_w. Thus, a ventilation rate can be defined as the ratio of wind velocities (v_w/v_wo) 27,37. Figure 7b shows the measured wind velocity ratio for different unit cells. We can see that the wind velocity ratio increases with increasing open area ratio of the structure. For the above five samples, the ventilation rates are larger than 0.6.

Discussion
In summary, we design, fabricate, and characterize a bilayer ventilated labyrinthine metasurface for perfect sound absorption and free air flow. Both a ventilation duct and a curled single-port channel exist in the constituent single-layer metasurface.
By using asymmetric losses in the two single-layer metasurfaces and adjusting their spacing, perfect sound absorption is achieved at the resonant frequency for sound waves incident from the front. The measured peak absorption is as high as 98.6% even at a relatively large open area ratio (50%). By tuning the open area ratio, the relative absorption bandwidth can be adjusted over a large range (from 12% to 98%). For sounds incident from the back, the bilayer metasurface still serves as a sound barrier with low transmission and partial reflection/absorption. Our work provides a strategy for achieving broadband perfect sound absorption in ventilated structures and could be extended to other fields such as electromagnetic waves and water waves.

Methods
Sample preparations. Each experimental sample is composed of two boxes, and each box contains a labyrinthine structure (see Fig. 1a) and a side plate. The two parts are first fabricated from PLA by 3D-printing technology and then glued together. The porous medium placed in the box is polyester pillow stuffing (white 15 d × 51 mm HCS 100% polyester PP staple fiber filling).

Sound absorption measurements. A commercial impedance tube (Hangzhou Aihua, AWA6290T) was applied to measure the absorption of the acoustic metasurfaces (see Figs. 1c and 4c). Here, a unit cell of the bilayer metasurface was placed in an aluminum impedance tube, which has a total length of 2.9 m, a wall thickness of 3 mm, and a square cross section with an inner size of 44 mm. A loudspeaker is placed at the left end of the impedance tube. The right part of the impedance tube is filled with sound-absorbing materials over a length of 1.4 m, so that the resulting reflection is less than 1.5% in the frequency band of interest (f > 250 Hz). Four 1/4-in. condenser microphones are situated at designated positions to sense the local pressure.
The loudspeaker was fed with a sinusoidal signal whose frequency increases with time. By analyzing the signals from the microphones, the reflection, transmission, and absorption spectra can be obtained for the unit cell.
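As an illustration of how such microphone signals yield reflection (and hence absorption) spectra, the standard two-microphone transfer-function method can be sketched as follows. The geometry and the synthetic test field are assumptions for the demonstration, not the authors' exact four-microphone processing.

```python
import numpy as np

def reflection_two_mic(p1, p2, x1, x2, f, c=343.0):
    """Estimate the complex reflection coefficient at the sample surface (x = 0)
    from pressure phasors p1, p2 measured at distances x1 > x2 from the sample.
    Two-microphone transfer-function method; it degenerates when the mic
    spacing equals a multiple of half a wavelength (k*(x1-x2) = n*pi)."""
    k = 2 * np.pi * f / c
    s = x1 - x2                       # microphone spacing
    H12 = p2 / p1                     # transfer function between the two mics
    return ((H12 - np.exp(-1j * k * s))
            / (np.exp(1j * k * s) - H12) * np.exp(2j * k * x1))
```

Feeding the routine a synthetic standing-wave field p(x) = e^{jkx} + R·e^{−jkx} with a known R recovers that R; the absorption coefficient then follows as α = 1 − |R|².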
Parameter identification of a delayed infinite-dimensional heat-exchanger process based on relay feedback and root loci analysis

The focus of this contribution is twofold. The first part aims at a rigorous and complete analysis of the pole loci of a simple delayed model, the characteristic function of which is represented by a quasi-polynomial with a non-delay and a delay parameter. The derived spectrum constitutes an infinite set, making it a suitable and simple-enough representative of even high-order process dynamics. The second part applies the simple infinite-dimensional model to relay-based parameter identification of a more complex model of a heating-cooling process with heat exchangers. Processes of this type and construction are widely used in industry. The identification procedure has two substantial steps. The first one adopts the simple model with low computational effort using the saturated relay, which provides a more accurate estimation than the standard on/off test. This result is then transformed into an estimate of the initial characteristic-equation parameters of the complex infinite-dimensional heat-exchanger model using exact dominant-pole-loci assignment. The benefit of this technique is that multiple model parameters can be estimated under a single relay test. The second step attempts to estimate the remaining model parameters by various numerical optimization techniques and also to enhance all model parameters via the Autotune Variation Plus (ATV+) relay experiment for comparison. Although the obtained unordinary time- and frequency-domain responses may yield satisfactory results for control tasks, the identified model parameters may not reflect the actual values of process physical quantities.

2. A saturated relay feedback experiment is performed on the SFOTDM.
The detected pole loci are then set as the initial parameter guess for the HETDM (as a sufficiently accurate mathematical model of the circuit system with heat exchangers), which is enhanced via the LM method using the single relay-test data.
3. Three scenarios are compared to determine the remaining model parameters and to further enhance the already estimated ones via the ATV+ technique and the solution of a nonlinear optimization problem using the LM and NM algorithms.
4. An independent determination of the numerator and denominator transfer-function coefficients, along with the pole loci assignment, makes it possible to reduce the number of necessary relay tests for some of the scenarios.

The rest of the paper is organized as follows. "Methods and techniques" summarizes the theoretical fundamentals of the (retarded) TDM spectrum and the feedback relay-based experiment, model parameter identification using a DF, and the LM and NM methods. "Results" has two fundamental subsections. The first subsection provides the reader with a detailed analysis of the SFOTDM pole loci. The second one presents the HETDM and all the steps of determining its parameter values. Namely, the mathematical model of the HETDM is introduced; then the reader is acquainted with the pole assignment, the transfer-function numerator estimation using a single relay test, and the complete model parameter estimation via the ATV+ technique. In "Discussion", the obtained results are discussed. Finally, "Conclusions" concludes the paper. The standard notation is used throughout the paper, i.e., C, N, R denote the sets of complex, natural (excluding zero), and real numbers, respectively; R^n_+ expresses the n-dimensional Euclidean space of positive real-valued vectors; and Re(s) and Im(s) mean the real and imaginary parts of some s ∈ C, respectively. Superscript T denotes the vector (matrix) transpose.

Methods and techniques
Retarded quasi-polynomial and its spectrum.
Let us concisely introduce the Retarded Quasi-Polynomial (RQP) form and its spectrum, i.e., its zero points 9,14,15. An RQP has the following form

q(s) = s^n + Σ_{i=0}^{n-1} Σ_{j=0}^{m_i} q_ij s^i e^{−τ_ij s},

where s ∈ C is the Laplace transform variable, q_ij ∈ R are non-delay parameters, τ_ij ∈ R_+ with τ_i0 = 0 represent delays, and n ∈ N means the RQP order (of derivative).

Definition 1 The RQP spectrum Σ is the set of RQP zeros.

2. RQP zeros s_k ∈ Σ are isolated, and the function (q_ij, τ_ij) ↦ s_k ∈ C is continuous.
3. For any finite γ ∈ R, the subset Σ_R = {s ∈ Σ : Re(s) > γ} is finite, while Σ_L = {s ∈ Σ : Re(s) ≤ γ} is infinite. □

Note that the relation (q_ij, τ_ij) → s_k is not necessarily smooth; namely, in points where a multiple real root bifurcates into a complex pair.

Definition 2 The RQP spectral abscissa is defined as the supremum of Re(s) over s ∈ Σ.

Relay-based parameter identification. As introduced above, experimental plant identification using the relay (or another simple nonlinear element) represents a widely used technique in various engineering and industrial applications. Consider a plant (whose model is to be identified) under relay feedback control, as depicted in Fig. 1. In the figure, r(t), e(t), u(t), and y(t) mean the reference, control error, manipulated input, and controlled output variables, and G(s) stands for the actual plant (process) dynamics. The choice of r(t) (usually of a constant value) enables setting the operating point. If the relay parameters are appropriately set, the closed-loop system reaches sustained oscillations of period T_osc in a finite time. If the relay element does not cause a phase lag, the corresponding angular frequency ω_osc = 2π/T_osc is supposed to be close to the so-called ultimate frequency ω_u for which Im G(jω_u) = 0 (more precisely, ∠G(jω_u) = −π), j² = −1. However, as a model G_m(s) cannot express the true dynamics G(s) exactly, it generally holds that ω_osc ≠ ω_u. Whenever the relay exposes a phase lag, ω_osc < ω_u.
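To make the relay-based estimation concrete: a sustained relay oscillation pins down one point of the process frequency response near ω_u, from which two model parameters follow. The sketch below inverts an ideal-relay test into the two parameters of a textbook FOPDT model G_m(s) = k·e^{−Ls}/(Ts + 1); this is a generic illustration with the static gain k assumed known, not the SFOTDM treated later in the paper.

```python
import math

def fopdt_from_relay(k, B, A, w_osc):
    """Ideal-relay describing function N(A) = 4B/(pi*A) implies
    G(j*w_osc) = -1/N(A); the magnitude condition fixes T and the
    -pi phase condition fixes the dead time L of an FOPDT model."""
    N = 4.0 * B / (math.pi * A)                           # describing function
    T = math.sqrt(max((k * N) ** 2 - 1.0, 0.0)) / w_osc   # |G(jw)| = 1/N
    L = (math.pi - math.atan(w_osc * T)) / w_osc          # angle(G(jw)) = -pi
    return T, L
```

A round-trip check (generate the oscillation data from a known FOPDT, then invert it) recovers T and L exactly, since both relations are exact under the DF assumption.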
By adopting the idea of the DF, one point G(jω_osc) ∈ C can be estimated, i.e., two parameters of G_m(s) can be determined. The relay DF, N(·) ∈ C, can be considered a linear approximation of the relay gain. It is usually derived under the consideration that e(t) is a harmonic signal and u(t) is subject to a truncated Fourier series expansion. Then, for the sustained oscillations, it holds that

N(·)G_m(jω_osc) = −1 + 0j  ⇔  |N(·)G_m(jω_osc)| = 1, ∠(N(·)G_m(jω_osc)) = −π,   (4)

which enables the estimation of parameters of G_m(s). Note that (4) can be graphically interpreted as the intersection of the Nyquist plot of G_m(s) with the horizontal line −N^{−1}(·). The DF depends on the amplitude A of the e(t) oscillations and some other relay setting parameters.

On/off relay test. Let us consider an asymmetrical biased two-level relay. Its static characteristics and the corresponding sustained oscillations (limit cycles) are displayed in Figs. 2 and 3, respectively. In practice, the setting ε > 0 is suitable when the feedback signal is affected by noise, so that the relay switching rate can be reduced. The advantage of the option δ > 0 lies, i.a., in the possibility of estimating the process static gain k = G_m(0) = G(0) as per (6), i.e., as the ratio of the output and input integrals over one oscillation period, for t_0 satisfying that sustained oscillations start at some t < t_0 35. Purposefully induced asymmetry can also be used to estimate and attenuate the load disturbance 42. However, it may stop the oscillations, so that the relay test fails. In addition, model parameter identification with an asymmetric relay yields an estimation error of up to 40% in an FOPDT case 50.

Relay with saturation. Estimating the critical point at ω_u (or any other ω_osc) does not provide an accurate enough parameter estimation for some processes, e.g., for those with significant time delays. For instance, an error of 23% for FOPDT models was reported 59. Model parameter identification can be improved by using a saturation relay 35,59.
Its static characteristics and a sketch of the corresponding sustained oscillations (under the assumption of a harmonic output variable) are depicted in Figs. 4 and 5. The saturation relay does not cause an abrupt step change at e(t) = ±ε; instead, it provides a smooth transient around zero. The relay input e(t) is multiplied by k_sat, resulting in the relay output u(t) up to the limit B = k_sat·Ā. The corresponding DF reads

N_sat(A, Ā) = (2k_sat/π)[sin^{−1}(Ā/A) + (Ā/A)√(1 − (Ā/A)²)],  A ≥ Ā.   (7)

Ideally, if the gain k_sat is set optimally (i.e., A = Ā), the input and output signals have the same shape; hence, the DF N_sat(A, Ā) = k_sat is exact. However, in real conditions, u(t) has the shape of a truncated sinusoidal wave with upper and lower limits. Note that the limit case k_sat → ∞ yields the ideal relay, i.e., N_sat(A, 0) = N(A, 0, 0). A saturation relay test should follow the standard relay experiment (see the preceding subsection). Once k_osc = N(A, ·) is found, k_sat = k_min·k_osc, k_min > 1, is set. Originally, it was suggested to take k_min = 1.4 59. The higher the value, the closer u(t) is to a two-level signal. Contrariwise, smaller values of k_min force u(t) closer to sine-like waves; however, the relay then takes a longer time to generate sustained oscillations, or can even fail to do so.

Figure 3. Sustained oscillations using the asymmetrical biased relay.

Relay with artificial delay. The technique of 38,69,71 introduces an artificial delay τ_a > 0 into the serial link between the relay and the process. Every single value of τ_a causes a phase lag of ϕ_a = ω_osc·τ_a, where ω_osc means the corresponding angular frequency of sustained oscillations when the delay is applied. Then, the overall phase shift attributed to the process reads ∠G(jω_osc) = −π + ϕ_a. Hence, by detecting ω_osc and the corresponding amplitude A, another point on the process (model) Nyquist curve can be determined. Obviously, whenever N model parameters need to be resolved, ⌈N/2 − 1⌉ distinct τ_a values are required, where ⌈·⌉ means the ceiling function (i.e., rounding upward to the nearest integer).
The original setting 69 comes from the following idea. The goal is to identify a point located at a 45° distance from the negative real axis, i.e., ϕ_a = π/4 (under the assumption that ∠N(·) = 0). This point is expected to occur at the frequency ω_osc = (3/5)ω_u. This eventually yields the condition π/4 = (3/5)ω_u·τ_a and the setting result τ_a = 5π/(12ω_u). The disadvantage of this technique is the prolongation of the relay feedback experiment. However, if the initial conditions are sustained oscillations, restoring the oscillations takes significantly less time than starting from a constant steady state.

Parameter optimization methods. To solve (2) for given roots s_k, (4), and (8), two well-established optimization algorithms are adopted. Their concise description follows.

Levenberg-Marquardt method. Consider a set of nonlinear differentiable functions f(p). The iterative solution (11)-(12) reads p^{k+1} = p^k − (J^T J + λI)^{−1} J^T f(p^k), where J means the Jacobian of f with respect to p, k expresses the iteration step, and λ > 0 is an adjustable parameter (the so-called damping factor) 93. Solution (11) and (12) attempts to solve the nonlinear least-squares problem (13). The value of λ may vary during iterations. One of the framework strategies is to decrease its value as the residual sum on the right-hand side of (13) (Res(p)) decreases, and vice versa. Let us introduce the multiplicative factor κ > 0 such that λ_{k+1} = λ_k·κ. Particular choices of λ_1, ^1p, and κ are discussed in "Relay-based parameter identification of heat-exchanger process". A disadvantage of the LM algorithm is that the solution may converge to a local minimum (as for other Newton-type methods), or it may even diverge (especially if λ is set inappropriately).

Nelder-Mead method. Assume an unconstrained optimization problem (14). The idea of the NM method 92 is to iteratively search for the optimal solution by moving a variable-shape simplex in the space of p. The simplex vertices represent the so-called test points.
Once the initial simplex ^1S = {^1p_1, ^1p_2, ..., ^1p_{m+1}} is selected, its vertices are re-ordered such that f(^1p_i) ≤ f(^1p_{i+1}), i = 1, 2, ..., m (i.e., ^1p_1 represents the best solution estimate), and k = 1 is set. Then, the centroid of the subset ^kS = {^kp_1, ^kp_2, ..., ^kp_m} is computed as in (15). The worst-valued vertex is reflected through ^kp_c as ^kp_r = ^kp_c + γ_r(^kp_c − ^kp_{m+1}), γ_r > 0. Then, four scenarios can happen:
1. If f(^kp_1) ≤ f(^kp_r) < f(^kp_m), set ^{k+1}S = {^kp_1, ^kp_2, ..., ^kp_m, ^kp_r}.
2. If f(^kp_r) < f(^kp_1), the expansion ^kp_e = ^kp_c + γ_e(^kp_r − ^kp_c), γ_e > 1, is computed. On condition that f(^kp_e) < f(^kp_1), set ^{k+1}S = {^kp_1, ^kp_2, ..., ^kp_m, ^kp_e}; else ^{k+1}S = {^kp_1, ^kp_2, ..., ^kp_m, ^kp_r}.
3. If f(^kp_m) ≤ f(^kp_r) < f(^kp_{m+1}), the outer contraction is done as ^kp_oc = ^kp_c + γ_oc(^kp_r − ^kp_c), 0 < γ_oc < 1. Whenever f(^kp_oc) ≤ f(^kp_r), set ^{k+1}S = {^kp_1, ^kp_2, ..., ^kp_m, ^kp_oc}; else perform the shrinkage as in (16).
4. If f(^kp_r) ≥ f(^kp_{m+1}), compute the inner contraction ^kp_ic = ^kp_c + γ_ic(^kp_{m+1} − ^kp_c), 0 < γ_ic < 1. On condition that f(^kp_ic) < f(^kp_{m+1}), set ^{k+1}S = {^kp_1, ^kp_2, ..., ^kp_m, ^kp_ic}; else shrink the simplex using (16).
Then set k = k + 1, re-order the simplex vertices, calculate (15), etc. If, however, inequality constraints on a subset of p are required, one may use the concept of barrier functions. That is, instead of the objective function f(p) as in (14), the extended function f(p) + β·f_b(p) is subject to the optimization procedure, where β > 0, and f_b(p) > 0 must be sufficiently small as soon as all g_j(p) ≪ 0; otherwise, the value of f_b(p) increases considerably until f_b(p) → ∞ as g_j(p) → 0^−.

Results
Root loci analysis of the simple quasi-polynomial. In this subsection, a thorough zero loci analysis of the SFOTDM 12 is provided. The derived results then serve for the pole assignment of the HETDM, giving rise to the initial parameter setting of its CE (see "Parameter estimation of the heat-exchanger process model via pole assignment").
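The reflection/expansion/contraction/shrink steps of the Nelder-Mead method summarized above can be condensed into a minimal, generic implementation (a standard NM variant with the γ values used later in the text; illustrative only, not the authors' code, and deliberately unoptimized in the number of function evaluations):

```python
import numpy as np

def nelder_mead(f, x0, iters=300, step=0.5,
                g_r=1.0, g_e=2.0, g_oc=0.5, g_ic=0.5, g_s=0.5):
    """Minimal Nelder-Mead sketch: reflection, expansion, outer/inner
    contraction, and shrinkage toward the best vertex."""
    n = len(x0)
    S = [np.asarray(x0, float)]
    for i in range(n):                        # initial right-angled simplex
        v = S[0].copy(); v[i] += step; S.append(v)
    for _ in range(iters):
        S.sort(key=f)                         # best vertex first
        c = np.mean(S[:-1], axis=0)           # centroid of the best n vertices
        r = c + g_r * (c - S[-1])             # reflection of the worst vertex
        if f(r) < f(S[0]):                    # try expansion
            e = c + g_e * (r - c)
            S[-1] = e if f(e) < f(r) else r
        elif f(r) < f(S[-2]):                 # plain reflection accepted
            S[-1] = r
        elif f(r) < f(S[-1]):                 # outer contraction
            oc = c + g_oc * (r - c)
            if f(oc) <= f(r):
                S[-1] = oc
            else:                             # shrink toward the best vertex
                S = [S[0]] + [S[0] + g_s * (v - S[0]) for v in S[1:]]
        else:                                 # inner contraction
            ic = c + g_ic * (S[-1] - c)
            if f(ic) < f(S[-1]):
                S[-1] = ic
            else:
                S = [S[0]] + [S[0] + g_s * (v - S[0]) for v in S[1:]]
    S.sort(key=f)
    return S[0]
```

On a smooth convex objective the simplex contracts onto the minimizer; constrained variants simply wrap f with a barrier term as described above.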
The model reads

G_SFOTDM(s) = k·e^{−τs} / (Ts + e^{−ϑs}),   (19)

with the characteristic quasi-polynomial

q_SFOTDM(s) = Ts + e^{−ϑs}.   (20)

Although the pole loci properties of the SFOTDM were studied in the past, to the authors' best knowledge, a complete image and a thorough exact guide on finding the dominant subset of its spectrum has not been provided yet. For instance, Marshall, Górecki, Walton, and Korytowski 94 studied a generalized characteristic RQP of the SFOTDM with relative parameters (T = 1, Λ = ϑ/T, θ = τ/T), and they determined the ranges in which the model is asymptotically stable, aperiodic, and periodic. Moreover, intersections of the pole loci trajectories with the imaginary axis were determined for the generalized model. Analogous conditions under which the model is stable, overdamped, critically damped, and underdamped were presented in 95. The asymptotic behavior of pole loci trajectories at infinity and near the imaginary axis was also studied in 96. Hence, our aim is to analyze the solution (or its rightmost subset) of the CE in C and provide the reader with a simple guide on how to compute these pole loci.

Result (22) can also be formulated as Λ = e^{−1}. Let us introduce the relative real and imaginary parts of a quasi-polynomial root as α = −ϑ·Re(s), ω = ϑ·Im(s), respectively. Then the range (23) becomes (24).

Lemma 2 means that q_SFOTDM(s) has the rightmost double real root at α = 1 for Λ = e^{−1} 96. The double dominant (i.e., rightmost) real root s_{1,2} = −1/ϑ (i.e., α = 1) becomes a complex conjugate pair for Λ = e^{−1} + δ, δ → 0^+. Contrariwise, the double real root becomes a pair of single real roots for Λ = e^{−1} − δ, δ → 0^+. □

Theorem 1 q_SFOTDM(s) has a real dominant zero in the LHP for Λ ∈ (0, e^{−1}] and a complex conjugate rightmost pair in the LHP for Λ ∈ (e^{−1}, 0.5π), where the particular root abscissa is within the range (23) (or (24), equivalently). □

Proof From Lemma 1, a negative root abscissa exists only for (23). If Λ ranges from 0 to e^{−1}, the rightmost real root moves from α = 0 to α = 1 due to Lemma 2 and Lemma 3.
From Lemma 2, it is also known that there is the rightmost double real root for Λ = e^{−1}, which bifurcates into a conjugate pair for Λ ∈ (e^{−1}, 0.5π). Eventually, this pair reaches the imaginary axis again for Λ = 0.5π as the only (i.e., the rightmost) quasi-polynomial root pair, due to Lemma 1. ■

Lemma 3

In the following part of the subsection, dominant (and other) SFOTDM pole loci are investigated. As an alternative proof, one can easily deduce that (28) and (29) are in contradiction. ■

Lemma 6 Equation (20) can have only two real solutions (counting multiplicity). □

Proof Lemma 6 follows from Lemma 5 directly due to the root continuity (see Proposition 1). That is, a complex conjugate pair of zeros of q_SFOTDM(s) can bifurcate into a pair of distinct real roots only through a multiple pair. Alternatively, distinct real solutions of (20) satisfy (27). The function α → αe^{−α} is unimodal with its local and global maximum at α = 1 and the function value 1/e. This point agrees with Lemma 5. Otherwise, the function has two distinct intersections with the constant function Λ ∈ (0, e^{−1}) for α ∈ (0, ∞). Hence, there is no real solution of (20) for Λ > e^{−1}. The situation is illustrated in Fig. 6.

(b) If Λ ∈ (0, e^{−1}), complex conjugate zeros of q_SFOTDM(s) are given by (31) and (32), and the single real roots are given by the unique solution pair of (33). (c) If Λ = e^{−1}, complex conjugate zeros of q_SFOTDM(s) are given by (31) and (32), and the multiple real root reads s_{1,2} = −ϑ^{−1} (i.e., α = 1). □

Proof Consider item (a) first. From Theorem 1 and Lemma 6, there are no real solutions of (20). The complex conjugate ones have to satisfy (34), i.e., both the real and imaginary parts must be equal to zero (35). After some algebraic manipulation, conditions (35) become (36). By expressing Λe^α from one equation and substituting it into the other, formula (31) is obtained, where singularities are to be excluded. Further, the latter equation in (36) gives (37), which yields (32) directly.
Naturally, only positive right-hand sides of (37) are admissible to obtain real α values. We know from Lemmas 5 and 6 and Theorem 1 that the only possible double real root bifurcates into a complex conjugate pair for Λ > e^{−1}, and there cannot exist another real root of q_SFOTDM(s). Note that taking only one root from the pair is sufficient due to the symmetry. Assuming item (b), a pair of single real roots exists due to Theorem 1. However, it is the only such pair according to Lemma 6. A single real root must satisfy the first condition in (26), giving rise to (27), the result of which agrees with (33). However, complex conjugate zeros of q_SFOTDM(s) must simultaneously exist due to its transcendental nature. Regarding item (c), the existence of the double real root is given by Lemma 2. Besides, there is no other real root of q_SFOTDM(s) due to Lemma 5, yet s_i ∈ C\R as in (31) and (32) still exist. ■

Theorem 2 does not consider multiple quasi-polynomial roots s_i ∈ C\R. The following proposition verifies that such roots can be neglected.

Proposition 2 Equation (20) does not admit a multiple pair solution. □

Proof It is enough to show that a complex conjugate pair of multiplicity 2 does not exist, i.e., n = 1. We prove it by contradiction; hence, let there exist a double root s_i ∈ C\R, which has to satisfy (35) and also (38), which gives (39). The latter formula in (39) has two solutions: Λe^α = 0 or sin ω = 0. The first one is in contradiction with the former condition in (39), whereas for the second one, from (39), one gets Λe^α = 1, which implies ω = sin(ω) from (36) or (37). That is, the unique solution ω = 0 means that the quasi-polynomial root is real, which gives a contradiction. ■

(Necessity.) Now, we prove by contradiction that ω remains within the limit. Consider α ∈ (0, 1). Let there exist ω = 0 or ω = π/2 such that the first equation in (41) holds. The limit values are α = 1 and α = 0, respectively, which is in contradiction with α ∈ (0, 1).
The case ω < 0 can be omitted due to the root loci symmetry in C. Whenever ω > 0.5π, then tan ω < 0, which implies α < 0 from (31), and we have a contradiction again. Item (c) represents a reformulation of Lemma 2. ■

To conclude this subsection, a graphical procedure to find all the roots of q_SFOTDM(s), or the rightmost spectrum of the SFOTDM poles, follows. Whenever the condition of item (b) of Theorem 2 is satisfied, real poles are found as per Fig. 6. Complex conjugate poles are given by (31) and (32) or (41), which can be graphically interpreted as intersections of the real-valued functions α_1(ω) = ω/tan ω and α_2(ω) = ln(ω/(Λ sin ω)), where values α_2(ω) ∈ C\R determine "forbidden regions" of the function graph.

Relay-based parameter identification of heat-exchanger process
Infinite-dimensional heat-exchanger process model. The HETDM serves as a simulation testbed. The mathematical model arises from heat and mass balance equations that include delays, and from a thorough analysis of the static and dynamic responses of the particular laboratory appliance (see Fig. 8). A concise description of the apparatus 84 follows first. Positions in the figure correspond to the numbers in curly brackets. The heated fluid circulates in a closed loop, flowing through an instantaneous heater {1}, a long insulated coiled pipeline {2}, and a cooler {3}. The power input to the heater (which can be viewed as a solid-liquid flow heat exchanger) is continuously controlled in the pulse-width-modulation sense. Its maximum value is 750 W. The heated fluid temperature at the heater output {4} is only slightly affected when flowing through the 15 m long pipeline; however, the most significant loop delay is caused therein. The outlet temperature of the pipeline is measured by a platinum resistance Pt1000 thermometer {5}.
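The graphical procedure above can be implemented directly. A small sketch, assuming the characteristic equation in the normalized form ϑs + Λe^{−ϑs} = 0 (which has the same roots as (20) with Λ = ϑ/T), finds the dominant real root (for Λ < e^{−1}) or the dominant complex pair (for e^{−1} < Λ < π/2) by bisection on the intersection of α_1(ω) = ω/tan ω and α_2(ω) = ln(ω/(Λ sin ω)):

```python
import cmath
import math

def _bisect(g, lo, hi, iters=200):
    """Plain bisection for a sign-changing continuous function g."""
    flo = g(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (g(mid) > 0) == (flo > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def dominant_real_root(lam, theta=1.0):
    """Rightmost real root of theta*s + lam*exp(-theta*s) = 0 for 0 < lam < 1/e.
    The relative abscissa alpha = -theta*Re(s) solves alpha*exp(-alpha) = lam,
    and the rightmost root corresponds to the smaller solution in (0, 1)."""
    alpha = _bisect(lambda a: a * math.exp(-a) - lam, 0.0, 1.0)
    return -alpha / theta

def dominant_complex_pair(lam, theta=1.0):
    """Rightmost complex-conjugate root for 1/e < lam < pi/2, found as the
    intersection of alpha1(w) = w/tan(w) and alpha2(w) = ln(w/(lam*sin(w)))
    on w in (0, pi/2); the conjugate root is obtained by symmetry."""
    g = lambda w: w / math.tan(w) - math.log(w / (lam * math.sin(w)))
    w = _bisect(g, 1e-9, math.pi / 2 - 1e-9)
    alpha = w / math.tan(w)
    return (-alpha + 1j * w) / theta
```

For Λ = 1, ϑ = 1 this returns s ≈ −0.32 + 1.34j, which indeed lies in the strip ω ∈ (0, 0.5π), and substituting the result back into ϑs + Λe^{−ϑs} gives a residual near machine precision.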
The cooler is constructed as a radiator (i.e., a plate-and-fin heat exchanger) that can be considered an indirect unmixed cross-flow heat exchanger from the process point of view. It is equipped with two cooling fans {6} (one of them is continuously controlled, while the other is of the on/off type). The expansion tank compensates for the impact of the thermal expansion of the water {7}. The outlet temperature from the cooler is measured by a Pt1000 again {8}. Finally, the continuously controllable magnetic-drive centrifugal pump {9} serves for fluid circulation. Despite its simplicity, the mathematical formulation of the HETDM, and especially its dynamic properties, are remarkable due to the model's transcendental characteristic equation 84. As the model is multivariable, the relation between the heater power input u(t) and the cooler outlet fluid temperature y(t) is selected as the most interesting input-output pair. Note that both quantities are considered as deviations from a steady state. The analytically modelled linearized relation reads

a_2·ÿ(t) + a_1·ẏ(t) + a_0·y(t) + a_0ϑ·y(t − ϑ) = b_0·u(t − τ_0) + b_0τ·u(t − τ),   (44)

which is a DDE, where b_0, b_0τ, a_2, a_1, a_0, a_0ϑ ∈ R, and τ, τ_0, ϑ ∈ R_+ express the input/output delays and the state (internal) delay, respectively. The corresponding transfer function is

G_HETDM(s) = (b_0·e^{−τ_0 s} + b_0τ·e^{−τ s}) / (a_2 s² + a_1 s + a_0 + a_0ϑ·e^{−ϑ s}),

the denominator of which represents the model characteristic RQP (i.e., q_HETDM). In 84, the following parameter values were determined by a thorough and complex analysis of static and dynamic data. Let us take these data as a benchmark for the significantly more straightforward relay-based experiment. As the values arise from determined physical quantities of the process, they are assumed to be close to the actual (true) real-life values.

Remark 2 The Pt1000 thermometers used have a guaranteed time constant T_63 of 8 s, i.e., T_90 ≈ 18.4 s.
This additional dynamical latency has not been taken into account in the analytically derived model (44), and the true temperature values can differ from the measured ones. However, this negligence does not pose a serious problem for the model. First, the plant delays τ, ϑ caused by the thermal-fluid transportation have significantly higher values, approximately 150 s. This means that possible sensor latencies have only a minor effect on the overall dynamics. Second, sensor latencies do not affect the internal delay of the model itself, since they act only in the input/output relation; yet, they are included in the internal delay of the relay-feedback closed loop. If the system is considered linear (indeed, model (44) is a linearized formulation valid in the vicinity of an operating point), a sensor delay can be considered as an additional input/output delay of the model. As the input/output delays are not derived analytically but based on measurements, the evaluation of the relay-experiment data also covers these non-modeled latencies. Therefore, once the model is used for plant control, the plant model and the output signal for the feedback have the same value (in the ideal case).

Simple model parameter estimation using the relay-based experiment. The first step of the identification chain is estimating the SFOTDM parameters, especially those of q_SFOTDM (20). It has three substeps. First, the on/off relay with δ > 0 (and ε being sufficiently small), see (5), is used to estimate the static gain k in (19) as per (6). Second, the ideal relay (δ = 0) is applied to get the initial estimation of the oscillation data and the input/output delay value. Finally, the saturation relay is used to improve the accuracy of the oscillation parameters, which yields the SFOTDM parameters from (4) and (7).
All the substeps can be done within a single experiment, which saves time since the transition from one sustained oscillation to another takes less time than setting up the oscillations from a constant steady state. Let us use the notation τ → τ_s, ϑ → ϑ_s for (19) to distinguish the SFOTDM parameters from those of the HETDM (for which no subscript is used). The combination of (4) and (7) can be solved analytically, yielding 12

ϑ_s = cos^{−1}(−k·N_·(A, ·)·cos(ω_osc·τ_s)) / ω_osc,  T = (k·N_·(A, ·)·sin(ω_osc·τ_s) + sin(ω_osc·ϑ_s)) / ω_osc,   (47)

where N_·(A, ·) stands for either (5) or (7). However, it is inherently expected that the argument of cos^{−1}(·) is within the range [−1, 1]. Whenever this does not hold, a numerical solution of the combination of (4) and (7) has to be used instead of (47). Set B = 100, δ = 0.05, and ε = 10^{−5}. The relay-test responses are displayed in Fig. 9. The arrows indicate when a particular relay starts to be used. The eventual data from Fig. 9 are summarized in Table 1. Formula (19) gives k = 3.22 × 10^{−2}. The value of τ_s can be estimated as the time interval between the switching point of u(t) and the peak time instant of y(t). Hence, it can be measured that τ_s ≈ 136.7 s. Note that k_min = 1.4 has been taken for the saturation relay setting, which gives rise to k_sat = 185.1, A = 0.555. As k·N_·(A, ·)·cos(ω_osc·τ_s) = −2.72 (for the relay with saturation), (47) cannot be used. Hence, we attempt to apply the NM method to solve the minimization problem

[T, ϑ_s, τ_s]* = arg min f(T, ϑ_s, τ_s),   (48)

where G_SFOTDM(s) is given by (19), and N_sat(A, ·) and ω_osc are taken from Table 1. The barrier function is chosen as f_b(T, ϑ_s, τ_s) = −Σ_{x∈{T, ϑ_s, τ_s}} ln(1 − e^{−x}). The NM control parameters are set to γ_r = 1, γ_e = 2, γ_oc = γ_ic = γ_s = 0.5. The initial estimation reads (49). The setting of ^1T in (49) arises from the assumption that the inverse of ω_osc ≈ ω_u is close to the time constant of the delay-free system.
The value of ¹ϑ s represents the mid-point of the stability interval (21). Two scenarios are tested for many setting combinations of the initial simplex size and β in (18). First, the fixed value τ s = 136.7 is assumed (i.e., only T, ϑ s are optimized). Second, all three parameters in (48) are to be found. Among dozens of results (models), six of the most distinguished ones are summarized in Table 2. The results either minimize f (T, ϑ s , τ s ) in the frequency domain or the integral absolute error (IAE) and integral time absolute error (ITAE) criteria in the time domain, or represent a trade-off of all the criteria values. Unit step responses (i.e., u = 1 W) are displayed in Fig. 10. The models in Table 2 are also labeled with Roman numerals. Results IV, V, and VI provide an outstanding cost function value in the frequency domain, yet a worse IAE criterion compared to the remaining results. However, results V and VI give the best ITAE value. The unsatisfactory time-domain response of parameters IV can be due to an excellent characterization of the ultimate point G SFOTDM (jω u ) = −N sat −1 (A, ·) but an erroneous estimation of the remaining Nyquist plot. Hence, we provide the reader with graphical comparisons of process and model Nyquist plots, see Fig. 11. Note that the terminal frequency value in Fig. 11 is ω fin = 0.1 rad/s. As can be seen, the time and frequency responses for models I, II, and III, and those for models V and VI, almost coincide. Figure 11 also indicates that the fixed input/output delay value yields a worse estimation of the ultimate point (i.e., the model crossing point with the negative real axis is quite far from that of the original process characteristics). Moreover, results V and VI seem to give the closest frequency-domain responses to the original Nyquist plot. Hence, Table 3 provides root mean square (RMS) values to measure the error between the process and the models.
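The RMS error between the process and model frequency responses can be sketched as follows. The SFOTDM-like transfer function below is only an assumed stand-in for (19), and the uniform frequency grid is our choice; both are illustrative, not the paper's implementation.

```python
import cmath
import math

def G_sfotdm(s, k, T, tau, theta):
    """Assumed SFOTDM-style transfer function: static gain k, time constant T,
    input/output delay tau, internal delay theta. The paper's exact structure
    (19) may differ; this is only a stand-in for illustrating the RMS measure."""
    return k * cmath.exp(-tau * s) / (T * s + cmath.exp(-theta * s))

def nyquist_rms(G_proc, G_model, omega_fin, n=200):
    """RMS distance between two frequency responses on a uniform grid over
    (0, omega_fin]; G_proc and G_model map a complex s = j*omega to a complex
    response value."""
    errs = []
    for i in range(1, n + 1):
        w = omega_fin * i / n
        errs.append(abs(G_proc(1j * w) - G_model(1j * w)) ** 2)
    return math.sqrt(sum(errs) / n)
```

Evaluating `nyquist_rms` with ω fin = 0.1 or ω fin = 1.7 × 10⁻² then mirrors the two terminal-frequency choices used in the paper's Table 3.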
Two terminal frequency values for the RMS computation are chosen, ω fin = 0.1 and ω fin = ω osc = 1.7 × 10 −2 rad/s. Data in Table 3 confirm the above-introduced assumption that models V and VI estimate the process frequency response reasonably at low frequencies (i.e., up to the oscillation frequency); however, they fail at higher ones. It is also worth noting that the estimation of the ultimate frequency is quite accurate since the true one is about ω u = 1.67 × 10 −2 rad/s.

Parameter estimation of the heat-exchanger process model via pole assignment. Now, the models (results) in Table 2 serve as the initial estimates for HETDM parameter identification. Several scenarios are tested and compared after the HETDM dominant spectrum assignment according to the pole loci of the SFOTDM.

Initial denominator parameters estimation. The rightmost spectra of SFOTDM poles (see Table 2) are displayed in Table 4. These loci are computed via the technique presented in "Root loci analysis of the simple quasipolynomial". From Table 4, it can be deduced that the rightmost pole has a decisive impact on the dynamic properties (by comparing the results for SFOTDM I, II, and III): although the remaining spectra significantly differ, the model time- and frequency-domain responses are almost identical. Besides, comparing the spectra of models III and VI, which are very close to each other yet have different dynamic properties, indicates a high impact of the value of τ s . The pole assignment problem can be characterized by the set of nonlinear algebraic equations to be solved
(50) CE HETDM (s, a 2 , a 1 , a 0 , a 0ϑ , ϑ)| s={s 1 ,s 2 ,...}∈Σ SFOTDM : q HETDM (s, a 2 , a 1 , a 0 , a 0ϑ , ϑ) = 0 , s = {s 1 , s 2 , ...} ∈ Σ SFOTDM .
Since problem (50) includes five unknown parameters to be determined, a unique solution (under the assumption that the Jacobian of q HETDM has full rank) requires taking {s 1 , s 2 , s 3 , s 4 , s 5 } . However, this is not always possible. For instance, SFOTDM I has only two significant real poles (the rest are too far in the LHP), and complex conjugate pairs of other spectra cannot be decoupled. Hence, fewer than five SFOTDM poles are taken in (50). It is also worth noting that whenever s k ∈ C\R , the q HETDM in (50) is split into its real and imaginary parts. We use the LM method (see "Levenberg-Marquardt method") for solving the pole assignment problem. Let us discuss the ideal initial parameters' selection, ¹p = (¹a 2 , ¹a 1 , ¹a 0 , ¹a 0ϑ , ¹ϑ) T . Take the CE HETDM and divide both sides by a 0ϑ :
(51) (1/a 0ϑ ) s 3 + (a 2 /a 0ϑ ) s 2 + (a 1 /a 0ϑ ) s + a 0 /a 0ϑ + e −ϑs = 0 .
Then, it is apparent that the setting
(52) a 2 = a 0 → 0, a 1 = a 0ϑ T, a 0ϑ → ∞, ϑ = ϑ s
yields the CE SFOTDM . However, such a solution is non-feasible and may result in an immature solution of (50). Therefore, several approximate feasible initial settings are eventually chosen. The multiplicative parameter κ that alters the damping factor in iteration steps should increase when the residual sum on the right-hand side of (13) increases and vice versa. Some authors suggest an asymmetric setting 97 . Hence, after some numerical experiments, we eventually set κ = 5 when stepping up and κ = 10 when stepping down. Due to the numerous settings of the prescribed pole subset and the initial estimate ¹p , a bunch of possible results is obtained. Selected results (i.e., q HETDM parameter values) and their residual sums Res p opt (see (13)) are provided to the reader in Table 5. The corresponding dominant pole loci of the models are enumerated in Table 6. Note that the QPmR software package 18 is utilized to compute the poles here.
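The asymmetric damping update above (κ = 5 when stepping up, κ = 10 when stepping down) can be sketched with a minimal two-parameter Levenberg–Marquardt loop. The numerical Jacobian and the linear test fit are illustrative assumptions, not the paper's implementation of the pole assignment task.

```python
def levenberg_marquardt(residuals, p0, kappa_up=5.0, kappa_down=10.0,
                        lam=1e-2, iters=50):
    """Minimal LM sketch for a 2-parameter least-squares problem with the
    asymmetric damping update discussed in the text: multiply lam by kappa_up
    when the residual sum increases, divide it by kappa_down when it decreases."""
    def ssq(p):
        return sum(r * r for r in residuals(p))
    p = list(p0)
    for _ in range(iters):
        r = residuals(p)
        # forward-difference Jacobian, stored column-wise (one list per parameter)
        h = 1e-7
        J = []
        for j in range(len(p)):
            pj = list(p)
            pj[j] += h
            rj = residuals(pj)
            J.append([(rj[i] - r[i]) / h for i in range(len(r))])
        # damped normal equations (J^T J + lam*I) dp = -J^T r, solved for 2 unknowns
        a = sum(x * x for x in J[0]) + lam
        b = sum(x * y for x, y in zip(J[0], J[1]))
        d = sum(y * y for y in J[1]) + lam
        g0 = -sum(x * e for x, e in zip(J[0], r))
        g1 = -sum(y * e for y, e in zip(J[1], r))
        det = a * d - b * b
        dp = [(d * g0 - b * g1) / det, (a * g1 - b * g0) / det]
        p_new = [p[j] + dp[j] for j in range(len(p))]
        if ssq(p_new) < ssq(p):
            p, lam = p_new, lam / kappa_down   # successful step: relax damping
        else:
            lam *= kappa_up                    # failed step: increase damping
    return p
```

In the paper's setting, the residual vector would collect the real and imaginary parts of q HETDM evaluated at the prescribed SFOTDM poles; here any small least-squares problem demonstrates the update rule.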
It can be deduced that despite the diverse results within each of the six model (characteristic RQP) families (i.e., I to VI), the obtained dominant spectra are very close to each other. This indicates multimodality of the optimization problem. Characteristic RQPs I-a to I-d, II-b, III-a, III-d, III-e, IV-b, V-c, and VI-a to VI-f give almost identical spectra to the original ones (i.e., those of the SFOTDMs). These findings correspond to the values of Res p opt in Table 5. It is interesting to observe that even if a subset of only two or four poles is prescribed, another one or two complex conjugate pairs coincide with the original spectrum. This feature proves the success of the assignment task and an excellent mapping between the q SFOTDM and q HETDM parameters. Now, three different scenarios follow of how to set the numerator parameters of the model transfer function (45), or even eventually alter all the transfer function parameters, based on relay-experiment data.

Numerator parameters estimation using Levenberg-Marquardt method. The first scenario adopts the LM method and utilizes data solely from the single relay test (i.e., without the necessity to perform additional experiments) that indicates the ultimate data (i.e., the critical point of the Nyquist curve). Once the denominator parameters are determined as in "Initial denominator parameters estimation", they are fixed. Hence, the HETDM transfer function numerator parameters are estimated to comply with conditions (4). The advantage of this scenario lies in the fact that all the model parameters are found within a single test. In more detail, the set of nonlinear algebraic equations to be solved reads (53), where ω osc is the value obtained using the relay with saturation. As only two parameters can be determined by solving (53), two other ones need to be set a priori. Let τ = τ s be fixed as in Table 2.
As the static gain k = 3.22 × 10 −2 is known, the following equality is substituted into (53):
G o,HETDM (s, ·) = N sat (A, ·) G HETDM (s, ·) .
Hence, the parameter set in (53) eventually reads p = (b 0τ , τ 0 ) T and b 0 is then calculated using (54). The LM control parameters are set as in "Initial denominator parameters estimation". Table 7 displays the most distinguished results such that two parameter sets from each of the six pole spectrum families are selected. The corresponding Res p opt , IAE and ITAE criteria values from unit step responses, and RMS values from Nyquist plots for ω fin = 0.1 and ω fin = 1.7 × 10 −2 are provided in Table 8. Compared to the SFOTDMs (Tables 2, 3), the IAE and ITAE criteria values of unit step responses for the HETDMs have been enhanced in all six families of models. Contrariwise, the frequency-domain error measures have not been improved; the results in Table 8 are very close to those displayed in Table 3. The unit step responses and Nyquist plots of some selected HETDMs are displayed in Figs. 12 and 13, respectively. The models are selected such that their dynamic responses differ significantly. The non-displayed responses are very close to some displayed ones. Namely, in the time domain, II-a-1 almost meets III-a-1, yet the latter is faster. II-d-1 and III-c-1 are close to I-a-1. IV-a-1 and IV-b-1 almost coincide, yet IV-b-1 is faster. The same assertion holds for the pair V-c-1 and V-a-1. Finally, VI-e-1 is nearly identical to V-a-1. In the frequency domain, II-a-1 is close to the pair I-a-1 and III-a-1 over the whole frequency range. II-d-1 approaches I-a-1 at low frequencies and I-b-1 at higher ones. III-c-1 is almost identical to III-a-1 for all frequencies. Finally, the pairs V-a-1/V-c-1 and VI-c-1/VI-e-1 have frequency responses very close to each other.
Table 6. Dominant pole HETDM spectra.

Numerator parameters estimation using Nelder-Mead method from single relay test.
Now, let us solve the same task as in the preceding subsection via the NM method. The optimization problem is formulated as (55), where the HETDM characteristic RQPs are fixed as in Table 5 and τ = τ s . Again, the value of ω osc is taken from the saturation relay test. Note that the cost function with the real and imaginary parts of G o,HETDM (s) is used in (55) rather than that with the amplitude and phase, since numerical experiments give better results (in the sense of (4)). The most outstanding results are provided in Table 9 (two parameter sets from each of the six pole spectrum families are selected again). Table 10 displays the corresponding performance measures and implies that the accuracy of the HETDMs in Table 9 is very close (or slightly worse) compared to the models obtained in "Numerator parameters estimation using Levenberg-Marquardt method" (except for models from family I). This means that the models give better accuracy than the SFOTDMs in the time domain, yet only comparable accuracy in the frequency domain. Unit step responses and Nyquist plots of selected models are given in Figs. 14 and 15, respectively. Note again that the other characteristics are close to some displayed ones. In the time domain, the responses for model families I, II, and III almost coincide, with I-a-2 the fastest and II-c-2 the slowest. The same assertion holds for families V and VI (VI-b-2 and V-a-2 give the fastest and the slowest response, respectively), while models IV significantly differ from the others. Nyquist plots of III-c-2 and III-f-2 are closest to each other at low frequencies, and the pair II-c-2/III-f-2 at high ones. The pair VI-b-2/VI-d-2 almost coincides over the whole frequency range. Notice that although perfect optimization based on the measured ultimate data (see Table 10) is reached, model IV-a-2 does not provide an excellent frequency response.
Figure 15 indicates that there must exist a frequency warping. Indeed, the RMS error value of the Nyquist plot for model V-a-2 and ω ∈ (0, 1.7 × 10 −2 ] is better than that for model VI-b-2. However, the curve for the latter model is closer to the original Nyquist plot than that for the former one. This implies that the model accuracy cannot be judged solely on the shape of the characteristics. Another question is whether optimizing more than three transfer function numerator parameters can improve the HETDMs' accuracy.

Numerator/denominator parameters estimation using Nelder-Mead method via Autotune Variation Plus experiment. By substituting (54) into (45), the 8-parameter model is ready to be identified, i.e., q HETDM is not fixed. We use the ATV+ technique (see "ATV + technique"), which dictates the use of three artificial delays, yielding the estimation of three additional critical points in the frequency domain. Hence, let τ a,2 = 77.021 s as per (9), and take linear values in the neighborhood of this delay as τ a,1 = 61.617 s and τ a,3 = 92.425 s. The common saturated-relay-feedback experiment (see Fig. 16) yields A 2 = 1.460 °C, T osc,2 = 560.03 s ; A 1 = 1.328 °C, T osc,1 = 527.23 s ; and A 3 = 1.570 °C , T osc,3 = 591.56 s , respectively. Note that k sat = 185.1 , A = 0.555 as in "Simple model parameter estimation using the relay-based experiment". The NM method is hence used to solve problem (56), where i = 0 is associated with τ a,0 = 0 . Conditional inequalities are incorporated via the barrier function f b (τ 0 , τ , a 2 , a 1 , ϑ) = − Σ x∈{τ 0 ,τ ,a 2 ,a 1 ,ϑ} ln (1 − e −x ) . If any inequality is broken, the model is not stable or feasible. The remaining parameters, however, can be non-positive. The standard setting γ r = 1 , γ e = 2 , γ oc = γ ic = γ s = 0.5 is adopted, while different values of β (see (18)) and of the initial simplex size are benchmarked.
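The barrier-augmented cost for the multi-point (ATV+-style) fit can be sketched as below. The cost form, the helper names, and the example transfer function are our assumptions (the paper's exact cost (56) is not reproduced); the log-barrier term matches the expression given in the text.

```python
import math

def barrier(params):
    """Log-barrier f_b = -sum ln(1 - exp(-x)) keeping each listed parameter
    positive, as in the text; in the paper, only selected parameters are
    constrained this way."""
    return -sum(math.log(1.0 - math.exp(-x)) for x in params)

def multipoint_cost(G, p, points):
    """ATV+-style cost sketch: squared distances between the model frequency
    response G(j*omega_i, p) and the estimated critical points -1/N_i, summed
    over the measured frequencies and augmented by the barrier term."""
    total = 0.0
    for omega_i, N_i in points:
        total += abs(G(1j * omega_i, p) + 1.0 / N_i) ** 2
    return total + barrier(p)
```

A function of this shape can then be handed to any simplex implementation of the NM method with the reflection/expansion/contraction coefficients listed above.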
The initial RQP parameter estimates come from Table 5, and the value of ¹τ depends on the particular model family, see Table 2, i.e., ¹τ = 136.7, 79.377, 106.869 , or 106.39 s. The optimization problem reads
(56) p * = [b 0 , b 0τ , τ 0 , τ , a 2 , a 1 , a 0 , a 0ϑ , ϑ] * = arg min f (b 0 , b 0τ , τ 0 , τ , a 2 , a 1 , a 0 , a 0ϑ , ϑ) .
Results are summarized in Table 11 (again, two eventual HETDMs from each family of models are taken). The corresponding performance measures in the time and frequency domains are provided to the reader in Table 12, and particular dynamic responses for selected models are displayed in Figs. 17 and 18. Apparently, HETDM parameter identification based on the estimation of four Nyquist plot points results in significantly improved time-domain responses compared to the 3-parameter optimizations (see "Numerator parameters estimation using Levenberg-Marquardt method" and "Numerator parameters estimation using Nelder-Mead method from single relay test"). Regarding the model accuracy in the frequency domain, a substantial enhancement is achieved for low frequencies, which, however, does not hold for the whole frequency range. As can be deduced from Tables 8, 10, and 12, the better the RMS value for ω fin = 1.7 × 10 −2 is, the worse the value for ω fin = 0.1 is, and vice versa (except for model family IV). In Fig. 17, a nearly nonsmooth step response of HETDM I-a-3 can be observed. An oscillatory response with high-frequency modes of nonnegligible amplitude can be seen for HETDM IV-a-3A. The remaining (non-displayed) dynamic responses have a similar shape to the displayed ones; however, the plots are not as close as for the 3-parameter optimizations. An exception appears for III-f-3, V-c-3, and VI-a-3, where nonminimum-phase-like time responses appear, yet the corresponding Nyquist plots do not prove this feature. The significant step response differences come from diverse input/output delays.
Table 11. HETDM transfer function parameters (b 0 , b 0τ , τ 0 , τ , a 2 , a 1 , a 0 , a 0ϑ , ϑ) computed via the NM method (artificial delays are used).
Regarding Nyquist plots, the closest curves can be observed for the pairs I-a-3/I-b-3 and IV-a-3A/V-b-3 at low frequencies. At higher frequencies, the responses differ more significantly; especially those for HETDMs III-c-3, III-f-3, and V-c-3 are far from the remaining ones. To sum up, most of the models obtained by the solution of task (56) based on the (quadruple) relay-feedback ATV+ test give satisfactory results from the identification point of view.

Discussion
Let us discuss observations made during the entire HETDM identification procedure and also point out some practical issues. In our experiments, we have supposed that the hysteresis of the on/off relay is negligible, ε ≈ 0 . However, a suitable nonzero value has to be set in practice due to measurement noise. Such a setting prevents the relay from switching too frequently, which may cause failures. Another important practical issue is the static gain estimation according to (6). Whenever an asymmetry ( B + ≠ B − ) is induced, the output of the feedback system also becomes asymmetrical. The problem is that the original output setpoint then shifts, which may cause an erroneous estimation of the static gain. The setpoint shifting can be caused by the feedback nature of a relay experiment, process nonlinearities, and/or disturbances. If one is unsure about the static gain, a step-response test can be made. It is worth noting that disturbances also induce asymmetry of the ideal on/off relay experiment. In such a case, various methods of restoring symmetry can be applied 50 . As a DF generally represents a linear approximation of a nonlinear element, it is by nature impossible to estimate the critical (or another frequency) point exactly.
More precisely, neither the found frequency ω osc nor the loci G m (jω osc ) = −1/N(·) meet the particular values of the actual (measured) process frequency response. This implies that even if the solution of (4) is perfect (see, e.g., HETDM V-a-2 in Table 10), the model does not provide sufficient results from the identification point of view. Besides, even if one or more points of the Nyquist curve are estimated well, the remaining course of the plot may vary from the desired loci significantly (see, e.g., HETDM I-a-3 in Fig. 18). As can be seen from Tables 7, 9, and 11, relatively high ratios |b 0 /b 0τ | , |a 0 /a 0ϑ | , |a 0ϑ /a 0 | , (|b 0 | + |b 0τ |)/(b 0 + b 0τ ) often occur. This unpleasant feature yields erroneous steady-state computation or numerical instability in simulations (i.e., when solving differential equations) due to the digital representation of values in the computer. Therefore, only some of the eventual models can be used for control system design and its verification. The displayed step responses indicate that the initial input/output delay estimation τ = τ s = 136.7 is quite good. It can be observed that the corresponding model families I, II, and III (see Tables 2, 3, 8, and 10) provide slightly better IAE values and a significantly lower overall RMS ( ω fin = 0.1 ) compared to models with different values of τ . On the contrary, the better the RMS value for low frequencies ( ω fin = 1.7 × 10 −2 ) is, the lower the obtained ITAE value is, which proves the importance of a good low-frequency Nyquist plot estimation for the overall time-domain model response. The eventual SFOTDMs identified using the relay-feedback experiment were proved to be sufficient for controller design 12 . By matching the SFOTDM dominant pole loci with those of the HETDMs and calculating the remaining model parameters, performance measures very close to those of the SFOTDMs have been obtained in this study.
This implies that the eventual HETDMs based on the data from the single relay test (i.e., without artificial delays) can also be used for control tasks. However, these models seem to be insufficient from the identification point of view. Fortunately, the use of the ATV+ experiment has brought about much improved models. As many diverse results have been computed with more or less the same cost function values, the identification problem for HETDMs seems to be a multimodal task. Therefore, none of the eventual models (see Table 11) approaches the true parameter values (46) given by the physical analysis of the thermal process. The relay-based experiment can be improved in several ways. For instance, one has to be more careful when setting k sat of the saturated relay (see Fig. 4). The ideal setting should result in sinusoidal-like sustained oscillations of u(t) . However, Figs. 9 and 16 indicate that k sat was set too high. On the other hand, one has to be aware of the necessary existence of sustained oscillations. Another way to enhance the coefficient value estimation is to capture multiple points of the Nyquist plot, which may yield a better matching of process and model curves over a wide frequency range [47][48][49] .

Conclusions
This study has examined whether it is reasonable to identify the parameters of a complex model of a thermal circuit system with internal delays by parameter identification of a simpler delayed model followed by matching of the models' poles. As the identification tool, the standard on/off relay with biased and unbiased feedback tests and the relay with saturation have been used. The latter relay was expected to yield a more accurate estimation of points on the frequency curve corresponding to the sustained-oscillation data.
Once the simple model is found under a single feedback experiment, its dominant pole loci (of an infinite spectrum) are matched to those of the complex model, giving rise to the characteristic quasi-polynomial coefficients. A simple graphical method has been derived analytically to find these loci. The Levenberg-Marquardt method has been applied to solve the pole assignment task. Surprisingly, although only a few poles have been prescribed, some other uncontrolled poles have also been matched. Based on the single-test data, the remaining model parameters have been estimated by the solution of a nonlinear optimization problem (using the Nelder-Mead vs. the Levenberg-Marquardt methods). It has been shown that both models have similar time- and frequency-domain performances. While the eventual models may be sufficient from the control point of view, they fail regarding the accuracy of the identified parameters. On the other hand, the proposed procedure enables the estimation of multiple parameters under a single relay test, which is its main benefit. However, we have also performed the ATV+ test with artificial delays to get multiple relay-feedback data, which has resulted in much better eventual models. In the Discussion section, we have touched on some issues that have to be considered in practice and proposed possible further improvements to the proposed concept. Besides, one may apply advanced and more sophisticated optimization approaches to solve the tasks raised in this study. For instance, metaheuristic methods can be benchmarked in the future 98 . In addition, real-life experiments will be made to prove the concept.

Data availability
Data are available from L.P. upon request.
Surface radiation dose comparison of a dedicated extremity cone beam computed tomography (CBCT) device and a multidetector computed tomography (MDCT) machine in pediatric ankle and wrist phantoms

Objectives: To evaluate and compare surface doses of a cone beam computed tomography (CBCT) and a multidetector computed tomography (MDCT) device in pediatric ankle and wrist phantoms.
Methods: Thermoluminescent dosimeters (TLD) were used to measure and compare surface doses between CBCT and MDCT in a left ankle and a right wrist pediatric phantom. In both modalities, adapted pediatric dose protocols were utilized to achieve realistic imaging conditions. All measurements were repeated three times to prove test-retest reliability. Additionally, objective and subjective image quality parameters were assessed.
Results: Average surface doses were 3.8 ±2.1 mGy for the ankle and 2.2 ±1.3 mGy for the wrist in CBCT. The corresponding surface doses in optimized MDCT were 4.5 ±1.3 mGy for the ankle and 3.4 ±0.7 mGy for the wrist. Overall, the mean surface dose was significantly lower in CBCT (3.0 ±1.9 mGy vs. 3.9 ±1.2 mGy, p<0.001). Subjectively rated general image quality was not significantly different between the study protocols (p = 0.421), whereas objectively measured image quality parameters were in favor of CBCT (p<0.001).
Conclusions: Adapted extremity CBCT imaging protocols have the potential to fall below optimized pediatric ankle and wrist MDCT doses at comparable image qualities. These possible dose savings warrant further development and research in pediatric extremity CBCT applications.

Background
Extremity cone beam computed tomography (CBCT) represents an imaging modality that produces cross-sectional studies from volumetric acquisitions. It employs an X-ray tube and a flat-panel detector rotating opposite to each other in a gantry [1][2][3].
The main difference from modern multidetector computed tomography (MDCT) scanners operated in volumetric mode is the use of a flat-panel detector and the absence of pre-detector collimation [2,3]. The resulting simplicity of the CBCT construction facilitates small and mobile machines [4] and weight-bearing applications [5][6][7][8][9][10][11]. CBCT as an imaging method is not a new concept, and it is widely used in dental imaging, for instance [12][13][14][15][16][17][18]. Previous studies on extremity CBCT indicated a comparable diagnostic value to MDCT and characterized it as a possible alternative in adult orthopedic imaging [19][20][21][22]. Recently, initial experiences with pediatric CBCT applications were published as well [23]. Extraordinary dose-saving potentials in adults have been described in the knee and ankle region [24,25]. These low-dose capabilities have recently drawn the attention of pediatric radiology research to the new modality. Pugmire et al. published lower extremity CBCT doses in children compared to MDCT [23]. Conversely, Lang et al. and Neubauer et al. did not substantiate these findings and reported comparable doses for CBCT and MDCT examinations when taking image quality into account [20,22]. As a consequence, actual CBCT dose savings compared to MDCT remain a legitimate subject for debate and research. The purpose of the current study was to compare surface radiation doses of an established and optimized MDCT and a novel extremity CBCT scanner, both depicted in Fig 1a, in pediatric anthropomorphic ankle and wrist phantoms. Adapted pediatric imaging protocols on both machines were used to achieve realistic results at our dedicated pediatric radiology division. Surface doses were assessed by employing thermoluminescence dosimeters (TLDs) [26].

Methods
Dosimetric studies were conducted on a left lower leg and a right forearm of a pediatric whole body phantom PBU-70 (Kyoto Kagaku Co. Ltd, Kyoto, Japan) [27].
The anthropomorphic phantoms were modelled after a 4-year-old child of 105 cm height and 20 kg weight. The extremities used embedded cortical and cancellous bone in soft-tissue-equivalent material (Fig 1b and 1c). CBCT dosimetry was performed on a Planmed Verity 1 scanner (Planmed Oy, Helsinki, Finland). Aided by laser position markers, the phantoms were centered in the middle of the gantry. Both extremities were positioned proximo-distally with the intra-articular spaces of either the talocrural or the radioulnar joint at the center (compare Fig 1b and 1c). The field of view (FOV) had a diameter of 16 cm and an extension of 12 cm proximo-distally. The CBCT tube rotation angle was 210˚. In MDCT, one FOV was used for the ankle, and a "double small" FOV (diameter of 9.28 cm) for the wrist. All MDCT volumetric acquisitions were completed in a single tube rotation of 360˚ without table incrementation (no pitch). Three study protocols were measured both in the left ankle and the right wrist phantom:
1. CBCT, study protocol
2. MDCT (routine), protocol used in routine imaging at the authors' institution
3. MDCT (CTDI equivalent), protocol adapted to match the CBCT's CT dose index volume (CTDIvol) values
Each of these three study protocols consisted of three exposure levels:
• "low", suitable for pre-schoolers younger than 6 years
• "medium", suitable for schoolers aged 6 to 12 years
• "high", suitable for teenagers older than 12 years
In CBCT, the "high" protocol corresponded to the manufacturer-lowered pre-set for children. Prior to the dosimetric analyses, the study authors consensually agreed on the two other lowered CBCT exposure settings, "medium" and "low", with the aim of a barely sufficient image quality to securely make a diagnosis. The settings were based on initial experiences with phantoms, animal cadavers, and patients. The first MDCT protocol set [MDCT (routine)] consisted of exposure settings actually used in clinical routine at the authors' institution.
The second set [MDCT (CTDI equivalent)] was adapted to match the CTDIvol (16 cm phantom) values of the respective CBCT protocol, taking the underlying diverging CTDIvol phantoms in both machines into account. This exposure set was meant to show dose conformity between the modalities. All described protocols and exposure settings are summarized in Table 1. Lithium fluoride (LiF) thermoluminescence dosimeters TLD-100™ square rods measuring 1×6 mm (Thermo Fisher Scientific Inc., Waltham, Massachusetts, USA) were used. TLDs were calibrated to the X-ray quality of both devices (CBCT and MDCT) individually. Dose results are based on calibration and correction factors determined at the Competence Center of Medical Physics and Radiation Protection, University Hospital of Graz, Austria. TLD glow-curves were read out on a Harshaw TLD Model 5500 reader with a planchet heating system and WinREMS readout software (Thermo Fisher Scientific Inc., Waltham, Massachusetts, USA). Surface doses were measured in 6 circular positions around the examined joints. For each measurement, a pair of TLDs was attached in the same way at every 1, 3, 5, 7, 9, and 11 o'clock position, seen in the direction of the extremity as shown in Fig 2. All dosimeters were consecutively irradiated 10 times per study protocol in order to ensure that a sufficient amount of radiation had been applied. Additionally, all protocols were repeated 3 times to prove the measurement test-retest reliability. After exposure, the TLDs were read out by a medical physicist with longstanding experience. A total of 322 (of 324) TLD readouts were successful, while 2 failed due to material fatigue. After the respective image acquisitions, axial slices with thicknesses of 1.4 millimeters were reconstructed relative to the examined body part. The reconstruction kernel was "Standard" without iterative reconstruction in CBCT, and FC30 "Bone sharp" plus "AIDR" iterative reconstruction in MDCT.
FIJI 1.49v [20], an ImageJ distribution (open source image processing software, http://rsbweb.nih.gov/ij/), was used to measure mean pixel values and noise of air, cancellous bone, and soft tissue. Ellipsoid regions of interest (ROI) were placed in matching axial slices and positions, and the resulting Hounsfield units (HU) were assessed. These measurements were repeated in 10 different matching image positions across all studies. Maximum and mean HU values of cortical bone and the respective standard deviations were also read out in all series. This information was used to calculate a normalization factor (NF = maximum cortical bone intensity of single study / maximum cortical bone intensity of all studies) applied to contrast-to-noise ratios (CNR), with the aim of mitigating device-specific effects of differences in HU display [28,29] and of various kVp settings [30,31]. NF was 1.0 for the study containing the highest maximum cortical bone pixel value, and lower for all other exposure levels (compare S2 Dataset, listing all objective and subjective image measurements). Combined mean image noise (of air, cancellous bone, and soft tissue), normalized CNR [(mean cortical bone − mean air) / SD air × NF], and signal-to-noise ratios (SNR = mean [material] / SD [material]) were calculated for every CBCT and MDCT protocol as parameters of objective image quality. Subjective image quality was rated consensually on a five-grade Likert scale (1 = excellent, 2 = good, 3 = average, 4 = fair, 5 = poor) by three radiologists with 4, 6, and 28 years of experience in musculoskeletal CT. The assessed parameters included overall quality, contrast, sharpness, noise, beam hardening, aliasing, and ring artifacts. The collected data was imported and analyzed in SPSS Statistics Version 21 software (IBM Corp., Armonk, NY). Descriptive statistics were used to explore the measured values. Mean values were compared with independent-samples t-tests.
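The normalization factor and the normalized CNR/SNR defined above can be computed as in the following sketch; the formulas follow the text, while the function names and the sample-standard-deviation choice are our assumptions.

```python
from statistics import mean, stdev

def snr(roi):
    """Signal-to-noise ratio of one ROI: mean pixel value / standard deviation."""
    return mean(roi) / stdev(roi)

def normalized_cnr(bone_roi, air_roi, max_cortical_study, max_cortical_all):
    """Normalized contrast-to-noise ratio as described in the text:
    CNR = (mean cortical bone - mean air) / SD air, scaled by the
    normalization factor NF = max cortical intensity of this study /
    max cortical intensity over all studies."""
    nf = max_cortical_study / max_cortical_all
    return (mean(bone_roi) - mean(air_roi)) / stdev(air_roi) * nf
```

By construction, NF equals 1.0 for the study with the highest maximum cortical bone value and scales the CNR of all other exposure levels down, as stated above.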
Pearson correlations were used to assess relations between dose and image quality parameters. Intraclass correlation [two-way mixed average measures, ICC(3,k)] coefficients and a Bland-Altman plot were calculated to prove dose measurements' test-retest reliability. P values less than 0.05 were assumed to be statistically significant. An ethics committee approval was not needed for this phantom study. Results In CBCT the mean surface dose was 3.0 ±1.9 mGy, averaged over low, medium, and high ankle and wrist protocols. Significantly higher surface doses were measured for MDCT (routine) with 3.9 ±1.1 mGy (p<0.001). As expected, differences between the CBCT and the CTDI-matched MDCT (CTDI equivalent) protocol were not statistically significant (mean 3.0 ±1.9 mGy vs. 3.0 ±1.4 mGy, p = 0.903). Significant linear correlations between surface doses and both CTDIvol (r = 0.957, p<0.001) and DLP (r = 0.950, p<0.001) were found. Surface doses in CBCT were significantly lower compared to MDCT (routine) in the majority of the exposure levels apart from "high" in general (p = 0.633), "high" in the ankle phantom (p = 0.131), and overall in the ankle phantom (p = 0.053). Respective findings are summarized in Table 2. In CBCT the posterior TLDs at 5 and 7 o'clock positions showed significantly lower surface doses than the anterior positions at 1 and 11 o'clock (mean 2.5 ±1.5 mGy vs. mean 3.5 ±2.0 mGy, p = 0.014), accounted for by the incomplete CBCT tube rotation of 210 degrees. This is graphically depicted in Fig 3c. In MDCT, no significant differences between the surface doses at the different dosimeter positions were detected. Mean values of all measured surface doses are listed in Table 3; respective raw data is available in S1 Dataset. The objectively measured image parameters were significantly in favor of CBCT (all p<0.001), even though the noise reduction algorithm of iterative reconstruction had been enabled in MDCT: mean noise was 33.3 ±17.8. Subjective overall image quality decreased with lower exposure settings (R = -0.594, p = 0.009), and showed significant negative linear correlations with surface dose in both modalities (Table 4). Excellent test-retest reliability was shown for the TLD surface dose measurements by an overall ICC(3,k) of 0.953 (p<0.001). Split up between the modalities, the ICC was 0.946 (p<0.001) in CBCT and 0.922 (p<0.001) in MDCT. The measurement repeatability is depicted in Fig 6. Discussion In this study surface doses applied by an extremity CBCT and a MDCT scanner in pediatric ankle and wrist phantoms were assessed. Adapted pediatric imaging protocols with age-related exposure settings were used to compare both scanners as realistically as possible. To the best of our knowledge, this is the first study assessing and systematically comparing extremity CBCT doses in pediatric ankle and wrist phantoms. In adults, exceptional dose-saving potentials of CBCT compared to MDCT have been published for the knee and ankle region by Koivisto et al. [24,25]. In contrast, recent studies by Lang et al. and Neubauer et al. did not confirm these findings and the authors reported comparable doses of extremity CBCT and MDCT examinations, when taking image quality into account [20,22]. The currently available clinical pediatric extremity CBCT studies report doses that usually are lower than MDCT [23,32]. As a result, dose optimization and possible dose savings still remain subject of debate and warrant future studies, especially in the radio-susceptible pediatric population. Pediatric extremity CBCT versus MDCT surface doses In our study, we explicitly did not try to assess effective radiation doses, because different factors and unknown variables influence their correct calculation in pediatric extremities [33]. Mainly, the distribution of radiosensitive red bone marrow and relatively radio-insensitive yellow bone marrow are known to vary at different ages and to decrease until adulthood [34][35][36].
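The dose-image-quality relations reported in the Results are plain Pearson coefficients; a minimal pure-Python sketch, using made-up paired values (the real CTDIvol/surface-dose pairs are in S1 Dataset):

```python
# Minimal Pearson correlation sketch of the kind used above to relate
# CTDIvol to measured surface dose. The paired values are hypothetical.
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ctdivol = [1.2, 2.5, 4.1, 6.0, 8.3]    # hypothetical mGy
surface = [0.9, 2.1, 3.6, 5.4, 7.8]    # hypothetical mGy
print(round(pearson_r(ctdivol, surface), 3))
```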
Additionally, inter-individual variety of marrow distribution and conversion may occur [37], rendering it impossible to predict the actual individual patient risk of an examination expressed by a specific effective dose value. Moreover, effective dose is believed not to be able to serve as a valid parameter of patient risk per se [38][39][40][41], and stochastic radiation induced damage may be underestimated [42,43]. Effective dose was not intended to be applied to patients or children [38,44], and appropriate pediatric tissue weighting factors remain a subject of ongoing debate and research [45][46][47]. Given these shortcomings of effective dose in children, we have chosen to compare both modalities based on surface doses only. In the current study image noise, normalized CNR, and SNR were examined as three basic, objectively measurable image quality parameters. In both CBCT and MDCT altering dose settings directly influenced the resulting image quality, as expected. Due to the missing pre-detector collimation in CBCT, scatter radiation additionally decreased the image quality through image artifacts [20,22,48,49]. Therefore, scatter radiation is normally corrected by mathematical algorithms [50,51], which may not equally correct for scatter induced artifacts in every situation. On the other hand, iterative reconstruction algorithms of the MDCT decrease image noise and alter the image impression [52,53], which was not available or applied in CBCT. Due to the reasons given above, a direct comparison between different CBCT devices or CBCT and MDCT machines is further complicated regarding objective, and especially subjective image quality. We found quantitative image quality in favor of CBCT at comparable dose; however, in line with previous studies [20,22], we detected more image-degrading beam hardening artifacts in CBCT. In contrast to previous related studies by Koivisto et al. [24,25] we have chosen TLDs over MOSFET dosimeters.
The main reason was the better availability of, and longer experience with TLDs in our institution. Prior studies have not shown significant differences between both methods of dose measurements [54][55][56]. The following study limitations need to be mentioned: The most important limitation is the fact that phantoms were used instead of pediatric patients or cadaveric specimens, which we had favored due to reasons of availability, longevity and consistency. These phantoms of the ankle and wrist may not be completely anthropomorphic with different material compositions and X-ray absorption. Another important limitation is the choice of appropriate and matchable exposure settings. The general differences between the modalities complicate their comparison. The exposure parameters in the analyzed MDCT had been optimized and validated in clinical routine for many years, and this amount of dose titration was certainly not possible in the CBCT machine. We therefore decided to run two sets of protocols in MDCT, one actually used in clinical routine, and one that provided matching CTDIvol values with the CBCT scanner. Conclusions Compared to an optimized MDCT, CBCT surface doses were significantly lower in a majority of the examined study protocols with superior objective, but only equivalent subjective image quality. Even though the previously reported extraordinary CBCT dose savings could not be substantiated in the examined pediatric ankle and wrist phantoms, the study results raise hope that CBCT could be a valuable low-dose alternative in pediatric extremity trauma imaging.
3,640
2017-06-01T00:00:00.000
[ "Engineering", "Medicine", "Physics" ]
Mechanical and Impact Properties of Engineered Cementitious Composites Reinforced with PP Fibers at Elevated Temperatures : The repeated impact performance of engineered cementitious composites (ECCs) is not well explored yet, especially after exposure to severe conditions, such as accidental fires. An experimental study was conducted to evaluate the degradation of strength and repeated impact capacity of ECCs reinforced with Polypropylene fibers after high temperature exposure. Compressive strength and flexural strength were tested using cube and beam specimens, while disk specimens were used to conduct repeated impact tests according to the ACI 544-2R procedure. Reference specimens were tested at room temperature, while three other groups were tested after heating to 200 °C, 400 °C and 600 °C and naturally cooled to room temperature. The test results indicated that the reference ECC specimens exhibited a much higher failure impact resistance compared to normal concrete specimens, which was associated with a ductile failure showing a central surface fracture zone and fine surface multi-cracking under repeated impacts. This behavior was also recorded for specimens subjected to 200 °C, while the exposure to 400 °C and 600 °C significantly deteriorated the impact resistance and ductility of ECCs. The recorded failure impact numbers decreased from 259 before heating to 257, 24 and 10 after exposure to 200 °C, 400 °C and 600 °C, respectively. However, after exposure to all temperature levels, the failure impact records of ECCs kept at least four times higher than their corresponding normal concrete ones. Introduction Regardless of the function and type of occupation of any structural facility, it is still probable to be subjected to unfavorable extreme or accidental loads. Most of the modern reinforced concrete structures are designed to withstand the usual design gravity loads in addition to lateral loads, such as wind and seismic loads.
However, considering the accidental loading cases in design is not a typical procedure required by building design codes because this action would inflate the construction cost. Among the most probable types of accidental loads are fires and impact loads. The rapid increase of temperature due to the combustion of furniture, nonstructural materials and electrical wiring can noticeably degrade the structural capacity of slabs, beams and columns. On the other hand, sudden impact loads can cause serious concentrated damage that may affect the integrity of the structure. Although there are great advantages in fire resisting systems and materials in the construction industry, fires keep occurring every day. Large numbers of fire accidents are reported every year [1], where approximately half a million accidental fires were reported between 2013 and 2014 in the USA, while more than 150,000 fire accidents were reported in the UK during the same period. From these fires, 40% were recognized as structural fires [1,2]. Between 1993 and 2016, approximately 90 million fire accidents were recorded. Works on this material [36,37,41] showed that using intermediate fibrous meshes can improve the impact resistance at cracking and failure stages. However, the most influential contribution to impact strength development was attributed to the steel fibers. Compared to conventional concrete of similar strength and fiber content, Engineered Cementitious Composites (ECCs) are a type of high-performance SCC concrete that possesses extraordinary ductility with strain hardening and multiple cracking under tensile and flexural stresses. ECCs were first introduced by Li in 1993 [44] and used in several applications [45]. Since that time, numerous studies have been conducted to introduce different ECC mixtures with different fiber types and contents. Plenty of research is available in the literature on the different mechanical properties of ECCs.
However, research on ECC repeated impact behavior is rare. The performance of ECCs under repeated impact was experimentally investigated by Ismail et al. [46] using the ACI 544-2R technique. Different ECC mixtures were introduced using fixed contents of binder, water, sand and fiber. The results indicated that using 15% to 20% metakaolin with fly ash significantly enhanced the impact performance. Similarly, some studies that evaluate the performance and residual mechanical properties of different ECC mixtures after fire exposure are available in the literature [47][48][49][50]. It is obvious from the introduced literature that very few experimental works are available on the repeated impact strength of ECCs. Similarly, there is a serious gap of knowledge about the residual impact strength of fibrous concrete after exposure to fire temperatures. To the best of the authors' knowledge, no previous research was conducted to study the residual repeated impact strength of ECCs after high temperature exposure. To fill this gap of knowledge, an experimental program was conducted in this research to investigate the cracking and failure repeated impact performances and impact ductility of PP-based ECCs after exposure to high temperatures reaching 600 °C. This type of research is required because both accidental fire and impact loading are expected along the lifespan of structures. Hence, the research outputs can be utilized to evaluate the residual material and structural response of structural members made of ECCs under such accidental cases. Mixtures and Materials The aim of this study is to evaluate the residual repeated impact performance of ECCs, which can be considered a type of new concrete that includes no aggregate particles and a high content of fine cementitious and filler materials, after exposure to elevated temperatures.
The M45 is a typical ECC mixture introduced by leading researchers, which was the base and most widely used mixture with proven characteristics [44,45]. This mixture was used in this study but with PP fiber instead of the typical and much more expensive polyvinyl alcohol fiber (PVA). On the other hand, a normal strength conventional concrete mixture (NC) with an approximately comparable compressive strength was used for comparison purposes. The mix design proportions of both mixtures are detailed in Table 1. A single type of Portland cement (Type 42.5) was used for both mixtures, while fly ash was used as a second cementitious material in the ECC mixture. The chemical composition and physical properties for both cement and fly ash are listed in Table 2. As mentioned above, the ECC mixture included no ordinary sand or gravel, where the filler of the mixture was composed of a single type of very fine silica sand with a grain size of 80 to 250 micrometers and a bulk density of 1500 kg/m3. On the other hand, local sand and crushed gravel from the central region of Iraq were used as fine and coarse aggregates for the NC mixture. The grading of the sand and gravel are shown in Table 3, while the maximum size of the gravel particles was 10 mm. For the ECC mixture, a superplasticizer (SP) type ViscoCrete 5930-L from Sika ® was used to assure the required workability due to the large amount of fine materials, while 2% by volume of PP fiber was used with the properties shown in Table 4 [52]. All of the disk, cube and beam specimens were cured under the same standard conditions in temperature-controlled water tanks for 28 days. After the curing period, the specimens were dried in the laboratory environment for 24 h. Previous researchers and trial tests conducted in this study showed that the heating of specimens without initial drying may lead to the explosive failure of some specimens at high temperatures.
Therefore, all specimens were pre-dried using an electrical oven at a temperature of approximately 105 °C for 24 h. Afterwards, the specimens were heated using the electrical furnace shown in Figure 1a at a constant rate of approximately 4 °C/min to three levels of high temperatures of 200 °C, 400 °C and 600 °C. When the specified temperature level was reached, the temperature was kept constant for 60 min to assure the thermal saturation at this temperature. Finally, the furnace door was opened and the specimens were left to cool slowly at the laboratory temperature until testing time. The heating regime of the three temperature levels is described in Figure 1b. In addition to the three groups of heated specimens, a fourth group was tested at room temperature without heating as a reference group. Repeated Drop-Weight Impact Test The impact response of materials and structures can be experimentally evaluated using different types of tests, among which is the drop-weight one. ACI 544-2R [28] addressed two types of drop-weight tests.
The first is the instrumented drop-weight test which is the most commonly used technique to evaluate the impact response of structural members. This test is mostly used for reinforced beam and slab elements and requires expensive sensor instrumentation and data acquisition equipment. On the other hand, the alternative drop-weight impact test is a very simple one that is conducted on small size specimens and requires no instrumentation or any sophisticated measurement systems. This test requires that a drop weight of 4.54 kg is dropped repeatedly on the test specimen from a height of 457 mm until a surface crack becomes visible, then the repeated impacts are resumed until the fracture failure of the specimen. The numbers of the impacts at which the first crack and failure occur are recorded as the cracking impact number and failure impact number. The test is generally considered as a qualitative evaluation technique, which compares the impact resistance of different concrete mixtures based on their ability to absorb higher or lower cracking and failure impact numbers. The standard test specimen is a cylindrical disk with a diameter of approximately 150 mm and a thickness of approximately 64 mm. The standard test is operated manually by hand-lifting the drop weight to the specified drop height and releasing it to be freely dropped by gravity on a steel ball, which rests on the center of the specimen's top surface. The steel ball is used as a load distribution point and is held in place using a special framing system that also holds the concrete disk specimen, as illustrated in Figure 2a. However, it was found in previous works [27,32] that the manual operation requires significant effort and is time consuming, especially because at least 6 replication specimens are required to assess the test records due to the high dispersion of this test's results [27].
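From the standard parameters just described (4.54 kg drop weight, 457 mm drop height), the nominal potential energy delivered per blow and the cumulative energy absorbed over repeated impacts follow directly; a short sketch, with the impact count as a placeholder:

```python
# Back-of-the-envelope sketch of the ACI 544-2R drop-weight parameters
# described above. The impact count passed in is a placeholder, not a
# measured cracking or failure number.
G = 9.81          # gravitational acceleration, m/s^2
MASS = 4.54       # standard drop weight, kg
HEIGHT = 0.457    # standard drop height, m (457 mm)

def energy_per_blow():
    """Potential energy of one drop: m * g * h, in joules."""
    return MASS * G * HEIGHT

def cumulative_energy(n_impacts):
    """Total nominal energy delivered after n repeated impacts."""
    return n_impacts * energy_per_blow()

print(round(energy_per_blow(), 2))    # ~20.35 J per blow
```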
Therefore, an automatic repeated loading machine was manufactured to apply the standard dropping weight from the standard dropping height with a better accuracy and much less effort. The manufactured machine was provided with a high accuracy digital camera to observe the surface cracking and failure in addition to a special isolation cabin to reduce the test noise. The manufactured repeated drop weight impact testing machine is shown in Figure 2b. Compressive Strength The residual compressive strength-temperature relationship of the ECC tested cubes is shown in Figure 3, while Figure 4 shows that of the NC. It is clear in Figure 3 that the ECC strength reduced after exposure to 200 °C by approximately 22% compared to the reference unheated specimens, where the reference strength was 57.5 MPa, while it was 44.8 MPa after heating to 200 °C. A further decrease was recorded when the heating temperature was increased to 400 °C. However, this additional decrease was small compared to the initial one, where the residual strength percentages after exposure to 200 °C and 400 °C were approximately 78% and 70%, respectively. When the specimens were heated to 600 °C, a significant strength degradation was noticed with a residual compressive strength of 29.5 MPa, which means that the strength loss was approximately 49% compared to the strength of the unheated specimens. On the other hand, the percentage strength reduction of NC was less than that of the ECC after exposure to 200 °C and 400 °C. The residual compressive strength of the NC cubes after exposure to 200 °C and 400 °C was approximately 81% at both temperatures compared to the reference cubes as shown in Figure 4. However, the percentage residual compressive strength of the NC at 600 °C was approximately 50%, which was almost equal to that of the ECC (51.4%). The denser microstructure of ECCs compared to NC is considered as the main cause of the further strength reduction between 200 °C and 400 °C. ECCs comprise a much larger amount of very fine binder, fine silica sand, no coarse aggregate and lower water/cementitious material content, which in turn lowers the porosity of the ECC compared to the NC. The evaporation of the free pore water below 200 °C induces a pore pressure inside the microstructure. The dissipation of this pressure in the NC specimens due to the higher porosity relieves the internal thermal stresses, while these stresses are higher in the ECC due to the denser microstructure. As a result, the ECC suffered higher compressive strength losses at 200 °C and 400 °C. Previous researchers [53] reported that the total volume of the 0.1 micrometer and larger pores in the ECC reduced after exposure to 400 °C, which is attributed to the pozzolanic reaction of the unhydrated fly ash and other cementitious materials. Such a reaction would induce unfavorable volume changes due to the production of more C-S-H gel, which results in microstructural cracking leading to further strength degradation. The dehydration of hydrated products after exposure to temperatures higher than 400 °C is the main cause of the steep strength reduction at 600 °C, where this process leads to the degradation of the microstructure due to the increase of pore size and number and the further volume changes' micro-cracking. Sahmaran et al. [47] reported a significant increase in the volume and size of the pores of the ECC after exposure to 600 °C, where the porosity increased by 9% after exposure to 600 °C, which is large enough compared to 5% after exposure to 400 °C, while the pore size increased by at least 300% after 600 °C exposure. Flexural Strength As shown in Figure 5, the flexural strength of the ECC followed a continuous decrease behavior with temperature up to 600 °C. The reference flexural strength of the ECC at room temperature was 6.94 MPa, while it reduced to 5.75 MPa, 4.32 MPa and 2.31 MPa after exposure to 200 °C, 400 °C and 600 °C, respectively. This means that the strength respective reductions at these temperatures were approximately 17%, 38% and 67%.
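The residual-strength percentages quoted in the compressive and flexural results can be reproduced from the reported MPa values; a small arithmetic sketch, with all inputs taken from the text:

```python
# Residual-strength arithmetic from the values reported above (MPa).
def residual_percent(after_heating, reference):
    """Residual strength as a percentage of the unheated reference."""
    return 100.0 * after_heating / reference

# ECC compressive strength: reference 57.5 MPa; 44.8 MPa after 200 C and
# 29.5 MPa after 600 C (the 400 C value is reported only as ~70%).
print(round(residual_percent(44.8, 57.5)))    # ~78% residual after 200 C
print(round(residual_percent(29.5, 57.5)))    # ~51% residual after 600 C

# ECC flexural strength: reference 6.94 MPa; 5.75, 4.32 and 2.31 MPa after
# 200, 400 and 600 C, i.e. losses of ~17%, ~38% and ~67%.
for mpa in (5.75, 4.32, 2.31):
    print(round(100.0 - residual_percent(mpa, 6.94)))
```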
Similarly, the NC showed a continuous steep decrease in flexural strength with temperature increase as shown in Figure 6. The residual flexural strength records of the NC after heating to 200 • C, 400 • C and 600 • C were 2.87 MPa, 2.16 MPa and 0.32 MPa, while the reference unheated specimens recorded a flexural strength of 3.70 MPa. Hence, the percentage reductions were approximately 22%, 42% and 91% at 200 • C, 400 • C and 600 • C, respectively. NC might be attributed to the finer mixture constituents and the absence of coarse aggregate in the ECC, which minimized the effect of bond degradation. Wang et al. [54] showed that the residual flexural strength of PVA-based ECC after exposure to 400 °C was approximately 58% of the unheated strength, which is quite comparable to the obtained result in this study, while Yu et al. [55] reported that PVA-based ECC exhibited flexural strength reductions of more than 50% and more than 40% after exposure to temperatures of 400 and 600 °C, respectively. Figure 7 shows the appearance of the external surfaces of a reference impact disk specimen and others heated to 200, 400 and 600 °C before testing. No significant changes in the specimens' appearance were noticed after high temperature exposure. However, it was observed that the gray color became lighter after 200 °C and small yellow areas were noticed on the surface of specimens exposed to 600 °C. This slight color change might be due to the decomposition of C-S-H gel particles [56][57][58]. It should also be noticed that PP fibers cannot sustain high temperatures where its melting point is less than 200 °C. As shown in Figure 8a, the presence of PP fibers had a significant impact in bridging the crack's opposite sides, resulting in a more gradual and ductile failure of the reference unheated specimens. On the other hand, the complete melting of fibers after exposure to 400 °C and higher eliminated this effect and created a more porous media. 
The channels left after fiber melting would connect and produce continuous porous networks, which have a positive effect by relieving the internal stresses due to the vapor pressure dissipation. On the other hand, these channels may have a negative effect by making the media more porous and hence more brittle under loads. Figure 8b shows that after exposure to 600 °C, the vaporization of PP fibers changed the internal color of the specimen to a dark gray and left a very porous structure behind. The continuous decrease in the flexural strength after high temperature exposure is generally attributed to the volumetric changes in the cement matrix due to vapor movements beyond 100 • C and the bond loss between binder and filler after 400 • C due to their different thermal properties. In addition, most of the degradation at higher temperatures is attributed to the chemical reactions after 400 • C (dehydration of C-S-H) and the increased porosity as discussed in the previous section. As the flexural strength depends on the capability of concrete to withstand tensile stresses, the initial flexural strength was apparently higher for the ECC owing to the crack bridging activity of PP fibers, in addition to the higher content of cementitious materials. However, this bridging activity diminished after exposure to temperatures higher than 200 • C due to the melting of PP fibers. The better performance of the ECC at high temperatures compared to the NC might be attributed to the finer mixture constituents and the absence of coarse aggregate in the ECC, which minimized the effect of bond degradation. Wang et al. [54] showed that the residual flexural strength of PVA-based ECC after exposure to 400 • C was approximately 58% of the unheated strength, which is quite comparable to the obtained result in this study, while Yu et al. 
[55] reported that PVA-based ECC exhibited flexural strength reductions of more than 50% and more than 40% after exposure to temperatures of 400 °C and 600 °C, respectively.

Figure 7 shows the appearance of the external surfaces of a reference impact disk specimen and others heated to 200 °C, 400 °C and 600 °C before testing. No significant changes in the specimens' appearance were noticed after high temperature exposure. However, it was observed that the gray color became lighter after 200 °C and small yellow areas were noticed on the surface of specimens exposed to 600 °C. This slight color change might be due to the decomposition of C-S-H gel particles [56][57][58]. It should also be noticed that PP fibers cannot sustain high temperatures, as their melting point is less than 200 °C. As shown in Figure 8a, the presence of PP fibers had a significant impact in bridging the opposite sides of the cracks, resulting in a more gradual and ductile failure of the reference unheated specimens. On the other hand, the complete melting of the fibers after exposure to 400 °C and higher eliminated this effect and created a more porous medium. The channels left after fiber melting would connect and produce continuous porous networks, which have a positive effect by relieving the internal stresses through vapor pressure dissipation. On the other hand, these channels may have a negative effect by making the medium more porous and hence more brittle under loads. Figure 8b shows that after exposure to 600 °C, the vaporization of PP fibers changed the internal color of the specimen to a dark gray and left a very porous structure behind.

Cracking and Failure Impact Numbers

The recorded cracking numbers (Ncr) of the ECC and NC are shown in Figure 9 at different levels of high temperature, while the results of the failure numbers (Nf) are shown in Figure 10. It is worth mentioning that the ACI 542-2R test is known for the high dispersion of its results; the Coefficient of Variation (COV) of the Ncr records of the ECC was in the range of 42% to 68.8%, while the COV of the recorded Nf results of the ECC specimens was in the range of 30.9% to 61.8%.

Figure 9 shows that the reference unheated cracking number of the NC was higher than that of the ECC, which is attributed to the presence of gravel in the NC that enabled it to absorb a higher initial number of impacts before cracking. However, after high temperature exposure, the NC specimens showed a much weaker response and deteriorated at a much higher rate compared to the corresponding ECC specimens, as shown in Figure 9a,b. The unheated Ncr of the ECC and NC were 43.3 and 55, respectively, noting that each impact number represents the average of six specimen records. On the other hand, the residual ECC cracking numbers were 41.5, 19.5 and 8.8 after exposure to 200 °C, 400 °C and 600 °C, respectively, while those of the NC specimens were 14.2, 3 and 1, respectively. The results reveal a steep drop in the cracking impact numbers of the NC, where the percentage residual Ncr values were only 25.8%, 5.5% and 1.8%, respectively, compared to the reference unheated number, as shown in Figure 9b. On the other hand, the ECC showed an insignificant decrease (less than 5%) after exposure to 200 °C, while the percentage residual Ncr values were 45% and 20.4% after exposure to 400 °C and 600 °C, respectively. The rapid decrease of the Ncr of the NC is attributed to the discussed physical and chemical changes that occur after exposure to high temperatures, especially the dehydration of C-S-H, which deteriorates the cement matrix, in addition to the different thermal movements of cement paste and aggregate.
Consequently, the internal structure becomes more and more brittle as the temperature increases, which leads to the loss of impact energy absorption capacity and hence to rapid cracking. On the other hand, the higher cementitious materials content, the finer matrix and the absence of aggregate reduced these effects and enabled the ECC specimens to continue withstanding more impacts before cracking. It should be noticed that although the melting point of PP fibers is less than 200 °C, a significant amount of these fibers still existed in the specimens heated to 200 °C. These fibers helped maintain a significant impact number before cracking, approximately equal to that of the unheated specimens (95.8%). Aslani et al. [59] reported that PVA fibers did not melt completely after exposure to 300 °C, which is higher than the approximate melting point of PVA (200 °C to 230 °C). ECCs are known for their high ability to withstand plastic deformation after cracking under tensile and flexural loads, which is attributed to their unique microstructure with a high content of binder and fine filler, in addition to the potential of the fibers to withstand high tensile stresses across the cracks. These characteristics enabled the ECC specimens to absorb significantly higher energy than the NC after cracking. The test results of this study showed that this potential is also valid under repeated impact loads. As shown in Figure 10, the failure impact number (Nf) of the unheated ECC specimens jumped to a very high limit compared to its corresponding Ncr, while that of the NC was comparable to its cracking number, which multiplied the difference in Nf between the ECC and NC several times, although the Ncr of the NC was higher than that of the ECC. The Nf of the unheated ECC was 259.3, while that of the NC was only 57.2. This means that the Nf of the NC was approximately equal to its Ncr, with only 2.2 additional impacts, while the ECC sustained 216 more impacts after cracking. After exposure to 200 °C, the NC specimens lost approximately 73% of their initial failure impact performance and retained only 15.2 impacts at failure. In contrast, the ECC specimens kept approximately the same failure resistance as the unheated specimens for the same reasons discussed above. As shown in Figure 10b, the residual Nf of the ECC after exposure to 200 °C was 99% of the corresponding unheated Nf, with 256.7 impacts. As discussed previously, the PP fibers did not melt completely at 200 °C, which means that the fiber bridging activity was still partially effective after cracking. The hydration of the unhydrated products at this temperature might be another reason that enabled the specimens to sustain high impact numbers before cracking. On the other hand, as the temperature increased beyond 200 °C, the microstructure of the ECC deteriorated steeply after the complete melting and vaporization of the PP fibers (around 340 °C [60]) and the decomposition of C-S-H gel, which resulted in a weak microstructure. Therefore, the impact strength deteriorated sharply after exposure to 400 °C and 600 °C. As shown in Figure 10b, the percentage residuals of the Nf after exposure to these temperatures were only 9.2% and 3.8%, respectively.
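The residual percentages and impact margins quoted in this section can be reproduced from the reported averages; a minimal consistency sketch (the "ref" key simply labels the unheated reference values):

```python
# Cross-checks of the reported repeated-impact numbers (all values are
# those quoted in the text; "ref" denotes the unheated reference).
ncr_ecc = {"ref": 43.3, 200: 41.5, 400: 19.5, 600: 8.8}
ncr_nc  = {"ref": 55.0, 200: 14.2, 400: 3.0,  600: 1.0}
nf_ecc  = {"ref": 259.3, 200: 256.7}
nf_nc   = {"ref": 57.2,  200: 15.2}

def residual_pct(series):
    """Percentage of the unheated value retained at each temperature."""
    return {t: 100 * v / series["ref"] for t, v in series.items() if t != "ref"}

print(residual_pct(ncr_ecc))           # ~95.8%, 45.0%, 20.3%
print(residual_pct(ncr_nc))            # ~25.8%, 5.5%, 1.8%
print(nf_ecc["ref"] - ncr_ecc["ref"])  # ~216 extra impacts after cracking (ECC)
print(nf_nc["ref"] - ncr_nc["ref"])    # ~2.2 extra impacts (NC)
print(nf_ecc["ref"] / nf_nc["ref"])    # ~4.5-fold Nf advantage of the ECC
```

These ratios match the 95.8%, 25.8%/5.5%/1.8%, 216-impact and 4.5-fold figures reported above to within rounding.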
After the fracture of the surface layer, the PP fibers kept bridging the internal micro-cracks, where the compressive impacts try to split the cylinder and hence induce internal tensile stresses (see Figure 8a). However, continued impacting could finally break the fibers or their bond with the surrounding media, resulting in progressive crack widening and propagation; hence, the surface cracks become visible. As shown in Figure 11a, the reference specimens exhibited a ductile failure behavior with a central fracture zone and multi-surface cracking. Referring to the impact response of the ECC specimens after exposure to 200 °C, the failure pattern at this temperature was similar to that of the reference unheated specimens, but with a lower number of standing fibers across the mouth of the main crack. It should also be noticed that the other minor cracks were wider at this temperature (Figure 11b) compared to those of the unheated specimens, which discloses the lower ductility and higher brittleness of the heated specimens. As previously disclosed, heating to 400 °C and 600 °C caused serious damage to the microstructure of the ECC and vaporized the reinforcing elements (PP fibers), as evidenced by the brittle and sudden failure of the specimens into two, three or four pieces with wide cracks. This failure was not associated with central fracturing as in the case of the reference and 200 °C specimens, as the thermally weakened structure could not absorb significant concentrated impacts, as shown in Figure 11c,d.
Strength Correlation with Temperature

In some cases, it is required to evaluate the residual strength of a material after exposure to a specific temperature. If sufficient experimental data are not available, extrapolation from other existing data may be considered satisfactory for a quick primary evaluation. Despite the limited number of points for each fit, simplified correlations were introduced, as shown in Figure 12, to describe the relation between temperature and the strength and impact numbers of the PP-based ECC. Figure 12a shows that the relations of both compressive strength and flexural strength with temperature can be represented using linear fits with good determination coefficients (R² = 0.96 and 0.99, respectively). Referring to Figure 12c, it can be said that a multilinear relation would better describe the reduction of compressive strength with temperature. However, a determination coefficient of 0.96 is good enough to accept the simpler linear correlation. The impact numbers showed a weaker linear correlation with temperature than the compressive and flexural strengths. As shown in Figure 12b, the linear relations of Ncr and Nf with temperature underestimate the retained impact numbers at 200 °C, while that of Nf overestimates the experimental failure impact number recorded at 400 °C. The deviations from the experimental records at these temperatures impacted the degree of the linear correlation, especially for Nf, where the R² of the linear correlation was 0.84, the lowest among those obtained. To avoid such a low degree of correlation, nonlinear correlations were tried, and the exponential one was found to give a coefficient of determination of 0.9, which is quite acceptable as an indication of a good correlation. As shown in Figure 12c, the exponential correlations could acceptably estimate the degradation of Ncr and Nf after exposure to the highest temperatures (400 °C and 600 °C).

Conclusions

Compressive, flexural and repeated impact tests were conducted in this study to evaluate the residual strength of PP fiber-based ECCs after exposure to high temperatures up to 600 °C.
Based on the results obtained from the experimental work of this study, the following are the most important conclusions:

1- The compressive strength of the ECC decreased with temperature increase. However, the residual strength at 400 °C was close to that at 200 °C, while exposure to 600 °C led to a significant strength reduction. The percentage residual compressive strengths of the tested ECC cubes after exposure to 200 °C, 400 °C and 600 °C were approximately 78%, 70% and 51%, respectively. The strength deterioration after 400 °C is attributed to the chemical and physical changes within the material microstructure due to the temperature exposure, which include the decomposition of C-S-H gel and the increase of porosity owing to the vaporization of PP fibers. A linear correlation could effectively describe the degradation of compressive strength after high temperature exposure, with an R² of 0.96.

2- The flexural strength of the ECC showed a clearer continuous reduction with temperature than the compressive strength, and higher percentage reductions at 400 °C and 600 °C. Accordingly, its linear correlation with temperature was the most accurate among the conducted tests, with an R² of 0.99. The residual flexural strengths were reduced to approximately 62% and 33% after heating to 400 °C and 600 °C, respectively.

3- The ECC specimens exhibited minor reductions in the cracking number (Ncr) after exposure to 200 °C, with a residual percentage of approximately 96%. The reduction in Ncr was much higher after exposure to the higher temperatures; however, the deterioration of normal concrete (NC) was much faster. ECCs retained percentage residual Ncr values of approximately 45% and 20% after exposure to 400 °C and 600 °C, respectively, while the corresponding percentages of the NC were approximately 5% and 2%. The much higher binder content, finer matrix and absence of aggregate enabled the heated ECC specimens to continue absorbing more impacts until cracking compared to the NC.

4- The failure impact number of the unheated ECC specimens jumped several times higher than the corresponding Ncr, which confirms the ability of the dense and fine microstructure of ECCs, with the help of the crack-bridging PP fibers, to amplify the impact energy absorption capacity at failure. The retained Nf was 259.3, approximately 4.5 times that of the NC despite the higher Ncr of the NC. After exposure to 200 °C, the ECC retained almost the same unheated Nf (99%), while the NC retained only 27% of its unheated failure number. In contrast, both the ECC and NC sharply lost their impact resistance after exposure to 400 °C and 600 °C, with percentage residual Nf values of less than 10% and 4%, respectively.

5- The linear correlation was found suitable to describe the reduction of Ncr with temperature, with a good R² of 0.93. However, such a correlation noticeably underestimated the recorded Nf at 200 °C and overestimated that at 400 °C, which decreased its R² to 0.84. On the other hand, an exponential relation was found to better describe the deterioration of Nf after high temperature exposure, with an R² of 0.9.
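As a rough sanity check on the linear correlation quoted in conclusion 1, an ordinary least-squares fit of the percentage residual compressive strengths against temperature lands close to the reported R² of 0.96. The ambient reference temperature of 23 °C is our assumption, and using percentages instead of absolute strengths does not change R², since R² is invariant to linear rescaling of the response:

```python
# Least-squares sketch of the linear strength-temperature correlation
# (Figure 12a), using the percentage residual compressive strengths
# quoted in the conclusions. The 23 degrees C ambient reference is an
# assumption; the paper fit the measured strengths directly.
temps    = [23, 200, 400, 600]
strength = [100, 78, 70, 51]          # % residual compressive strength

n = len(temps)
mx, my = sum(temps) / n, sum(strength) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(temps, strength))
sxx = sum((x - mx) ** 2 for x in temps)
syy = sum((y - my) ** 2 for y in strength)

slope = sxy / sxx                     # loss rate, % per degree C
r2 = sxy ** 2 / (sxx * syy)           # coefficient of determination
print(round(slope, 3), round(r2, 2))  # -> -0.08 0.97
```

The reconstructed R² (about 0.97) is close to the reported 0.96; the small gap comes from rounding the residual percentages and the assumed ambient temperature.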
Chromothripsis is correlated with reduced cytotoxic immune infiltration and diminished responsiveness to checkpoint blockade immunotherapy

Background: Chromothripsis, which causes massive, clustered genomic rearrangements, is prevalent in cancer and is considered a new paradigm for tumorigenesis and progression. In this study, we investigated the association among chromothripsis, anti-tumor immune responses, and responsiveness to immune checkpoint blockade (ICB). Methods: Quantification of immune cell infiltration and functional enrichment of immune-related signaling pathways were performed in the discovery set (n = 9403) and the validation set (n = 1140) to investigate the association between chromothripsis and anti-tumor immune responses. In the immunotherapy cohort, copy number alteration-based chromothripsis scores (CPSs) were introduced to assess the extent of chromothripsis and to evaluate its association with responsiveness to ICB. Results: In the discovery and validation sets, the ratios of CD8+ T cells to Tregs, TAMs, and MDSCs were significantly lower in tumors with chromothripsis (P = 1.5 × 10⁻¹³, P = 5.4 × 10⁻⁸, and P = 1.2 × 10⁻⁴, respectively, TCGA; P = 1.0 × 10⁻¹³, P = 3.6 × 10⁻¹⁵, and P = 3.3 × 10⁻³, respectively, PCAWG). The relevant pathways underlying the anti-tumor immune effect were significantly enriched in tumors without chromothripsis. Chromothripsis can be used as an independent predictor, and patients with low CPSs experienced longer overall survival (OS) after immunotherapy (HR, 1.90; 95% confidence interval, 1.10-3.28; P = 0.019). Conclusions: Our findings highlight the reduced cytotoxic immune infiltration in tumors with chromothripsis and the enhanced immunosuppression in the tumor microenvironment. Chromothripsis can thus be used as a potential indicator to help identify patients who will respond to ICB, complementing established biomarkers.
Introduction

Chromothripsis is typically associated with massive genomic rearrangements accompanied by copy number alterations in a small region of one or several chromosomes [1,2]. This catastrophic event arises when broken chromosome segments are randomly stitched together by the DNA repair machinery to facilitate cell survival after a huge disruption to the cell genome (massive breakage of chromosomes) [3]. However, the causative force of this physical chromosomal damage is unclear [4][5][6]. The definition of chromothripsis and an accurate description of its characteristics are essential for the identification of chromothripsis events. Rigorous judgment criteria have been proposed and explained by Korbel and Campbell [2]: A) breakpoints on chromosomes are clustered; B) the copy number oscillates regularly; C) heterozygous deletion regions and heterozygous regions are spaced apart from each other; D) the units affected by chromothripsis are usually chromatids; E) the joining of DNA fragments is random, i.e., there is no directional preference for the joining of fragments; F) the order in which broken DNA fragments are rejoined is also random, i.e., the distance between the two breakpoints involved in each rearrangement is random. ShatterSeek [7] (for whole-genome sequencing-based data) and CTLPScanner [8] (for microarray-based data) are currently available for chromothripsis detection and analysis. The classical hypothesis of tumorigenesis and progression assumes that tumorigenesis is a progressive process; as such, tumor precursor cells require cumulative mutations in multiple key genes to acquire a growth advantage, but the concept of chromothripsis challenges this. Chromothripsis might allow the simultaneous occurrence of oncogenic fusion/amplification and the loss of tumor suppressor genes, which could accelerate the tumorigenic process [7,9,10].
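Of the criteria above, the copy-number oscillation (criterion B) is the most mechanical to illustrate. A toy sketch, which is not the actual detection logic of ShatterSeek or CTLPScanner, might count how often ordered segments switch between exactly two copy-number states (the segment values below are invented for illustration):

```python
# Toy illustration of the copy-number oscillation criterion: count how
# often consecutive segments switch state, and whether only two states
# occur, as expected for a chromothripsis-like oscillation pattern.
def oscillation_switches(cn_states):
    """Return (number of adjacent state switches, only-two-states flag)."""
    switches = sum(a != b for a, b in zip(cn_states, cn_states[1:]))
    two_state = len(set(cn_states)) == 2
    return switches, two_state

# Hypothetical segment copy numbers along one chromosome arm:
chromothripsis_like = [2, 1, 2, 1, 2, 1, 2, 1, 2]   # regular 2-state oscillation
gradual_losses      = [2, 2, 1, 1, 0, 0, 1, 3, 3]   # irregular, multi-state

print(oscillation_switches(chromothripsis_like))  # -> (8, True)
print(oscillation_switches(gradual_losses))       # -> (4, False)
```

Real detectors combine this oscillation signal with the breakpoint clustering, randomness and heterozygosity criteria listed above, plus statistical tests.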
In addition, chromosomal rearrangements in tumors with chromothripsis present opportunities for tumors to evolve faster to adapt to altered growth conditions (e.g., the acquisition of drug resistance) [10,11]. Chromothripsis is prevalent in tumors, with liposarcoma and osteosarcoma showing the highest susceptibility [7,12]. It has also been reported to be associated with poor prognosis in patients with a variety of cancers [13][14][15][16]. Chromosomal instability (CIN), a hallmark of chromothripsis, has been extensively studied in terms of its association with immunity [17,18]. Copy number alteration burden, particularly copy number loss burden, is associated with reduced expression of genes in immune-related pathways. The copy number loss burden is also higher in patients who do not respond to immune checkpoint inhibitor therapy [19]. Somatic copy number alteration (SCNA) levels are associated with immune escape, and a high SCNA level is associated with poorer patient survival. Further, it has been suggested that a large fraction of canonical chromothripsis events in polyploid tumors are late events [7,20]. This suggests a potential association between chromothripsis and immunity. Chromothripsis is a primary mechanism that accelerates genomic DNA rearrangement and amplification to form circular extrachromosomal DNA (ecDNA) [10]. ecDNA is encapsulated in micronuclei, which represent an important source of immunostimulatory DNA [4,21,22]. This suggests the involvement of chromothripsis in the cGAS-STING pathway, a component of innate immunity, through micronucleus formation. In addition, clinical data indicate a higher incidence of chromothripsis in patients exhibiting weaker anti-tumor immune effects [23]. Collectively, the association between chromothripsis and the anti-tumor immune response remains ambiguous.
Here, our objective was to investigate the relationship among chromothripsis, anti-tumor immune responses, and responsiveness to immune checkpoint blockade (ICB) immunotherapy. In both a discovery and a validation dataset, we identified consistently reduced immune cell infiltration in tumors with chromothripsis, along with impaired cytolytic activity. We also explored the association between tumor chromothripsis and broader manifestations in the immune microenvironment. In addition, we constructed chromothripsis prediction models from copy number alteration (CNA) signatures and obtained chromothripsis scores (CPSs) from them to elucidate the relationship between chromothripsis and therapeutic outcomes in patients receiving ICB immunotherapies.

Chromothripsis is correlated with reduced cytotoxic immune infiltration in the discovery dataset (The Cancer Genome Atlas, TCGA)

To investigate the effect of chromothripsis on the tumor microenvironment, we examined multi-omics data from 24 cancer types (solid tumors only) from TCGA. For copy number profiles derived from the SNP6 microarray, we used CTLPScanner to detect and annotate chromothripsis in patients [8]. At the same time, we quantified immune-related features based on gene expression profiles using established methods (see Materials and Methods for details). Patients were divided into two groups based on the occurrence of chromothripsis; the total immune cell infiltration score was significantly lower in the chromothripsis group than in the non-chromothripsis group (median 1.58 × 10⁻¹ vs 1.05 × 10⁻¹, P < 2.2 × 10⁻¹⁶, Figure 1A). The enrichment of 28 tumor-infiltrating immune cell types in the tumor (Figure 1B; see Table S1A for details) was used to further characterize the changes in the microenvironmental composition of tumors with chromothripsis [24].
First, we observed that cytotoxic lymphocytes (CD8+ T cells and natural killer (NK) cells), which are considered the primary executors of anti-tumor immunity [25], were enriched in tumors without chromothripsis. Moreover, we found that immunosuppression-associated tumor-infiltrating myeloid-derived suppressor cells (MDSCs), tumor-associated macrophages (TAMs), and regulatory T cells (Tregs) [26] were enriched in the tumors without chromothripsis.

Chromothripsis is correlated with reduced cytotoxic immune infiltration in the validation Pan-cancer Analysis of Whole Genomes (PCAWG) dataset

The development of next-generation sequencing technology allows us to easily obtain whole genome sequencing (WGS) data, from which we can call copy number alterations (CNAs) and structural variations (SVs). Using SV information, ShatterSeek can be used to detect and annotate tumor chromothripsis [7]. Here, we acquired multi-omics data from the PCAWG project [32] to validate the findings from the TCGA dataset.

Association between chromothripsis and genetic features

In addition to transcriptomic expression profiling, we next focused on the genetic features of cancer, particularly immune-related predictors. From patient samples, we obtained the mutation frequencies of ICB-responsiveness-related genes (including TP53, KRAS [33], PTEN [34], JAK1/2 [35], and B2M [36]), the tumor mutation burden (TMB) [37,38], the burden of somatic copy number alterations [19], and the level of somatic copy number alterations (quantified as the weighted genome instability index (wGII) [39]; see Materials and Methods for details) to comprehensively compare their relationships with chromothripsis. Consistent with previous publications [7,9], TP53 mutation frequencies were higher in patient samples from the chromothripsis group than in those from the non-chromothripsis group in the discovery TCGA dataset.
In the pan-cancer analysis, the differences in mutation frequencies of the remaining representative genes were much smaller than that for TP53 (see Table S3C for details, Figure 3B). The TMB was modestly but significantly higher in tumors with chromothripsis (median −8.62 × 10^-2 vs 4.58 × 10^-2, P = 2.6 × 10^-9, Figure 3C), suggesting that chromothripsis can increase the somatic mutation load and, possibly, the number of tumor neoantigens. As expected, the SCNA levels, represented by wGII, were also significantly higher in tumors with chromothripsis (median 2.94 × 10^-1 vs 6.28 × 10^-1, P < 2.2 × 10^-16, Figure 3E), which results from the massive, clustered genomic rearrangements mediated by chromothripsis. Chromothripsis could thus lead to increased wGII. Similarly, the TP53 mutation frequency in the validation PCAWG dataset was higher in patient samples with chromothripsis (see Table S3C for details, Figure 4B). TP53 malfunction could be a predisposing factor for chromothripsis [4]. Although we found a higher incidence of TP53 mutations in tumors with chromothripsis in both the discovery and validation sets, >60% (mean value) of the tumors with chromothripsis showed no TP53 mutations or amplifications. This suggests that TP53 (and the other representative genes) might not be a major factor in suppressing the anti-tumor immune response in tumors with chromothripsis. In addition, we observed inconsistent results. In the discovery TCGA dataset, the burden of copy number alterations (CNA burden, comprising CNA loss and CNA gain) was significantly higher in the chromothripsis group (P < 2.2 × 10^-16 and P = 3.8 × 10^-5, respectively, Figure 3D). In the validation PCAWG dataset, the CNA burden was significantly lower in tumors with chromothripsis (P < 2.2 × 10^-16 and P = 2.5 × 10^-3, respectively, Figure 4D).
This difference could have many explanations, and it suggests that the association between the CNA burden and chromothripsis may be stochastic. The CNA burden is therefore unlikely to be a major factor in the reduced anti-tumor immune response of tumors with chromothripsis. Collectively, we believe that chromothripsis can be used as an independent predictor.

Chromothripsis scores predict survival outcomes for patients after immunotherapy

Given the correlation between chromothripsis and ICB-responsiveness features, we next examined whether there was a correlation between chromothripsis and patient survival after immunotherapy. We acquired datasets from three ICB-treated clinical trials, including two melanoma cohorts and one glioma cohort [42-44], for which tumors were available for whole-exome sequencing (WES) data analysis. CN signatures serve as a flexible tool to identify the presence of chromothripsis, with a performance comparable to that of ShatterSeek for both WES and WGS data [13]. We further validated the reliability of this tool using both the discovery and validation datasets (TCGA, AUC = 0.81; PCAWG, AUC = 0.89; Figure S3). We obtained CPSs based on a CN signature prediction model. Notably, we integrated all three immunotherapy cohorts to expand the sample size and ensure the reliability of results from the prediction model. In addition, we accounted for the TMB, wGII, CNA burden, PD-L1 expression level, and CD8A expression level (see Table S4 for details), which have been described as biomarkers of survival outcomes for patients treated with immunotherapy. We compared the sequential trends associated with these predictors in terms of overall survival (OS) by generating time-dependent receiver operating characteristic (ROC) curves (Figure 5A). The time-dependent ROC curve for CPS was consistently superior (Figure 5B).
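The AUC values quoted here have a simple probabilistic reading: the chance that a randomly chosen positive sample receives a higher prediction score than a randomly chosen negative one. A minimal sketch with synthetic scores (illustrative only; the study used pROC/timeROC in R):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Empirical ROC AUC: fraction of (positive, negative) pairs ranked
    correctly, with ties counted as 1/2."""
    p = np.asarray(scores_pos, float)[:, None]
    n = np.asarray(scores_neg, float)[None, :]
    return ((p > n).sum() + 0.5 * (p == n).sum()) / (p.size * n.size)

# Synthetic model scores for chromothripsis-positive vs negative samples.
pos = [0.9, 0.8, 0.7, 0.6]
neg = [0.5, 0.65, 0.3, 0.2]
print(auc(pos, neg))  # → 0.9375 (15 of 16 pairs correctly ordered)
```

An AUC of 0.81 (TCGA) thus means that in 81% of positive/negative pairs the CN-signature model scores the chromothripsis sample higher.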
According to the survival ROC curves for 1-, 2-, and 3-year OS, the AUC for CPS was higher at 1 and 3 years (Figure S4). Univariate Cox regression analysis showed that CPS was significantly associated with survival in patients treated with ICB immunotherapy (HR = 1.90, 95% CI 1.10-3.28, P = 1.9 × 10^-2, Figure 5D). The remaining predictors did not reach significance, which is consistent with previous results. We then defined patients with CPSs greater than the median as the CPS-high group; these patients had a median survival time of 15.8 months, significantly shorter than the median survival time of 28.1 months for patients in the CPS-low group (P = 1.9 × 10^-2, Figure 5D). Grouping patients again using the median as the threshold, the other biomarkers could not adequately classify patient responses in these cohorts (Figure S5). Furthermore, we observed a numerical trend towards lower response rates and objective response rates in the CPS-high group compared with the CPS-low group (P = 0.64, Fisher's exact test, Figure S6A; P = 0.48, Fisher's exact test, Figure S6B). These results support the potential of CPS to complement established biomarkers in identifying patients who are likely to respond favorably to immunotherapy.

Discussion

The role of chromothripsis in anti-tumor immunity is unclear, even though it is prevalent in tumors and plays an important role in tumor evolution. Here, we report that chromothripsis is associated with reduced cytotoxic immune cell infiltration and that its copy number signature-based score can be used to reliably predict survival outcomes for patients receiving ICB treatment. In both the discovery and validation datasets, we found that immune infiltration in tumors with chromothripsis was reduced and that immune suppression in the tumor microenvironment was enhanced.
These results all point to an unfavorable survival outcome for patients harboring tumors with chromothripsis who receive immunotherapy. Chromothripsis is involved in the cGAS-STING signaling pathway through micronucleus formation. Activation of the cGAS-STING signaling pathway in innate immune cells induces the production of type I interferon, which initiates an antigen-specific immune response leading to tumor killing [45,46]. Recent findings suggest that STING can induce regulatory B cells to suppress the anticancer capacity of NK cells [47]. Thus, the role of STING in immunotherapy remains controversial [48]. The frequencies of mutations in ICB-responsiveness-related genes (including KRAS, etc.) and the CNA burden were not consistent between the discovery and validation datasets. One possible explanation is that chromothripsis is heterogeneous. Chromothripsis can exhibit certain chromosomal preferences; for example, it is enriched on chromosome 12 in liposarcomas and on chromosomes 3 and 5 in kidney renal cell carcinomas [7]. Meanwhile, the location of chromothripsis might differ among patients with the same type of cancer, which provides diverse options for cancer evolution. TMB reflects intrinsic characteristics of the tumor and is representative of immunogenic neoantigens [40,49]. We found that chromothripsis was associated with a high TMB, which appears contradictory. One possible explanation is that, in our study, both antigen presentation and antigen recognition were impaired in tumors with chromothripsis, so that tumor neoantigens could not exert their conventional effect on anti-tumor immunity. Indeed, chromosomal instability (CIN) has previously been shown to impair antigen processing and presentation [40,50].
In addition, it has been reported that TMB is not an accurate predictive biomarker for immune checkpoint inhibitors (ICIs); for example, non-small cell lung cancer patients with KRAS and STK11 co-mutations and high TMB do not respond to immunotherapy [51]. This could also explain why patients with chromothripsis have a high TMB but poor responsiveness to immunotherapy. We further integrated multi-omics and clinical data from multiple published clinical trials of ICB. Both time-dependent ROC curves and univariate Cox regression analysis showed that the CPS, based on a copy number signature, outperformed other biomarkers. Chromothripsis, as a potential indicator, may thus predict survival outcomes for patients after ICB immunotherapy. In conclusion, our analysis suggests that the identification of chromothripsis could be useful to determine which patients are most likely to respond to ICB immunotherapy. Furthermore, an in-depth study of the mechanisms by which chromothripsis affects anti-tumor immunity and responses to immunotherapy might reveal pathways that could be therapeutically targeted to improve response rates to ICB. With the accumulation of available samples in the future, more comprehensive and in-depth studies will be possible.

Tumor microenvironment analysis

Bulk RNA-seq data originated from the TCGA and PCAWG databases. We acquired feature gene panels for 28 immune cell types from a publication [24]. The relative abundance (represented by enrichment scores) of these immune cell types in the tumor microenvironment was quantified by single-sample gene set enrichment analysis (ssGSEA) [52]. Similarly, we used multiple gene signatures from clinical trials, or other widely used gene signatures, to quantify the ICB responsiveness of the tumor microenvironment by ssGSEA. Some ssGSEA enrichment scores were negative, which made ratios of the relative abundances of the corresponding cell types impossible to compute.
To address this, we replaced the ssGSEA score with the geometric mean of the expression of each cell type's signature genes, which ensured that the relative ratios could be calculated correctly. GSVA, an unsupervised gene set enrichment method, enables modeling in highly heterogeneous sample populations to estimate relative variation in pathway activity. We quantified the activity of signaling pathways involved in anti-tumor immune effects (including antigen processing and presentation, CD8 TCR downstream pathways, IL-2 signaling, IL-15 signaling, NK-mediated cytotoxicity, and the IFN-γ pathway), derived from MSigDB gene sets. GSEA, another gene set enrichment method, was used to validate the GSVA results [52,53].

Genetic features of cancer

VCF or MAF files containing somatic mutations and copy number alteration profiles were downloaded from TCGA and PCAWG. Using the R package Maftools (version 2.8.05), we converted VCF files to MAF files for the subsequent statistical analysis [54]. TMB is defined as the total number of somatic coding mutations (base substitutions and insertions or deletions) per million bases; it was calculated with the tmb function of Maftools from the acquired somatic mutations. Patients were divided into two groups, namely the chromothripsis and non-chromothripsis groups, and we determined the frequency of mutations in TP53, KRAS, PTEN, JAK1/2, and B2M in each group. The burden of copy number alterations is the total number of genes with copy number gains or losses. Bedtools (version 2.30.0) [55] was used to overlap copy number alteration profiles with protein-coding regions to obtain the number of genes exhibiting gains and losses per copy number segment, and the results were then summarized. Tumor ploidy is expressed as a weighted median integer copy number (weighted by the length of each copy number segment).
From the tumor ploidy, we obtained the percentage of genomic material gained and lost per chromosome; the average of this percentage over all autosomes is the wGII of a sample [39]. These approaches were also applied to the immunotherapy cohorts. Processed paired BAM files (tumor and matched normal samples) were used as input to MuTect2 (integrated in GATK, default parameters) to identify somatic single-nucleotide variants and small insertions or deletions. We further filtered the acquired mutations using three rules. First, we kept only high-confidence variants (coverage of at least 5-fold or allele ratio > 0.05). Second, non-silent variants (including missense, nonsense, frameshift, and splice-site variants) were selected. Third, only rare variants were retained (variant frequency below 0.005 in the relevant databases, including 1000G, ESP6500, dbSNP, and ExAC). Loss-of-heterozygosity events and CNAs were obtained with Fraction and Allele-specific Copy number Estimate from Tumor/normal Sequencing (FACETS) [59].

CNV signature analysis

Following the processing protocol of [13], we further processed copy number profiles from the TCGA, PCAWG, and immunotherapy cohorts; the processing included removal of the regions corresponding to IgK, IgL, IgH, and the X chromosome, and exclusion of CN changes shorter than 50 kb. Similarly, we followed the definition of the six essential CN characteristics in that publication: 1) the size of the segments, 2) the absolute CN of the segment, 3) the CN difference between adjacent segments, 4) the number of breakpoints per chromosome arm, 5) the length of chains of oscillating CN segments, and 6) the number of breakpoints per 10 Mb. Using the mclust R package, we identified the optimal number of categories for each CN feature. The hierarchical Dirichlet process (hdp) was used to perform de novo CN signature extraction, based on the CN category matrix.
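The wGII computation described at the start of this subsection reduces to averaging, over autosomes, the fraction of chromosome length whose copy number deviates from the tumor ploidy. A minimal sketch with hypothetical segments (the chromosome names, lengths, and copy numbers below are made up for illustration):

```python
# Hypothetical per-chromosome copy number segments: (length_bp, copy_number).
segments = {
    "chr1": [(60_000_000, 2), (40_000_000, 3)],   # 40% of chr1 altered
    "chr2": [(100_000_000, 2)],                   # 0% of chr2 altered
}
ploidy = 2  # weighted median integer copy number, as defined above

def wgii(segments, ploidy):
    """Mean over autosomes of the fraction of length with CN != ploidy."""
    fractions = []
    for chrom_segs in segments.values():
        total = sum(length for length, _ in chrom_segs)
        altered = sum(length for length, cn in chrom_segs if cn != ploidy)
        fractions.append(altered / total)
    return sum(fractions) / len(fractions)

print(wgii(segments, ploidy))  # → 0.2 (mean of 0.4 and 0.0)
```

In the real analysis all 22 autosomes enter the average, and the ploidy is computed per sample from the segment-length-weighted median copy number.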
The extracted CN signatures were used to compute chromothripsis prediction metrics with a generalized linear model. Based on the area under the ROC curve (AUC), we evaluated the prediction accuracy for chromothripsis via 10-fold cross-validation.

Analysis of chromothripsis

For whole-exome sequencing data, the obtained copy number variation profiles were used for CNV signature analysis. First, we filtered the CNV profiles and removed the regions on the X chromosome. The optimal number of categories was obtained by clustering the six basic CN features of the CNV profiles. We then extracted the CN features de novo with the hierarchical Dirichlet process (hdp), and these features were input into the established CN signature prediction model (a generalized linear model) to obtain the chromothripsis prediction metrics, i.e., the CPSs.

Survival analysis

To ensure comparability among the individual indicators, CPSs obtained from the CN feature prediction model were grouped in the same way as the other indicators (TMB, wGII, CNA burden, PD-L1 expression, and CD8A expression). The median of each indicator was used as the threshold to group patients; patients above the median formed the high group. Univariate Cox regression analysis was applied to determine HRs. Kaplan-Meier analysis was employed to estimate survival, and the log-rank test was used to determine p-values.

Statistical Analysis

Our analysis was performed with R version 4.0.5. Fisher's exact test was used for 2 × 2 tables of categorical variables, and the Wilcoxon rank-sum test was applied to differences in continuous variables unless otherwise specified. The software tools used throughout the analysis (including GSVA, pROC, Maftools, hdp, glmnet, timeROC, and survminer) are publicly available.

Data and materials availability

All data are available in the main text or the supplementary materials.
All raw data used in our analysis are publicly available. We acquired the datasets for clinical parameters, SCNAs (SNP-array-based data), mutations, and RNA-seq from the TCGA database (https://tcga-data.nci.nih.gov). The datasets for clinical parameters, SCNAs (next-generation sequencing-based data), mutations, and RNA-seq came from the PCAWG database (https://dcc.icgc.org/pcawg/). To ensure the reliability of the hierarchical Dirichlet process, we integrated all immunotherapy cohorts to expand the sample size. For the immunotherapy cohorts, we utilized raw whole-exome sequencing data, RNA-seq data, and clinical parameters from the following studies: 1) Hugo et al., an advanced melanoma anti-PD-1-treated cohort; 2) Riaz et al., an advanced melanoma anti-PD-1-treated cohort; and 3) Zhao et al., an advanced glioblastoma anti-PD-1-treated cohort. RECIST 1.1-based quantifications of response were used to designate patients as responders (stable disease (SD) for ≥6 months, partial response (PR), or complete response (CR)) or non-responders (SD with <6-month duration or progressive disease (PD)). Similarly, patients could be distinguished as objective (CR or PR) or non-objective (PD or SD) responders. In the TCGA and PCAWG databases, tumor sample types include fresh tissue, liquid-nitrogen-cryopreserved tissue, dry-ice-cryopreserved tissue, and paraffin-embedded tumor tissue. Among the immunotherapy cohorts, the Zhao cohort did not specify the tumor sample type, while the Riaz and Hugo cohorts both included fresh tissue samples.
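The RECIST-based responder definitions above can be captured in a small helper; the function and field names are our own:

```python
def is_responder(best_response, sd_duration_months=0.0):
    """Responder per the grouping above: CR, PR, or SD lasting >= 6 months."""
    if best_response in ("CR", "PR"):
        return True
    return best_response == "SD" and sd_duration_months >= 6

def is_objective_responder(best_response):
    """Objective response per the grouping above: CR or PR only."""
    return best_response in ("CR", "PR")

print(is_responder("SD", 7), is_responder("SD", 3), is_objective_responder("SD"))
# → True False False
```

Note that a durable SD patient counts as a responder but not as an objective responder, which is why the two groupings in the text differ.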
Standard Model fragmentation functions at very high energies (JHEP 11 (2018) 030)

We compute the leading-order evolution of parton fragmentation functions for all the Standard Model fermions and bosons up to energies far above the electroweak scale, where electroweak symmetry is restored. We discuss the difference between double-logarithmic and leading-logarithmic resummation, and show how the latter can be implemented through a scale choice in the SU(2) coupling. We present results for a wide range of partonic center-of-mass energies, including the polarization of fermion and vector boson fragmentation functions induced by electroweak evolution.

Introduction

It is a well-known fact that electroweak corrections to hard processes at proton or electron colliders contain logarithmically enhanced contributions of the form α^n L^{2n}, where L = ln(q/m_V), q being the hard-process scale and m_V ∼ m_{W/Z}. This is the case even for observables that are completely inclusive over the final state, and can be traced back to the fact that the initial-state protons are not singlets under the SU(2) gauge group. Due to this double-logarithmic scaling, the convergence of electroweak perturbation theory becomes worse as the center-of-mass energy increases, and ultimately breaks down completely, namely when αL^2 ∼ 1. Obtaining reliable predictions at these energy scales requires a reorganization of the perturbative expansion such that these large logarithms are resummed to all orders in perturbation theory. Most studies of electroweak logarithms have considered completely exclusive observables, such that the final state is fixed. In this case the only contributions to logarithmically enhanced electroweak corrections arise from the virtual exchange of massive gauge bosons. The mass of the gauge boson regulates the IR divergences present in massless gauge theories, giving rise to the logarithmic sensitivity on m_V.
These electroweak Sudakov logarithms have been studied for a long time [1-17], and a systematic way to resum them using soft-collinear effective theory (SCET) [18-21] was developed in [15,16]. Just as for massless theories, the real radiation of gauge bosons leads to infrared sensitivity, and therefore logarithmic sensitivity to m_V is present in such real-emission contributions as well. An analogy with parton showers allowed the resummation of the enhanced corrections to leading-logarithmic (LL) accuracy [22]. As already discussed, even fully inclusive observables contain double-logarithmic sensitivity to the ratio q/m_V, due to the fact that the initial state is not an SU(2) singlet. For an observable that is completely inclusive over the final state, all logarithmically enhanced terms arise from initial-state radiation of W bosons. To LL accuracy, the large logarithms arise from emissions of heavy gauge bosons that are both collinear and soft, and are described by the DGLAP evolution of parton distribution functions [23-37], where one needs the full set of particles in the Standard Model. These DGLAP equations were first derived in [31], and the phenomenology of this DGLAP evolution in the complete SM was studied in [38,39]. As will be shown in this paper, while the DGLAP evolution presented in [38,39] was only accurate to double-logarithmic level, the full LL structure can be obtained for such completely inclusive observables through an appropriate scale choice in the SU(2) coupling constant. Most realistic observables, however, contain a final state which is neither fully inclusive nor fully exclusive. The results of [40] allow one to obtain NLL predictions where a subset of the final-state particles is fixed, while being inclusive over the emission of additional particles.
So, for example, they allow one to compute the cross section of the process pp → e+e− + X, where X denotes additional particles in the final state. For the most general case, where one wants to include additional final-state particles only partially (for example only in a given kinematic range, or only those that decay in a particular way), one needs to use an electroweak parton shower, which generates an arbitrary final state. If formulated correctly, such a parton shower will resum all LL electroweak Sudakov logarithms, and furthermore include many (but not all) of the NLL logarithms. A final-state parton shower including emissions from all interactions in the Standard Model was developed in [41], which also paid special attention to important threshold effects for longitudinal gauge bosons. To obtain the full NLL accuracy of [40] requires four types of input: the hard cross sections evaluated at the partonic center-of-mass energy in the unbroken SU(3)⊗SU(2)⊗U(1) Standard Model, the parton distribution functions (PDFs) describing the collinear evolution of the initial-state particles, the fragmentation functions (FFs) describing the collinear evolution of the final-state particles, and a soft function describing the wide-angle soft radiation. The collinear evolution needs to be performed with the full gauge structure SU(3) ⊗ SU(2) ⊗ U(1) and was discussed for the PDFs in detail in [38,39]. In this paper we perform a similar analysis for the FFs, including numerical results showing the impact of the logarithmic terms. Our results can be used as one of the inputs to [40], which allows full NLL accuracy. When used on their own, one relies on the scaling αL ≲ 1 being sufficient to assume that LL accuracy, matched with fixed-order electroweak corrections as discussed in [39], will be sufficient.
Under this assumption, one can omit the soft functions and use the hard cross sections only in combination with the collinear evolution of PDFs and FFs, but one has to keep in mind that large factors in the NLL terms could invalidate this naive logarithmic counting.

This paper is organized as follows: in section 2 we discuss the form of the fragmentation functions and their DGLAP evolution with q. This discussion is correct to double-logarithmic accuracy, and we discuss in section 3 how the results can be modified to achieve full leading-logarithmic accuracy through an appropriate scale choice of the SU(2) coupling α_2. After a brief discussion of some implementation details in section 4, we present the results for the fragmentation functions in section 5. Our conclusions are given in section 6, and in appendices A and B we give details of an isospin and CP basis that decouples parts of the DGLAP evolution, and the equations used in the forward evolution.

Resummation of collinear final-state logarithms

Electroweak logarithms arise from the exchange of massive gauge bosons in loops, or from the real radiation of massive gauge bosons. To LL accuracy, the only contributions are from gauge bosons that are collinear to one of the initial- or final-state particles. These are precisely the contributions that are contained in the DGLAP evolution of PDFs (for emissions collinear to initial-state particles) and FFs (for emissions collinear to final-state particles). In the strong sector, the DGLAP equations only give rise to single-logarithmic terms. This is because the limits where emissions are simultaneously soft and collinear cancel between virtual and real contributions to the DGLAP equations. This fact is easy to understand, since an arbitrarily soft emission of a gluon cannot be observed experimentally, so the divergence associated with this emission needs to cancel against the virtual contribution.
This is different from the case of the soft emission of a W boson, which can always be observed through the change of flavor (or SU(2) quantum numbers) of the emitting particle. Thus, as long as a process is sensitive to the SU(2) quantum numbers of the external particles, soft radiation of W bosons from these particles gives rise to double logarithms. Any observable at hadron or lepton colliders is sensitive to the SU(2) quantum numbers of the initial state, since the particles being collided are not SU(2) singlets. This leads to the important prediction that electroweak double logarithms are present for any observable, even if it is completely inclusive over the final state. For observables where one identifies the SU(2) properties of the final state (for example by demanding to find two leptons of given flavors), additional double logarithms arise from the collinear radiation off final-state particles (even if one is completely inclusive over the momenta of said particles, and also over extra particles being radiated). These collinear logarithms can be resummed by solving the DGLAP evolution of FFs, as we now discuss. The DGLAP equations for FFs are very similar to those for PDFs, and our discussion closely follows [38,39]. We will therefore be relatively brief in this work, and refer the reader to the previous papers for more discussion. (An analysis of the size of various contributions to the full NLL resummation in exclusive processes was performed in [42].)

Our solutions to the SM evolution equations are obtained in the approximation of exact SU(3)×SU(2)×U(1) symmetry. That is, we neglect fermion and Higgs masses and the Higgs vacuum expectation value, the effects of these being power-suppressed at high scales. We impose an infrared cutoff m_V on interactions that involve the emission of an electroweak vector boson, V = W^i for SU(2) or B for U(1). Leading-order evolution kernels and one-loop running couplings are used.
All the electroweak FFs are generated dynamically by evolving upwards from a scale q_0 ∼ m_V. In practice we take q_0 = m_V = 100 GeV. More details of the input FFs will be given in section 4.

Definition of the fragmentation functions

The fragmentation function D_i^k(x, q) gives the distribution of the momentum fraction x for particle species k in a jet initiated by a parton of type i produced in a hard process at momentum scale q. As in the case of PDFs, it is convenient to define the momentum-weighted FFs, d_i^k(x, q) = x D_i^k(x, q). Note that when we omit one of the labels i or k, our expressions apply independently of its value. One important thing to realize is that only particles in the broken-symmetry phase (or the products of their decay or hadronization) can be observed with a given momentum in the detector, and the index k therefore only runs over the particles in the broken basis, that is, the fermions, photon, gluon, Higgs, W± and Z^0 bosons. Furthermore, one typically does not distinguish between left- and right-handed particles, or the different polarizations of the vector bosons, in a detector. Thus, the fermions comprise 6 quarks and 6 antiquarks plus 6 leptons and 6 antileptons, giving 24 in total. There are 5 vector bosons and one Higgs, giving a total of 30 particles we need to consider for k. Since the index i denotes the object produced at a high scale that initiates the jet, we define it in the unbroken-symmetry phase. When i is a fermion, one needs to separate left- and right-handed chirality states, which evolve differently as they belong to different representations of the SU(2) ⊗ U(1) symmetry. This gives 12 quarks and 12 antiquarks, and 9 leptons and 9 antileptons, making 42 fermions. For each transversely polarized SM vector boson, we need separate positive- and negative-helicity FFs, d^k_{V±}, since boson polarization is generated during evolution and transmitted to the fermions.
Interference between different helicity states cancels upon azimuthal integration of transverse momenta in successive parton splittings, so there are no mixed-helicity boson FFs. Since SU(3) is unbroken, we need only a single gluon FF of each helicity for each fragmentation product, d^k_{g+} and d^k_{g−}. For the SU(2) ⊗ U(1) symmetry, there are 8 transversely polarized gauge bosons (W^+_±, W^−_±, W^3_± and B_±). For the neutral bosons B and W^3, one also needs to take into account the two mixed BW_± FFs, representing the interference contribution when i could have been either of them. Such mixed FFs arise from the fact that the left-handed fermions and the Higgs carry both isospin and hypercharge, such that an interference between two amplitudes, one with a B_± and one with a W^3_± boson in the final state, can arise. This is very similar to the case of mixed PDFs, where such an interference arises in the initial state [31,38].

Table 1. Total number of fragmentation functions required. For a given final-state particle k, one requires a total of 60 FFs, which is the same as the number of PDFs needed for the initial state. Each object i can fragment into 30 particles k (the total number of particles and antiparticles in the Standard Model). Thus, in general 60 × 30 = 1800 FFs are required.

Below the electroweak scale, instead of the neutral gauge bosons B and W^3, one has the photon and the transversely polarized Z^0. In the latter case, one can construct the FFs for the photon, the Z^0 and their mixed γZ state as transformations of the FFs for the B, the W^3 and their mixed state. This is anyway necessary at the electroweak scale, below which the symmetry is broken. The rotation A = c_W B + s_W W^3 and Z^0 = −s_W B + c_W W^3 relates the FFs containing i = γ, Z, γZ to those with i = B, W^3, BW, where the weak mixing parameters are c_W = cos θ_W and s_W = sin θ_W.
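The bookkeeping behind Table 1 can be checked with a small counting script. The fermion, gluon, and transverse electroweak entries are taken directly from the text; the split of the remaining Higgs-sector entries is our reading of the table and is marked as inferred:

```python
# Observed (broken-basis) species k, as listed in the text:
fermions_k = 24        # 6 quarks + 6 antiquarks + 6 leptons + 6 antileptons
bosons_k = 5 + 1       # g, photon, Z0, W+, W-, plus the Higgs
species_k = fermions_k + bosons_k
assert species_k == 30

# Initiating (unbroken-basis) objects i:
fermions_i = 42        # 12 quarks + 12 antiquarks + 9 leptons + 9 antileptons (chirality states)
gluons_i = 2           # g+, g-
ew_transverse_i = 8    # W+, W-, W3, B, each with two helicities
mixed_bw_i = 2         # BW interference states, two helicities
higgs_sector_i = 4     # h, Z_L, plus two charged-doublet (longitudinal W) states — inferred
mixed_higgs_i = 2      # mixed h/Z_L FFs — inferred

ff_per_k = (fermions_i + gluons_i + ew_transverse_i
            + mixed_bw_i + higgs_sector_i + mixed_higgs_i)
print(ff_per_k, ff_per_k * species_k)  # → 60 1800, as in Table 1
```

Only the two lines marked "inferred" go beyond what this excerpt states explicitly; the totals of 60 per final state and 1800 overall are from Table 1 itself.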
In the neutral Higgs sector, the physical Higgs boson h and the longitudinal Z^0 field, Z_L, are linear combinations of the unbroken doublet components, and the corresponding FFs follow accordingly. There are also mixed FFs, which describe the difference between the Higgs and longitudinal Z^0 FFs. Although the flavor basis chosen above is the most intuitive, the fact that many of the 60 FFs are coupled to one another makes it quite difficult to solve the evolution equations. To decouple some of the equations, it helps to change the basis such that the ingredients have definite total isospin T and CP quantum numbers, which (neglecting the tiny CP violation) are conserved in the Standard Model. Then only FFs with the same quantum numbers can mix. The FFs required for each set of quantum numbers are shown in table 2. In the case of the vector bosons, the unpolarized FFs are the sums of the two helicity FFs, d^k_V = d^k_{V+} + d^k_{V−}. The mixed Higgs FFs have unit isospin, and we can form the combinations with definite CP, of which only the {1, +} state contributes to the physical Higgs and Z^0_L FFs. Further details of the isospin and CP basis are given in appendix A. Note that in general there can be additional mixed FFs, which however are zero in our matching conditions at scale q_0 and are not generated in the evolution. In particular, there can be states mixing left- and right-handed fermions, but they are not present when we consider only the FFs d_i^k for unpolarized particles k.

General evolution equations

We consider the x-weighted FFs of parton species i at momentum fraction x and scale q, d_i(x, q). At leading order they satisfy evolution equations of the approximate form

q ∂/∂q d_i^k(x, q) |_I = (α_I(q)/π) [ −P^V_{i,I} d_i^k(x, q) + Σ_j ∫_x^{z_max} (dz/z) P^R_{ji,I}(z) d_j^k(x/z, q) ].   (2.13)

Here, the sum over I goes over the different interactions in the Standard Model, and the notation q ∂/∂q d_i^k(x, q)|_I implies that we only keep the terms proportional to the coupling α_I when taking the derivative. (In section 3 we present a modification of the evolution equations to achieve full leading-logarithmic accuracy. Note also that [...]_I is introduced only for notational convenience and should not be interpreted as setting all other couplings to zero; in particular, the FFs appearing on the right-hand side of eq. (2.13) still depend on the values of all coupling constants.) We denote by I = 1, 2, 3 the pure U(1), SU(2) and SU(3) gauge interactions, by I = Y the Yukawa interactions, and by I = M the mixed interaction proportional to the product of the U(1) and SU(2) gauge couplings (2.14). The first contribution in eq. (2.13), proportional to P^V_{i,I}, is the virtual contribution to the FF evolution, while the second is the real contribution from the splitting of parton i into parton j. Notice that i and j are interchanged here relative to the PDF evolution equations, because d_j^k represents the fragmentation of the outgoing parton from the splitting, rather than the distribution of the incoming one. The maximum value of z in the integration of the real contribution depends on the type of splitting and interaction,
⁵ We denote by I = 1, 2, 3 the pure U(1), SU(2) and SU(3) gauge interactions, by I = Y the Yukawa interactions, and by I = M the mixed interaction proportional to the mixed coupling defined in eq. (2.14).

The first contribution in eq. (2.13), proportional to P^V_{i,I}, denotes the virtual contribution to the FF evolution, while the second contribution is the real contribution from the splitting of parton i into parton j. Notice that i and j are interchanged here relative to the PDF evolution equations, because d^k_j represents the fragmentation of the outgoing parton from the splitting, rather than the distribution of the incoming one. The maximum value of z in the integration of the real contribution depends on the type of splitting and interaction;⁴ that is, we apply an infrared cutoff m_V, of the order of the electroweak scale, when a B or W boson is emitted. This regulates the divergence of the splitting function for those emissions as z → 1. Such a cutoff is mandatory for I = 2 because there are FF contributions that are SU(2) non-singlets. The evolution equations for SU(3) are regular in the absence of a cutoff, as hadron FFs are color singlets. Similarly for U(1), the unpolarized FFs have zero hypercharge,⁶ but we include the same cutoff for I = 1, since the B and W^3 are mixed in the physical Z and γ states.

⁴ In section 3 we present a modification of the evolution equations to achieve full leading-logarithmic accuracy.
⁵ Note that [. . .]_I is only introduced for notational convenience and should not be interpreted as setting all other couplings to zero. In particular, the FFs appearing on the right-hand side of eq. (2.13) still depend on the values of all coupling constants.
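The cutoff on the real-emission z-integral described above, and the corresponding threshold q = m_V/(1 − x) for heavy-boson emission at fixed x noted later in the text, can be sketched as follows (the function names are illustrative, not from the paper):

```python
def z_max(q, heavy_ew_boson_emitted, m_V=100.0):
    """Upper limit of the z-integral for the real-emission term.

    A B or W emission is cut off at z = 1 - m_V/q, with m_V of order
    the electroweak scale; all other splittings integrate up to z = 1.
    """
    return 1.0 - m_V / q if heavy_ew_boson_emitted else 1.0

def q_threshold(x, m_V=100.0):
    """Minimum scale at which a heavy boson can be emitted at fixed x,
    from the constraint x < 1 - m_V/q."""
    return m_V / (1.0 - x)
```

For example, at x = 0.9 no heavy-boson emission occurs below q ≈ 1 TeV, which suppresses the evolution of leptons and heavy bosons below that scale, as discussed in the results section.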
It was shown in [38] that the virtual corrections for the fermion, unmixed scalar and unmixed, unpolarized vector boson PDFs, which are the same for the corresponding FFs, take a simple closed form. The virtual corrections for the individual vector boson helicity states are the same as the unpolarized ones. For the mixed vector boson FFs there is a separate expression, while the virtual contribution for i = BW is zero for the other interactions. The virtual contributions for the mixed Higgs FFs are the same as those for the unmixed states. Thus for the unmixed FFs the evolution simplifies, and the DGLAP equations are defined by the splitting functions P^R_{ji,I}(z) and the coefficients C_{ji,I}.

Most of the splitting functions can be obtained from the seminal paper of Altarelli and Parisi [44]. For the gauge interactions of the Standard Model, I = 1, 2, 3, and the mixed interaction M, which we denote collectively by I = G, one finds a common set of kernels, up to the coefficients C_{ji,I}. The factor of 1/2 in P^R_{fV} has to be included since we are considering fermions with definite chirality. For splittings to and from antifermions we have, from CP invariance, the same kernels. Finally, for the Yukawa interaction (Y), one has the corresponding Yukawa kernels.

We now give the necessary coefficients C_{ij,I} for the five interactions.

I = 3: SU(3) interaction. We start by considering the well-known case of SU(3) interactions. The relevant degrees of freedom are the gluon, as well as left- and right-handed quarks. The coupling coefficients are given in terms of C_F = 4/3, C_A = 3 and T_R = 1/2. Note that since SU(3) has the same coupling to left- and right-handed quarks, it does not produce a polarization asymmetry on its own, unless an initial asymmetry is present due to polarized initial states. However, due to the different electroweak evolution of the left- and right-handed fermions, even the gluon FFs develop a polarization asymmetry above the electroweak scale.

⁶ Although there can be contributions with non-zero hypercharge for transversely polarized fermions [31].
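For reference, the color/coupling-stripped leading-order kernels of Altarelli and Parisi take the familiar forms below. The naming, and the convention that all group-theory factors are carried separately by the coefficients C_{ji,I}, are assumptions made for this sketch; the factor of 1/2 in the boson-to-fermion kernel for definite chirality follows the remark in the text:

```python
# Color/coupling-stripped leading-order real splitting kernels
# (Altarelli-Parisi forms); the group-theory coefficients C_ji,I
# are assumed to multiply these separately.

def P_ff(z):
    """Fermion -> fermion, with a gauge boson emitted."""
    return (1.0 + z * z) / (1.0 - z)

def P_Vf(z):
    """Fermion -> gauge boson carrying momentum fraction z."""
    return (1.0 + (1.0 - z) ** 2) / z

def P_fV(z):
    """Gauge boson -> fermion; the 1/2 is for definite chirality."""
    return 0.5 * (z * z + (1.0 - z) ** 2)

def P_VV(z):
    """Gauge boson -> gauge boson."""
    return 2.0 * (z / (1.0 - z) + (1.0 - z) / z + z * (1.0 - z))
```

Note the soft singularities of P_ff as z → 1 and of P_VV at both endpoints, which are the origin of the cutoff and Sudakov structure discussed above.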
For the U(1) interaction (I = 1), the coefficients involve the hypercharges of the different fermions. The color factor N_f is equal to N_c = 3 for quarks and 1 for leptons. The coefficients involving the Higgs bosons are analogous, where h here stands for any of the four Higgs boson FFs.

I = 2: SU(2) interaction. Denoting by u_L and d_L any up- and down-type left-handed fermion, one finds the corresponding coefficients, where as before the color factor N_f = 3 for quarks, 1 for leptons. The coupling coefficients of the Higgs bosons follow similarly. The couplings for the charge-conjugate states are the same.

I = Y: Yukawa interaction. In this work we only keep the top Yukawa coupling, setting all others to zero, where q^3_L denotes either the left-handed top or bottom quark. We furthermore need the corresponding coefficients involving the Higgs fields.

Finally, we need to consider the evolution involving the mixed BW boson FF. For its non-vanishing couplings, as for the fermions, the same relations hold for the charge-conjugate states. The resulting evolution equations in the {T, CP} basis are given in full in appendix B.

Double logarithmic evolution

Any combination of FFs that is not an SU(2) singlet satisfies an evolution equation of the form (2.52). The mismatch between the coefficients of d^{1−}_q(x/z, q) and d^{1−}_q(x, q) on the right-hand side of eq. (2.52) induces a logarithmic sensitivity to m_V and hence a double-logarithmic term in the evolution. In fact, noting the form of the SU(2) contribution to the fermion Sudakov factor, the evolution can be rewritten as in eq. (2.55). The integrals are now independent of m_V and therefore only produce single-logarithmic evolution. All the double-logarithmic dependence comes from the Sudakov factor, and we can write the FF as the Sudakov factor times a reduced FF d^{1−}_q that has only single-logarithmic evolution. Similarly, all other FF combinations that are not SU(2)-symmetric are suppressed at high energy by powers of the corresponding SU(2) Sudakov factor [31].
While for fermions only isospin 0 and 1 combinations are possible, for vector bosons one can also form combinations with T = 2. The double-logarithmic dependence in fact only depends on the value of the isospin, and the general form is given in eq. (2.59).

Momentum conservation

The total momentum fraction carried by particle species k in a jet initiated by a parton of type i at scale q is given by eq. (2.60). This is a set of ordinary differential equations that can be solved by finding the eigenvalues and eigenvectors of the matrix F_ij(q). One of the eigenvalues, corresponding to the eigenvector (1, 1, . . . , 1), is zero, so for every species k and unmixed interaction I there is a linear combination of the momentum fractions d^k_i that is scale-independent. Furthermore, since the sum of momenta of all species k in the jet must equal that of the initial parton i, for the unmixed FFs the momentum sum is conserved by each interaction separately.

For the mixed vector boson FF, i = BW, of either helicity, the real-emission term involves the difference between the momentum sums for up- and down-type fermions and scalars, which vanishes, so that, from eq. (2.17), the momentum sum is proportional to the BW Sudakov factor ∆_BW(q). Now, from eq. (2.3) and the fact that, as will be discussed in section 4, the mixed γZ FFs d^k_γZ all vanish at the electroweak scale q = q_0, the momentum sum for the mixed FF, of either helicity, vanishes at all scales. Similarly, the mixed Higgs FFs (2.12) make equal and opposite contributions to the Higgs and Z^0_L, and do not mix with other FFs, so they also do not contribute to the momentum sum.

3 Achieving full (next-to-)leading-logarithmic accuracy

We have seen in section 2.3 that fragmentation functions that are not iso-singlets experience double-logarithmic evolution. This is due to the fact that the soft singularity as z → 1 in the splitting functions P^R_{ii,G}(z) does not cancel between the virtual and real contributions.
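The statement that the eigenvector (1, 1, . . . , 1) of F with zero eigenvalue enforces momentum conservation can be checked numerically on a toy two-flavor system. The matrix entries below are illustrative, and the integrator mirrors the two-stage Runge-Kutta (Heun) method mentioned in section 4:

```python
import numpy as np

# Toy 2-flavor system: dD/dlnq = F @ D, with D[i, k] the momentum
# fraction of species k in a jet initiated by i (D = identity at q0).
# Zero row sums of F encode the eigenvector (1, 1) with eigenvalue 0.
a, b = 0.3, 0.1
F = np.array([[-a, a],
              [b, -b]])

D = np.eye(2)
t, dt = 0.0, 1e-3
while t < 5.0:                       # evolve over 5 units of ln q
    k1 = F @ D                       # Heun's method (two-stage RK)
    k2 = F @ (D + dt * k1)
    D = D + 0.5 * dt * (k1 + k2)
    t += dt

row_sums = D.sum(axis=1)             # stays equal to 1 for each i
```

Because each row of F sums to zero, every Heun step preserves the row sums of D exactly (up to rounding), so the total momentum of each jet remains unity while the individual fractions mix.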
This is the origin of the SU(2) Sudakov factor, which according to eq. (2.16) is given by an exponential of the integrated virtual contribution. The leading-logarithmic contribution arises from the term in the splitting function that is divergent as z → 1, with coefficients C_{f,2} = C_{H,2} = 3/4 and C_{W,2} = 2. For a fixed coupling α_2 we then obtain the double-logarithmic (DL) approximation to the Sudakov factor. However, it is well known that in general, Sudakov factors take the form

∆_{i,2}(q) = exp [L g_1(αL) + g_2(αL) + α g_3(αL) + . . .]   (3.4)

where in the case at hand α ≡ α_2(q). The functions g_i(x) are polynomial functions satisfying g_i(0) = 0 and determine the logarithmic terms necessary in the expansion when the size of the log is such that αL ∼ 1. The DL approximation amounts to only keeping the O(α) term of the g_1 function. It is only sufficient if the size of the log satisfies αL^2 ∼ 1 but αL ≪ 1. The leading-logarithmic (LL) approximation amounts to keeping the whole function g_1, which sums all terms of order α^n L^{n+1}, while the next-to-leading-logarithmic (NLL) approximation requires keeping in addition the function g_2, which sums logs of order α^n L^n, and so on.

At the highest energies reachable at the LHC and a future 100 TeV collider, the logarithm can be as large as 5 and 7, respectively. Given that α_2 ∼ 0.03, this means that α_2 L^2 ∼ 1, but one still has α_2 L ≪ 1. In [38,39] and so far in this paper we have given DL-accurate results that only reproduce the term of order αL^2 in the exponent, even though they also produce an incomplete set of subleading terms. In the absence of large numerical factors, which can spoil the naive scaling of the logarithmic terms, DL-accurate results would be sufficient to describe the physics at these energies. However, it is known [11] that for fermion scattering processes the single-logarithmic coefficient is about a factor of 3 larger than the double-logarithmic coefficient, such that these simple scaling rules might not provide reliable answers.
In the following we will therefore only specify the dominant term missed in a given logarithmic expansion (assuming αL ≪ 1), without discussing the actual size of the effect.

Table 3. Scaling of the dominant missed term in the perturbative expansion, for the double-log expansion, where only the leading αL^2 term in the exponent is kept; the LL expansion, where the whole function g_1(αL) is kept; and the NLL expansion, where the functions g_1(αL) and g_2(αL) are kept. For each of these, we show the scaling of the first missed term if just the logarithmic resummation is used, and also the scaling if the resummed result is matched with the fixed-order NLO calculation (such that the full α dependence is reproduced).

In table 3 we show the dominant term missed when using DL, LL and NLL resummation. For each we also give the first missed term if the resummed results are matched to the full O(α) calculation. One can see that using the full LL resummation vs DL-accurate results [using the complete function g_1(αL), rather than just its O(α) term] does not improve the situation, since one is still missing a term of order αL, or α^2 L^3 if the result is matched with the fixed-order calculation at NLO, as described in [39]. The α^2 L^3 in the matched calculation comes from the missed αL term of the function g_2(αL) multiplying the αL^2 term of Lg_1(αL). This term is only reproduced once the complete NLL resummation is taken into account. This of course makes sense, since the full LL resummation is only formally improving the accuracy of the DL resummation when counting αL ∼ 1. In that limit, however, the NLL resummation provides an O(1) effect, which needs to be included as well. Note that the two different choices for the scaling of the logarithm were already discussed in some detail in [16].
From this pure logarithmic counting, one expects DL accuracy to be the same as LL accuracy, and NLL effects to be subdominant as long as αL ≪ 1. But as mentioned before, large numerical coefficients can spoil this behavior, with the details depending on the process being studied. Even though the full LL resummation does not improve the situation over matched DL resummation for feasible collider energies, we will show how it can be obtained in the DGLAP formalism by choosing the scale of the running SU(2) coupling appropriately.

It is well known from standard QCD resummation and parton-shower algorithms that for double-logarithmically sensitive observables the evolution should be angular-ordered and the running coupling should be evaluated at the transverse momentum of the gauge boson emission [45,46]. This means that instead of using α_2(q) as we have been doing in the DGLAP evolution, one should use α_2(q(1 − z)). Then, with the one-loop coefficient β_0^(2) = 19/12, the ratio of these two scale choices has an expansion in powers of α_2 ln(1 − z). Note that these logarithmic terms in 1 − z only give rise to large logarithms if integrated against a singular function f(z) ∼ 1/(1 − z). Thus, in standard DGLAP evolution in QCD, where the soft divergence as z → 1 cancels between the virtual and real contributions, the difference between these two scale choices does not lead to logarithmic terms that need to be resummed. For the case of SU(2) DGLAP evolution of PDFs or FFs that are not iso-singlets, however, this cancelation does not happen, and one finds a tower of terms which generates the LL function g_1(α_2 L). The full LL resummation is therefore obtained by changing the SU(2) splitting functions that are singular as z → 1 as in eq. (3.10).

By making one more change one can in fact also reproduce the full NLL resummation of the collinear evolution. The only missing term is the 2-loop cusp anomalous dimension, which can be included using the CMW prescription [47] for the coupling constant.
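The two scale choices for the running SU(2) coupling can be compared with a short sketch, using one-loop running with β_0^(2) = 19/12 in the convention dα/d ln q = −(β_0/π) α², and an illustrative boundary value α_2(100 GeV) = 0.034 (the text only quotes α_2 ∼ 0.03):

```python
import math

ALPHA2_Q0 = 0.034      # illustrative alpha_2 at q0 = 100 GeV (assumption)
BETA0_2 = 19.0 / 12.0  # one-loop SU(2) coefficient in this convention
Q0 = 100.0             # GeV

def alpha2(q):
    """One-loop running: d(alpha)/d(ln q) = -(beta0/pi) * alpha^2."""
    return ALPHA2_Q0 / (1.0 + (BETA0_2 / math.pi) * ALPHA2_Q0 * math.log(q / Q0))

# Compare evaluating the coupling at the evolution scale q with the
# LL-improved choice q*(1-z), the emission transverse momentum, in the
# singular splitting terms.
q, z = 1.0e4, 0.99
a_fixed = alpha2(q)             # alpha_2 at the evolution scale
a_kt = alpha2(q * (1.0 - z))    # alpha_2 at the emission kT ~ q(1-z)
```

Since SU(2) is asymptotically free, α_2(q(1 − z)) > α_2(q) near the soft endpoint, and the difference, integrated against the 1/(1 − z) singularity, is precisely what generates the LL function g_1(α_2 L).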
This amounts to changing the coupling to the CMW scheme. For quantities inclusive over the final state, where no soft function is required, this therefore reproduces the full NLL resummation. For less inclusive observables, it misses the logarithmic resummation coming from the evolution of the soft function, which was discussed in [40] and is not included here.

As we have explained, including the full LL resummation, compared with only the DL resummation, does not improve the formal accuracy of the calculation, unless the full NLL effects are included at the same time. Since the NLL contributions that were obtained above by including the 2-loop cusp anomalous dimension only include the NLL effects from the collinear evolution, but miss the NLL contributions from the soft evolution from [40], their inclusion does not raise the formal accuracy either. Nevertheless, we compare the results from the collinear NLL resummation discussed above with DL-accurate results obtained in previous work when presenting results in section 5.

4 Implementation details and input FFs

For simplicity we start the evolution of all FFs at the electroweak breaking scale q_0 ∼ m_V, which in practice we take to be 100 GeV. Each value of the fragmentation product k requires a separate run of the evolution code. For a quark or charged lepton, k = f, assuming that the helicity of the fragmentation product is not detected, we take as input a δ-function at x = 1 for i = k, setting all other initial FFs to zero. Then the FFs for all 60 SM states i fragmenting into f are generated by evolving these input FFs to higher scales using the SM DGLAP equations given in section 3. To obtain FFs at scales below q_0, the resulting FFs d^f_i(x, q > q_0) should be convoluted with the SU(3) ⊗ U(1)_em-evolved and hadronized FF of a jet of flavor f produced at scale q_0. The neutrinos k = ν have no right-handed states, so the initial condition is modified accordingly for evolution from scale q_0.
The resulting FFs can be interpreted directly as neutrino momentum fraction distributions, since the neutrinos do not evolve below the electroweak scale. For fragmentation into a gauge boson V we again assume the helicity is not detected, so the input is shared between the two helicity states. For the gluon, the SM-evolved FFs at higher scales then need to be convoluted with the FFs of a gluon jet produced at scale q_0. For the W^±, on the other hand, the boson can simply be allowed to decay at scale q_0. For the neutral gauge bosons V = γ, Z^0 we resolve them into the unbroken B, W^3 and BW states according to eq. (2.2) at scale q_0 and evolve these upwards. Again, the heavy bosons can then decay directly at scale q_0, while the photon can either be treated as a stable particle or fragmented by U(1)_em evolution at lower scales. Similarly, the Higgs and longitudinal gauge boson FFs are resolved as in equations (2.10) and (2.11), and the unbroken FFs are evolved to higher scales using the unbroken SM.

The DGLAP evolution equations were solved directly in x-space using a two-stage Runge-Kutta method (Heun's method). Evaluations were on a grid of (501, 71) points in (x, q) for each FF. The x-grid was uniform in log x from 10^{-7} to 0.5, and uniform in log(1 − x) above 0.5. The q-grid was uniform in log q. The input FFs at the matching scale are proportional to δ(1 − x), which was approximated by a smooth function of x with width parameter a = 10^{-3}. Results were confirmed to be stable for this value of a. Integrals were evaluated to relative precision 10^{-6} by adaptive Gaussian integration with five-point polynomial interpolation on the x-grid. Notice that the momentum conservation relations (2.66) and (2.71) involve sums over independent runs of the evolution code for the 30 possible fragmentation products k, and must hold for each one of the 60 fragmenting objects i, which provides a valuable check on the correctness and precision of the code.
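The x-grid construction and the smoothed input δ(1 − x) can be sketched as below. The paper's exact smooth approximation to the δ-function is not reproduced here, so a normalized exponential bump in 1 − x of width a = 10⁻³ is used as a stand-in:

```python
import numpy as np

# x-grid: 501 points, uniform in log x from 1e-7 up to 0.5, then
# uniform in log(1-x) above 0.5, as described in the text.
n_lo, n_hi = 251, 250
x_lo = np.exp(np.linspace(np.log(1e-7), np.log(0.5), n_lo))
u_hi = np.exp(np.linspace(np.log(0.5), np.log(1e-7), n_hi + 1))[1:]
x_grid = np.concatenate([x_lo, 1.0 - u_hi])

# Smoothed delta(1-x) input FF: the paper's exact smooth form is not
# quoted, so a normalized exponential bump in u = 1-x of width a is an
# assumed stand-in.
a = 1e-3
def delta_smooth(x):
    return (2.0 / a) * np.exp(-2.0 * (1.0 - x) / a)

# Trapezoidal check that the smoothed delta integrates to ~1 on the grid.
f = delta_smooth(x_grid)
integral = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x_grid)))
```

The log(1 − x) spacing above x = 0.5 is what makes such a narrow input bump resolvable: the grid points cluster toward x = 1 exactly where the input FF is concentrated.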
5 Results

As already mentioned, there are a total of 60 × 30 = 1800 distinct FFs, and we can clearly only show a small subset of all possible results. We therefore choose a few illustrative choices of i (left- and right-handed down quarks, the left- and right-handed electron, the SU(2) bosons W^+ and W^3, the U(1) boson B, and the gluon), and for each i group the 30 possible values of k into a few representative sets. Readers interested in more details can request all data as LHAPDF files from the authors. The main results use the full NLL accuracy of the DGLAP evolution that was discussed in section 3. Note that to obtain full NLL accuracy for a cross-section prediction requires the inclusion of single logarithms arising from the evolution of the soft function that were computed in [40].

We begin by showing in figure 1 the results for the momentum fractions d^k_i(q) defined in eq. (2.60). For each species i, we show how the total momentum is shared between fragmentation particles k at scales q ranging from 100 to 10^6 GeV. We stack the various sets for k on top of each other, such that momentum conservation implies that each plot sums to unity for all values of q once all particles are included. To show the size of the difference between DL and NLL accuracy, we also show in dashed lines the results obtained using DL accuracy. The reason that for several curves the DL result is not visible is that it is indistinguishable from the NLL result.⁷ One can also clearly see that at q = 100 GeV, the only contribution is for i = k. Since i and k are chosen in the unbroken and broken basis, respectively, for the W^3 and B the relative probabilities of Z^0 and γ are given by the weak mixing angle. As we evolve to larger values of q, other flavors k are generated.

In the first row we show the fragmentation of left- and right-handed down quarks, i = d_L, d_R. In the left-handed case (a) one can see that the SU(2) interaction has a significant effect.
Left-handed up quarks are generated with double-logarithmic probability, such that at large enough values of q the amounts of u_L and d_L become of the same order of magnitude, and SU(2) bosons are produced at an appreciable rate. Gluons are produced at a larger rate, which is obviously due to the relative strength of the SU(3) and SU(2) interactions. For the right-handed down quark (b), the fragmentation is completely dominated by QCD evolution, such that a large fraction of gluons and a smaller fraction of quarks other than d_R get generated. Other particles, shown by the remaining contribution in cyan, only make up a tiny fraction, even at q = 10^6 GeV.

The fragmentation of left- and right-handed electrons is shown in the second row of figure 1. In the left-handed case (c) one can again see the importance of the SU(2) interactions at large values of q, and for q ∼ 10^6 GeV the fractions of electrons and neutrinos become comparable, with the momentum fraction contained in gauge bosons at the 10% level. For the right-handed electron (d), the evolution is only given by the U(1) interaction, such that one generates only a small fraction of U(1) bosons, and an even smaller fraction of other particles.

Gauge boson fragmentation is shown in the third and fourth rows of figure 1. For the W^+ boson (e), one sees that the other SU(2) gauge bosons are generated quite rapidly as q becomes larger than 100 GeV. Asymptotically, for q → ∞, the three SU(2) gauge bosons will evolve to have equal momentum fractions, and while one can see the trend for them to become equal, one needs to go to much higher values than are shown here. Quarks and leptons are also produced at an appreciable rate, with more quarks owing to the colour factor. For the U(1) boson (f), the only non-vanishing fragmentation at q = 100 GeV is into Z bosons and photons, with relative fraction tan^{-2} θ_W.
Since the coupling constant α_1 is smaller than α_2, quarks and leptons are produced at a lower rate than for the W^+ boson. However, the quark and lepton rates are more equal, because the colour factor of the quarks is largely compensated by the higher hypercharges of the leptons. For an initial W^3 boson, shown in (g), one again starts off with only Z bosons and photons, with relative fraction tan^2 θ_W. Quickly the neutral SU(2) boson evolves into charged Ws, and also into quarks and leptons. Finally, we show in (h) the evolution of the gluon. As expected, it is completely dominated by the strong interaction, such that it mostly evolves into quarks.

While figure 1 illustrates the evolution of the total momentum fractions carried by various particles in a given species of jet, it does not show how the evolution looks for fixed values of x. This is shown in figures 2 and 3 for the same set of particles as before, using the values x = 0.9 (shown on the left) and x = 0.1 (on the right). As in figure 1, solid (dashed) lines correspond to NLL (DL) evolution.

⁷ However, given that the results with DL accuracy include an incomplete set of higher logarithmic terms, these results do not allow the conclusion that NLL terms are numerically subdominant to the DL terms. Rather, one can only conclude that the NLL terms that arise from the improved scale-setting procedure in the SU(2) running coupling are subdominant.

Figure 4. The absolute value of the polarization asymmetry for fragmentation into (left) u and (right) d quarks, for the vector bosons W^±, W^3, B and the gluon. Note that the gluon asymmetry is scaled by a factor of 50, and that for the SU(2) bosons the negative of the asymmetry is shown. The results use the NLL accuracy as discussed in section 3.
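The polarization asymmetry shown in figure 4 is built from the two helicity FFs; the normalized form (d_+ − d_−)/(d_+ + d_−) used below is the conventional definition, assumed here rather than quoted from the text:

```python
def polarization_asymmetry(d_plus, d_minus):
    """Normalized helicity asymmetry of a vector-boson FF.

    The standard form (d+ - d-)/(d+ + d-) is assumed; it ranges from
    -1 (pure negative helicity) to +1 (pure positive helicity).
    """
    return (d_plus - d_minus) / (d_plus + d_minus)
```

With this sign convention, the dominance of the V_− → f_L splitting gives large negative asymmetries for W^+, W^3 → u, consistent with showing the negative of the asymmetry for the SU(2) bosons in the figure.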
As explained in section 4, the initial condition at q = 100 GeV is a δ-function at x = 1 for i = k, such that the fragmentation at x = 0.9 is overall much more dominated by k = i than at lower values of x. Notice that the constraint x < 1 − m_V/q for emission of a heavy vector boson means that at x = 0.9 no such emission can occur below q = m_V/(1 − x) ∼ 1 TeV, depressing the evolution of leptons and heavy bosons below that scale. At x = 0.1, fragmentation into vector bosons is dominant at all scales, because of the low-z enhancement of the corresponding splitting functions.

In figure 4 we show some results on the polarization asymmetry of vector bosons fragmenting into up and down quarks. For W^+, W^3 → u and W^−, W^3 → d the asymmetry is large and negative, due to the dominance of the V_− → f_L splitting function. The W^− → u and W^+ → d asymmetries are also negative but increase from zero, as they start at higher order. The B asymmetries are positive because of the dominance of fragmentation into the right-handed quarks in that case. The gluon asymmetry is a secondary effect of the different evolution of left- and right-handed quarks, the latter evolving more slowly and so remaining at higher x. Notice that there is even a slight difference between the gluon asymmetries for fragmentation into up and down quarks, due to their different electroweak evolution.

In this paper we have discussed the evolution of fragmentation functions in the full Standard Model, which requires resummation of leading logarithms arising from final-state radiation and the associated virtual corrections.
At energy scales far above the electroweak symmetry breaking scale, short-distance processes can be described in terms of 60 objects in the unbroken Standard Model: 12 left-handed quarks, 12 right-handed quarks, 12 left-handed leptons, 6 right-handed leptons, 2 transversely polarized gluons, 2 transversely polarized U(1) gauge bosons, 6 transversely polarized SU(2) bosons, 4 Higgs fields, 2 mixed neutral Higgs states and 2 transversely polarized states that mix the U(1) and neutral SU(2) boson. In hard interactions at such energies, any subsequent radiation is dominated by emissions that are either soft or collinear to the colliding or produced particles. Processes that only depend on the flavor of one particle in each of these "jets" of radiation can be described solely in terms of parton distributions and fragmentation functions, which have to be evaluated at the short-distance scale of the hard interaction. The DGLAP evolution of the PDFs and FFs from the electroweak symmetry breaking scale to the hard scale q resums the logarithmic dependence on the ratio m_V/q. If the observed particles are not SU(2) singlets, one encounters double logarithms in the evolution.

We have presented the evolution of FFs in the complete Standard Model, where all three gauge interactions and the Yukawa interaction of the third generation contribute significantly to the DGLAP evolution. Together with the evolution of the PDFs, which was presented in [22], this provides all details necessary to resum the dominant logarithms for all cases where one is inclusive over the kinematics of the final-state particles. Combining this with the running of soft functions presented in [40], full NLL accuracy of the electroweak evolution can be obtained.
While the dominant terms are of double-logarithmic origin (scaling as α^n L^{2n} in a cross section), we also showed how the complete LL resummation (terms scaling as α^n L^{n+1} in the logarithm of a cross section) may be achieved by an appropriate choice for the scale of the running SU(2) coupling in the singular terms of the evolution. While this does not improve the accuracy in the relevant limit α_2 L^2 ∼ 1, and has a small numerical effect on the resulting FFs, it is necessary when the results from the DGLAP evolution are combined with the soft-function evolution to obtain full NLL accuracy.

Numerically, the electroweak logarithms lead to appreciable effects at the highest energy scales that can be reached at the LHC and a future 100 TeV pp collider, but they still tend to be slightly smaller than what might be expected from the naive scaling of α_2 L^2. For example, a left-handed lepton produced at 3 TeV (30 TeV) has a 6% (15%) probability to fragment into a different particle defined at the electroweak scale q_0 ∼ 100 GeV. The effect is larger for SU(2) bosons produced at the high scale: a charged W boson produced at 3 TeV (30 TeV) has a 14% (30%) probability to fragment into a different particle defined at 100 GeV. We have also studied for the first time the phenomenology of electroweak boson polarization in the FFs. Although large polarizations are generated, they have minimal effects as long as the polarization of fragmentation products is not detected.

A Isospin and CP basis

As already explained in section 1, the set of 60 evolution equations can be decoupled to some degree by switching to a basis of well-defined isospin T and CP. Writing a fermion FF with given {T, CP} as d^{TCP}_i, the left-handed fermion FFs are built from combinations of u_L and d_L, where u_L and d_L refer to left-handed up- and down-type fermions.
Right-handed fermion FFs are given by analogous combinations. The SU(3) and U(1) boson FFs have T = 0, with the unpolarized and helicity-asymmetry combinations having CP = + and −, respectively. The SU(2) boson FFs form combinations with T = 0, 1 and 2. The mixed BW boson FFs are a combination of 0^− and 1^− states, and therefore they have the opposite CP to the corresponding W boson FFs. For the Higgs boson, one writes combinations similar to those for the fermions. There are also the mixed H^0/H^0 FFs, which carry non-zero hypercharge and have unit isospin. The even-CP mixed FF d^{1+}_{HH} represents the difference between the Higgs and longitudinal Z^0 FFs, in terms of which the longitudinal vector boson and Higgs FFs are given. The odd-CP mixed FF d^{1−}_{HH} does not contribute to any physical states and does not mix with any other FFs, so it remains zero and we do not consider it further. The resulting evolution equations are collected in appendix B.

B Equations used in the forward evolution

As in [38], we define shorthand notation for the convolutions. For splittings involving gauge bosons, we define regularized kernels using the '+'-prescription, in which C_{i,I} is the coefficient in the corresponding Sudakov factor and ". . ." represents less divergent terms. For convenience we also define the isospin suppression factors ∆
Location-based health information services: a new paradigm in personalised information delivery

Brute health information delivery to various devices can be easily achieved these days, making health information instantly available whenever it is needed and nearly anywhere. However, brute health information delivery risks overloading users with unnecessary information that does not answer their actual needs, and might even act as noise, masking any other useful and relevant information delivered with it. Users' profiles and needs are definitely affected by where they are, and this should be taken into consideration when personalising and delivering information to users in different locations. The main goal of location-based health information services is to allow better presentation of the distribution of health and healthcare needs and Internet resources answering them across a geographical area, with the aim of providing users with better support for informed decision-making. Personalised information delivery requires the acquisition of high-quality metadata about not only information resources, but also information service users, their geographical location and their devices. Throughout this review, experience from a related online health information service, HealthCyberMap, is referred to as a model that can be easily adapted to other similar services. HealthCyberMap is a Web-based directory service of medical/health Internet resources exploring new means to organise and present these resources based on consumer and provider locations, as well as the geographical coverage or scope of indexed resources. The paper also provides a concise review of location-based services, technologies for detecting user location (including IP geolocation), and their potential applications in health and healthcare.

Introduction

The concept that location can influence health is well known in medicine. Certain diseases tend to occur in some places and not others.
Health information needs and services also vary with location. In fact, different places on Earth are usually associated with different profiles that can also change with time: physical, biological, environmental, economic, linguistic, social, cultural, and sometimes even spiritual profiles, that do affect and are affected by health, disease, and healthcare [1]. Caregivers need to know not only the history of patients they treat but also information about the social and environmental context within which those patients live. Patients and the public in general also have similar needs that vary with location. The Internet offers a wealth of health information resources that can answer most of the knowledge needs of clinicians and their patients, and the public in general, but also carries with it the risk of overloading them with unnecessary information. A big challenge remains to find and push location-specific knowledge (e.g., local disease rates and guidelines, Table 1) to users based on their location and needs [2]. Location-based information means information that is immediately relevant, which is the essence of the Semantic Web (the next-generation World Wide Web, http://www.w3.org/2001/sw/). Research literature discussing health and healthcare-specific potentials and applications of location-based services is currently very scarce (none in MEDLINE/PubMed as of January 2003). Throughout this review, the author's experience in designing and implementing a related online health information service, HealthCyberMap (http://healthcybermap.semanticweb.org/), is referred to as a model that can be easily adapted to other similar services.
The potential and applications of location-based services in health and healthcare Different user (device) localisation technologies exist today that can locate in real time mobile (wireless) and desktop Internet users to their country, town or city of access, and even to their exact location on Earth (with an accuracy of metres), depending on the technology used. Geobytes, Inc. provides one such technology (see below). Starting from their generic list of applications of user localisation technologies (http://www.geobytes.com/applications.htm), one can think of the following health and healthcare-specific scenarios:
- Geographical customisation of Web services, Web sites, and e-mail newsletters (personalisation by location), for example to serve up language relevant to the viewer's location, to cite only drugs, drug trade names and their prices as found in the viewer's country, and to deliver local health news, local weather and air quality maps, travellers' health information, and other health content specific to the viewer's location.
- Geographically targeted banner and e-mail marketing and advertising. This can prove very useful to commercial online health service providers and pharmaceutical companies.
- Web site traffic analysis to determine the geographical provenance of health sites' visitors (Figure 1). This information can then be used to tailor site content to match the needs of actual visitors and the characteristics of their location, as well as to refine site-marketing strategies and monitor advertising campaigns (especially in the case of commercial online health services). Determining the most active countries accessing a site can help prioritise development plans for new health information service interfaces and content in other languages, e.g., French language content and user interface, besides English (giving higher development priority to the languages of the most active countries). 
This becomes especially important when development resources (human and financial) are scarce, and to ensure rapid and sustained leadership over competitor services that might also be exploiting the potential of multilingual interfaces and content.
- Fraud detection and user authentication by confirming that users are actually present where they claim they are.
- Digital rights management to ensure compliance with distribution rights of copyrighted health information and media.
Mobile location-based services and their applications in health and healthcare Besides transmitting real-time information on their precise location, next generation mobile phones currently entering the market (e.g., Microsoft Smartphone - http://www.microsoft.com/mobile/smartphone/default.asp) also provide users with wireless Internet access [3]. This makes them very versatile and powerful devices.
Table 1. Examples of location-specific medical/health information:
- Local disease rates and information*, maps and guidelines;
- Targeted health education;
- Addresses of local healthcare facilities;
- Local health news;
- Local health risks and hazards;
- Travellers' health information;
- Local drugs/drug trade names and prices (in local currency);
- Information whose digital distribution rights are limited to some location(s);
- In addition to serving up content (and interface) in language(s) relevant to the viewer's location.
* For example, the most common cause of splenomegaly in Kenya is malaria, while in the UK the most common causes of splenomegaly are cytomegalovirus infection and toxoplasmosis.
Mobile health applications require an understanding of where consumers are, where they have been and where they are going. Wireless mobile devices can continually transmit device (user) location to such applications, which must then make use of the transmitted information in a sensible way. 
According to Berkowitz and Lopez [4], location-awareness refers to applications or services that make use of location information provided by suitable devices or software (location need not be the primary purpose of the application or service), while location-sensitivity refers to location-enabled mobile devices that can be used by location-aware applications and services, such as mobile phones, personal digital assistants and pagers. Such devices rely on GPS (Global Positioning System)/mobile phone-related technologies. One can also add to the second category of location-sensitive tools any software whose primary function is to detect the locations of "static" Internet users (e.g., in the office or at home) based on their IP (Internet Protocol) address, as demonstrated in HealthCyberMap (IP geolocation [5,6] - see below). Following are some health and healthcare application examples of mobile location-based technologies.
- Health information and service providers can (if needed) react immediately to the changed location of a mobile user by delivering personalised, timely information and services for his/her new roaming region [4].
- An online healthcare facility locator service can assist users in finding the nearest hospitals or clinics based on their location and health needs, and even provide them with driving directions and real-time traffic information [4]. For example, a mobile patient checking on the availability of a dental clinic in a given city might access geocoding services that identify the location of the patient and nearest clinics, and would cull data from real-time booking services to check for clinics' working hours and book a suitable appointment, and from driving directions and real-time traffic information services to route the patient to the clinic.
- Helping ambulance and rescue teams precisely and quickly locate and track people who are in a medical emergency, injured, or lost, and also for ambulance fleet management. 
New FCC rules (Federal Communications Commission -http://www.fcc.gov/911/enhanced/) mean that GPS receivers will very soon become an integral part of all mobile phones. -Mobile patient monitoring and automated emergency calls with very precise information on patient location if the system detects any medical problem requiring intervention. It should be also possible to transfer all monitoring data to a secure Web-based patient record accessible from anywhere in the world [3]. Digital Angel Technology http://www.digitalangel.net/ belongs to this category of services. -Sampling real-time data on environmental exposure to irritants and pollutants with information on the individual's physical reactions in different situations and locations. One practical application of such exercise could be to send alarm messages to mobile asthmatic patients if they start moving to low air quality locations within large cities [3]. Important issues related to location-based services User devices used to access a service might change with location, e.g., a desktop or laptop computer at home or in the clinic and a more limited mobile device on the road. The drawback of the small size of mobile devices is that display is considerably smaller and input much more difficult (e.g., no full-scale keyboard). Location-based services must take into consideration the input and output characteristics of different user devices by carefully choosing, personalising and formatting the content to display on such devices [4]. Location capability also poses service providers with the challenge of responsibly handling consumers' personal privacy, especially if they use cookies to track their users [3,4,7]. Services should publish their Privacy Policy and respect consumers' choices in this regard. See for example HealthCyberMap's Privacy Statement http://healthcybermap.semanticweb.org/privacy.htm. 
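The air-quality alert scenario above can be sketched as a simple geofence check: the service compares the user's reported coordinates against a list of known poor-air-quality zones and raises an alert on entry. The zone names, coordinates, and radii below are entirely hypothetical, chosen only for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical low-air-quality zones: (name, centre lat, centre lon, radius km)
POOR_AIR_ZONES = [
    ("city_centre", 51.5074, -0.1278, 2.0),
    ("industrial_park", 51.4800, -0.2000, 1.5),
]

def air_quality_alerts(lat, lon):
    """Return names of poor-air-quality zones the user is currently inside."""
    return [name for name, zlat, zlon, rkm in POOR_AIR_ZONES
            if haversine_km(lat, lon, zlat, zlon) <= rkm]
```

A real deployment would pull zone boundaries from live air-quality feeds rather than a static list, and would push the alert over the mobile network.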
Experience from HealthCyberMap: first steps toward developing a customisable location-based medical/health information service HealthCyberMap (http://healthcybermap.semanticweb.org/) is a geographic information system (GIS)-driven Web-based directory service of health resources that explores new means to organise and present health and healthcare related information on the Internet based on consumer and provider locations, as well as the geographical coverage or scope of indexed resources. It tries to develop an online information service that should ultimately allow better presentation of the distribution of health and healthcare needs and Internet resources answering them across a geographical area. The service is aimed to provide better support for informed decision-making by the public, patients and their caregivers. HealthCyberMap geographical mapping of medical/health information resources Location-based services draw heavily on GIS and geoinformatics [3], as illustrated in the "dental clinic" example above. Another novel use of GIS is to map conceptual spaces occupied by collections of medical/health information resources, as demonstrated in HealthCyberMap [8]. Besides mapping the semantic and non-geographical aspects (e.g., subject or topic) of these resources using suitable spatial metaphors (e.g., human body maps - http://healthcybermap.semanticweb.org/bodyviewer/), HealthCyberMap also maps some geographical aspects of these resources (Figure 2). The resultant maps can be classified as conceptual information space maps and can be used as a navigational aid for browsing mapped resources [9]. Two geographical aspects of health information resources are considered in HealthCyberMap, namely coverage and provenance. Coverage deals with the spatial extent or scope of the content of a given resource (this aspect is important if we want to be able to select resources that are appropriate for a particular user location). 
Provenance refers to the geographical location (city/country) of a resource publisher or author(s), whichever is more relevant, and can be very useful as an index to information resources and for some kinds of studies [10,11] (Figure 2). Coverage and provenance are not necessarily the same as the physical location of the hosting servers, e.g., a British Patient Support Group offering UK-specific advice on some condition might have its Web site hosted on a server in Arizona, US; however, the site remains more relevant to patients and their caregivers living in the UK. Technologies for detecting user location To begin developing a location-based service, we have to adopt a method for detecting user location. A user's location can be collected through a Web form that the user fills in (which can also be used to collect other user information/preferences besides location), or automatically detected based on the user's IP address (IP geolocation [5,6]). Tools exist that allow mapping a user IP address to a coarse location (city/country) on Earth (this should be enough for basic geographical customisation purposes of an online information service like HealthCyberMap). These tools include: More information and a demonstration of the last two tools can be found at: http://healthcybermap.semanticweb.org/ip.htm. IP address to city/country mapping is not always successful [5]. Available tools sometimes fail to map a user IP address to a city/country or map it to a wrong location, depending on the accuracy and coverage of their underlying lookup databases. In wireless Internet, mobile devices can continually send their very precise location, e.g., via assisted global positioning services (AGPS [7]), which can be used in more sophisticated mobile location-based services. (International Journal of Health Geographics 2003, 2 - http://www.ij-healthgeographics.com/content/2/1/2) 
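The IP geolocation idea can be sketched as a longest-prefix lookup against a prefix-to-location table. The tiny table below is invented for illustration; real services such as Geobytes maintain large, continuously updated databases, and a lookup can fail (return nothing) when no prefix covers the address, mirroring the failure mode noted above.

```python
import ipaddress

# Hypothetical, tiny prefix-to-location table (country code, city).
GEO_DB = [
    ("81.0.0.0/8", ("UK", "London")),
    ("81.2.69.0/24", ("UK", "Cambridge")),
    ("196.0.0.0/8", ("KE", "Nairobi")),
]

def geolocate(ip):
    """Map an IP address to (country, city) by longest matching prefix,
    or None when the lookup database has no covering prefix."""
    addr = ipaddress.ip_address(ip)
    best, best_len = None, -1
    for prefix, loc in GEO_DB:
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best, best_len = loc, net.prefixlen
    return best
```

The more specific /24 entry wins over the covering /8, which is how city-level records refine a country-level default.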
However, as outlined above, privacy issues must be observed whenever any kind of user location information is gathered (e.g., when a user does not want to reveal his/her location or accept cookies, this should be respected). HealthCyberMap location-based customisation possibilities It should be possible to customise (personalise) HealthCyberMap based on a user's geographical location as determined by the IP address used to access the HealthCyberMap server. Moreover, HealthCyberMap users should be allowed to override this and manually set their preferences (including personal preferences unrelated to location), following the "My Yahoo!" (http://my.yahoo.com) and BBC News and Sport (http://news.bbc.co.uk - Figure 4) examples. A user input form can be used to capture (and store) a user's profile. User descriptors in this profile can then be used to tailor the content delivered to that user according to some predefined content selection model or rules (Figure 5). Two main location-based customisation categories are possible: 1. Language and interface customisation: a. Setting the HealthCyberMap Web interface language to match the user's location language; and b. Retrieving/giving more importance to Web resources in the user's location language (http://healthcybermap.semanticweb.org/language.htm - Figure 6). However, it should be noted that some users might move from their native country to another country, e.g., from the UK to France, either temporarily or permanently. It is not always the case that such users will want the language of the HealthCyberMap interface and retrieved resources to be changed to reflect their new location, e.g., from English to French. 2. Content customisation: Customisation should also (ideally) address any location-specific information needs and match these needs to suitable online resources covering the concerned location and its known health and healthcare makeup (not just its language). 
This mandates that knowledge about the characteristics and health needs of different locations (location profiles) be available in a form suitable for use by a customisation engine against HealthCyberMap's metadata base of resource pointers to select location-specific information resources (Figure 5). Customisation parameters Customisation depends upon many factors and parameters [12]. Some of these have already been discussed above. Following is an incomplete list of parameters that might be relevant to the customisation of health and healthcare information services.
- Personal and health profiles: user role (patient, physician, nurse, etc.); user's demographic and health profiles (which might be automatically inferred from his/her health record if linked to HealthCyberMap); health and healthcare makeup of the user's location (location can influence health; disease rates, management guidelines and health information needs also vary with location); user address (addresses of health services presented to the user should correspond to his/her address, e.g., the address of the nearest hospital or pharmacy); user country and region (urban/rural); user's socio-economic group; lifestyle; dietary style/preferences; culture; language; literacy; education; previous knowledge;
- Computer and Internet profiles (per user): user computer skills (technophobes/technophiles); user privacy and security needs; user accessibility needs, e.g., large type for sight-impaired patients; user device type, e.g., WAP (Wireless Application Protocol) phone, personal digital assistant, or PC, and device processing power, screen resolution, colour depth, and other display parameters; user agent (browser), supported character encoding sets, supported scripting languages, supported tag sets and data types, and installed plug-ins and versions; available input modalities, e.g., keyboard vs. mouse/pen vs. voice, and output modalities, e.g., text vs. images/video vs. audio only; network capabilities such as bandwidth; and
- Other user preferences, e.g., acceptable language (which might differ from the actual location language), acceptable cost of content (for paid content) and preferred payment mechanisms, and interface appearance (e.g., choosing a colour theme).
Metadata are important for customisation Pointers to good quality resources need to be described in a central database and organised in such a way as to allow a content management and customisation engine to easily and suitably recall and re-use them in different customisation scenarios (Figure 5). Metadata are information about information. The HealthCyberMap metadata base of resource pointers uses fields (elements) from the well-known Dublin Core (DC - http://www.dublincore.org/) metadata set scheme for resource description, with HealthCyberMap's own extensions [13]. A DC language field makes possible the selection of resources based on their language to match a user's preferred language. A DC coverage field is used to store the spatial extent or scope of the content of a given resource; information in this field should help select those resources that are appropriate for a user's location. DC cannot be used to describe resource quality or the geographical location of a resource publisher or author(s) (to be differentiated from coverage, although both are geographic elements), and so HealthCyberMap had to extend the standard DC elements by introducing its own quality and location elements [13]. Resources can also be selected and classified based on different combinations of two or more DC elements as necessary [14]. This could help filter and focus the content presented to users in different situations down to much smaller, more relevant and more easily manageable sets.
Figure 6. HealthCyberMap pilot French language interface for browsing French language resources (http://healthcybermap.semanticweb.org/language.htm). 
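The DC language and coverage elements described above lend themselves to straightforward resource selection. A minimal sketch, using illustrative records and field names rather than HealthCyberMap's actual schema:

```python
# Hypothetical resource pointers with Dublin Core-style elements.
RESOURCES = [
    {"title": "UK asthma guideline", "language": "en", "coverage": "UK"},
    {"title": "Guide asthme (France)", "language": "fr", "coverage": "FR"},
    {"title": "WHO asthma fact sheet", "language": "en", "coverage": "global"},
]

def select_resources(resources, user_country, user_language):
    """Keep resources whose coverage matches the user's country (or is
    global) and whose language matches the user's preferred language."""
    def relevant(r):
        return (r["coverage"] in (user_country, "global")
                and r["language"] == user_language)
    return [r["title"] for r in resources if relevant(r)]
```

Combining two elements (here language and coverage) narrows the result set, as the text notes for combinations of two or more DC elements.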
User profiles and device descriptions Besides collecting metadata describing information resources, two other types of metadata should be gathered, namely user profiles (including user's location profile which directly affects user's health) and device descriptions. An ideal service should be able to reason with all three types of metadata to personalise and optimise a Web user's experience - Figures 5 and 7. Customisation engine A customisation engine or content management server will use all collected metadata types to select suitable online resources for a particular user and his/her particular needs at a given time (dynamic match), and also present them in a form that is appropriate for this user and the device he/she is using to access the resources ( Figure 5). The content management and customisation engine will depend on a customisation knowledge base or set of rules to "know" which resource(s) and presentation form are suitable for which situation, location or profiles. Mapping health problems in HealthCyberMap and identifying information needs and gaps In addition to mapping medical/health Web resources, an opportunity exists to map health problems and relate them to available online information resources. This can be of great help to healthcare policy makers, planners and managers (helping them make informed decisions). Moreover, any existing knowledge gaps in current Web resources can be identified and health information needs can be efficiently and effectively determined and addressed. For example, we might identify a lack of/a need for adequate Web resources presenting patient education material on a particular health problem. Health and healthcare information providers can then develop any required Web resources or modify existing ones in the light of this information (on current medical/health information needs and corresponding gaps on the Web). 
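One simple way to realise such a customisation knowledge base is an ordered list of condition/decision rules evaluated against the user and device profiles, with a catch-all default last. The rules, field names, and decisions below are hypothetical, intended only to show the shape of the idea:

```python
# Illustrative rule set for a customisation engine that reasons over user
# profiles and device descriptions to pick a presentation form.
RULES = [
    # (condition over profiles, presentation decision)
    (lambda user, device: device["type"] == "wap_phone",
     {"format": "text_only", "page_size": "small"}),
    (lambda user, device: user.get("accessibility") == "low_vision",
     {"format": "large_type", "page_size": "normal"}),
    (lambda user, device: True,  # default rule, always last
     {"format": "full_html", "page_size": "normal"}),
]

def choose_presentation(user, device):
    """Return the presentation decision of the first matching rule."""
    for condition, decision in RULES:
        if condition(user, device):
            return decision
```

A production engine would also consult the resource descriptions (the third metadata category) and store rules declaratively rather than as code.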
Conclusion The main goal of location-based medical/health information services is to allow better presentation of the distribution of health and healthcare needs and Internet resources answering them across a geographical area, with the aim of providing users with better support for informed decision-making. To enable proper service customisation, three main types of metadata have to be collected and processed: information resource descriptions, user profiles (including the user's location profile, which directly affects user health), and user device descriptions. Delivering real-time, location-enhanced and personalised information (i.e., information that is immediately relevant to users) can help consumers and providers accelerate and optimise their decision-making process in many medical/health situations and problems. The author believes that the integration of a carefully selected variety of medical/health Internet information services and resources with users' tasks, needs (as determined by users' locations and other parameters), preferences and their device capabilities will enable users to focus more on informed decision-making. Throughout this review, experience from a related online health information service, HealthCyberMap (http://healthcybermap.semanticweb.org/), has been referred to as a model that can be easily adapted to other similar services. HealthCyberMap is a Web-based directory service of medical/health Internet resources exploring new means to organise and present these resources based on consumer and provider locations, as well as the geographical coverage or scope of indexed resources.
Figure 7. Three interrelated categories of metadata are required to optimise (personalise) a user's experience.
4,883.2
0001-01-01T00:00:00.000
[ "Medicine", "Computer Science" ]
© 2007 EMBO and Nature Publishing Group. All rights reserved. 1744-4292/07 www.molecularsystemsbiology.com NEWS AND VIEWS Networks from drug–drug surfaces Mol Syst Biol. 3: 85 Multi-drug combinations are vital in modern medicine (Keith et al, 2005; Fitzgerald et al, 2006). Such drug combinations can also be used to probe the relationships between proteins in a network, and progress towards using drug interactions to infer network connectivity has been made in recent years. 
A current study by Lehár et al (2007) takes this effort a large step further by developing tools that use the entire data in a drug-drug interaction dose-response surface to give useful information on the networks in which the drug targets are embedded. Classically, combinations of perturbations (drugs or mutations) have been categorized into one of three interaction types: additive, synergistic, or antagonistic (Bliss, 1939; Loewe, 1953; Hartman et al, 2001). The expected null interaction is called additive, although exactly how this should be defined has been a subject of some controversy (Bliss, 1939; Loewe, 1953; Greco et al, 1995). Synergy occurs when the combination of two perturbations has an effect greater than expected from the individual effects of the single perturbations. Antagonism describes a combination with a less than expected effect. These classifications have proved powerful in dissecting the modularity and connectivity of the underlying biological networks (Tong et al, 2001; Schuldiner et al, 2005; Segre et al, 2005; Yeh et al, 2006). There are some intuitive expectations for the combined effects of two drugs. Let us say, for example, that drugs A and B block two alternative metabolic pathways, of which at least one is needed. In this case, each drug may have very little effect, but the combination will be strongly synergistic. Similarly, an antagonistic interaction may result from drugs acting on two parallel pathways that are both needed: inhibition of one pathway can make its product a limiting factor, thereby neutralizing the effect of inhibition of the other pathways. This simple intuition can be elaborated and applied at the system level to analyze a complex interaction network composed of additive, synergistic, and antagonistic links (Tong et al, 2001; Schuldiner et al, 2005; Segre et al, 2005; Yeh et al, 2006). But the three classical interaction types represent a radical simplification for drug combinations. 
Many drug combinations exhibit different types of interactions 'within a drug pair' depending on dose. Drugs may be additive at one set of combined concentrations and synergistic or antagonistic at another. 'Response surfaces' (also represented by 'isobolograms'), which show combined drug effects over a 2-D gradient of concentrations, can exhibit a rich set of patterns. Can we use the abundant information in these response surfaces to infer more specific and detailed information regarding the underlying biological network? Lehár et al (2007) used simulated Michaelis-Menten metabolic pathways to ask whether drug-response surfaces can yield information regarding the underlying connectivity of molecular targets (Figure 1). They experimentally tested the results of their simulations in the well-studied sterol metabolism pathway. Indeed, in both simulation and experiments, different drug pairs showed different complex response surfaces. Knowing the underlying connectivity of the system, it was then possible to establish links between various local network topologies and the different response surfaces. Lehár et al (2007) produced a reference set of four response surface models to which experimental data could be classified. They found in their simulations that particular target connectivities produced distinct responses in terms of these shape models. The first reference model, based on the standard Loewe dose additivity (Loewe, 1953), is a best fit for combinations of drugs that affect the same target. A second shape model, 'Bliss Boost', an extension of the statistical Bliss independence (Bliss, 1939), provided the best fits for separate targets in an unregulated pathway. A third model, 'Highest Single Agent', assumes that the inhibition for the combined drugs equals the highest (the most limiting) single drug inhibition (Yeh et al, 2006), and was found to predominate for cross-pathway combinations. 
A fourth model of 'Potentiation', which allows an inherent asymmetry in the response surface where the presence of a certain drug increases or decreases the effective concentration of another, was found to be a better fit when cellular targets of drugs are in pathways regulated by negative feedback. This paper highlights the utility of models and simulations for a task as critical as that of rationalizing and modeling the effect of drug combinations. Indeed, quantitative approaches to drug interactions have had impact in pharmacology for more than a century. In the mid-nineteenth century, Fraser (1872) published a landmark paper that showed how two drugs, atropia and physostigma, can have hyperantagonistic effects that resulted in one drug reversing the effects of the other (Fraser, 1872). Fraser termed this 'physiological antidote'. Such suppressive interactions also exist in antimicrobial agents (Yeh et al, 2006). It would be interesting to explore whether the Lehár et al's (2007) approach can be extended to link such hyperantagonistic suppressive interactions to possible underlying connectivities of the biological network. This study offers a 'proof of principle' that we can link the rich functional information encoded in complete drug-drug dose-response surfaces to the connectivity between biological targets. It will be interesting to see how far this kind of analysis takes us in the real world: how much will 'off-target' effects confound the analysis? To what extent is the mapping between network topologies and response surfaces a one-to-one function? Further, it would be important to extend this idea to three or more drug components. Such multi-drug treatments are already being used, but there is little understanding of how their response surface (or, more accurately in this case, 'response spaces') should behave. 
In the case of three-drug combinations, the question of how to define additivity is even more complicated than for two-drug combinations, as the null expectation for additivity must be based not only on the effect of the single drugs but also on pre-existing knowledge of all their pairwise interactions. Lehár et al's (2007) surface responses and shape models offer an excellent starting point for conceptual and experimental explorations of such combinations of three or more drugs. Figure 1 The relationship between target connectivity and synergy for paired inhibitors. The underlying connectivity for two inhibitors A and B is shown schematically along with intuitive expectations for their combined effect and the simulated dose-dependent response surfaces. (A) If their targets are serial in a pathway, A and B should help each other reduce production. If they inhibit parallel pathways (B, C), the combination effect ought to reflect the rate-limiting reaction for required ('AND') junctions but should be very synergistic for alternative ('OR') pathways. (D) More complex network topologies without an intuitive expectation can still be simulated. Lehár et al (2007) sought to categorize and link the set of possible response surfaces to underlying network connectivity (Figure courtesy of J Lehár).
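Two of the reference null expectations discussed above are easy to state numerically. Assuming illustrative Hill dose-response curves for the single agents (the parameters below are invented for demonstration, not taken from Lehár et al), Bliss independence and Highest Single Agent can be evaluated over a 2-D dose grid:

```python
def hill(dose, ec50, n=1.0):
    """Fractional inhibition in [0, 1] from a simple Hill curve."""
    return dose ** n / (ec50 ** n + dose ** n) if dose > 0 else 0.0

def bliss(fa, fb):
    """Bliss independence: expected combined inhibition if the two
    drugs act independently (probabilistic union of effects)."""
    return fa + fb - fa * fb

def highest_single_agent(fa, fb):
    """HSA: combined inhibition equals the stronger single-drug effect."""
    return max(fa, fb)

# Response surfaces over a small dose grid (illustrative EC50 values).
doses = [0.0, 0.5, 1.0, 2.0]
surface_bliss = [[bliss(hill(a, 1.0), hill(b, 2.0)) for b in doses] for a in doses]
surface_hsa = [[highest_single_agent(hill(a, 1.0), hill(b, 2.0)) for b in doses] for a in doses]
```

Comparing a measured surface against such null surfaces point by point is one simple way to call dose-dependent synergy (measured above the null) or antagonism (below it); classifying whole surfaces against shape models, as Lehár et al do, generalises this idea.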
1,869
0001-01-01T00:00:00.000
[ "Computer Science", "Medicine" ]
Soluble polymorphic bank vole prion proteins induced by co-expression of quiescin sulfhydryl oxidase in E. coli and their aggregation behaviors Background The infectious prion protein (PrPSc or prion) is derived from its cellular form (PrPC) through a conformational transition in animal and human prion diseases. Studies have shown that the interspecies conversion of PrPC to PrPSc is largely governed by species barriers, which are mainly determined by the sequence and conformation of the proteins among species. However, the bank vole PrPC (BVPrP) is highly susceptible to PrPSc from different species. Transgenic mice expressing BVPrP with the polymorphic isoleucine (109I), but not methionine (109M), at residue 109 spontaneously develop prion disease. Results To explore the mechanism underlying this unique susceptibility and convertibility, we generated soluble BVPrP by co-expression of BVPrP with quiescin sulfhydryl oxidase (QSOX) in Escherichia coli. Interestingly, rBVPrP-109M and rBVPrP-109I exhibited distinct seeded aggregation pathways and aggregate morphologies upon seeding with mouse recombinant PrP fibrils, as monitored by thioflavin T fluorescence and electron microscopy. Moreover, they displayed different aggregation behaviors induced by seeding with hamster and mouse prion strains under real-time quaking-induced conversion. Conclusions Our results suggest that QSOX facilitates the formation of soluble prion protein and provide further evidence that the polymorphism at residue 109 of QSOX-induced BVPrP may be a determinant in mediating its distinct convertibility and susceptibility. The mechanism underlying the conversion of PrPC into PrPSc remains poorly understood. The structure of the human PrPC molecule has been characterized: it is composed of three helices (α1, α2, and α3) and two anti-parallel β-sheets (β1 and β2) folded into a characteristic β1-α1-β2-α2-α3 antiparallel beta-ribbon [2][3][4][5][6][7]. 
An intramolecular disulfide bridge (Cys 179-Cys 214) between α2 and α3 plays an important role in the folding and stability of PrPC [2,8,9]. To fully understand the key molecular event in the pathogenesis of prion diseases, namely the structural conversion of PrP, the generation of a soluble monomeric recombinant PrPC in Escherichia coli that can be used to monitor conformational conversion in vitro has been an important step. However, recombinant prion proteins (rPrP) expressed in the cytoplasm of E. coli often form inactive aggregates (termed inclusion bodies) [10]. These inclusion bodies must be solubilized under harsh reducing and denaturing conditions and then refolded under mild oxidizing conditions in order to restore the native intramolecular disulfide bridge [11]. A consequence of the refolding process is the generation of some incorrect intermolecular disulfide bridges between two molecules of the recombinant PrP. Furthermore, there is no enzymatic method to determine whether or not correct folding has occurred. Proper intramolecular folding can only be confirmed through additional characterization of the protein using circular dichroism (CD) or thioflavin T (ThT) fluorescence assays to validate the quality of the product [2]. Quiescin sulfhydryl oxidase (QSOX) is an enzyme that generates and transfers disulfide bonds to protein substrates [12]. We have previously demonstrated the ability of QSOX to introduce a disulfide bond into the human and mouse prion proteins and also to facilitate the expression of soluble PrP in E. coli [13]. In that work, we described the production of soluble human and mouse prion proteins, for the first time, in the E. coli cytoplasm by co-expression with QSOX [13]. Recently, we further observed that QSOX interacts highly efficiently with PrPSc, but not PrPC, isolated from the human brain, and inhibits PrPSc formation in vitro [14].
In the current study, we report that QSOX can be used to produce a large amount of soluble recombinant bank vole PrP (rBVPrP) carrying either of two polymorphic variants, rBVPrP-109I or rBVPrP-109M, in E. coli. We also compare the aggregation behaviors of the two QSOX-induced rBVPrP molecules by amyloid fibril assay and electron microscopy (EM), and we investigate the ability of soluble rBVPrP-109I or rBVPrP-109M to serve as a substrate for hamster and mouse prion strains under real-time quaking-induced conversion (RT-QuIC). Bank voles have been demonstrated to be an animal model with the lowest species barrier to other prion species, and rBVPrP has been reported to be a universal substrate for amplification of various prions by the RT-QuIC assay [15,16].

QSOX-dependent expression of soluble BVPrP in E. coli
To evaluate the effect of QSOX on expression of BVPrP-109M, E. coli Rosetta (DE3) pLysS containing the QSOX plasmid was transformed with pET28a-BVPrP-109M. A small-scale culture was grown to OD = 0.7, then induced with 1 mM IPTG at 15 °C for 16 h. Following lysis, the soluble and insoluble fractions were separated, analyzed by SDS-PAGE, and either stained with Coomassie blue or probed in parallel by Western blot with an anti-His tag antibody (Fig. 1). In bacteria with QSOX, equivalent amounts of BVPrP were detected in both the soluble and insoluble fractions. Bacteria without QSOX, in contrast, exhibited detectable BVPrP-109M only in the insoluble fraction (Fig. 1). This result revealed that the generation of soluble PrP is associated with the co-expression of QSOX in E. coli.

Effect of QSOX on expression of rBVPrP in E. coli at different time points
To further investigate the effect of QSOX on the production of rBVPrP, expression levels of soluble and insoluble BVPrP were monitored in the presence or absence of QSOX co-expression at different time points.
SDS-PAGE and Western blot analysis showed that cells with and without QSOX co-expression produced equivalent insoluble fractions in the first 5 h after induction. However, after 16 h of induction, insoluble rBVPrP began to accumulate in cells without co-expression of QSOX (Fig. 2a-c). Only cells expressing QSOX produced soluble rBVPrP-109M, which was detectable after 2 h of induction (Fig. 2d, e). The level of soluble BVPrP increased and reached a plateau after 8 h of induction (Fig. 2f).

Purification of large amounts of rBVPrP by immobilized metal affinity chromatography and size exclusion chromatography
To generate a large amount of purified protein for structural and functional studies, a liter of E. coli culture was induced for 16 h at 15 °C. The soluble fraction of full-length rBVPrP-109M was then subjected to immobilized metal affinity chromatography (IMAC) (Fig. 3a), followed by size exclusion chromatography (SEC) in order to maximize purity (Fig. 3b, c). The eluted fractions from SEC were tested by SDS-PAGE and Western blotting. The same procedure was used to purify rBVPrP-109I, but with a yield tenfold lower than that of rBVPrP-109M (1 mg vs. 10 mg per liter of culture) (Fig. 3d, e).

Circular dichroism spectroscopy
We used far-UV circular dichroism (CD) spectroscopy to confirm that the secondary structure of both purified rBVPrP-109M and rBVPrP-109I was primarily α-helical. The CD spectra of both proteins are similar to those reported for prion proteins from other species [17,18], exhibiting the double minima at 208 and 222 nm characteristic of α-helical secondary structure (Fig. 4). These results confirmed that the purified BVPrP molecules were correctly folded.

Interaction of QSOX with BVPrP proteins
Our recent study demonstrated that QSOX is able to bind different species of PrPSc and inhibit PrPSc formation in vitro [14].
Surface plasmon resonance (SPR) was used to determine the dissociation constant (Kd) of this interaction. QSOX was immobilized on the surface of a CM5 chip and full-length rBVPrP-109M or rBVPrP-109I was analyzed at various concentrations to determine binding kinetics. The dissociation constant was determined from the change in refractive index following the interaction of QSOX with rBVPrP. The Kd values were 23 nM for full-length BVPrP-109M and 11 nM for BVPrP-109I, respectively (Fig. 5).

Aggregation of rBVPrP
To determine the aggregation kinetics of rBVPrP-109M and rBVPrP-109I, we used thioflavin T (ThT) as a fluorescent probe to monitor protein aggregation. ThT has been widely used to detect protein aggregation, as its fluorescence increases upon binding to β-rich, amyloid-like structures. In the seeded reaction using wild-type recombinant mouse PrP fibrils, both recombinant bank vole PrP molecules were able to form amyloid fibrils with virtually no lag phase, though with different kinetics (Fig. 6a). BVPrP-109M reached the plateau more rapidly than BVPrP-109I. In unseeded reactions, neither soluble recombinant bank vole PrP protein was able to form de novo fibrils after 90 h (Fig. 6a). These data suggest that soluble rBVPrP-109M and rBVPrP-109I can be seeded by aggregates from a different species to form ThT-positive fibrils (Fig. 6). To gain further insight into the morphology of the aggregates, we used electron microscopy (EM) to visualize the structure of rBVPrP aggregates. EM images confirmed the formation of long mature fibrils (Fig. 7; scale bars, 100 nm). Interestingly, more mature PrP fibrils were observed for rBVPrP-109M than for rBVPrP-109I, which may be associated with their distinct aggregation pathways in the seeded reaction.

Recombinant BVPrP serving as substrates in RT-QuIC assay
Recent studies have revealed that BVPrP is highly convertible to PrPSc or aggregates by prions from a variety of species in vivo and in vitro [15,16].
To determine whether QSOX-induced soluble rBVPrP carrying either 109M or 109I can be used as a substrate in real-time quaking-induced conversion (RT-QuIC), we conducted RT-QuIC analysis of the hamster-adapted scrapie prion strain 263K and the mouse-adapted scrapie prion strain 139A as previously described [16]. Both rBVPrP forms could be converted into PrP aggregates by either the 263K or the 139A prion strain in the RT-QuIC assay (Fig. 8). For the hamster 263K strain, rBVPrP-109I exhibited nearly twofold greater prion-seeding activity than rBVPrP-109M; in contrast, rBVPrP-109I had almost threefold lower prion-seeding activity than rBVPrP-109M when seeded with the mouse 139A strain (Fig. 8a, b). Nevertheless, rBVPrP-109I exhibited much more rapid seeding activity than rBVPrP-109M with both the 263K and 139A strains (Fig. 8a, c).

Discussion
Bacteria are simple and cost-effective systems for producing recombinant proteins. However, the over-expression of recombinant proteins in bacteria often generates misfolded proteins, which form inclusion bodies in the cytoplasm [10]. As more applications require larger amounts of high-quality recombinant proteins, new systems and refinements of recombinant protein expression have been developed, notably the simultaneous over-expression of chaperones to promote proper folding and solubility of the generated proteins during expression [19]. For example, an established strategy to overcome the formation of inclusion bodies is to introduce a chaperone that helps solubilize the recombinant proteins [19]. Different proteins have been observed to interact with the prion protein [20], namely molecular chaperones from the endoplasmic reticulum (ER), such as Pdia3, Grp58, and Hsp60 [20][21][22]. This suggests an important role for molecular chaperones in early protein folding. Most of these chaperones possess protein disulfide isomerase-like properties or folding activities [23]. In the ER, disulfide bond formation is catalyzed by Ero1 and PDI [24,25].
The over-expression of molecular chaperones (GroELS, Skp, or trigger factor) and of the isomerase DsbC has been shown to significantly increase the production yield of recombinant disulfide bond-containing antibodies [26]. Indeed, we have previously observed that co-expression of the chaperone QSOX prevents human or mouse PrP from aggregating by producing soluble PrP in E. coli [13]. In bacteria, correct oxidative protein folding depends on the disulfide bond proteins A and B (DsbA/DsbB) pathway, which catalyzes disulfide bond formation and disulfide bond isomerization [11]. In eukaryotes, ER oxidoreductin 1 (Ero1) and protein disulfide isomerase (PDI) catalyze the equivalent reactions in the ER. Remarkably, QSOX has been found to be the only known enzyme able to carry out both reactions, generating and transferring disulfide bridges to protein substrates [12]. Because proper folding and normal physicochemical properties of proteins produced in E. coli rely on the formation of disulfide bonds, it is conceivable that co-expression of QSOX facilitates the generation of soluble, properly folded PrP. Our current study showed that co-expression of BVPrP with QSOX in E. coli likewise yields soluble BVPrP, confirming our previous study. Additionally, the CD spectra of the rBVPrP proteins are in agreement with the NMR and X-ray structures of soluble rBVPrP, which contain mostly α-helical structure [5,11]. Bank voles (Myodes glareolus) are well established as an important animal model for prion research because they are highly susceptible to a wide range of prion strains, including those from humans, cattle, elk, sheep, mice, and hamsters [15]. Compared to other animal models, such as mice, hamsters, or humanized transgenic (Tg) mice, prions from several species are transmissible to bank voles with a higher attack rate and shorter incubation time [27][28][29][30]. The molecular mechanism underlying this phenomenon remains unknown.
It is conceivable that this highly efficient susceptibility is attributable to the sequence and structure of BVPrP. Consistent with this hypothesis, rBVPrP-109M has been found to be a universal substrate for determining the seeding activity of prions from a diverse range of species by the real-time quaking-induced conversion (RT-QuIC) assay in vitro [16]. BVPrP contains a polymorphism at residue 109, with either methionine (M) or isoleucine (I) [27,31]. Notably, transmission of CWD to Tg mice expressing BVPrP-109M resulted in a longer incubation time than in Tg mice carrying BVPrP-109I [31]. We observed that E. coli containing the QSOX and BVPrP-109M plasmids expressed about 10 times more soluble PrP than cells transformed with QSOX and BVPrP-109I. It would be interesting to know whether the distinct susceptibility of BVPrP-109I and BVPrP-109M is related to this difference in expression level. Watts et al. have reported that Tg mice expressing BVPrP-109I spontaneously developed prion disease, whereas Tg mice expressing BVPrP-109M did not [27]. It is possible that this polymorphism affects the expression level or structure of the protein, as human PrP polymorphisms do [32][33][34].

Conclusion
We successfully produced soluble rBVPrP-109M and rBVPrP-109I by co-expression with QSOX in the cytoplasm of E. coli. Interestingly, we found that rBVPrP-109M and rBVPrP-109I exhibited different aggregation kinetics, with the former reaching the plateau more quickly than the latter in the presence of recombinant mouse PrP aggregate seeds. Moreover, the morphology of the aggregates differed: rBVPrP-109M formed mature fibrils, whereas rBVPrP-109I mainly generated shorter, protofibril-like structures. With the RT-QuIC system, we revealed that the two rBVPrP variants responded differently to the infectious hamster and mouse prion strains.
Whether their distinct aggregation behaviors or different responses to seeding are associated with different structural features remains to be determined.

Ethics approval and consent to participate
All experiments conducted in this study were monitored and approved by the VIB Center for Structural Biology, Brussels, Belgium and Case Western Reserve University School of Medicine, Cleveland, Ohio, USA.

Small-scale expression of BVPrP in E. coli
To avoid the formation of inclusion bodies and increase the yield of soluble full-length BVPrP, we co-transformed E. coli Rosetta (DE3) pLysS with each BVPrP construct and the QSOX plasmid as previously described [13,35]. The transformed cells were plated on LB-agar supplemented with 100 μg/mL ampicillin and 50 μg/mL kanamycin. Fresh transformed cells were used to inoculate a 10 mL pre-culture (LB medium supplemented with the above antibiotics). The next day, a 40 mL culture was inoculated at 37 °C with 1 mL of the pre-culture and induced with 1 mM isopropyl-β-d-thiogalactopyranoside (IPTG) at an optical density (OD600) of 0.7. After induction, the culture temperature was shifted to 15 °C and the culture was incubated overnight (16 h). Cells were pelleted by centrifugation and resuspended in ice-cold lysis buffer (0.1 g of cell paste/mL of 50 mM potassium phosphate, pH 7.5, 300 mM NaCl supplemented with 0.1 mg/mL lysozyme, 0.1 mg/mL AEBSF and 1 μg/mL leupeptin). Cells were lysed by four rounds of sonication of 30 s each at 4 °C and subsequently centrifuged for 20 min at 18,000g. The supernatant was collected and the pellet was resuspended in the initial volume of lysis buffer. To analyze the expression of soluble BVPrP, SDS-PAGE and immunoblotting were performed on the total, supernatant, and pellet fractions as previously described [13].

Quantification of BVPrP expression in E. coli
To quantify the amount of BVPrP produced at different time points during growth, a 1 L culture was induced as described earlier and 40 mL samples were collected at 0, 1, 2, 4 and 16 h after induction. Cells were collected by centrifugation at 15,000g for 10 min, weighed, and resuspended in lysis buffer (0.1 g of cell paste/mL) to normalize the cell content for each time point. To estimate the production of total prion protein, 4 μL of lysate was mixed with 1 μL of SDS loading buffer (5×) and boiled for 5 min prior to loading onto gels for Western blotting. To determine the amount of total soluble protein expressed, resuspended cells were lysed by sonication and centrifuged at 18,000g for 20 min. A 4 μL aliquot of supernatant was mixed with 1 μL of SDS loading buffer (5×) and boiled for 5 min prior to loading onto gels for Coomassie blue staining. Collected samples were analyzed by SDS-PAGE and by immunoblotting on nitrocellulose membranes (MACHEREY-NAGEL) probed with a monoclonal anti-His antibody (Sigma Aldrich). The PrP bands were visualized with goat anti-mouse IgG, alkaline phosphatase conjugate (Sigma) using NBT/BCIP as substrate (Roche Diagnostics, GmbH). The intensity of the blot signals was quantified using the LabImage 1D Gel Analysis software (Kapelan GmbH, Germany).

Large-scale protein expression and purification
E. coli Rosetta (DE3) pLysS cells were co-transformed with full-length BVPrP-109M or BVPrP-109I plus QSOX for large-scale production. E. coli pre-cultures (25 mL) were grown overnight at 37 °C in LB medium supplemented with ampicillin (100 μg/mL) and kanamycin (50 μg/mL). 10 mL of pre-culture was used to inoculate 1 L of LB medium supplemented with ampicillin and kanamycin. The bacteria were induced at A600 = 0.6 by adding 1 mM isopropyl-β-d-thiogalactopyranoside (IPTG) and subsequently grown at 15 °C for 16 h. Cells were collected by centrifugation (15 min at 15,000g).
Fig. 8 legend: rBVPrP-109M and rBVPrP-109I used as substrates for RT-QuIC analysis of hamster 263K and mouse 139A. a RT-QuIC spectra of 263K and 139A in the presence of rBVPrP-109I or rBVPrP-109M as a substrate, respectively. 2 µL of brain homogenate diluted to 10−3 from either 263K-infected hamster brain or 139A-infected mouse brain was added into each well of the 96-well plates as seeds. Each well contained 98 µL of RT-QuIC reaction solution [10 mM phosphate buffer at pH 7.4, 300 mM NaCl, 10 µM thioflavin T (ThT), 1 mM EDTA, and 0.1 mg/mL of either rBVPrP-109I (top four rows) or rBVPrP-109M (bottom four rows)]. Negative controls were samples without PrPSc seeds. b Comparison of the ThT fluorescence intensity of the RT-QuIC prion-seeding activity of rBVPrP with the 109I or 109M polymorphism seeded by the 263K or 139A strain. The ThT fluorescence intensity is plotted as a function of reaction time (hours). c Comparison of the lag phase of the RT-QuIC seeding activity of rBVPrP with the 109I or 109M polymorphism seeded by the 263K and 139A strains. The percentage of maximal ThT fluorescence is plotted as a function of reaction time (hours).

The bacterial pellets were resuspended at 0.1 g of cell paste/mL in lysis buffer (50 mM potassium phosphate, pH 7.5, 300 mM NaCl supplemented with 0.1 mg/mL lysozyme, 0.1 mg/mL AEBSF and 1 μg/mL leupeptin). The cells were lysed by mechanical disruption using a French press (10,000 psi), followed by centrifugation at 4 °C for 60 min at 40,000g. The collected supernatant was loaded on a 5 mL HisTrap Ni-NTA column (GE Healthcare) previously equilibrated with equilibration buffer (50 mM potassium phosphate pH 7.5, 300 mM NaCl, 10 mM imidazole). The column was washed with five column volumes (CV) of washing buffer (50 mM potassium phosphate pH 7.5, 1 M NaCl, 50 mM imidazole), followed by ten CV of 50 mM potassium phosphate pH 6.0, 1 M NaCl, and 50 mM imidazole.
The protein was eluted with a gradient of imidazole from 50 mM to 1 M in 50 mM potassium phosphate pH 7.5. The eluted soluble BVPrP fractions were analyzed by SDS-PAGE to evaluate protein purity and then pooled and concentrated for a second purification step. The concentrated pool was applied onto a Superdex 75 HR 10/300 GL column (GE Healthcare) and eluted with 20 mM Tris-HCl pH 7.5 containing 150 mM NaCl. The elution peak fractions were analyzed by SDS-PAGE. The fractions containing only BVPrP were collected and dialyzed against 10 mM sodium acetate pH 4.6 and 1 mM EDTA, followed by a final dialysis against 10 mM sodium acetate pH 4.6. Protein aliquots were stored at −80 °C until further use.

Circular dichroism spectroscopy
The far-UV circular dichroism (CD) spectra of rBVPrP were recorded on an Aviv 215 spectropolarimeter (Tokyo, Japan) as previously described [36]. The measurements were performed in 20 mM sodium citrate buffer (pH 5) at 25 °C using a 1-cm path length cell. The protein concentration used to normalize the spectra was determined using a molar extinction coefficient at 280 nm of 62,005/M/cm.

Aggregation of rBVPrP monitored by thioflavin T fluorescence assay
To monitor amyloid fibril formation by QSOX-induced rBVPrP, the thioflavin T (ThT) fluorescence assay was used. We first used 0.5 mg/mL of rBVPrP-109M incubated in 2 M GdnHCl, 100 mM potassium phosphate buffer pH 6.5, and 20 μM ThT. The reaction volume was 200 μL per well in 96-well plates (Corning). Seeding was achieved by adding 1 μL of seeds (0.5 mg/mL) to each well. The seeds were prepared previously from recombinant moPrP23-230 fibrils under the same buffer conditions [37]. The 96-well plate was incubated at 37 °C with continuous shaking in a plate reader (SYNERGY2, BioTek). Fibril formation kinetics were monitored by measuring ThT fluorescence intensity every 15 min using 440-nm excitation and 480-nm emission. The amyloid-formation kinetics were calculated from four replicates.
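The plate-reader schedule above (one ThT reading every 15 min) produces curves from which a lag phase and plateau can be read off. Below is a minimal, hypothetical sketch of such a reading in Python; it is not the authors' analysis pipeline, and the threshold convention (baseline plus 10% of the total amplitude) is just one common choice:

```python
def lag_time(times, fluorescence, threshold_frac=0.1):
    """Estimate the lag phase of a ThT aggregation curve as the first time
    point where fluorescence exceeds baseline + threshold_frac of the total
    amplitude. Returns None if the curve never crosses (no aggregation)."""
    baseline = min(fluorescence)
    amplitude = max(fluorescence) - baseline
    if amplitude == 0:
        return None
    cutoff = baseline + threshold_frac * amplitude
    for t, f in zip(times, fluorescence):
        if f >= cutoff:
            return t
    return None

# Synthetic readings every 15 min (0.25 h), mimicking the plate-reader schedule.
times = [i * 0.25 for i in range(20)]
seeded = [100, 100, 120, 200, 400, 700, 900, 980, 1000, 1000] + [1000] * 10
unseeded = [100] * 20  # no de novo fibril formation within the window

print(lag_time(times, seeded))    # 0.75: seeded reaction crosses the threshold early
print(lag_time(times, unseeded))  # None: no aggregation detected
```

A seeded reaction with "virtually no lag phase", as reported for both rBVPrP variants, would cross the threshold within the first few readings, while the unseeded control never does.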
Electron microscopy of rBVPrP-109M and rBVPrP-109I
Transmission electron microscopy of rBVPrP was performed as previously described [38]. Formvar/carbon-coated EM nickel grids (400 mesh) were placed formvar/carbon side down on top of a drop of the amyloid fibril solution (0.5 mg/mL) for 1 min. The grids were removed, blotted with filter paper, and placed onto a drop of 2.0% uranyl acetate (UA) solution for 1 min. The excess UA was removed, and the EM grids were air-dried. The grids were examined with an FEI Tecnai Spirit (T12) electron microscope and the images were captured with a Gatan US4000 4k × 4k CCD camera.

Surface plasmon resonance experiments
The interaction between QSOX and full-length rBVPrP-109M or rBVPrP-109I was studied using surface plasmon resonance (SPR) with a BIAcore 3000 instrument as described previously [14]. QSOX was diluted to 2 µg/mL in 10 mM sodium acetate, pH 5.2, and covalently linked to a Sensor Chip CM5 (carboxymethylated dextran surface) using amine coupling chemistry. A surface density of 1500 RU was obtained after immobilization and blocking with ethanolamine. Different concentrations of full-length rBVPrP-109M or rBVPrP-109I (0-500 nM) were injected in running buffer (PBS, pH 7.4, 0.005% surfactant P20 and 3 mM EDTA) at 25 °C at a flow rate of 5 µL/min. All analytes were also run over a control flow cell containing a blank surface (with no immobilized protein). After each cycle, the surface was regenerated with a 60 s pulse of 100 mM glycine, pH 1.5. Association rate constants (kon) and dissociation rate constants (koff) were obtained using a 1:1 Langmuir binding model (BIAcore evaluation software version 4.1). The equilibrium dissociation constant (Kd) was calculated using steady-state fitting.

RT-QuIC assay
The RT-QuIC assay was conducted as previously described [16]. The seeds used in this study were prepared as 10% (w/v) brain homogenates from hamster-adapted scrapie strain 263K and mouse-adapted scrapie strain 139A.
They were diluted 1000-fold in a solution containing 0.1% sodium dodecyl sulfate (SDS), 1× phosphate-buffered saline at pH 5.8, and 1× N2 media supplement. The RT-QuIC reaction solution was prepared to contain 10 mM phosphate buffer at pH 7.4, 300 mM NaCl, 10 µM thioflavin T (ThT), 1 mM ethylenediaminetetraacetic acid (EDTA), and 0.1 mg/mL of either rBVPrP23-231 (109M) or rBVPrP23-231 (109I). 98 µL of reaction solution was loaded into each well of a black, clear-bottomed 96-well plate (Nunc). Wells were subsequently seeded with 2 µL of either hamster 263K or mouse 139A diluted brain homogenate for a final reaction volume of 100 µL with a final SDS concentration of 0.002%. Plates were then sealed and placed in a BMG FLUOstar Omega plate reader at 55 °C for 50 h and subjected to 60 s intervals of double orbital shaking at 700 rpm, alternating with 60 s intervals of rest. ThT fluorescence measurements (excitation at 450 nm, emission at 480 nm) were taken every 15 min. Four replicates were run for each condition and the entire assay was repeated once. After compensating for baseline measurements, fluorescence values were normalized to the percentage of maximal response and plotted versus time.
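The dilution bookkeeping in this protocol is simple proportional arithmetic. A small sketch with the volumes quoted above (the helper name is ours, for illustration only):

```python
def final_concentration(stock, vol_added_uL, total_vol_uL):
    """Concentration after adding vol_added_uL of a stock to a reaction
    of final volume total_vol_uL (simple proportional dilution)."""
    return stock * vol_added_uL / total_vol_uL

# 2 uL of seed dilution (prepared in 0.1% SDS) into a 100 uL reaction:
print(final_concentration(0.1, 2.0, 100.0))  # ~0.002 (% SDS), as reported

# Overall seed dilution: 10% (w/v) brain homogenate diluted 1000-fold,
# then a further 50-fold (2 uL into 100 uL) on the plate:
print(1000 * (100.0 / 2.0))  # 50000.0-fold total dilution of the homogenate
```

The same helper reproduces why the 0.1% SDS carried in with the seed ends up at the reported 0.002% in the final reaction.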
Geographic and economic influences on benralizumab prescribing for severe asthma in Japan Benralizumab, a monoclonal antibody targeting IL-5 receptors, reduces exacerbations and oral corticosteroid requirements for severe, uncontrolled eosinophilic asthma. In Japan, geographic disparities in asthma outcomes suggest differential prescribing and access. This study aimed to quantify regional prescribing variations for benralizumab nationwide. Using Japan’s National Database (NDB) of insurance claims (2009–2019), benralizumab standardized claim ratios (SCRs) were calculated for 47 prefectures. Correlations between SCRs and other biologics’ SCRs, economic variables like average income, and physician densities were evaluated through univariate analysis and multivariate regressions. Income-related barriers to optimal prescribing were examined. Wide variation emerged in benralizumab SCRs, from 40.1 to 184.2 across prefectures. SCRs strongly correlated with omalizumab (r = 0.61, p < 0.00001) and mepolizumab (r = 0.43, p = 0.0024). Average monthly income also positively correlated with benralizumab SCRs (r = 0.45, p = 0.0016), whereas lifestyle factors were insignificant. Respiratory specialist density modestly correlated with SCRs (r = 0.29, p = 0.047). In multivariate regressions, average income remained the most robust predictor (B = 0.74, p = 0.022). Benralizumab SCRs strongly associate with income metrics more than healthcare infrastructure/population factors. Many regions show low SCRs, constituting apparent prescribing gaps. Access barriers for advanced asthma therapies remain inequitable among Japan’s income strata. Addressing affordability alongside specialist allocation can achieve better prescribing quality and asthma outcomes. 
Severe asthma is characterized by chronic airway inflammation that remains inadequately controlled despite high-dose inhaled corticosteroids (ICS) and additional controller medications, with or without systemic corticosteroids 1. In the literature, reported prevalence rates for severe asthma range widely from 1.8 to 38% of individuals with asthma, likely reflecting differences in study populations and case definitions 2. The persistence of symptoms in such cases highlights the clinical challenge of managing this patient population and underscores the need for alternative treatment strategies. Benralizumab (BRZ), a humanized monoclonal antibody, targets the interleukin-5 receptor alpha (IL-5Rα) on eosinophils, thereby reducing the eosinophilic inflammation central to asthma pathogenesis. In Japan, BRZ was added as a standard treatment for severe asthma following Omalizumab, an antibody that specifically targets immunoglobulin E (IgE) to reduce allergic asthma symptoms, and Mepolizumab, an antibody against IL-5, which plays a critical role in the development and activation of eosinophils. Clinical trials show BRZ can decrease exacerbations and oral corticosteroid requirements while improving lung function for patients with severe, uncontrolled eosinophilic asthma 3,4. However, BRZ is not indicated or effective for all severe asthma patients 5. Treatment guidelines recommend BRZ for those with blood eosinophil counts ≥ 300 cells/μL or ≥ 2 exacerbations requiring systemic corticosteroids in the prior year. Furthermore, the decision to prescribe BRZ may be influenced by a range of other factors, encompassing patient preferences, comorbidities, cost-efficiency, and availability [6][7][8].
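The guideline thresholds quoted above lend themselves to a simple screen. The following hypothetical illustration encodes only the two quoted criteria; as the text notes, real prescribing decisions also weigh comorbidities, patient preferences, cost, and availability:

```python
def brz_candidate(eosinophils_per_uL: int, exacerbations_last_year: int) -> bool:
    """Screen against the two thresholds quoted in the text: blood
    eosinophils >= 300 cells/uL, or >= 2 exacerbations requiring systemic
    corticosteroids in the prior year. Illustrative only, not clinical advice."""
    return eosinophils_per_uL >= 300 or exacerbations_last_year >= 2

print(brz_candidate(450, 0))  # True: eosinophil criterion met
print(brz_candidate(150, 1))  # False: neither criterion met
print(brz_candidate(100, 2))  # True: exacerbation criterion met
```

The disjunction matters: either criterion alone is enough under the quoted guideline wording.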
Therefore, we aimed to quantify regional differences in BRZ prescribing patterns for severe asthma patients in Japan using the National Database (NDB). Established in 2009, the NDB contains reimbursement claims from healthcare institutions across Japan. Analyzing this dataset, we investigated potential associations of BRZ prescription with patient demographics and health system variables.

Regional difference in the number of asthma patients and standardized claim ratio of benralizumab in Japanese prefectures
In assessing the regional variation in asthma management in Japan, a detailed analysis of the number of asthma patients and the standardized claim ratio (SCR) of BRZ was conducted, as illustrated in Table 1. The data reveal significant inter-prefectural variability in both the number of patients treated and the SCRs for BRZ, Omalizumab, and Mepolizumab. In Japan, the healthcare system is underpinned by universal public health insurance, ensuring affordability and equal access to medical services for all citizens. Under this system, patients pay only a portion of medical expenses determined by their income level, while the remaining costs are covered through the public insurance scheme. Importantly, the fees for medical consultations, procedures, and treatments like BRZ are uniformly regulated nationwide. This obligates all hospitals and clinics to offer services at the established rates, regardless of institution or geographic location. Furthermore, BRZ can be prescribed by respiratory specialists as well as general practitioners for eligible patients. This universal coverage, coupled with broad prescribing eligibility, aims to facilitate access to advanced asthma therapies across Japan. However, the wide ranges observed in the SCRs suggest potential disparities in the actual utilization of BRZ between prefectures.
Ibaraki Prefecture emerged as the region with the highest SCR for BRZ at 184.2, suggesting a greater-than-average prescription rate when normalized against the national level, whereas Kochi Prefecture recorded the lowest SCR at 40.1. This demonstrates a remarkable difference, with the highest SCR being approximately 4.59 times greater than the lowest. Furthermore, Tokyo, with the largest patient population (10.1 thousand), showed a disproportionately high SCR for Omalizumab (197.4) and Mepolizumab (264.3), reflecting a preference for, or greater accessibility to, these treatment options in the metropolitan area. Conversely, Kagoshima Prefecture, despite having a moderate patient population size (1.5 thousand), reported the lowest SCRs across all three medications.

The intensity map (Fig. 1) presents a detailed visualization of the SCR for BRZ across the prefectures of Japan, highlighting significant inter-prefectural differences. Notably, the shades of gray depict the variance in each prefecture's prescription rate relative to the national average, with darker tones indicating a higher SCR. Upon broader regional analysis, however, the data do not reveal a distinct pattern in the distribution of SCRs between the larger northern and southern areas of Japan. Despite the notable disparities at the prefectural level, the absence of a clear gradient or trend when assessing larger geographical entities suggests that the factors influencing BRZ prescription rates may not be related to these broad regions' characteristics.

Such disparities in the SCR of BRZ among different regions might reflect accessibility, affordability, physician availability, preferences for biologics, or acceptance of this specific treatment, and may warrant further investigation to ensure equitable care across all prefectures.
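A standardized claim ratio is analogous to a standardized mortality ratio: observed claims in a prefecture divided by the claims expected if national stratum-specific rates applied to the local population, times 100. A minimal sketch of that indirect-standardization arithmetic, with illustrative numbers rather than NDB data:

```python
def expected_claims(local_populations, national_rates):
    """Expected claims from national stratum-specific rates (e.g. age/sex
    strata) applied to the corresponding local population strata."""
    return sum(n * r for n, r in zip(local_populations, national_rates))

def scr(observed_claims, expected):
    """Standardized claim ratio: observed claims as a percentage of the
    claims expected under national rates (100 = national average)."""
    return 100.0 * observed_claims / expected

# Illustrative strata: population counts and national claims-per-person rates.
pop = [50_000, 80_000, 30_000]
rates = [0.0002, 0.0005, 0.001]
exp = expected_claims(pop, rates)  # 10 + 40 + 30 = 80 expected claims
print(scr(147, exp))  # ~183.8: well above the national baseline of 100
```

On this convention, the reported range of 40.1 to 184.2 means the lowest prefecture files roughly 40% of the nationally expected BRZ claims and the highest roughly 184%.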
Univariate analysis of factors related to SCR of benralizumab
The univariate analysis identified associations between BRZ SCRs and several demographic, economic, and healthcare factors across Japanese prefectures (Table 2). This analysis indicates a significant positive correlation with the SCR of Omalizumab (median: 79.8, range 35.6-197.4) and Mepolizumab (median: 71.8, range 41.0-264.3), with correlation coefficients of 0.61 (p < 0.00001) and 0.43 (p = 0.0024), respectively. The average monthly income (median: 281.9, range 240.5-303.6) also shows a positive correlation, with a coefficient of 0.45 (p = 0.0016), which may reflect economic influences on the availability or the prescribing patterns of BRZ. Figure 2a indicates a positive association between average monthly income and SCR, explaining 20% of its variance. Notably, the smoking rate among individuals aged 40 and above (median: 0.22, range 0.19-0.28) and the obesity rate (median: 0.29, range 0.26-0.40) are not significantly correlated with BRZ SCR. This could imply that lifestyle factors may not directly affect the prescription rates of this specific treatment within the studied regions.

The density of respiratory specialists per population (median: 0.51, range 0.29-0.85) presents a mild positive correlation (r = 0.29, p = 0.047), suggesting that the presence of healthcare professionals specializing in respiratory diseases may have a modest impact on BRZ prescription rates. The university enrollment rate (median: 0.52, range 0.41-0.68) also shows a positive correlation (r = 0.32, p = 0.028), which might be indicative of the role of educational attainment in healthcare access or decision-making regarding advanced therapies. Scatter plots indicate associations between the SCR of BRZ and respiratory specialist density (Fig. 2b) or university enrollment rates (Fig. 2c).
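The prefecture-level correlations reported above are plain Pearson coefficients. A self-contained sketch in pure Python, using toy numbers rather than NDB values:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy prefecture-level values (income in thousands of yen, SCRs), not NDB data:
income = [240, 255, 270, 285, 300]
scr_vals = [60, 75, 90, 110, 150]
print(round(pearson_r(income, scr_vals), 2))  # 0.97 for this toy series
```

With 47 prefectures, the study's r values (0.29 to 0.61) are far weaker than this toy example but still reach significance because the sample covers every prefecture.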
Figure 2b models the association between respiratory specialist density and BRZ SCR, with a regression slope of 66.1 and R² of 0.085. Figure 2c examines the link with university enrollment rates, finding a positive correlation (slope = 160.7) that accounts for 10% of SCR variability (R² = 0.10). Figure 2d, a scatter plot with the incidence of exacerbations as the independent variable, presents a regression line characterized by y = 95.7 − 0.18x and an R² of 0.00078. This strikingly low coefficient of determination underscores the absence of a meaningful relationship between the frequency of asthma exacerbations and the use of BRZ. This raises a significant concern regarding the adequacy of BRZ prescription for patients with severe asthma who experience frequent exacerbations, a patient group that would typically benefit from biologic treatments. The data suggest a potential underutilization of BRZ in these patients, indicating a gap in the management of severe asthma that warrants urgent attention.

Multivariate analysis of factors related to SCR of benralizumab
To identify independent predictors of regional differences in BRZ standardized claim ratios (SCRs), we conducted a multivariate regression analysis including respiratory specialist density, average monthly income, university enrollment rates, and exacerbation rates requiring steroids (Table 3). The number of respiratory specialists per population did not emerge as a significant independent predictor (B = 47.88, 95% CI [−16.31 to 112.08], p = 0.14) after adjusting for the other factors. However, average monthly income remained significantly associated with SCRs (B = 0.74, 95% CI [0.11-1.37], p = 0.022), with the highest standardized regression coefficient (β = 0.59), indicating it had the greatest relative effect on BRZ prescription rates. Though income, specialists, and education showed correlations in univariate tests, the multivariate results suggest
income bears the most robust predictive relationship with BRZ usage patterns across regions. University enrollment rates and exacerbation rates showed negligible regression coefficients in the multivariate model. Collectively, these results underline pronounced variability among prefectures, explained predominantly by income-related factors rather than healthcare infrastructure or population characteristics. Strategies addressing financial barriers may be warranted alongside specialist resource allocation to achieve more equitable BRZ prescribing practices.

Discussion
The present study aims to understand the regional disparities in the prescription of BRZ among patients with severe asthma in Japan. Regional differences in the prescription of BRZ were identified in this study, with the range of SCR (40.1-184.2) demonstrating a substantial variance across prefectures. A significant association was observed between the SCR of BRZ and the average monthly income in various prefectures.
Our findings emphasize the vital role of average monthly income in the prescription pattern of BRZ. The significant correlation identified (p = 0.022) indicates that financial factors substantially influence the accessibility and use of BRZ therapy among severe asthma patients. Treatment with biologics remains uncommon among patients with severe asthma 9,10. A comparative examination of insurance types revealed higher biologic treatment visits among privately insured individuals 11, and such socioeconomic disparities have been reported to independently contribute to adverse asthma outcomes, transcending factors such as education, perceived stress, race, and medication adherence 12. The efficacy of biologics, specifically those targeting type 2 cytokines, has been well established 13. However, the observed disparities in access based on income and insurance type may hinder the broader application of these efficacious treatments, leading to suboptimal patient outcomes. While Japan's public healthcare system provides uniform social security benefits to all citizens, the high cost of BRZ means that patients still incur some out-of-pocket expenses. To overcome this economic barrier, financial support programs may be needed to help patients with severe asthma access BRZ treatment, because appropriate use of biologics reduces asthma exacerbations and the medication costs associated with them 14,15. For instance, in the United States, there are patient assistance programs that provide biologics to eligible patients with limited income or no insurance coverage. These programs may reduce the financial burden and improve adherence and satisfaction among patients who receive BRZ treatment. Similar programs may be beneficial for Japanese patients as well.
An alternative strategy to reduce the cost of biologics could be to adjust the treatment duration or reduce the dose for specific patients. Similar approaches have been successfully implemented in the treatment of rheumatic diseases. Studies have shown that spacing biologic treatments may be feasible for some patients, resulting in reduced costs and fewer injections without compromising disease control 16. Recent findings also support the safety and effectiveness of progressively reducing biologic drugs in rheumatoid arthritis patients in persistent remission 17. Such strategies may be worth exploring in asthma treatment, with careful consideration of individual patient needs. Some reports indicate that patients with access to a specialist are more likely to adhere to biologic treatment 18-20. Racial and ethnic differences have also been identified as factors that influence the treatment and outcomes of asthma 21,22. Patient-related barriers to treatment adherence include a myriad of factors, such as understanding the need for treatment, confidence in clinicians or medication, and the severity of asthma. These barriers, while not directly investigated in our study, are important for personalized care approaches in the management of asthma.
There are some limitations in this study that need to be acknowledged.First, we used the SCR of BRZ as a proxy measure of its prescription rate, which may not reflect the actual number of patients who receive BRZ treatment in each prefecture.Second, we did not have access to individual patient data, such as age, sex, asthma severity, eosinophil count, comorbidities, and previous treatments, which may affect the decision to prescribe BRZ.Third, we did not consider other factors that may influence the prescription of BRZ, such as patient preference, physician preference, availability of biologics, and regional guidelines.Importantly, the national data utilized in this study does not provide information on the prescribing behaviors of individual physicians or referral patterns among physicians.This limitation suggests that the variations observed could be partly due to the differing tendencies of physicians to prescribe biologics, influenced by their specialization in severe asthma.Therefore, further studies with more detailed data are needed to confirm our findings and explore other factors related to the prescription of BRZ. In conclusion, the interaction between income, insurance, and the access to and utilization of biologics for severe asthma treatment presents multifaceted challenges.Understanding these dynamics is crucial for healthcare providers, policymakers, and stakeholders to implement strategies that promote equitable access to these promising therapies.The evidence assembled in this study, alongside previous research, forms a compelling basis for further inquiry and policy action to mitigate these disparities. 
National health insurance system and NDB open data
In Japan, the healthcare system is underpinned by public health insurance, where the patient's financial contribution towards medical expenses is contingent upon their income level 23. The residual costs are invoiced by individual medical institutions to the respective Claims Review and Reimbursement Organizations situated within each of Japan's 47 prefectures, with payments being issued upon verification of claim validity. Furthermore, the fees for medical consultations, diagnostic procedures, and treatments are uniformly regulated across Japan, obligating all medical facilities to offer healthcare services at these established rates 24.

The NDB was set up in 2009 under the "Act on Assurance of Medical Care for the Elderly People". It is among the world's largest databases, gathering data on medical receipts since 2009 and on targeted health checkups and guidance since 2008. To encourage data use, the state of Japan's medical care and health checkup outcomes were first shared as NDB open data in 2016. This open data comprises seven key sections: "Medical Practice", "Dental Practice", "Dental Injuries and Diseases", "Drugs", "Specific Health Care Materials", "Specific Health Examination (Laboratory Test Values)", and "Specific Health Examination". The data, available freely, are organized by fiscal year. As of 2022, they include information up to the 2019 fiscal year.

Study design
This research utilized an ecological study approach at the prefectural level, employing the same methodologies as those used in our prior study 25. Using data on the number of prescriptions generated from almost all insurance claims, we examined differences in the propensity to prescribe BRZ between prefectures. We also examined the impact of differences in medical resources (numbers of specialists and hospitals) and conditions at diagnosis in each prefecture on prescribing trends.
Data sources
SCRs for BRZ, omalizumab, and mepolizumab in 2020 were obtained from the Cabinet Office webpage "Regional Differences in Healthcare Delivery Status" 26. Information on average monthly earnings in each prefecture was obtained from the Basic Survey on Wage Structure published by the Ministry of Health, Labour and Welfare 27. The obesity and smoking rates for those aged 40 and over in each prefecture were obtained from the 7th NDB 28, the university enrollment rate in each prefecture was obtained from the Basic School Survey 29, and the numbers of respiratory specialists and allergy specialists were obtained from the websites of the Japanese Respiratory Society and the Japanese Society of Allergology.

Indicators
As an indicator of the number of BRZ prescriptions, the standardized claim ratio (SCR) was calculated using the following formula 14:

SCR = 100 × (observed number of prescriptions in the prefecture) / Σ (A × B / C),

where the sum runs over age and sex strata, A = number of residents in each prefecture by age and sex, B = number of prescriptions in Japan by age and sex, and C = number of residents in Japan by age and sex. The SCR adjusts for differences in the age and sex composition of each prefecture, and a score of 100 or more indicates that the number of cases is higher than the national average. We also investigated the correlation between the SCR of BRZ and the SCRs of Omalizumab and Mepolizumab, as well as the average monthly income, smoking rate among those aged 40 and above, obesity rate among those aged 40 and above (BMI ≥ 25), the number of respiratory specialists certified by the Japanese Respiratory Society per population, the number of allergy specialists certified by the Japanese Society of Allergology per population, university enrollment rate, and the proportion of steroid-requiring exacerbations in each prefecture.
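The indirect standardization behind the SCR can be sketched in a few lines of Python. The expected count is Σ A × B / C over age-sex strata, as defined above; all numbers below are invented for illustration.

```python
def expected_claims(strata):
    """Expected prescriptions in a prefecture if national age/sex-specific
    rates applied: sum over strata of A * B / C, where
    A = prefecture residents in the stratum,
    B = national prescriptions in the stratum,
    C = national residents in the stratum."""
    return sum(A * B / C for (A, B, C) in strata)

def scr(observed, strata):
    """Standardized claim ratio: 100 * observed / expected.
    SCR > 100 means more claims than the national average would predict."""
    return 100.0 * observed / expected_claims(strata)

# Invented two-stratum example (e.g. men 40-64 and women 40-64):
strata = [
    (50_000, 3_000, 30_000_000),  # (A, B, C) for stratum 1 -> expects 5.0
    (60_000, 2_000, 32_000_000),  # (A, B, C) for stratum 2 -> expects 3.75
]
ratio = scr(observed=12, strata=strata)  # 100 * 12 / 8.75, about 137.1
```

With 12 observed prescriptions against 8.75 expected, the sketch yields an SCR of about 137, i.e. a prescription rate roughly 37% above what the national age-sex-specific rates would predict.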
Visualization of benralizumab standardized claim ratio (SCR) by prefecture
To visualize the regional distribution of benralizumab standardized claim ratios (SCR) across Japanese prefectures, we utilized Python along with the 'japanmap' and 'matplotlib' libraries. The dataset containing benralizumab SCR values for each prefecture was processed to ensure consistency and accuracy. A colormap was defined using 'matplotlib' to visually differentiate SCR values, and normalization was applied to scale the SCR values appropriately. Using the 'japanmap' library, we generated a map of Japan in which each prefecture was colored according to its SCR value. The intensity of the color represented the magnitude of the SCR, with a gradient from lighter to darker shades indicating lower to higher SCRs, respectively.

Figure 1. Intensity map showing the SCR for each prefecture. The darker the color, the larger the SCR. Parts of Hokkaido and Okinawa prefectures are shown displaced from their actual positions. This figure was created by the authors using the 'japanmap' library (version 0.3.1) and 'matplotlib' library (version 3.9.1).

Figure 2. Scatter plots with the value of each explanatory variable on the x-axis and SCR on the y-axis. (a) The explanatory variable is the average monthly income. y = −64.5 + 0.56x, R² = 0.20. (b) The explanatory variable is the number of respiratory specialists per population. y = 57.8 + 66.1x, R² = 0.085. (c) The explanatory variable is the university enrollment rate. y = 9.0 + 160.7x, R² = 0.10. (d) The explanatory variable is the incidence of exacerbations. y = 95.7 − 0.18x, R² = 0.00078. R², coefficient of determination; SCR, standardized claim ratio.

Table 1. Numbers of patients and SCR of drugs in each prefecture. SCR, standardized claim ratio; BRZ, benralizumab.

Table 2. Correlation between SCR and each factor in 2020. BMI, body mass index; SCR, standardized claim ratio; r, Pearson's correlation coefficient. *p < 0.05.

Table 3.
Multiple regression analysis with SCR as the objective variable. 95% CI, 95% confidence interval; B, partial regression coefficient; β, standardized partial regression coefficient; VIF, variance inflation factor; R², coefficient of determination; SCR, standardized claim ratio. *p < 0.05.
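A multiple regression like the one summarized in Table 3 can be illustrated with ordinary least squares solved via the normal equations. The sketch below is pure Python with invented toy data; it computes partial coefficients B and converts one to a standardized β via β = B · sd(x)/sd(y). It illustrates the method only (it does not compute VIFs or confidence intervals) and is not the study's actual computation.

```python
def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with
    partial pivoting (works on copies; A is a list of rows)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """OLS with intercept: returns [b0, b1, ..., bk] solving the
    normal equations (Z'Z) beta = Z'y, where Z = [1 | X]."""
    Z = [[1.0] + row for row in X]
    k = len(Z[0])
    ZtZ = [[sum(z[i] * z[j] for z in Z) for j in range(k)] for i in range(k)]
    Zty = [sum(z[i] * yi for z, yi in zip(Z, y)) for i in range(k)]
    return solve(ZtZ, Zty)

def standardized_beta(b_j, sd_xj, sd_y):
    """Standardized partial regression coefficient: beta = B * sd(x)/sd(y)."""
    return b_j * sd_xj / sd_y
```

For example, fitting invented points lying exactly on y = 1 + 2·x1 + 3·x2 recovers those coefficients, and a partial coefficient B = 0.74 with sd(x) = 15 and sd(y) = 30 would give β = 0.37.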
4,445
2024-07-02T00:00:00.000
[ "Medicine", "Economics", "Geography" ]
Establishment and analysis of coupled dynamic model for dual-mass silicon micro-gyroscope This paper presents a coupled dynamic model for a dual-mass silicon micro-gyroscope (DMSG). It can quantitatively analyze the influence of the left-right stiffness difference on the natural frequencies, modal matrix, and modal coupling coefficient of the DMSG. The analytic results are verified by finite element method (FEM) simulation. The model shows that with a left-right stiffness difference of 1%, the modal coupling coefficient is 12% in the driving direction and 31% in the sensing direction. It also shows that, in order to achieve good separation, the stiffness of the base beam should be small enough in both the driving and sensing directions.

Introduction
Dual-mass silicon micro-gyroscopes (DMSG) employ electrostatic actuation and capacitive detection. Limited by current MEMS fabrication technology, the stiffnesses of the support beams for the left and right structures of the DMSG will differ because of fabrication error [1]. The left-right stiffness difference (stiffness difference) changes the mechanical properties, especially the bias and vibration sensitivity, of the gyroscope. Therefore, it is important to analyze the impact of the stiffness difference on the bias and vibration sensitivity of the DMSG. The impact of fabrication error on the driving direction of the DMSG has been analyzed in the literature [2][3][4]. Taking both the drive and sense modes into consideration, a non-ideal dynamic model was studied in [5][6]. However, the stiffness difference of the DMSG in the above-mentioned papers was considered only in the driving direction, and the modal coupling coefficient was not proposed. In this paper, firstly, the coupled dynamic model is proposed for a DMSG. Second, the natural frequencies and modal matrix are calculated. Then the modal coupling coefficient is deduced from the proposed modal matrix.
Additionally, the coupled dynamic model can be solved by the modal decoupling method. Finally, the reliability of the theory is verified by finite element method (FEM) simulation. Fig. 1 is the schematic of a dual-mass silicon micro-machined gyroscope. By applying an alternating voltage on the driving comb capacitor, the left and right masses move in opposite directions along the X-axis (driving direction). When the sensor rotates about the Z-axis, the resulting Coriolis force causes the left and right masses to move in opposite directions along the Y-axis (sensing direction). The relative motion between the movable and fixed detection combs forms the differential capacitance for detection. Ideally, the differential detection capacitance is proportional to the input angular rate.

Structure and operation principle
Herein, msl and msr are the left and right proof masses, mdl and mdr are the left and right drive comb masses, mb and Ib are the mass and moment of inertia of the base beam, kdl = kd + Δkd/2 and kdr = kd − Δkd/2 are the bending stiffnesses of the left and right drive beams, and ksl = ks + Δks/2 and ksr = ks − Δks/2 are the bending stiffnesses of the left and right sense beams. kbd and kbs are the bending stiffnesses of the base beam in its driving and sensing directions. The parameters of the designed DMSG are listed in Tab. 1. Here r1 and r2 are defined as the mode functions. Now, the modal coupling coefficient, which reflects the coupling between the in-phase and anti-phase motion, can be expressed as (7). Finally, the system can be decoupled by applying the modal matrix. As a result, the motion equations change to (8), where [Mp] = [A]T[M][A] and [Kp] = [A]T[K][A] are the modal mass and modal stiffness matrices, and {q} = [A]T{u} is the vector of modal coordinates. As a consequence, the motion equations can be solved independently.
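The modal analysis described above can be illustrated with a lumped two-mass sketch. This is not the paper's full model or its equations (7)-(8): here two identical masses with mismatched stiffnesses k ± Δk/2 are joined by a single coupling stiffness kc standing in for the base beam, and the coupling coefficient is taken as the in-phase content of the nominally anti-phase mode. All names and values are our own.

```python
import math

def modes_2dof(m, k, dk, kc):
    """Natural frequencies and normalized mode shapes of two masses m with
    grounded stiffnesses k + dk/2 and k - dk/2, coupled by stiffness kc.
    Per-unit-mass stiffness matrix:
        [[(k + dk/2 + kc)/m, -kc/m],
         [-kc/m, (k - dk/2 + kc)/m]]
    Returns [(omega_low, (u1, u2)), (omega_high, (u1, u2))]."""
    a = (k + dk / 2 + kc) / m
    b = (k - dk / 2 + kc) / m
    c = -kc / m
    disc = math.sqrt((a - b) ** 2 + 4 * c ** 2)
    out = []
    for lam in ((a + b - disc) / 2, (a + b + disc) / 2):
        v = (c, lam - a)                    # eigenvector of the 2x2 matrix
        norm = math.hypot(*v)
        out.append((math.sqrt(lam), (v[0] / norm, v[1] / norm)))
    return out

def coupling_coefficient(mode):
    """In-phase content of a (nominally anti-phase) mode shape:
    |u1 + u2| / |u1 - u2|; zero when the mode is purely anti-phase,
    growing with the stiffness mismatch dk."""
    u1, u2 = mode
    return abs(u1 + u2) / abs(u1 - u2)
```

With Δk = 0 the two modes are the pure in-phase shape (1, 1) at ω² = k/m and the pure anti-phase shape (1, −1) at ω² = (k + 2kc)/m, and the coupling coefficient vanishes; a nonzero Δk mixes them. In this toy model the mixing grows with Δk relative to the in-phase/anti-phase frequency split; the paper's full model additionally includes the base-beam mass and derives the dependence on kbd and kbs.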
Drive mode
The natural frequencies and mode functions in the driving direction are derived below.

Sense mode
The natural frequencies and mode functions in the sensing direction are derived below. Obviously, the relation in (15) means that the in-phase motion and the anti-phase motion are completely independent. According to (9)-(12), Δkd and Δks are determined by the processing precision, while kd and ks determine the drive mode and sense mode. As a result, in order to achieve good separation, the stiffness of the base beam should be small enough in both the driving and sensing directions. However, such low-rigidity beams present design challenges and often introduce other problems.

FEM simulation
The finite element software ANSYS was employed to verify the coupled dynamic model of the DMSG. The schematic of the dual-mass silicon micro-machined gyroscope is shown in Fig. 3. First, a modal analysis of the DMSG structure with stiffness difference was carried out to verify the natural frequencies. The natural frequency variation due to a stiffness difference of 1% is shown in Tab. 2. The data in Tab. 2 show that the natural frequency variations due to a stiffness difference of 1% are 0.5 Hz in the driving direction and 1.5 Hz in the sensing direction. They also show that the natural frequency variation can be accurately represented by the coupled dynamic model. Then a harmonic response simulation was performed to verify the modal coupling coefficient. Fig. 4 shows the frequency response of the anti-phase drive mode; the frequency responses of the other modes are similar. Fig. 4 shows that the in-phase and anti-phase displacements are not independent of each other. Fig. 5 shows the modal coupling coefficient from theory and simulation for each mode. According to Fig. 5, with a stiffness difference of 1%, the modal coupling coefficient is 12% in the driving direction and 31% in the sensing direction. It also shows that the theoretical and simulation results are in good agreement.
Conclusion
Based on the structural characteristics, a coupled dynamic model is proposed and established for the dual-mass silicon micro-gyroscope in this paper. The effects of the stiffness difference on the natural frequencies and mode functions are demonstrated, and the modal coupling coefficient is calculated from the mode functions. The natural frequency variations due to a stiffness difference of 1% are 0.5 Hz in the driving direction and 1.5 Hz in the sensing direction, and the modal coupling coefficient is 12% in the drive direction and 31% in the sense direction. These theoretical results are verified by FEM simulation, and the good agreement between the theoretical and simulation results proves the accuracy of our model.
1,447.6
2017-12-01T00:00:00.000
[ "Engineering", "Physics" ]
Large Magellanic Cloud Cepheid Standards Provide a 1% Foundation for the Determination of the Hubble Constant and Stronger Evidence for Physics Beyond LambdaCDM We present an improved determination of the Hubble constant (H0) from Hubble Space Telescope (HST) observations of 70 long-period Cepheids in the Large Magellanic Cloud. These were obtained with the same WFC3 photometric system used to measure Cepheids in the hosts of Type Ia supernovae. Gyroscopic control of HST was employed to reduce overheads while collecting a large sample of widely-separated Cepheids. The Cepheid Period-Luminosity relation provides a zeropoint-free link with 0.4% precision between the new 1.2% geometric distance to the LMC from Detached Eclipsing Binaries (DEBs) measured by Pietrzynski et al (2019) and the luminosity of SNe Ia. Measurements and analysis of the LMC Cepheids were completed prior to knowledge of the new LMC distance. Combined with a refined calibration of the count-rate linearity of WFC3-IR with 0.1% precision (Riess et al 2019), these three improved elements together reduce the full uncertainty in the LMC geometric calibration of the Cepheid distance ladder from 2.5% to 1.3%. Using only the LMC DEBs to calibrate the ladder we find H0=74.22 +/- 1.82 km/s/Mpc including systematic uncertainties, 3% higher than before for this particular anchor. Combining the LMC DEBs, masers in NGC 4258 and Milky Way parallaxes yields our best estimate: H0 = 74.03 +/- 1.42 km/s/Mpc, including systematics, an uncertainty of 1.91%, 15% lower than our best previous result. Removing any one of these anchors changes H0 by <0.7%. The difference between H0 measured locally and the value inferred from Planck CMB+LCDM is 6.6+/-1.5 km/s/Mpc or 4.4 sigma (P=99.999% for Gaussian errors) in significance, raising the discrepancy beyond a plausible level of chance.
We summarize independent tests which show this discrepancy is not readily attributable to an error in any one source or measurement, increasing the odds that it results from a cosmological feature beyond LambdaCDM.

Introduction
Cepheid variables in the Magellanic Clouds (Leavitt & Pickering 1912; Hertzsprung 1913) have long played a starring role in the distance scale and the determination of the present value of the expansion rate of the universe, the Hubble constant (H0). With knowledge of the distance to the Large Magellanic Cloud (LMC), our nearest Cepheid-rich neighbor, we can directly determine the absolute magnitudes of these pulsating stars. Cepheids have been observed with the Hubble Space Telescope (HST) in the hosts of SNe Ia at distances of up to ∼20 Mpc with WFPC2 (Freedman et al. 2001; Sandage et al. 2006) and up to ∼40 Mpc with ACS and WFC3 to measure the far greater luminosities of these exploding stars. The resulting ability to determine distances to SNe Ia deep into the Hubble-Lemaître flow completes a distance ladder that provides the most precise, model-independent, and direct means for determining H0. Knowledge of this cosmological parameter remains central to describing the present state of our universe and setting expectations for its fate. In addition to characterizing the state of our universe, refined measurements of H0 may also be pointing toward a new wrinkle in the cosmological model. Measurements of the distance ladder with improved precision and control of systematics from the SH0ES Team (Riess et al. 2016, hereafter R16) demonstrate that the universe is at present expanding about 9% faster than inferred from the ΛCDM model calibrated by Planck CMB data (Planck Collaboration et al. 2018) from the early universe, with a significance of 3.6σ (Riess et al. 2018a, hereafter R18a).
The higher local value results from the use of any one of five independently determined, geometric distance estimators used to determine the luminosity of Cepheids, including masers in NGC 4258 (Humphreys et al. 2013; Riess et al. 2016), eight detached eclipsing binaries (DEBs) in the LMC (Pietrzyński et al. 2013), and three distinct approaches to measuring Milky Way (MW) parallaxes (Benedict et al. 2007), with the most recent from HST spatial scanning (Riess et al. 2018b, hereafter R18b) and Gaia DR2 (R18a). Further out, the distance ratios to SNIa hosts provided by Cepheids have been confirmed to a precision of 2%-3% by a dozen measured with the Tip of the Red Giant Branch (Jang & Lee 2017; Jang et al. 2018; Hatt et al. 2018a, 2018b) and Miras (Huang et al. 2018; Yuan et al. 2017; C. Huang et al. 2019, in preparation). Strong-lensing results from the H0LiCOW team (Birrer et al. 2018) are fully independent of all rungs of the distance ladder yet find a similar value of H0 from the late universe, one that is 2.3σ (P=98%) higher than the Planck-calibrated value. At the other end of time, the low value of H0 predicted from the early universe is corroborated by independent measurements of the CMB or ΩB with baryon acoustic oscillation (BAO) data (Addison et al. 2018) and from "inverse distance ladders," such as the one built by the Dark Energy Survey that is calibrated from the sound horizon at z∼1000 (Macaulay et al. 2019; see Discussion for further consideration of these results). With multiple, independent checks now established at both ends of cosmic history, this "H0 Tension" between the early and late universe, as it is widely known, may be interpreted as evidence for a new cosmological feature such as exotic dark energy, a new relativistic particle, dark matter-radiation or neutrino-neutrino interactions, dark matter decay, or a small curvature, each producing a different-sized shift (Khosravi et al. 2017; Renk et al. 2017; Aylor et al.
2018; D'Eramo et al. 2018; Di Valentino et al. 2018; Mörtsell & Dhawan 2018; Barenboim et al. 2019; Kreisch et al. 2019; Pandey et al. 2019; Vattis et al. 2019), with some proposals spanning the full discrepancy while improving the agreement between the model and CMB data (Poulin et al. 2018). Pinpointing the cause of the tension requires further improvement in the local measurements, with continued focus on precision, accuracy, and experimental design to control systematics. New measurements of 20 late-type DEBs in the LMC from Pietrzyński et al. (2019) offer the most precise foundation to date to geometrically calibrate this distance ladder. The current approach to measuring these geometric distances uses long-baseline near-infrared (NIR) interferometry of individual late-type giants to measure their angular sizes and derives a purely empirical relation between their surface brightness and color, with a scatter of only 0.018 mag. That such a relation exists is a direct consequence of the Planck Law. Applying this relation to a late-type giant in a DEB system yields the geometrically calibrated angular diameter of the star from its color and brightness. Combining this with the physical radius derived from radial velocities and eclipsing light curves yields a purely geometric distance with a typical precision of ∼2% per system. These DEB measurements appear quite robust, as the variance of the sample is fully characterized by the method and there is no dependence on astrophysical models; the details can be found in Pietrzyński et al. (2013, 2019). The most recent result uses an improved surface brightness-color relation, whose calibration otherwise systematically limits the measurement, and an expanded sample of 20 DEBs to measure the distance to the center of the LMC to 1.2% (0.0263 mag) precision. To fully exploit the improved precision of the LMC distance, we need greater control of systematic errors than past measurements.
Simply comparing the brightnesses of Cepheids in the LMC to those in SNIa hosts measured from two different telescopes would incur a net systematic 2%-3% error, just from the use of different photometric systems with their individual zero-point uncertainties. Measurements of Cepheids in the NIR are especially important in order to mitigate systematic errors from extinction and metallicity variations. Yet photometric errors are larger in the NIR, as ground-based bandpasses are unstable and are redefined nightly by the atmosphere. Even for the best-calibrated ground-based system in the NIR, 2MASS, the systematic uncertainty in the transformation to the best-match WFC3 F160W band (after accounting for bandpass differences) is found empirically to be σ≈0.03-0.04 mag (Riess 2011). This is not surprising, as the absolute zero-points of each are (claimed to be) known to only 0.02-0.03 mag (Skrutskie et al. 2006; Kalirai et al. 2009). Thus, mixing ground-based NIR photometry of Cepheids from Macri et al. (2015, hereafter M15) or Persson et al. (2004) with HST photometry in SNIa hosts incurs a ∼1.4%-1.8% systematic error in distance measurements, swamping the improved LMC distance precision unless a single, stable system is used to measure both sets of Cepheids and nullify zero-point errors. Even using a single photometric system, it is necessary to calibrate its ability to measure relative fluxes across the 10 mag range between Cepheids in the LMC and SNIa hosts. Fortunately, concurrent work has now calibrated the linearity of the WFC3-IR detector to a precision of 2.4 mmag across this range, making the higher precision sought feasible (Calamida et al. 2018; Narayan et al. 2019; Bohlin & Deustua 2019; Riess et al. 2019). In the past, the prospect of observing many Cepheids in the LMC directly with HST was hampered by the high cost of observatory overheads.
Because the LMC is nearby, its Cepheids are far apart in angle, and thus observing each with HST required a dedicated pointing (with attendant guide star acquisition overhead). However, using a newly available commanding and control sequence under purely gyroscopic control called "DASH" (Drift And SHift; Momcheva et al. 2017), we observed up to a dozen LMC Cepheids in three filters in a single orbit, obtaining HST photometry for 70 widely separated LMC Cepheids with WFC3 in three filters. This photometry establishes a new, zero-point-independent link between LMC Cepheids and those in the hosts of SNe Ia. In Section 2 we present the observations and measurements of these LMC Cepheid standards, their P-L relations in Section 3, and the impact on the Hubble constant in Section 4. We consider systematics related to the determination of H0 in Section 5. We point out that all the photometric measurements of the LMC Cepheids presented here were completed, and this manuscript was finalized, in advance of the availability of the new LMC distance, its uncertainty, and the revised geometry of the LMC based on the new DEB measurements in Pietrzyński et al. (2019). After this became available, only the final determination of H0 was completed.

DASHing through the LMC
The 70 LMC Cepheids presented here were imaged in three bands, two in the optical with WFC3-UVIS (F555W and F814W) and one in the NIR with WFC3-IR (F160W), in two HST programs: GO-14648 and GO-15146 (PI: Riess). All data frames are available in MAST. The observations were taken between 2017 January 9 and 2018 December 16 and are identified and described in Table 1. Measuring the mean magnitudes of a large number of Cepheids in the LMC with the narrow-angle instruments on HST poses unique challenges. The mean separation of P>6 day LMC Cepheids is ∼10′ (∼15′ for P>10 day Cepheids), well in excess of the full 2′ WFC3 field of view, so in almost all cases only a single Cepheid can be observed per image.
Although the necessary exposure times are only a few seconds, with normal observing procedures each Cepheid observation requires a new fine guidance sensor guide star acquisition with an overhead of 6 minutes, by far the longest interval in the observation. In addition, WFC3 can hold only one full UVIS frame in memory before requiring a memory buffer dump, which takes 350 s. Thus, full-frame imaging of LMC Cepheids with short exposures is extremely inefficient and time consuming; given the demand for the use of HST, such observations are unlikely to be undertaken. However, we can observe these Cepheids far more efficiently by using the new DASH mode of observing, available since Cycle 24 (2016), which uses the HST gyroscopes for both slewing and guiding. This mode is highly efficient for our short integrations of 2-2.5 s; during this time, the smearing of the point-spread function (PSF) from the expected gyro drift of 1-2 mas s−1 is ∼0″.003, which is negligible compared to the 0″.04 pixels of WFC3-UVIS or the 0″.128 pixels of WFC3-IR, thereby saving the overhead of repeated guide star acquisitions. By selecting groupings of Cepheids, typically within less than a degree, it is possible to slew HST 5′-10′ between successive Cepheids with an overhead of 2 minutes per slew, observe each on a subarray in the NIR, flip the channel select mechanism no more than once per orbit (an observatory requirement), reverse the path, and observe each target again in two optical filters (also in subarray mode), collecting up to 12 Cepheids in three filters in one orbit, all without exceeding the memory buffer. This strategy requires that the accumulated pointing error during the orbit remain smaller than half the subarray size (∼10″), which is consistent with the aforementioned expected gyro drift under normal conditions. Ground-based light curves can be used to adjust each single-epoch magnitude to the epoch of mean intensity, as done in R18a and R18b.
By definition, these phase corrections are zero-point-independent because they are calculated relative to the average magnitudes of each Cepheid (R18). Our first attempt at implementing this observing sequence, visit 8 of GO-14648, succeeded as planned on 2017 January 9, collecting the images of 12 LMC Cepheids spread over 0.5 degrees in three bands, 36 exposures in all, obtained within a single HST orbit (just over 1 hour of elapsed time). The accumulated drift from the commanded position did not exceed 6″, and we observed a mean drift of 1.2 mas s^-1 (see Figure 1); thus, all Cepheids landed well inside their chosen apertures. We selected a sample of 100 LMC Cepheids from the sample of M15, assigning greater priority to Cepheids with longer periods (better analogs of those in SN Ia hosts) and those requiring shorter slews. Unfortunately, the start of this program and its follow-on, GO-15146, coincided with a period of severe degradation in the performance of HST Gyro 2, which caused its typical drift rate to exceed its nominal value by more than an order of magnitude. The specific size and direction of the gyro drift at any point in time were found to be highly unpredictable. To overcome the increasing gyro drift rates with time, we redesigned our subsequent observations to use larger subarrays and thus provide greater margin for erratic gyro slewing. The larger frames required more onboard memory for storage, and in this state we could observe ∼6-10 Cepheids before filling the memory. By mid-2018, Gyro 2 drifts increased so that the accumulated pointing errors under gyro control reached 20″-70″ after 2000 s (see Figure 1), in some cases exceeding the radius of the full WFC3 frame and thus ensuring that the target would miss the field regardless of the array size used. A few targets were subsequently observed under fine guidance sensor control to complete the intended three bands of imaging.
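The accumulated pointing error scales linearly with drift rate, which makes the nominal-versus-degraded contrast above easy to quantify. A sketch, where the 3000 s of elapsed science time per orbit is an assumed round number, not a figure from the text:

```python
def accumulated_drift_arcsec(rate_mas_s: float, elapsed_s: float) -> float:
    """Pointing error accumulated under pure gyro control, in arcsec."""
    return rate_mas_s * elapsed_s / 1000.0

# Nominal case: the 1.2 mas/s mean drift measured in the first successful
# visit, over an assumed ~3000 s of science time in one orbit.
nominal = accumulated_drift_arcsec(1.2, 3000.0)    # ~3.6 arcsec

# Degraded Gyro 2: rates roughly an order of magnitude higher; 20 mas/s
# over 2000 s reproduces the low end of the reported 20"-70" errors.
degraded = accumulated_drift_arcsec(20.0, 2000.0)  # ~40 arcsec

SUBARRAY_MARGIN = 10.0  # half the subarray size (arcsec) quoted in the text
```

Under nominal drift the error stays comfortably inside the ∼10″ margin; under the degraded gyro it overshoots it severalfold, which is why targets began missing even full frames.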
We concluded the observing program with 70 completed targets, each with three colors, somewhat fewer than the 100 targets expected with nominal gyro performance, but representing only a net 17% loss in the statistical power of the sample. Gyro 2 stopped operating at the end of this program and was replaced in the operational chain by Gyro 3, which may have elevated noise; it is unclear how DASH will perform in the new gyro configuration. Should HST drop below three operational gyros, DASH-mode observations will likely be infeasible. The erratic gyro slewing also made it challenging to determine exactly where HST was pointed in each subarray and, thus, where the target star was located (under gyro control, HST cannot use the astrometry of a known guide star to establish the pointing coordinates). We developed and employed an algorithm for matching the apparent positions of stars to a catalog of the LMC to identify the location of each target. The X and Y pixel positions of each Cepheid in each observation as they actually occurred are given in Table 1 (available in its entirety in machine-readable form; epochs therein are MJD−57,000.0).

Photometry

We measured the photometry of the Cepheids using small apertures with radii of 3 pixels for WFC3-UVIS and WFC3-IR to reduce source contamination (from cosmic rays or nearby stars) and to minimize sky noise. Aperture photometry has other advantages over PSF fitting for this application, including lower systematics if the inner PSF varies (which could potentially result from gyro drifts) and less variation from PSF undersampling in single frames. In practice, we did not find any measurable variation in the size or shape of the PSF due to gyro drifts. During the period of degraded Gyro 2 performance, the measured mean drift rate for targets landing in the arrays was 5 mas s^-1 (with a peak of 13 mas s^-1).
The corresponding mean drift over the 2.5 s exposure was 0.3 pixels for WFC3-UVIS and 0.1 pixels for WFC3-IR, well within the aperture and compensated by the use of an aperture correction between r=3 and r=10 pixels derived from the mean of all Cepheid PSFs. Measurements were made on the fully calibrated frames from the MAST archive, using the charge transfer efficiency (CTE)-corrected frames for WFC3-UVIS images. Each calibrated image was multiplied by the pixel area map, a necessary step to obtain correct point-source photometry for an image flat-fielded to constant flux per unit area. For WFC3-UVIS data, we derived and applied an aperture correction from a 3 to a 10 pixel radius (0″.12-0″.4) in order to utilize the 10 pixel zero-point provided by STScI for each UVIS CCD. Our subarrays all used CCD2, for which the adopted zero-points (the magnitude of a star that produces 1 e^- s^-1) are 25.727 mag for F555W and 24.581 mag for F814W (Vega system). This procedure matches the calibration used for optical Cepheid photometry in R16 and Hoffmann et al. (2016), and the application to the distance scale remains independent of the accuracy of these zero-points as long as the same consistent value is used to measure all Cepheids along the distance ladder. For WFC3-IR F160W, we used aperture corrections of 0.063 mag from a radius of 10 pixels to infinity (provided by STScI) or, equivalently, 0.200 mag from a 3 pixel radius aperture (measured by us). We adopted a zero-point of 24.71 mag at infinite radius (Vega system), derived to yield the same mean photometry whether measured from these apertures on the original frames or from PSF fitting on resampled images, which is the methodology employed by R16 and Riess et al. (2011, hereafter R11) for SN Ia host images using a scale of 0″.08 pixel^-1 and a flux drop fraction of 0.6.
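The photometric chain described above (small-aperture count rate, then aperture correction, then Vega zero-point) can be sketched compactly. The helper `vega_mag_ir` below is a hypothetical illustration using the F160W numbers quoted in the text, not code from the authors' pipeline:

```python
import math

# Quantities quoted in the text (Vega system).
ZP_F160W_INF = 24.71        # F160W zero-point at infinite aperture radius
APCOR_IR_3_TO_INF = 0.200   # mag, aperture correction from r=3 px to infinity

def vega_mag_ir(rate_e_per_s: float) -> float:
    """F160W Vega magnitude from an r=3 px aperture count rate (e-/s).
    The aperture correction brightens the result to the infinite-radius
    system where the zero-point is defined."""
    return ZP_F160W_INF - 2.5 * math.log10(rate_e_per_s) - APCOR_IR_3_TO_INF

# A source yielding 1 e-/s inside r=3 px comes out 0.2 mag brighter than
# the raw zero-point once corrected for the flux outside the aperture.
m = vega_mag_ir(1.0)   # 24.51 mag
```

The same pattern applies to the UVIS bands with their 10-pixel zero-points and the 3-to-10-pixel corrections derived from the mean Cepheid PSF.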
R16 and R11 used images of the CALSPEC reference star P330 as the reference for the shape and scale of the PSF; this is also one of the stars used to set the zero-point by STScI and has a color similar to that of Cepheids. As a result, the zero-point we derived to ensure uniformity of Cepheid photometry matches the official STScI result to 0.01 mag. By comparing photometry measured with apertures on the original pixel scale and PSF fitting on the resampled images used by R16, we estimate a systematic uncertainty between measurement techniques of 3 mmag. We also found a small systematic difference in the WFC3-UVIS photometry of Shutter A versus Shutter B images for these very short exposures. Sahu et al. (2014) established that Shutter B causes extra instrument vibrations and can affect the PSF and move some flux outside the aperture for very short exposure times. We determined and corrected for a difference of ±6 mmag in F555W and ±3.5 mmag in F814W depending on which shutter was used. Next, we applied a correction for the expected difference between the magnitude of each Cepheid at the observed phase and the magnitude at the epoch of mean intensity of its light curve. These phase corrections are derived from ground-based light curves of each Cepheid in filters with wavelengths best corresponding to the WFC3 filters. Because the phase corrections are relative quantities, they do not change the zero-point of the light curves, which remain on the HST WFC3 natural system. We derived and applied these phase corrections following the same methodology described in R18b. The periods and phases for F555W and F814W were determined using the V- and I-band light curves from OGLE surveys (Szymanski 2005; Udalski et al. 2008, 2015). For some Cepheids (OGL0434, OGL0501, OGL0510, OGL0512, OGL0528, OGL0545, OGL0590, OGL0712, OGL0757, OGL0966, and OGL0992), we also included V-band light curves from the ASAS survey (Pojmanski 1997) and/or ASAS-SN survey (Shappee et al.
2014; Kochanek et al. 2017) to increase the baseline coverage. We made use of the J- and H-band light curves from M15 and Persson et al. (2004) to correct the F160W random-phased measurements to mean intensity. The standard deviations of these corrections are 0.29, 0.17, and 0.11 mag in F555W, F814W, and F160W, respectively, decreasing with the smaller light-curve amplitudes at redder wavelengths. Phase corrections also account for the difference between the Cepheid light-curve magnitude mean (the average of many measured magnitudes) and the magnitude at the epoch of mean intensity (the standard convention for distance measurements). This expected difference is consistent with our sample average corrections of −0.048, −0.013, and −0.001 mag in F555W, F814W, and F160W, respectively. The uncertainties in these phase corrections depend on the quality of the ground-based light curves; the average uncertainty is 0.013, 0.008, and 0.029 mag per epoch in F555W, F814W, and F160W, respectively, which dominates over the statistical photometry errors (i.e., photon statistics) in a single epoch. The differences between repeat measurements for the same target, available for a subset of 19 epochs and filters, are consistent with these uncertainties. The final mean individual uncertainty for these 70 Cepheids is 0.016, 0.012, and 0.029 mag in F555W, F814W, and F160W, respectively. The final mean photometry for each Cepheid in three colors is given in Table 2. For distance measurements and for the determination of H_0, it is useful to combine these three bands into the same single, reddening-free Wesenheit index (Madore 1982) used by R16 for measuring extragalactic Cepheids in SN Ia hosts, m_H^W = m_F160W − 0.386(m_F555W − m_F814W), where the value of 0.386 is derived from the reddening law of Fitzpatrick (1999) with R_V=3.3. R16 and Follin & Knox (2018) considered a broader range of reddening laws.
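The Wesenheit index and its error propagation can be written in a few lines. This sketch treats the three band errors as independent (which ignores any covariance introduced by shared phase corrections); with the per-Cepheid errors quoted above, the propagated error reproduces the 0.030 mag individual mean quoted in the text:

```python
import math

R = 0.386  # color coefficient from Fitzpatrick (1999) with R_V = 3.3

def wesenheit_h(m160: float, m555: float, m814: float) -> float:
    """Reddening-free Wesenheit index m_H^W used by R16."""
    return m160 - R * (m555 - m814)

def wesenheit_h_err(s160: float, s555: float, s814: float) -> float:
    """Propagated uncertainty, assuming independent band errors."""
    return math.sqrt(s160**2 + R**2 * (s555**2 + s814**2))

# Final mean per-Cepheid errors from the text:
# 0.016 (F555W), 0.012 (F814W), 0.029 (F160W) mag.
err = wesenheit_h_err(0.029, 0.016, 0.012)   # ~0.030 mag
```

Because the F160W term enters with unit coefficient while the optical color enters only through the small factor 0.386, the NIR error dominates the budget.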
Using the Wesenheit magnitude thus defined has the additional benefit of tightening the resultant P-L relation, because across the instability strip, intrinsically redder Cepheids are fainter. For a P-L that is well sampled across the intrinsic color range, the difference in the color ratio for dust and intrinsic color has no impact on relative distance measurements, because the intrinsic component cancels out when comparing the P-L relations of different hosts. The m_H^W values for our 70 targets have an individual mean uncertainty of 0.030 mag, including photometric measurement errors, phase corrections, and error propagation to the Wesenheit magnitude. We can also compare the P-L relations we obtain from HST photometry with ground-based results, by transforming the ground (V, I, and H) magnitudes from M15 to the HST natural system (F555W, F814W, and F160W) via the equations given in R16. We emphasize that this transformation is primarily for comparison purposes and is not required in the calibration of the Cepheid P-L in the HST photometric system. The results are shown in Figure 2, where we excluded three Cepheids, OGLE0712, OGLE1539, and OGLE1677, whose HST images reveal a relatively bright and nearby star (Δmag<3, Δr<1″) that would significantly contaminate ground-based photometry of the Cepheid for typical ground-based image quality. We find mean differences (in the direction Ground−HST) and sample dispersions (SDs) of 0.036 and 0.030 mag in F555W, 0.018 and 0.036 mag in F814W, −0.032 and 0.039 mag in F160W, and −0.040 and 0.040 mag in m_H^W. There are a couple of outliers in the comparison between space and the ground (two of 67 in each filter); these are marked in Figure 2. These mean differences in zero-points are similar to those found when comparing ground and HST magnitudes of MW Cepheids in Riess et al. (2018a): 0.024 mag for F555W, 0.038 mag for F814W, −0.056 mag for F160W, and −0.051 mag for m_H^W.
While the dispersion between these ground and HST zero-points for this LMC sample is 0.04 mag, the dispersion between these differences for the LMC and MW is <0.02 mag, suggesting that these HST-to-ground zero-point differences are primarily systematic. The presence of systematic errors in zero-points from either space or ground facilities reinforces the value of maintaining a single, stable photometric system to nullify these errors along the distance ladder. A comparison of NIR photometry of Cepheids along the distance ladder, even within the same photometric system, must also account for another systematic difference that appears when measuring bright and faint sources with HgCdTe detectors. This effect, called Count-Rate Nonlinearity (CRNL) or reciprocity failure, is different from the more commonly considered nonlinearity of total measured counts (often approaching saturation), which is already corrected in the MAST pipeline through calibrations determined from varying the length of integrations. In contrast, CRNL causes photons at low count rates to be detected or collected less efficiently than photons at high count rates, regardless of the length of the exposure. Recently, Riess et al. (2019) derived a more precise characterization of the CRNL of WFC3 through a combination of comparisons of cluster star photometry between WFC3-IR and WFC3-UVIS at overlapping wavelengths, comparisons of observed and synthetic magnitudes of white dwarfs (Calamida et al. 2018; Narayan et al. 2019), and a further extension of the results to brighter count rates. Combining these results with previous measurements and those from the WFC3 grism (Bohlin & Deustua 2019) provides a consistent and improved characterization of the CRNL of WFC3-IR, of 0.75±0.06% per dex, with no apparent wavelength dependence, now measured across 16 astronomical magnitudes. This improves by a factor of 4 on the previous measurement of 1.00±0.25% per dex (Riess 2010).
The correction for CRNL is more properly (i.e., physically) considered as accounting for dimming of fainter sources (e.g., Cepheids in SN Ia hosts) relative to brighter sources (such as HST reference stars) as a result of charge trapping. However, because calculations involving the distance ladder are only sensitive to the difference in measured flux, and because the photometry of the faint Cepheids in SN Ia hosts in R16 did not (by convention) include a correction for CRNL, we account for the net difference here. We have therefore added to the m_H^W photometry in Table 2 the 0.0300±0.0024 mag mean correction to the bright LMC Cepheids to account for the 4.0 dex flux ratio in F160W between these LMC Cepheids and the sky-dominated Cepheids observed in SN Ia hosts and NGC 4258. This is the same convention used for the MW Cepheids measured with HST and presented in R18. Finally, because of the inclination of the LMC, we expect some Cepheids to be closer or farther than average by a few hundredths of a magnitude in distance modulus, depending on their projected distance from the LMC line of nodes. Pietrzyński et al. (2019) used the DEBs to constrain a tilted-disk geometry for the LMC; the magnitudes in Table 2 include these individual corrections to account for the projected distance of the Cepheids from the line of nodes. To evaluate the systematic uncertainty in this correction, we consider other geometries. van der Marel & Kallivayalil (2014) used kinematic constraints to derive an inclination of i=26°.2 and position angle of the line of nodes Θ=154°.5, which produced a mean correction for our Cepheids of 0.013 mag. An alternative model derived from the positions and fluxes of 2000 LMC Cepheids by Nikolaev et al. (2004), with α_0=79°.4, δ_0=−69°.03, i=30°.7, and Θ=151°.0, yields a mean correction of 0.016 mag. We will consider the standard deviation of these three models, 0.002 mag, to be a systematic uncertainty associated with the geometry of the LMC.
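The CRNL correction quoted above is simply the nonlinearity slope times the flux-ratio span in dex. Reading "0.75% per dex" as ≈0.0075 mag per dex (the small-offset approximation implied by the quoted 0.0300 mag; this reading is my inference, not stated explicitly in the text) reproduces both the correction and its uncertainty:

```python
# CRNL slope and its uncertainty, in mag per dex of flux ratio
# (approximating 0.75% per dex as ~0.0075 mag per dex).
CRNL_MAG_PER_DEX = 0.0075
CRNL_ERR_PER_DEX = 0.0006

def crnl_correction(delta_dex: float) -> tuple:
    """Magnitude correction and its uncertainty for a source that is
    delta_dex dex brighter than the faint reference population."""
    return CRNL_MAG_PER_DEX * delta_dex, CRNL_ERR_PER_DEX * delta_dex

# The LMC Cepheids are ~4.0 dex brighter in F160W than the sky-dominated
# Cepheids in SN Ia hosts and NGC 4258.
corr, corr_err = crnl_correction(4.0)   # 0.0300 +/- 0.0024 mag
```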
As we will show, this term is subdominant to other error terms.

Period-Luminosity Relations

In Figure 3 we show the relations between Cepheid period and magnitudes in F555W, F814W, and F160W as well as two Wesenheit indices, m_H^W = m_F160W − 0.386(m_F555W − m_F814W) and m_I^W = m_F814W − 1.3(m_F555W − m_F814W). The slopes of these relations (Table 3) match well those derived from larger ground-based samples of LMC Cepheids from Soszynski et al. (2008) and M15. We determined the intercepts and scatter for m_H^W from a 2.7σ clipped mean (this threshold is derived from Chauvenet's criterion, whereby we expect 0.5 outliers at >2.7σ from a normal distribution with N=70 Cepheids). The m_H^W relation gives the lowest scatter, SD=0.075 mag, with only two marginal outliers, OGL0992 and OGL0847, at 2.8 and 3.1σ, respectively (including both of these outliers results in a slightly increased SD=0.086). For the same set of Cepheids, their ground-based magnitudes transformed to m_H^W give a somewhat higher scatter of 0.084 (or 0.103 with no outliers removed); the increased scatter may be due, in part, to occasional light contamination from nearby stars in the lower quality images from the ground. To determine the intrinsic scatter in the HST-based relations, we subtract the estimated measurement errors, finding a result of 0.069 mag (equivalent to 3.2% in distance) for a single Cepheid, which defines, or is close to, the lowest apparent dispersion of any known sample of Cepheids. The scatter, after removing the same two outliers as above, is seen to decline with increasing wavelength, as may be expected because of the reduced impact of differential extinction and the narrower intrinsic width of the P-L relation.
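The 2.7σ clipping threshold can be verified numerically from Chauvenet's criterion: find the σ level at which a sample of 70 normal deviates is expected to contain half an outlier. A sketch using only the standard library:

```python
import math

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def chauvenet_threshold(n: int, expected_outliers: float = 0.5) -> float:
    """Sigma threshold k such that n normal deviates are expected to
    include `expected_outliers` points beyond +/- k sigma."""
    lo, hi = 0.0, 10.0
    for _ in range(200):  # bisection on the two-sided tail expectation
        k = 0.5 * (lo + hi)
        if n * 2.0 * (1.0 - phi(k)) > expected_outliers:
            lo = k  # tail expectation too large: threshold must grow
        else:
            hi = k
    return k

k70 = chauvenet_threshold(70)   # ~2.7, the clipping threshold in the text
```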
Two of the Cepheids are particularly red, with F555W−F814W of 1.3 and 1.6 mag for OGL1945 and OGL1940, respectively, or more than 3σ redder than the mean (⟨F555W−F814W⟩=1.00, SD=0.09 without these two), but neither is an outlier in the reddening-free m_H^W. Although the Cepheid ground sample from M15 is an order of magnitude larger in size than the one observed with HST, the latter is more heavily weighted toward longer period Cepheids, which offers advantages when comparing to Cepheids in SN Ia hosts. The HST LMC sample has a mean period of 16 days (log P=1.21) and 45 Cepheids with P>10 days, while the M15 ground sample has a mean period of 6.7 days (log P=0.83) and 109 Cepheids with P>10 days. The mean Cepheid period in the SN Ia hosts is log P∼1.5 (R16), so the difference in sample mean log P for the HST sample is less than half that of the ground sample. Thus, an uncertainty in the slope of the Cepheid P-L relation will propagate less than half the error in H_0 from the HST sample as from the ground sample. The formal error in the LMC Cepheid sample mean m_H^W is 0.0092 mag, equivalent to 0.42% in distance. The total formal uncertainty for the Cepheid LMC calibration is obtained by combining this uncertainty with the 0.0263 mag (or 1.20% in distance) error from the DEBs (Pietrzyński et al. 2019), the 0.0024 mag systematic uncertainty in the CRNL correction, the 0.003 mag difference between methods of measuring photometry, and the 0.002 mag uncertainty due to the LMC geometry. Together these yield a total uncertainty of 1.28% for the geometric calibration of the Cepheid distance ladder and H_0, which is the smallest error for any Cepheid calibration to date (see Table 4). Additional uncertainty when comparing these Cepheids to those with different metallicity or mean period will be considered in the next section. Previous studies (Bhardwaj et al.
2016, and references therein) have suggested a change in slope of the P-L at P=10 days at optical wavelengths, though its significance has been marginal and debated. As in R16, our primary fit for H_0 in the next section allows for a change in slope at P=10 days; if the change in slope is real, not allowing for this degree of freedom in the solution could otherwise introduce a systematic error when comparing samples of Cepheids with differing mean periods. However, R16 found no evidence of a break in the m_H^W relation, with slopes of −3.25±0.02 and −3.26±0.02 mag/dex below and above P=10 days, respectively, averaged across all extragalactic Cepheids (including the 785 LMC Cepheids from M15). For only the 70 LMC Cepheids studied here, there is a slope difference of Δ=0.20±0.30 mag/dex across P=10 days, with the measured slope at P>10 days (N=43) of −3.38±0.07 mag/dex, which is 2.1σ steeper than the R16 mean of extragalactic Cepheids. Because the sample is relatively small, the slope is sensitive to the low number of rare long-period Cepheids. Persson et al. (2004) measured a similar number (N=39) of P>10 day LMC Cepheids from the ground in the H-band and found a slope of −3.16±0.10 mag/dex; they included two Cepheids at P=99 and 134 days, neither of which was observed here and which pull the slope to lower values. The opposite has been seen for Cepheids in M31, with the longer period end being ∼2σ shallower than the shorter period end, though the combination produces no evidence of a break to 0.02 mag uncertainty (R16 and Kodric et al. 2015). As in R16, allowing for a break reduces H_0 slightly, by 0.4%, and is included, with other variants pertaining to the fitting of the P-L relation (see Table 3), in the systematic error.

The Hubble Constant

Following the distance ladder and additional constraints provided in R16, we can use this LMC Cepheid P-L relation to help calibrate the luminosity of SNe Ia and determine the value of H_0.
Among the three geometric sources previously used by R16 for this purpose, masers in NGC 4258, MW parallaxes, and LMC DEBs, the latter yielded the lowest individual value of H_0, 72.04±2.67 km s^-1 Mpc^-1. This LMC-derived calibration employed 785 Cepheids observed from the ground from M15 and the eight DEBs from Pietrzyński et al. (2013). R16 assumed a systematic uncertainty between the ground and HST-based zero-point of σ=0.03 mag, an estimate in good agreement with the empirical result of a 0.04 mag ground-to-space difference found in Section 2. This relative zero-point error limited the available precision well above the apparent error in the mean distance of the LMC Cepheid ground sample of 0.08 mag/√785, or 0.003 mag, an order of magnitude lower than the zero-point uncertainty. This uncertainty was also significant compared to the previous LMC distance uncertainty from Pietrzyński et al. (2013) of 0.045 mag, resulting in a combined error of 0.054 mag from the prior LMC calibration route. Here we use both the ground and HST samples of LMC Cepheids together, as each provides an important and complementary constraint. The 70 LMC Cepheids observed with HST alone constrain the error in the mean relative to Cepheids observed by HST in SN Ia hosts to a precision of just under 0.01 mag. After including the 0.0263 mag uncertainty of the DEB-based distance from Pietrzyński et al. (2019) and the uncertainty from the CRNL, the total error becomes 0.028 mag (1.28% in distance; see Table 4), about half the uncertainty of the LMC combination used in R16. The much larger ground-measured sample is still very valuable, considered simultaneously, to constrain the slope of the P-L relation. The slope was constrained in R16 from the ground-based sample to σ∼0.01 mag/dex (or σ∼0.02 mag/dex for two slopes if a break was allowed), which is independent of its σ=0.03 mag zero-point uncertainty.
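The error budgets of the old (ground-based) and new (HST) LMC calibration routes combine in quadrature; recomputing from the stated terms reproduces the quoted 0.003, 0.054, and ∼0.028 mag figures (the 1.28% in the text differs from the recomputed value only at the last digit, presumably from rounding of the inputs):

```python
import math

def quad(*errs):
    """Quadrature (root-sum-square) combination of independent errors."""
    return math.sqrt(sum(e * e for e in errs))

# Old LMC route (R16): the ground-sample error in the mean was negligible
# next to the 0.03 mag ground-to-HST zero-point systematic and the
# 0.045 mag Pietrzynski et al. (2013) distance error.
mean_err_ground = 0.08 / math.sqrt(785)        # ~0.003 mag
old_total = quad(0.03, 0.045)                  # ~0.054 mag

# New route: HST photometry removes the zero-point term; the full budget
# (sample mean, DEB distance, CRNL, photometry method, LMC geometry)
# combines to ~0.028 mag.
new_total = quad(0.0092, 0.0263, 0.0024, 0.003, 0.002)
new_pct = (10**(0.2 * new_total) - 1.0) * 100.0   # ~1.3% in distance
```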
Fitting the distance ladder with the system of equations given in R16 and retaining the systematic zero-point uncertainty for only the ground-based sample optimally leverages both samples. For each Cepheid with both a ground and a space-based measurement, we include a covariance term equal to the square of the intrinsic LMC dispersion measured here of 0.07 mag to account for this correlated error. For the ground-based sample, we include the differences in projected distance to the line of nodes using the model of Pietrzyński et al. (2019) and their mean LMC distance of μ=18.477±0.0263 mag based on the DEBs, which have already been corrected to the line of nodes. For all LMC Cepheids, we assume a mean [Fe/H]=−0.30 dex, which is chosen to be between the mean of −0.33 dex from 22 objects observed spectroscopically by Romaniello et al. (2008) and −0.27 dex, which is the mean of the photometric metallicity map of Choudhury et al. (2016) at the positions of the Cepheids from M15. This is slightly different from the value of −0.25 dex adopted by R16. Following Anderson & Riess (2018), we have also included a small correction of 0.0074 mag for the additional mean flux statistically and physically associated with Cepheids that is not resolved at the distances of the SN Ia hosts but that is resolved in the closer LMC, as discussed in Section 4. This is in addition to the use in R16 of artificial star measurements to account for mean additional light due to chance superposition on crowded backgrounds. Using only the LMC distance from Pietrzyński et al. (2019) to geometrically calibrate the Cepheid luminosities, we find H_0 = 74.22±1.82 km s^-1 Mpc^-1, including its systematic uncertainty calculated by the analysis variants method given in R16. The value is higher than the value of 72.04±2.67 km s^-1 Mpc^-1 from R16 by 2.2 km s^-1 Mpc^-1 (about 0.8σ or 2.9%), due primarily to the 1% decrease in the LMC distance between Pietrzyński et al. (2013)
and (2019) (which increases H_0 by 1%) and a ∼2% increase from the use of the HST photometric system for the Cepheids. The overall uncertainty in H_0 using the geometric LMC calibrations has declined by 40%: 25% because of the improved LMC distance and 15% because of the use of a single photometric system, which nullifies the relative zero-point uncertainty. Because the LMC Cepheids have a lower metallicity than those in SN Ia hosts (or the MW or NGC 4258), there is an additional uncertainty of 0.9% when using the LMC as an anchor due to the uncertainty in the empirically constrained luminosity-metallicity relation. These improvements together make the LMC, with a 2.4% total uncertainty in H_0, comparable in precision (indeed better) to the use of all MW Cepheid parallaxes as an anchor of the distance ladder, with the masers in NGC 4258 somewhat less precise at 3.4% uncertainty; all are individually consistent within 1.3σ in terms of their independent geometric distance uncertainties. In Table 5 we also list the result of combining the LMC with the MW Cepheid parallaxes (R18a), with the masers in NGC 4258, and every combination of using only a pair of anchors. These two-anchor (or one-anchor-out) combinations now have a smaller range of 1.07 km s^-1 Mpc^-1 (compared to 2.42 km s^-1 Mpc^-1 in R16) because of the increase in the result from the LMC. Indeed, leaving out any one of the three anchors by choice, which is a reasonable test of robustness, changes H_0 by only ∼0.5 km s^-1 Mpc^-1, or <0.7%. For those inclined to disfavor any one anchor, these combinations offer a best result without the influence of the given anchor. However, the best and preferred result comes from including all three anchors, giving H_0 = 74.03±1.42 km s^-1 Mpc^-1, a total uncertainty of 1.91% including systematics. In Table 6 and Figure 5 we give a detailed breakdown of all sources of uncertainty in the determination of H_0 here and compared to R16.
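The ∼0.9% metallicity contribution quoted above follows from multiplying the LMC's metallicity offset by the ±0.06 mag per dex uncertainty of the R16 luminosity-metallicity term. A rough check, where the assumption of exactly solar mean metallicity for the SN Ia host Cepheids is mine (slightly super-solar hosts would push the result up toward the quoted 0.9%):

```python
import math

GAMMA_ERR = 0.06     # uncertainty of the R16 metallicity term (mag per dex)
FEH_LMC = -0.30      # adopted mean [Fe/H] of the LMC Cepheids (dex)
FEH_HOSTS = 0.0      # assumed ~solar mean for SN Ia host Cepheids

# Distance-modulus uncertainty from the metallicity term, converted to a
# fractional distance (and hence H_0) error: delta_d/d = ln(10)/5 * delta_mu.
dmag = abs(FEH_LMC - FEH_HOSTS) * GAMMA_ERR      # 0.018 mag
pct = dmag * math.log(10.0) / 5.0 * 100.0        # ~0.8%, near the quoted 0.9%
```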
The primary changes between the present uncertainties in H_0 and those in R16 result from improvements in the anchor measurements from the LMC and MW. The contributed uncertainty from MW Cepheid parallaxes has decreased from 2.5% to 1.7% because of new parallax measurements from HST spatial scanning (R18b) and from Gaia Data Release 2 (R18a), and from the use of WFC3 to measure their photometry on the same photometric system as Cepheids in SN Ia hosts. These improvements in the MW anchor alone reduced the overall uncertainty in H_0 from 2.4% to 2.2% (R18a). An even greater improvement in the LMC anchor is now realized, decreasing its contributed uncertainty from 2.6% to 1.5%. While there is a small increase in uncertainty in the P-L intercept because of the smaller sample of LMC Cepheids here, this is more than offset by the smaller systematic uncertainty in their photometric zero-point. We also note that there is an increase in the overall uncertainty due to the relation between Cepheid metallicity and luminosity. The metallicity term we derived from our analysis of all Cepheid data (R16) is −0.17±0.06 mag per dex, similar to Gieren et al. (2018), who find −0.22 mag per dex in the NIR for a lower range of metallicity. The product of the mean, subsolar metallicity for the LMC Cepheids and the uncertainty in this term is 0.9%. The other two anchors have Cepheids with near-solar metallicities that are much closer to those in the SN hosts, so the overall uncertainty in H_0 due to metallicity is weighted down by these anchors to 0.5%.

Systematics: Cepheid Associated Flux

The photometric measurements of Cepheids from R16 in SN Ia hosts and NGC 4258 account for the mean additional light due to chance superposition on crowded backgrounds through the use of artificial star measurements.
However, the possibility of light from stars that are physically associated with the Cepheids and unresolved at the distances of the SN Ia hosts (5-40 Mpc), but resolved in the LMC at 50 kpc (or the MW at 2-3 kpc) and thus excluded from measurement, would have a differential effect that could bias the determination of H_0. Anderson & Riess (2018) quantified this "associated-light bias" by studying its two plausible sources, wide binaries (a_rel > 400 au) and open clusters (closer binaries are unresolved in all cases). They found that the mean effect of wide binaries was negligible (0.004% in H_0) because Cepheids dominate their companions in luminosity. Closer binaries, while more common, are unresolved in both anchor galaxies and SN hosts, so even the tiny contamination of Cepheid flux from a companion, ∼0.02% in distance, cancels along the distance ladder because of its presence for all Cepheids (assuming binarity is common in all hosts). To quantify the impact of open clusters, they analyzed the regions around a large sample of Cepheids in M31: 450 Cepheids with UV HST imaging from the PHAT program (Dalcanton et al. 2012). They found that 2.4% of Cepheids are in such clusters and that the photometric bias, averaging over Cepheids in or out of clusters, is 0.0074 mag for m_H^W. This value might be considered an upper limit to the bias because there is also a "discovery bias" against including even the small fraction of Cepheids in bright clusters in a distant sample: the additional constant flux that is unresolved for distant Cepheids in clusters would decrease the amplitude of Cepheid light curves. Anderson & Riess (2018) found that a mean bias for a Cepheid in a cluster in M31 of 0.30 mag in m_H^W corresponds to a bias of 0.8 mag at visual wavelengths, near or brighter than the limit of 0.5 mag contamination that Ferrarese et al. (2000) determined would preclude discovery of a Cepheid because of the flattening of its light curve.
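The population-averaged cluster bias above is just the clustered fraction times the per-Cepheid bias; with the quoted M31 numbers it reproduces the 0.0074 mag figure to within rounding of the inputs:

```python
# M31-based accounting for the associated-light bias from open clusters,
# with the fraction and per-Cepheid bias quoted in the text.
F_CLUSTERED = 0.024      # fraction of Cepheids found in clusters (PHAT)
BIAS_IN_CLUSTER = 0.30   # mean m_H^W bias for a Cepheid in a cluster (mag)

# Averaged over Cepheids in or out of clusters.
mean_bias = F_CLUSTERED * BIAS_IN_CLUSTER   # ~0.007 mag
```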
In the other direction, one might posit a somewhat larger clustered fraction in SN Ia hosts than in M31 (M31 being somehow unusual), but this direction is limited by the greater ages of Cepheids (30-300 Myr) than clusters, with only ∼10% of massive embedded clusters surviving for more than 10 Myr (Anderson & Riess 2018, and sources within). Indeed, M31 provides the best analog for the SN Ia hosts (a high-metallicity spiral) for which an up-close, external view of Cepheid environments is available. Such accounting for the MW may await improved parallaxes. In this regard, the LMC is unusual, with a greater frequency of Cepheids in clusters and a higher concentration of massive clusters (likely due to its high rate of recent star formation), with 7.2% of P>10 day Cepheids in clusters (with fewer than four Cepheids per cluster). The LMC also harbors two Cepheid-rich clusters, each with 24 Cepheids, eight times the number of Cepheids as the richest MW cluster. Because of the great resolution of HST in the LMC, this excess of clusters around Cepheids in the LMC has no photometric impact on the measurement of H_0. Here we have included the expected impact of such flux based on the example of M31.

Figure 4. The 4.4σ difference between local measurements of H_0 and the value predicted from Planck+ΛCDM. We show local results presented by Riess et al. (2016), reanalysis by C16 (Cardona et al. 2017), FK17 (Follin & Knox 2018), or FM18 (Feeney et al. 2017), the H0LiCOW lensing results from Birrer18 (Birrer et al. 2018), a replacement of optical SN data with NIR in DJL17 and B18 (Burns et al. 2018), and a revised geometric anchor from HST and Gaia DR2 parallaxes (R18a, b). Other early-universe scales are shown in blue. Possible physics causes for a 2%-4% change in H_0 include time-dependent dark energy or nonzero curvature, while a larger 5%-8% difference may come from dark matter interaction, early dark energy, or additional relativistic particles.
conclude that it does not produce an impediment to measuring H_0 to 1%. Higher-resolution imaging in the future from the James Webb Space Telescope or large ground-based telescopes with adaptive optics may allow for measuring the fraction of Cepheids in clusters in more distant galaxies. Prospects for Reducing the H_0 Uncertainty: SN Statistics and Systematics. As shown in Table 6 and Figure 5, with the new LMC anchor measurements in hand, the single largest source of uncertainty in H_0 now lies in determining the mean luminosity of the SN Ia calibrators. Future reductions in the uncertainty in H_0 will require increasing the number of these calibrators. The sample in R16 had 19 calibrators that were chosen according to SN Ia light-curve quality requirements and for their presence in nearby (z < 0.01), late-type, globally star-forming, non-edge-on hosts that would thus be expected to yield a good sample of Cepheids. New observations with HST by the SH0ES program are underway, which will double this sample to 38 calibrators, making the sample complete to z ∼ 0.011, and will reduce the uncertainty in this term by √2 and the likely overall error on H_0 to ∼1.5%. With the prospect of reaching sub-percent uncertainty in the mean of the SN Ia calibrators, even greater attention must be given to controlling potential systematic uncertainties in the measurement of SN Ia distances. One path to both better standardize SN Ia brightnesses and control systematics has been to search for additional observables that may correlate with distance residuals beyond light-curve shapes (Phillips 1993) and colors (Riess et al. 1996; Tripp 1998; Phillips et al. 1999). As the statistical leverage of larger and better-calibrated SN samples has grown (Betoule et al. 2014; Scolnic et al. 2018), evidence has arisen of an environmental parameter that appears to correlate with SN distance residuals, albeit at a lower level than the preceding SN parameters.
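The √2 improvement follows from simple error-in-the-mean scaling; as a hedged illustration (the per-calibrator scatter below is an assumed round number, not a value from the paper):

```python
import math

# If each SN Ia calibrator contributes an independent scatter sigma_1,
# the error on the calibrators' mean luminosity scales as sigma_1 / sqrt(N).
sigma_1 = 0.12  # mag, assumed per-calibrator scatter (illustrative only)

for n in (19, 38):
    print(n, round(sigma_1 / math.sqrt(n), 4))  # 19 -> 0.0275, 38 -> 0.0195

# Doubling the sample from 19 to 38 shrinks the mean's error by sqrt(2) ~ 1.41,
# independent of the assumed value of sigma_1.
improvement = math.sqrt(38 / 19)
print(round(improvement, 2))  # 1.41
```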
The most widely used form for this environmental parameter is the host galaxy mass, as done by nearly all recent cosmology analyses, including those from the SNLS (Sullivan et al. 2011), following its initial detection (Hicken et al. 2009; Kelly et al. 2010; Lampeitl et al. 2010; Sullivan et al. 2010); the size of the dependence has declined in more recent analyses, likely due to improved accounting for SN selection bias (Kessler & Scolnic 2017). Likewise, recent measurements of H_0 by R16 and Burns et al. (2018) have followed this approach and corrected for the empirical dependence of host mass in their samples. We can evaluate the systematic error resulting from SN Hubble-residual dependence on a different environmental parameter, such as star formation rate, color, metallicity, or population age. An environmental parameter can impact the determination of the Hubble constant if two conditions are met: (1) the parameter has a significant relationship with the distance residuals for the same SN sample and method of distance determination used to measure the Hubble constant and (2) there is a mean difference in the parameter between the SNe in Cepheid hosts and those used to calibrate the Hubble expansion. Jones et al. (2018a) looked for additional environmental dependencies in the same SN sample and distance-fitting method used by R16 to determine H_0 (the only such study of the R16 sample), including local and global color (in the UV, to address star formation), local and global mass, and local specific star formation (suggested by Rigault et al. (2018) to relate to environmental age). They found that none of these is significant (<2σ) in the R16 distance residuals (which already include a correction for host mass and SN selection bias), with the possible exception of local mass (∼3σ). Significance aside, Jones et al. (2018a) combined the sample differences and the sizes of the dependencies and, for the different environmental parameters, found an impact on H_0 of 0.3% to 0.5%. Roman et al.
(2018) also found no significant relation between Hubble residuals and local SN UV color for 123 SNe Ia at z < 0.1 (1.7σ without a host mass correction). In a study of SDSS SNe Ia, Rose et al. (2019) performed a principal component analysis to create the optimal combination of multiple additional parameters that could correlate with distance residuals, finding an additional correlation between the inferred environmental age and Hubble residuals with 2.1σ significance, which would produce a ∼0.4% change to H_0 (after accounting for the present host mass correction, which is included in R16) if the same correlation exists in the nearby SNe used in R16. These residual environmental dependencies are much smaller than the present overall uncertainty, of low significance, and, if real, would remain subdominant to the future statistical uncertainty in H_0 with a larger sample of SN Ia calibrators. The safest path to avoid possible (even unknown) environmental systematics would be to eliminate the potential cause for a mean difference in environments between the SNe in Cepheid hosts and those used to calibrate the Hubble expansion by employing the same criteria to select SNe in the two relevant samples. A simple approach would be to limit SNe in the Hubble flow sample to those in globally late-type, star-forming hosts. These are the same criteria used to select likely Cepheid hosts, and because the nature of the local SN environment is not relevant to, and thus not a factor in, this selection, the statistics of local environments would be similar in both samples. R16 employed both a late-type-only and a globally star-forming-only selection as variants for the determination of H_0 and demonstrated a change of H_0 < 0.3%. The advantage of this approach is illustrated in the study of Hα local to the SN site. In studies of the unpublished SN Factory sample, Rigault et al. (2013, 2018) suggested that the presence of local Hα, a consequence of a very young population, may have an important dependence on SN distance, with strong Hα (at a given local mass) producing fainter SNe (a similar effect to having a low-mass host). Because the Hubble flow sample may include SNe with early-type hosts and lower local Hα, a higher mean local Hα may be expected in the purely late-type (and globally star-forming) hosts of the Cepheid calibrator sample. Thus, if a local Hα relation existed in the SN sample used to determine H_0, it could erroneously raise its measured value. However, including only late-type-hosted SNe in the Hubble flow sample would remove the underlying cause of such a bias. Anderson et al. (2015) provide measurements of the relative strength of Hα at the sites of 98 SNe Ia in exclusively late-type, star-forming hosts, including 20 of the 38 selected for Cepheid measurements, which can be used to test this approach. Of the selected Cepheid hosts, 14 of 20 (70%) (or 8 of 11 from R16) have no detected Hα (at the 0% level relative to the host), while in the non-Cepheid-selected sample, 43 of 78 (55%) show no Hα. The mean of the Cepheid sample is, thus, consistent with (albeit lower in) Hα, demonstrating that limiting the samples to the same host type removes the source of a sample mean difference in Hα to the level expected from statistical fluctuations (or another factor unrelated to selection). This negates the potential for biasing the value of H_0, whether or not a dependence exists in the R16 Hubble flow sample (which has not been shown). In this case, an even balance of SNe with high local Hα among late-type, star-forming hosts is expected because the presence of SN-local Hα never enters their selection. Limiting the Hubble flow sample to the same criteria as the Cepheid hosts reduces the sample by less than half and thus has little impact on the precision of H_0.
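Whether 14/20 versus 43/78 differs by more than chance can be checked with a standard two-proportion z-test (an illustrative calculation of ours, not one performed in the text):

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """z statistic for the difference between two sample proportions."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Cepheid-selected hosts: 14 of 20 with no detected Halpha;
# non-Cepheid-selected sample: 43 of 78.
z = two_proportion_z(14, 20, 43, 78)
print(round(z, 2))  # ~1.2 sigma: consistent with a statistical fluctuation
```

A difference of about 1.2σ is exactly the "level expected from statistical fluctuations" described above.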
Present Status. The recent landscape of model-independent, direct measurements of H_0 from the late universe and predictions from the early universe is shown in Figure 4. While it is difficult, and perhaps debatable, to identify the precise threshold at which a tension passes the point of being attributable to a fluke, the one presently involving H_0 appears to have passed that point. The higher, local value of H_0 from the distance ladder has been determined through five independent geometric absolute calibrations of the Cepheid P-L relation, including new MW parallaxes from HST spatial scans (R18b) and from Gaia DR2 (R18a). As shown in Table 5, any one of the three types of anchors can be removed with a change in H_0 of <0.7%, much smaller than the present 9% discrepancy. The relative distances from Cepheids match those from the Tip of the Red Giant Branch (Jang & Lee 2017; Jang et al. 2018; Hatt et al. 2018a, 2018b) and Miras (Huang et al. 2018; Yuan et al. 2017) to a mean precision of 2%-3%. Strong-lensing results from the HOLiCOW team have corroborated the local value of H_0 independent of the distance ladder (Birrer et al. 2018). At the other end of cosmic time, the low expected value of H_0 predicted from the early universe has been confirmed by different sources of CMB and BAO data and by an independent calibration of BAO from measurements of Ω_B and knowledge of the CMB temperature (Addison et al. 2018). It is important to note that "inverse distance ladders" and calibrations of BAO or SNe originating from the sound horizon (z ∼ 1000) or from CMB measurements all fall into this "early universe" category of H_0 inference. With multiple, independent corroborations now demonstrated at both ends of cosmic history, we may need to seek resolution in a refinement of the model that joins them, (vanilla) ΛCDM.
A new feature in the dark sector of the universe appears increasingly necessary to explain the present difference between views of the expansion seen from the beginning and from the present. The general considerations of Aylor et al. (2018) argue for an injection of energy or expansion preceding recombination, which would shorten the time for the Universe to become transparent and reduce the sound horizon inferred from the CMB and used to calibrate BAO, with a specific scenario offered by Poulin et al. (2018). The dark radiation sector (i.e., neutrinos) also provides a possible source of alleviation of the H_0 difference through interactions of additional components (Kreisch et al. 2019). Continued pursuit of precision in the determination of H_0 is also needed to transition from the discovery of a difference to a diagnosis of its source. Additional observations of giants and pulsating stars in more hosts of SNe Ia are underway and should further refine H_0. Less predictable but highly sought are contributions from gravitational-wave sources as standard sirens (Schutz 1986; Abbott et al. 2017; Chen et al. 2018). Improvements in parallaxes from future Gaia data releases are also expected to continue to increase the precision of the distance ladder in the near term.
Cyberphysical Security for Industrial Control Systems Based on Wireless Sensor Networks. In recent years, the security of cyberphysical systems (CPS) has received increasing attention. The most common example of a CPS is the industrial control system (ICS), which is prevalent in almost every critical infrastructure, such as electricity, oil and gas, water, chemical processing, and healthcare. ICS security has therefore become a top priority in the security field. Based on a general description of the wireless sensor network (WSN), which is an important element of CPS, this paper first gives a comprehensive and deep understanding of CPS. Secondly, it provides a comprehensive description of the current situation of ICS security in the U.S. and the corresponding approaches the U.S. government and some industries have taken, including management, technology, standards and regulations, and research at national laboratories. Thirdly, the paper reviews research on ICS in Europe, focusing on the most important report issued by ENISA. Then, compared with developed countries, it presents the grim situation of ICS security and describes the efforts of ICS security management in China. Introduction. A cyberphysical system (CPS) is a system of systems in which cyber technologies and physical processes are highly integrated in order to add new capabilities to the physical system. It is an emerging area in the 21st century, as most of the world's leading economies are seeking competitiveness in this field. The U.S. President's Council of Advisors on Science and Technology (PCAST) found in a report [1] that cyberphysical systems "are now a national priority for Federal R&D. Improved methods are needed for the efficient development of these systems. These methods must assure high levels of reliability, safety, security, and usability."
The implementation of CPS depends largely on wireless sensor networks (WSNs), which collect huge amounts of data from the physical world and transmit timely control commands from the cyberworld. It is because of the use of WSNs that CPS can correctly perceive the physical world and react to its changes in real time. A typical WSN is usually deployed inside or near the objects being monitored. A WSN node consists of a sensing unit (including a sensor and an analog-to-digital converter), a processing unit, a transceiver unit, and a power unit [2]. Many single nodes can build a multiple-hop network in a self-organized way through initial wireless communication and consultation. As shown in Figure 1, each WSN is equipped with a gateway connected to a transmission network, which is made up of a series of wireless network nodes. Through this pathway, the sensed data can be sent from the sensing area to the sink, which has the functions of remote access and data processing. The sink then carries out bulk data transfer to the database or control center through a local gateway connected to various networks [3].
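As a hedged illustration of the node architecture just described (all class and field names below are our own, not from the paper), the sense → process → transmit → sink path can be sketched as:

```python
from dataclasses import dataclass, field

class Sink:
    """Collects packets and would forward them in bulk to a control center."""
    def __init__(self):
        self.buffer = []

    def receive(self, packet: dict) -> None:
        self.buffer.append(packet)

@dataclass
class SensorNode:
    node_id: int

    def sense(self) -> float:
        # Stand-in for an ADC reading from the sensing unit (e.g. temperature).
        return 21.5

    def process(self, raw: float) -> dict:
        # The processing unit packages the raw reading for transmission.
        return {"node": self.node_id, "value": round(raw, 1)}

    def transmit(self, packet: dict, next_hop: Sink) -> None:
        # The transceiver unit forwards the packet toward the sink.
        next_hop.receive(packet)

sink = Sink()
for node in (SensorNode(1), SensorNode(2)):
    node.transmit(node.process(node.sense()), sink)
print(len(sink.buffer))  # 2 packets buffered at the sink
```

In a real deployment the hop between node and sink would traverse the self-organized multihop network and gateway described above; the sketch collapses that path into a single call.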
Compared with traditional wireless sensor networks, CPS attaches more importance to timely perception, deep interaction, smart processing, and real-time reaction. As shown in Figure 2, a CPS Unit, which may contain many wirelessly connected CPS nodes, is an individual entity defined by its task, and different CPS Units can exchange information with each other to acquire a clearer cognition. Each CPS Unit perceives changes in the environment in a timely fashion through its sensing function, analyzes the data through its processing function, exchanges information with other CPS Units through its communication function, and finally fuses the information to acquire a correct and comprehensive understanding of the environment. In relatively simple conditions, the CPS Units can make decisions by themselves according to the information shared between units and can issue execution commands locally. In other conditions, the CPS Units need to transmit the fused information to a remote smart control center for more comprehensive and complex processing and a real-time decision, according to the cognition and rules mastered by the system; the execution command is then transmitted to all related CPS Units to respond to the changes in the physical world. The whole procedure, full of feedback, is real-time, and the process of perception, communication, computing, and execution forms a closed circuit. In this way, a virtuous circle of timely feedback and decision-making is obtained. As noted above, the WSN is the medium of interaction between the cyberworld and the physical world, so it is essential for the implementation of CPS. A WSN focuses on the collection and management of perception data, while CPS pays more attention to the deep integration of the cyberworld and the physical world, achieving real-time information collection and input from the physical world to the cyberworld and real-time decision output from the
cyberlayer to the physical layer. Comparing CPS with WSN, we can draw a deeper understanding of CPS from the following aspects: (i) Physical Components. The energy, storage space, and computing capacity of WSN nodes are tightly limited, so only simple operations can be carried out in the nodes, and most analysis tasks are finished in the control center. In CPS, however, many decisions can be made in the physical layer by the CPS Units, owing to the interaction between the CPS Units and the CPS nodes in each unit and their relatively strong computing and communication capabilities. (ii) Resource Allocation. As sensors are usually deployed in unmanned areas or harsh environments, sustained use of resources is a big problem for both WSN and CPS. A WSN aims to save energy by providing only limited functions; in effect, it acquires a longer lifetime at the cost of reduced intelligence and gives little attention to resource allocation [4]. CPS, by contrast, aims to finish its tasks with limited resources via reasonable resource allocation. All resources can be dynamically reorganized and reconfigured according to the demands of different tasks; for example, different sensor groups may be in work status or sleep mode depending on the task. (iii) Network Convergence. The physical devices connected to a WSN have to obey many rules to ensure a certain degree of connectivity and aggregation of the collected data [5]. CPS, however, can connect to many different WSNs and devices, is compatible with different standards and protocols, and can handle collected data with differing connectivity and aggregation.
(iv) Time Delay. In a traditional WSN, because of limited computing capacity and little interaction between nodes, the data transmitted to the control center is always in a relatively raw state. The control center needs to make a comprehensive analysis of all the original data, so there is always an obvious time delay before a device learns what to do next [6]. In CPS, each connected CPS Unit has relatively strong computing and communication capacity, which allows it to share tasks with the control center. In this way, a correct understanding of the environment and quick feedback can be achieved: the whole CPS is involved in the calculation and processing of data, reducing the delay and ensuring a real-time reaction to changes in the physical world. In a word, CPS is an intelligent, large-scale integrated control system, which achieves a seamless fusion of the cyberworld and the physical world, with the adaptability to respond to uncertain changes in the environment. Though the theory of WSN greatly contributes to the implementation of CPS, it brings data security challenges at the same time, such as availability, authorization, authentication, confidentiality, integrity, nonrepudiation, freshness, forward secrecy, backward secrecy, and location awareness [7], because of the widespread invisible wireless communication, unrestricted large-scale deployment, and increasingly powerful computing and storage capacities [8]. From the perspective of information security, bad-information detection [9] in wireless networks can be a new research field, and the theory of protecting user privacy in phoneprotector [10] may be available for reference. As CPS is "a system of systems" and its popularity is increasing rapidly, interconnections of CPS are growing in size, complexity, and vulnerability. Uncontrolled CPS risk will definitely not only bring
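The task-driven work/sleep reconfiguration described in item (ii) above can be sketched as follows (the task names and sensor groups are hypothetical, chosen only for illustration):

```python
# Hedged sketch of task-driven resource allocation: sensor groups switch
# between "work" and "sleep" depending on the task the CPS is executing.
TASK_GROUPS = {                      # hypothetical task -> required groups
    "leak_detection": {"pressure", "acoustic"},
    "idle_monitoring": {"pressure"},
}

def schedule(task: str, all_groups: set) -> dict:
    """Return a work/sleep assignment for every sensor group."""
    active = TASK_GROUPS.get(task, set())
    return {g: ("work" if g in active else "sleep") for g in sorted(all_groups)}

groups = {"pressure", "acoustic", "camera"}
print(schedule("idle_monitoring", groups))
# {'acoustic': 'sleep', 'camera': 'sleep', 'pressure': 'work'}
```

Switching tasks simply reassigns the same physical resources, which is the dynamic reorganization that distinguishes CPS from a fixed-function WSN.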
significant damage to our economy but also pose a great threat to human life. The most common example of CPS is the industrial control system (ICS), which is prevalent in almost every critical infrastructure, including electricity, oil and gas, water, chemical processing, and healthcare [11]. Studying the strategies and programs adopted worldwide will help sort out the best practices of ICS security. As ICS touches the nerves of almost every critical economic entity around the world, many countries have made efforts to cope with the risks. However, each country has its own distinct basic conditions, and their approaches to ICS security will never be completely the same. The world's leading economies, such as the United States, Europe, and China, walk in the forefront of world ICS security. This paper explicitly studies the research trends in the fields of management, technology, and standards and makes a comprehensive analysis of the status quo of ICS security research in those countries. This paper is organized as follows: Section 1 gives an overview of cyberphysical systems, based on a general description of WSN and a brief introduction to ICS. Section 2 presents the details of ICS and lists some serious ICS security incidents to show that it is becoming easier to launch attacks on ICS and that the impacts of the attacks are widening. Section 3 describes the relatively complete information security management system and technical system for ICS in the United States from four dimensions: ICS management system, technology and research, emergency readiness, and standards and regulations. Section 4 shows some important measures taken by European countries to improve the security, safety, and resilience of European ICS. Section 5 presents the more severe situation facing ICS in China, compared with developed countries, and the great efforts the government and related industries have made to improve ICS security.
Industrial Control System. ICS are typically used in industries such as electric power, water and wastewater, oil and natural gas, chemical, transportation, pharmaceutical, pulp and paper, food and beverage, and discrete manufacturing (e.g., automotive, aerospace, and durable goods). These control systems are often highly interconnected and mutually dependent, and they are usually critical to those industries. The most common types of ICS include SCADA (supervisory control and data acquisition), DCS (distributed control system), and PLC (programmable logic controller). (i) SCADA is generally used to control dispersed assets using centralized data acquisition and supervisory control [12]. (ii) DCS is generally used to control production systems within a local area, such as a factory, using supervisory and regulatory control [13]. (iii) PLC is generally used for discrete control in specific applications and provides regulatory control [14]. ICS used to be isolated systems, as they were only applied to specific industries or specific areas [15], so the software and hardware of ICS, including their protocols, were customized for each application. However, the widely adopted Internet Protocol makes it possible to integrate existing systems into a more interconnected whole in order to add new capabilities, such as remote access and business cooperation, to the traditional systems. It is now a trend to use standard communication protocols, operating systems, and computing devices designed for compatibility [16]. ICS is becoming more and more interconnected and accessible through public networks, which has introduced greater risk.
In the past, the SCADA systems in ICS were independent, with no connections to other systems, while at present ICS is widely distributed and networked; since these systems depend on open Internet protocols, they are vulnerable to many external remote threats [17]. As new IT-related capabilities are added, ICS is no longer isolated from the outside world as its predecessor systems were, posing a greater risk to its security. According to a 2010 report [18] named "Security Incidents Rise in Industrial Control Systems," only 10 percent of industrial control systems are actually connected to the Internet, yet there was a growing number of cybersecurity incidents from 2005 to 2010 in the ICS of water, wastewater, and utility power plants. Therefore, there is an urgent need for ICS security measures that not only cope with the already known IT-related vulnerabilities but are also designed for specific ICS needs. Coupled cybersecurity threats appear in many complex forms, including physical damage, data tampering, sensor spoofing, code injection, cyberintrusion, theft, and vandalism. To date, very little work has been done to ensure the cyberphysical security of such systems deployed in unstructured, potentially adversarial environments. In recent years, security incidents involving ICS have emerged constantly in many areas. Table 1 shows some major ICS security incidents. With the popularity of ICS and its increasingly intelligent and networked character, the harm is worsening. We can see from the list that the number of incidents has been increasing in recent years. As ICS becomes larger and more interconnected, it is easier to launch attacks and the impacts of the attacks are widened. As ICS touches the nerves of almost every critical economic entity around the world, many countries have taken efforts to cope with the risks.
ICS Security in the United States. From a global perspective, the United States is at the forefront of the world in the field of ICS safety and security. As the most developed country in IT and industrial automation, the United States has leadership and discursive power in these fields. As early as 2002, the U.S. government attached great importance to ICS security, and in the last decade a lot of work has been done in ICS security management. A complete information security management system and technical system for ICS have now been established. In terms of ICS information security research, the U.S. focuses on the petrochemical, power, and energy industries; and in terms of management systems, technical systems, and standards and regulations, the U.S. Department of Homeland Security (DHS), the Department of Energy (DOE), and the national laboratories jointly promote ICS information security in the United States. ICS Management System. The U.S. has developed special programs in the field of ICS information security, led mainly by the Department of Homeland Security and the Department of Energy, respectively. There has been significant development in SCADA systems in the electric sector, and noteworthy progress has been made in identifying and mitigating vulnerabilities under the sponsorship of the Department of Energy.
In 2004, the Department of Energy and the Department of Homeland Security established two multiyear programs to protect the nation's infrastructures against attacks from hackers, virus writers, disgruntled employees, terrorist organizations, and nation states. The National Supervisory Control and Data Acquisition (SCADA) Test Bed was funded by the Department of Energy (Office of Electricity Delivery and Energy Reliability, DOE-OE) to provide a real test environment to systematically analyze, test, and improve cybersecurity features for a variety of ICS and to help industry and government assess the vulnerability of ICS and test the security of ICS hardware and software [19]. The Control Systems Security Center was funded by the Department of Homeland Security (National Cyber Security Division, NCSD) to identify and develop solutions to protect vital infrastructures from cyberattack. The center provides a centralized location for vulnerability assessment, tool development, research, and incident response to eliminate ICS security risks in critical infrastructure and key resources sectors. Both programs are key efforts in response to President Bush's National Strategy to Secure Cyberspace [20] and employ experts in control systems operation, design, cybersecurity, and risk analysis. State-of-the-art research and testing facilities enable the running of mock exercises and calculated scenarios on full-scale control systems. The testing results provide the owners and manufacturers of the facilities with the data needed to improve cybersecurity standards within their control systems. At the same time, the Department of Homeland Security and the Department of Energy developed the ICS security roadmap. Beginning in 2005, they cooperated with the Energy Infrastructure Protection Division of Natural Resources Canada and completed the Roadmap to Secure Control Systems in the Energy Sector (2006) [21]. This roadmap provides a common vision and strategic framework for the government,
research institutions, and universities to develop, deploy, and maintain control systems that can survive intentional cyberattacks without losing critical capabilities. Later, in 2011, they updated the roadmap with the Roadmap to Achieve Energy Delivery Systems Cybersecurity (2011) [22], addressing the "changing landscape," "building on success and addressing gaps," "advancing threat capabilities," and "emphasizing a culture of security." Table 2 gives an insight into the roadmap, and Table 3 lists the projects and efforts made by industry and government. Among the leading organizations, six national laboratories in the United States launched a systematic and comprehensive study of ICS, covering ICS security standards, protocol development, research on industrial control security threats and vulnerabilities, research and development of security control technology, and so forth. They provide strong support for U.S. security management of ICS. (i) Idaho National Laboratory (INL). INL's SCADA Security Center, under the auspices of the Department of Energy, presided over the National SCADA Test Bed (NSTB) Program, which was started in 2003. NSTB has now completed assessments of control systems, control system components, and network vulnerability for 37 organizations, including assessments of control system components for 14 organizations and of control systems for 15 organizations, as well as on-site assessments of infrastructure systems for 8 organizations.
INL assists the Department of Homeland Security in developing self-assessment tools for control system network security and in training on control system network security. INL has issued a series of industrial control security research reports, including "The SCADA Network Security Assessment Methods" (2005) [23], "The Control System Network Security: Defense Strategy in Depth" (2007) [24], and "Common Network Security Vulnerabilities in the Assessment of Control Systems" (2008) [25]. Its current research focuses on a tool suite for situation awareness in control systems. (ii) Sandia National Laboratories (SNL). SNL was established in 1949 under the U.S. Department of Energy, with a specific research direction on infrastructure, particularly SCADA system security and global critical energy infrastructure protection. In the course of fulfilling its national security mission over more than 50 years, Sandia has developed deep expertise in protecting critical infrastructure. Sandia National Laboratories and Los Alamos National Laboratory (LANL) are the prime contractors for the National Infrastructure Simulation and Analysis Center (NISAC), and they integrate the two laboratories' expertise in the modeling and simulation of complex systems for evaluating national preparedness and security issues. NISAC is a modeling, simulation, and analysis program that prepares and shares analyses of critical infrastructure and key resources, including their interdependencies, vulnerabilities, consequences of disruption, and other complexities. NISAC is under the direction of the Department of Homeland Security.
SNL set up SCADA laboratories and a research center to study the security of SCADA systems, focusing on the analysis of vulnerabilities in SCADA systems and components to support highly assured SCADA systems. The SCADA security research center consists of several test-bed facilities, which can carry out modeling, simulation, and verification for critical infrastructure. The research center focuses on the security of current control systems and the development of next-generation control systems; its studies include SCADA assessment and SCADA engineering solutions. SNL has many research results in ICS security, encompassing the assessment, frameworks, metrics, and protocols of ICS security, and it has published "Guide to Critical Infrastructure Protection Cyber Vulnerability Assessment" [26], "Security Framework for Control System Data Classification and Protection" [27], "Security Metrics for Process Control Systems" [28], and "Secure ICCP Integration Considerations and Recommendations" [29].
SNL currently focuses on the development of the trust anchor (an independent testing and control device) and has carried out a program to protect the life cycle of the process control system from attack using the trust anchor. Critical infrastructures provide a diverse range of services, from supplying basic necessities such as power and water to providing the information foundation for the economy. The systems, facilities, and functions that make up the nation's critical infrastructure are essential to its vitality, security, and quality of life, and ensuring the smooth operation of the sophisticated and highly interdependent components of critical infrastructures is a crucial but complex challenge, especially in the face of today's troubling threats. ORNL's scientific programs focus on materials, neutron science, energy, high-performance computing, systems biology, and national security. ORNL is currently conducting testing research on the network security of SCADA systems using portable acceptance testing and protocol analysis.

(iv) Argonne National Laboratory (ANL). ANL's research on ICS security focuses on the field of SCADA systems, mainly the SCADA systems of natural gas pipeline transportation. ANL has carried out investigation and assessment of SCADA systems and has developed a variety of tools, techniques, and methods for assessing and improving them.

(v) Pacific Northwest National Laboratory (PNNL). PNNL was established in 1965 in Washington. It is committed to solving the most intractable problems of energy, the environment, and national security. PNNL is one of ten U.S. Department of Energy national laboratories. PNNL has proposed the concept of SSCP (secure SCADA communication protocols), and its ongoing research includes building a secure communication architecture for the energy industry, developing field-device management software and encryption trust-management software, and improving its protocol analyzer.

(vi) Los Alamos National Laboratory (LANL).
Established in 1943, LANL is also part of the U.S. Department of Energy. It is famous for developing the world's first atomic bomb. The key research areas of LANL include national security, space exploration, renewable energy, medicine, nanotechnology, and supercomputing.

LANL currently focuses on signature science. To advance signature science, the cyberphysical security challenges associated with the forward deployment of measurement systems, such as wireless sensor networks or robotic swarms carrying measurement payloads, must be addressed. LANL is carrying out research on communication and is committed to developing a detailed cost-benefit modeling tool for the next-generation SCADA communication framework, to help operators select an appropriate communication technology for each network node or level.

Technology and Research. In its research mechanisms, specific technologies, and measures, the U.S. has made a great effort. In terms of mechanism, the U.S. established a technical system that is coordinated and managed by national departments and supported in technology by national professional teams. In terms of specific technologies and measures, the U.S. set up assessment methods combining on-site evaluation and laboratory evaluation relying on simulation platforms. Simulation-platform-based authentication services and self-controlled evaluation services have become an inevitable trend in ICS security.

Since 2006, the U.S. National Science Foundation (NSF) has awarded large amounts of funding to research projects on CPS. Many universities and institutes participate in these projects. Table 4 gives a brief outline of the projects NSF has funded at universities.
The Power Infrastructure Cybersecurity Laboratory of Iowa State University focuses on a cyberphysical systems framework for risk modeling and mitigation of cyberattacks on the power grid that accounts for the dynamics of the physical systems as well as the operational aspects of the cyber-based control network. PowerCyber, the PENET Tool, and TraceDos have been developed to serve research on power grid security.

The Penn Research in Embedded Computing and Integrated Systems Engineering (PRECISE) Center serves as the convergence of related research efforts by affiliated faculty spanning the CPS domain. Currently, PRECISE researchers are actively collaborating with CPS application-area experts to develop next-generation medical systems, automotive systems, green energy buildings, wireless industrial automation, and avionics.

At the University of California, Berkeley, there are mainly three research institutions developing CPS-related knowledge. The Center for Hybrid and Embedded Software Systems (CHESS) aims at developing model-based and tool-supported design methodologies for real-time fault-tolerant software on heterogeneous distributed platforms. Partners for Advanced Transportation Technology (PATH) develops solutions to the problems of California's surface transportation systems through cutting-edge research. The Cyber-Physical Cloud Computing Lab (CPCC) explores the interaction of ubiquitous computing, cloud computing, robotics, and oceanic science. Moreover, the Industrial Cyber-Physical Systems Center (iCyPhy) identifies and develops new engineering techniques that will make it easier to successfully build products and services that combine complex software, hardware, and mechanical components. Last but not least, the TerraSwarm Research Center is addressing the huge potential of pervasive integration of smart, networked sensors and actuators into our connected world.
The IMPACT mobile computing lab at Arizona State University focuses on developing protocols and middleware for pervasive and mobile computing applications. Currently, it is developing a sensor-network-based medical monitoring infrastructure called Ayushman, which aims to provide dependable, secure, real-time automated health monitoring and to serve as a realistic environment (test bed) for testing communication protocols and systems for medical applications.

Carnegie Mellon CyLab is a world leader in both technological research and the education of professionals in information assurance, security technology, business, and policy, as well as security awareness among cybercitizens of all ages. One of its main research areas is the security of cyberphysical systems, with about 40 projects to date. Its objective is to build a new generation of technologies that will lead to measurable, available, secure, trustworthy, and sustainable computing and communications systems, as well as associated management and policy tools that enable successful exploitation of the new technologies.

The Cyber-Physical Systems Laboratory (CPSL) at Washington University in St. Louis performs cutting-edge research on real-time systems, wireless sensor networks, embedded systems, and cyberphysical systems that cross-cut computing, networking, and other engineering disciplines. It has been awarded NSF grant 1329861 for research on "Safety-Feature Modeling and Adaptive Resource Management for Mixed-Criticality Cyber-Physical Systems."

The Cyber-Physical Systems Laboratory (CyPhyLab) of the University of California, Los Angeles is currently conducting research on the modeling, analysis, and control of real-time, embedded, networked, and distributed systems. It has been awarded NSF grant 1239085, "Correct-by-Design Control Software Synthesis for Highly Dynamic Systems," and NSF grant 1035916, "Foundations of Secure Cyber-Physical Systems."
The Cyber Physical Systems Integration Lab of the University of Illinois at Urbana-Champaign focuses on reduced-complexity architectural design to compose large-scale, safe, and secure cyberphysical systems, such as avionics and medical systems. It is undertaking the safe "MD PnP" (Medical Device Plug and Play) research program to identify the broad requirements for the integration of medical devices in high-acuity settings. Another famous lab of this school is the Cyber-Physical-Human (CPH) Systems Lab, which aims at the development of robust, fault-tolerant architectures that would ensure predictable operation of the system within the given hardware constraints, despite uncertainties in physical processes and cyber and human faults.

Trustworthy Cyber Infrastructure for the Power Grid (TCIPG), whose researchers are mainly from the University of Illinois at Urbana-Champaign, Dartmouth College, the University of California at Davis, and Washington State University, focuses on securing the low-level devices, communications, and data systems that make up the power grid, to ensure trustworthy operation during normal conditions, cyberattacks, and/or power emergencies.
Besides these, Bruce McMillin of Missouri University of Science and Technology, Matthew Might and Chris Myers of the University of Utah, Inseok Hwang of Purdue University, Yuhong Zhang of Texas Southern University, Francesco Bullo of the University of California, Santa Barbara, and Sriram Sankaranarayanan of the University of Colorado Boulder have also contributed to the development of CPS security research. In addition, there is the Cyber-Physical Systems Virtual Organization (http://cps-vo.org), supported by NSF, which fosters collaboration among CPS professionals in academia, government, and industry in the United States. It can be regarded as a broad community of interest for people who work on a wide range of CPS-related disciplines with different approaches, methods, tools, and experimental platforms. They are driven by a shared goal: to advance human knowledge in the science and engineering of CPS. Although researchers have made some progress in modeling, energy and security control, software design approaches, and so forth, research on CPS is still in an embryonic stage.

Emergency Readiness. "Computer Emergency Readiness Team" originally referred to the first team of the CERT Coordination Center (CERT/CC), established at Carnegie Mellon University in 1988, which works jointly with DHS and contributes expertise to protecting the nation's information infrastructure by coordinating defense against and response to cyberattacks. Now, the CERT name is licensed to other teams around the world. Worldwide, there are more than 250 organizations that use the name "CERT" for cybersecurity response; US-CERT (United States Computer Emergency Readiness Team) is independent of these but may coordinate with them on security incidents [30].
The Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) of the Department of Homeland Security collaborates with US-CERT, focusing on ICS security, and has carried out related work, including response to control-system incidents, vulnerability and malicious-code analysis, on-site support for incident response and forensic analysis, situational awareness in the form of actionable intelligence, and coordinated, responsible vulnerability disclosure. For control-system security, US-CERT publishes documents to assist in determining vulnerabilities and improving control-system security, including vendor-specific vulnerabilities and solutions. The "ICS-CERT Monthly Monitor" published by ICS-CERT not only includes news, reports, and announcements in the ICS field but also reports newly discovered ICS vulnerabilities in a timely manner.

Standards and Regulations. The United States has formed a set of national regulations and standards as well as industry standards and specifications. As shown in Table 5, it has established a series of management systems at the national level, developed a number of large special programs, developed a technical and research system that is coordinated and managed by national departments, set up assessment methods combining on-site evaluation and laboratory evaluation relying on simulation platforms, and passed a set of standards and regulations, from national standards to regulations supporting the key sectors of energy, nuclear facilities, the chemical industry, and people's livelihood.

In 2011, the European Network and Information Security Agency (ENISA) conducted a study on ICS security, identified threats, risks, and challenges, and took stock of national, European, and international initiatives. Moreover, based on the active collaboration of experts from ICS-related sectors, the study released a report named "Protecting Industrial Control Systems" [47]. This report, with a main report and six annexes, is currently the most
important and comprehensive result of ICS security research in Europe.

ICS Security in Europe

The main report proposes good practices and recommendations which aim at helping to improve the security, safety, and resilience of European ICS systems. The six annexes of the report can be regarded as a reference manual for those undertaking research on ICS security. Annex I presents the main results of a desktop research phase; it provides a comprehensive overview of the current panorama of ICS security. Annex II provides a detailed analysis of the data gathered from the interviews and survey in which ICS security experts participated. Annex III is a compilation of current security guidelines and standards for ICS. Annex IV includes a complete list of initiatives related to ICS security. Annex V provides a detailed description of the key findings. Annex VI includes the minutes of the workshop.

The final report proposes seven recommendations to the public and private sectors involved in the area of industrial control systems. Recommendation (1) is the creation of pan-European and national ICS security strategies. Recommendation (2) is the creation of a good-practices guide for ICS security. Recommendation (3) is the creation of ICS security plan templates. Recommendation (4) is to foster awareness and training. Recommendation (5) is the creation of a common test bed or, alternatively, an ICS security certification framework. Recommendation (6) is the creation of national ICS computer emergency response capabilities. Recommendation (7) is to foster research in ICS security leveraging existing research programs. These recommendations intend to provide useful and practical advice aimed at improving current initiatives, enhancing cooperation, developing new measures and good practices, and reducing barriers to information sharing.

CyPhERS Program.
The CyPhERS (Cyber-Physical European Roadmap and Strategy, http://cyphers.eu) project aims at combining and expanding Europe's competence in embedded and mobile computing and in the control of networked embedded systems. The main objective of the project is to develop a European strategic research and innovation agenda for cyberphysical systems (CPS) to ensure Europe's competitiveness in this emerging field.

CPSoS. CPSoS (Towards a European Roadmap on Research and Innovation in Engineering and Management of Cyber-physical Systems of Systems, http://www.cpsos.eu) is a 30-month Support Action supported by the European Commission under the FP7 programme. It aims to build constituencies for a European R&I agenda on systems of systems (SoS). CPSoS provides a forum and an exchange platform for systems-of-systems-related communities and ongoing projects, focusing on the challenges posed by the engineering and operation of technical systems in which computing and communication systems interact with large, complex physical systems. The final outcomes of the project will include the identification of industrial and societal needs and of the state-of-the-art tools and theories for cyberphysical SoS.
ICS Security in China

In China, ICS has emerged in various fields, such as industry, energy, transportation, water, and municipal sectors. In recent years, with the deep integration of information technology and industry and the rapid development of the Internet of Things, intelligent and networked control systems have become the development trend of industrial automation in China.

Compared with developed countries, ICS in China faces a more severe situation. Firstly, there are uncontrollable risks because of the low proportion of domestic ICS products. The proportion of domestic ICS products is very low, especially for PLC products; the domestic PLC market is mostly occupied by large international companies such as Siemens, Mitsubishi, and Omron. Overreliance on foreign products leads to uncontrollable risk, making it difficult for domestic enterprises to carry out the work of testing, assessment, prevention, and management of ICS security. Secondly, domestic products focus more on efficiency and lack the necessary security mechanisms. Thirdly, enterprises pay insufficient attention to ICS security, so it is difficult to respond effectively to security threats. Fourthly, domestic ICS risk assessment lags behind that of developed countries.

One main problem is that an ICS test platform has not yet been built. For various reasons, the test environment for ICS security testing is not yet established, and the implementation of risk assessment is therefore restricted. Due to a lack of technical capacity, special tools for ICS are mainly procured from developed countries. Moreover, neither at the national level nor at the industry level has China developed ICS security standards. ICS risk assessment is an interdisciplinary field: employees need not only traditional information security skills but also automation-control and industry knowledge, and professionals in ICS risk assessment are scarce.
Faced with the grim situation of ICS security, China needs to comprehensively strengthen ICS security management. Both the government and the relevant industries have attached great importance to the security of ICS, especially since the strike of "Stuxnet" in 2010. Beyond this concern, the government takes the lead in mitigating ICS risk, taking measures on several aspects, from management to technical support. The efforts of China's ICS security management rest on the following three aspects.

Policy Guidance. In September 2011, the Ministry of Industry and Information Technology (MIIT) published "The Notice on Strengthening the Information Security Management of Industrial Control Systems" ([2011] 451), in order to raise wide awareness of the importance and urgency of ICS security and to call for strengthening the security of basic control facilities and SCADA systems in major industrial areas.

Three months later, in December 2011, the Ministry of Industry and Information Technology published "The Notice on the Survey of Critical Industry ICS Risk" ([2011] 1003), requiring every region to make a basic survey of all the critical industries, including steel, chemistry, oil, electricity, gas, water, heat, and all kinds of transportation, regarding their ICS security. This was taken as a follow-up to Notice 451.
In mid-2012, the State Council published "Several Opinions of the State Council on Vigorously Promoting Informationization and Securing Information Security" ([2012] 23), demanding that local governments and related departments secure ICS and launch periodic security inspections and risk assessments, especially for critical ICS related to public safety. It also demands that critical products in critical areas be inspected and that a timely reporting mechanism for risks and vulnerabilities be established. Also in 2012, the State Council published "The Circular of Carrying out Inspection Action in Network and Information Security in Important Fields" ([2012] 30), demanding strict inspection of critical IT systems and ICS, including oil, water, and transportation.

Moreover, besides these regular inspections and self-inspections, the government has also published circulars demanding the organization of spot-check teams to help check the risks and vulnerabilities of ICS, assess the risks, and provide advice on ICS security. In the power industry, the 2002 stipulation on power grid surveillance and dispatch ([2002] 30) was issued in order to prevent attacks on the power grid system and ensure power grid security. Later, the State Electricity Regulatory Commission published several secondary-system safety protection regulations and schemes at the national, provincial, and town levels and at the power-plant level.

Industrial Efforts. As for the oil industry, PetroChina and China Petrochemical Industry have long been aware of the importance of ICS security and have made much effort on industry regulation, inspections, and safeguards. In October 2011, PetroChina and the China Information Security Evaluation Center held an evaluation of the schemes for PetroChina's ICS security. A month later, PetroChina officially launched an investigation and inspection of its own ICS security, in order to establish the status quo of its ICS security and sort out the most urgent issues, laying the foundation for future work.

Technical Research Support.
In 2011, the National Development and Reform Commission organized and implemented the national information security program, and it also began to support industrial projects on ICS security services. On the precondition of enhancing risk awareness, China will speed up the development of ICS test beds and information security standards. On the other hand, China will encourage innovation and advocate professional training in technology, to further enhance the security capabilities of domestic ICS. In fact, many universities and research institutions in China have begun to study the technical foundations of ICS security.

Conclusion

ICS is the central nervous system of national critical infrastructures (such as power plants, power grids, oil refineries, oil and gas pipelines, chemical plants, urban transport, railways, shipbuilding, and defense). If ICS collapses, the consequences could be disastrous. ICS security has become a top priority in the security field. The United States recognized the risks of ICS in the early phase of deployment, and the U.S. government appointed the DOD and DOE to develop measures and strategies for mitigating the risks of ICS; they developed the NSTB and a center to assess and mitigate the risks. As for technical countermeasures, many universities and industry labs, along with national labs, have engaged in developing the models, theories, methods, and tools for ICS security, with funding from NSF and other organizations. Meanwhile, the EU has invested billions of dollars in ICS R&D and has studied best practices for mitigating ICS risk, from information sharing and staff training to an emphasis on technical research. The Chinese government has also noticed ICS risk, has demanded checks of ICS risks, and supports industrial projects on ICS security services.
In this paper, we have reviewed the possible threats posed to ICS and surveyed related research on security countermeasures. Although many countries and regions have attached importance to ICS security and taken steps to mitigate the risks, there are still many things to do.

Figure 2: A CPS framework based on WSN.

(i) Idaho National Laboratory (INL). INL was established in 1949, is located in Idaho, and operates under the U.S. Department of Energy. The lab has provided 60 years of national security service for customers including the DOD (the Department of Defense of the United States), DHS, DOE, and industry. The lab's 890-square-mile site includes a utility-scale power grid, an explosives range, a wireless test bed, nuclear reactors, hot cells, and fuel treatment facilities. INL operates a malware laboratory, focusing on analysis of the Stuxnet virus. INL has internationally recognized, high-class capabilities in vulnerability identification and vulnerability mitigation for ICS. Backed by the U.S. Department of Energy and the Department of Homeland Security, the Idaho National Laboratory began to establish the Critical Infrastructure Test Range (CITR) in 2003 and officially put it into operation in 2005; it includes the National SCADA Test Bed and the Electric Grid Test Bed. This laid a good foundation for the smooth development of ICS security.

(iii) Oak Ridge National Laboratory (ORNL). ORNL is a large national laboratory under the U.S. Department of Energy, established in 1943, and has been co-managed by the University of Tennessee and the Battelle Memorial Institute since April 2000.

4.1. ARTEMIS Program. The ARTEMIS Program in the EU invested seven billion dollars, starting in mid-2007, in R&D to achieve "world leadership in intelligent electronic systems" by 2016.

Table 2: ICS roadmap of the United States.

Table 3: The projects and efforts in the roadmap.
Table 4: NSF-funded projects and corresponding universities (most of these projects are collaborative research, and this table may not list all the universities in each project).

Table 5: Standards and guides of the United States.

Efforts. Among several critical industries, the electricity power grid gained the earliest insight into ICS security. As early as June 2002, the former State Economic and Trade Commission published "The Data Network Security Stipulation on Power Grid Surveillance and Dispatch" ([2002] 30).
Wigner’s quantum phase-space current in weakly-anharmonic weakly-excited two-state systems

There are no phase-space trajectories for anharmonic quantum systems, but Wigner's phase-space representation of quantum mechanics features Wigner current J. This current reveals fine details of quantum dynamics, finer than is ordinarily thought accessible according to quantum folklore invoking Heisenberg's uncertainty principle. Here, we focus on the simplest, most intuitive, and analytically accessible aspects of J. We investigate features of J for bound states of time-reversible, weakly-anharmonic, one-dimensional quantum-mechanical systems which are weakly excited. We establish that weakly-anharmonic potentials can be grouped into three distinct classes: hard, soft, and odd potentials. We stress the connections among these classes and with the harmonic case. We show that their Wigner current fieldline patterns can be characterised by J's discrete stagnation points, how these arise, and how a quantum system's dynamics is constrained by the stagnation points' topological charge conservation. We additionally show that quantum dynamics in phase space, in the case of vanishing Planck constant ℏ or vanishing anharmonicity, does not converge pointwise to classical dynamics.

Introduction

Classical phase-space trajectories allow the viewer to characterise a system's dynamics at a glimpse. They also reveal rich structures, such as the strange attractors of chaotic systems, full of intricacies and beauty [1,2]. Anharmonic quantum systems do not feature trajectories [3], but fieldlines of Wigner's phase-space current J characterise quantum-mechanical phase-space dynamics at a glimpse (sects. 5 and 6), similar to classical phase portraits: this is underexplored. This is a gap this work aims to help fill. Wigner's quantum theory [4] (sect.
2) is a representation of quantum mechanics in phase space [5-12] (additionally pioneered by Groenewold [13] and Moyal [14]), equivalent to Heisenberg's, Schrödinger's and Feynman's representations of quantum theory. It is, historically speaking, the third representation of quantum physics, and its importance is still not clear: "Some believe it will supplant, or at least complement, the other methods in quantum mechanics and quantum field theory" [6].

Here we investigate Wigner's quantum phase-space current J and its fieldlines for the three classes of weakly-anharmonic potentials: hard, soft and odd potentials (sect. 4). We emphasize the features that the current patterns associated with the three classes of potentials have in common: it turns out that odd potentials are hybrids of hard and soft potentials, and this is reflected in their phase-space current patterns (sects. 5 and 6). We particularly stress an intuitive understanding of how the Wigner current patterns emerge (sects. 5.1 and 5.2). Because the anharmonicities are weak, the J-fieldline patterns can partly be understood from the vantage point of the harmonic oscillator (sect. 3) and partly through perturbation analyses (sects. 6.1 and 5.3).

For simplicity, our discussions are limited to one-dimensional conservative quantum-mechanical systems featuring nearly harmonic potentials. We only consider the bound energy eigenstates (sect. 5) of weakly-excited systems in pure two-state superpositions (sect. 6). To demonstrate the conceptual power of the use of J and collections of its fieldlines, we show that in the limit of vanishing anharmonicity the fieldlines of J do not converge pointwise (sects. 5 and 6) to those of the harmonic oscillator (sect. 3). This implies that in the limit of vanishing anharmonicity, or vanishing magnitude of Planck's constant, quantum and classical phase-space behaviour are qualitatively very different from each other [3,12]; see sects. 5.2 and 7.
Wigner distributions and Wigner current

We parameterize pure quantum two-state superpositions of energy eigenstates \psi_m (with eigenenergies E_m and energy difference \Delta E = E_n - E_m) by the mixing angle \theta,

\Psi_{m,n}(x, t; \theta) = \cos(\theta)\,\psi_m(x) + \sin(\theta)\, e^{-\frac{i}{\hbar}\Delta E\, t}\,\psi_n(x). \quad (1)

Here, x and t denote position and time, \hbar = h/(2\pi) is Planck's constant, and, since this is a two-state superposition, the period time is

T = \frac{2\pi\hbar}{\Delta E}. \quad (2)

Wigner's phase-space quantum distribution W(x, p, t) [4,10,11] is

W(x, p, t) = \frac{1}{\pi\hbar} \int_{-\infty}^{\infty} dy\; \varrho(x - y, x + y, t)\, e^{2 i p y/\hbar}, \quad (3)

where \varrho(x - y, x + y, t) = \Psi(x - y)\,\Psi^*(x + y) for pure states. W is real-valued, non-local (through y), and normalized, \iint dx\, dp\, W = 1. Wigner's distribution W is set apart from other quantum phase-space distributions [10] by the fact that only W simultaneously yields the correct projections in position and momentum, \varrho(x, x, t) = \int dp\, W and \tilde{\varrho}(p, p, t) = \int dx\, W, as well as state overlaps, |\langle \psi_1 | \psi_2 \rangle|^2 = 2\pi\hbar \iint dx\, dp\, W_1 W_2, while maintaining its form (3) when evolved in time. Additionally, the Wigner distribution's averages and uncertainties evolve momentarily classically [15,16] (fulfilling Ehrenfest's theorem [12]). This is why W is the "closest quantum analogue of the classical phase-space distribution" [17].

To study W's dynamics one uses Wigner's continuity equation

\partial_t W + \partial_x J_x + \partial_p J_p = 0; \quad (4)

here J(x, p, t) denotes the Wigner current [18], and the shortened notation \frac{\partial^2}{\partial x^2} = \partial_x^2, etc., is used for partial derivatives. In the case of potentials that can be Taylor-expanded, J assumes the infinite-sum form [4,11,13,14,19-21]

J = \begin{pmatrix} J_x \\ J_p \end{pmatrix} = \begin{pmatrix} \frac{p}{M}\, W \\ -\sum_{l=0}^{\infty} \frac{(i\hbar/2)^{2l}}{(2l+1)!}\, \left(\partial_p^{2l} W\right) \partial_x^{2l+1} V(x) \end{pmatrix}, \quad (5)

where M is the mass of the particle. The term J_{p,l=0} is of classical form; the terms of higher order in l are known as quantum correction terms. In general, J_p in (5) has the integral form [4,12]

J_p(x, p, t) = -\frac{1}{\pi\hbar} \int_{-\infty}^{\infty} dy\; \varrho(x - y, x + y, t)\, e^{2 i p y/\hbar}\, \frac{V(x + y) - V(x - y)}{2 y}. \quad (6)

For numerical stability we avoid the infinite-sum form of J_p in (5) when generating J's fieldlines depicted in figs. 5, 7, 8 and 10, and use its integral form (6) instead.
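Wigner's distribution is easy to probe numerically. The following sketch (our own illustration, not code from the paper) evaluates W(x, p) = (1/(πℏ)) ∫ dy Ψ(x−y)Ψ*(x+y) e^{2ipy/ℏ} by direct trapezoidal quadrature for the harmonic-oscillator ground state, in the units used below (ℏ = M = k = 1); for this state W is the positive Gaussian e^{−(x²+p²)}/π, which the quadrature reproduces.

```python
import numpy as np

HBAR = 1.0

def psi0(x):
    """Harmonic-oscillator ground state in units hbar = M = k = 1."""
    return np.pi ** -0.25 * np.exp(-x ** 2 / 2.0)

def wigner(x, p, y_max=8.0, n=4001):
    """W(x, p) = (1/(pi*hbar)) * Integral dy psi(x-y) psi*(x+y) exp(2ipy/hbar),
    evaluated by the trapezoid rule on a finite y-grid."""
    y = np.linspace(-y_max, y_max, n)
    f = psi0(x - y) * np.conj(psi0(x + y)) * np.exp(2j * p * y / HBAR)
    dy = y[1] - y[0]
    integral = (np.sum(f) - 0.5 * (f[0] + f[-1])) * dy  # trapezoid rule
    return integral.real / (np.pi * HBAR)

# For the ground state the result is the Gaussian exp(-(x**2 + p**2)) / pi,
# e.g. wigner(0, 0) approximates 1/pi.
```

The same quadrature applied to eq. (6), with [V(x+y) − V(x−y)]/(2y) inserted under the integral, yields J_p without truncating the infinite sum, which is the numerically stable route mentioned in the text.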
Stagnation points of quantum phase-space current

Wigner current reveals details of quantum dynamics' finest features, in particular the nature of its stagnation points, the points in phase space where the dynamics momentarily stops. In classical physics, stagnation points are also referred to as equilibrium, stationary, fixed, critical, invariant and rest points [2]. This multitude of terms testifies to their central importance in classical mechanics as well as in wave theory, e.g. in the field of "singular" optics [22] (where they are called singular points [23]). In classical physics stagnation points can only form on the x-axis at the potential's minima, maxima and saddle points, where the force is zero. In quantum physics Wigner distributions are known to feature negative regions in phase space [4]. In these regions the direction of the current is inverted [18]; this leads to the formation of whorls and saddle flows with points of stagnating current at their centers. This can happen wherever in phase space the Wigner distribution turns negative.

Fig. 1 (caption, cf. eq. (7)): A red plus sign labels stagnation points with topological charge [22] ω = +1, a yellow minus sign ω = −1, and a white circle ω = 0. The current's fieldlines can be skewed near stagnation points in phase space, can feature skewed separatrices, and saddles oriented in the p-direction.

Analogously to the classical case, Wigner current's stagnation points are the most important points in quantum phase space, for two reasons: the topological nature of Wigner current's stagnation points, firstly, orders the current in large surrounding sectors of phase space and, secondly, makes them carry a conserved topological charge. Their topological nature makes their appearance robust to perturbations and time evolution.
To quantify the topological charge conservation of J's stagnation points, we use the orientation winding number for Wigner current along closed, self-avoiding loops L in phase-space [18]

ω(L, t) = (1/2π) ∮_L dϕ, (7)

where ϕ is the angle between J and the x-axis, see fig. 1. Since the components of the current are continuous functions, ω is zero except when the loop L contains stagnation points, such as those sketched in fig. 1; then a non-zero value of ω can occur. The value of ω (in mathematics known as the stagnation point's Poincaré-Hopf index) is topologically protected [22]. When the system's dynamics transports a stagnation point across L, ω(L, t) can change [18]. The topological charges can be combined or split through the system's time evolution while their sum remains conserved [18].

Wigner current of harmonic oscillators

When we introduce weakly-anharmonic potentials in sect. 4, we rescale them to match their minimum's curvature to our choice of a harmonic reference potential with circular fieldlines (rather than elliptical [24]),

V(x) = (1/2) k x², (8)

see fig. 2. Having such circular fieldlines is the main motivation for this particular choice. It constitutes a choice of units of mass M = 1 and spring constant k = 1. Setting ℏ = 1 leads to an angular frequency of Ω = 1 and an oscillation period of T = 2π. Wigner current for V, according to eq. (5), has the "classical" form (see Takabayasi [25], p. 351)

J = W · (p/M, −k x). (9)

Degenerate Wigner current

Wigner distributions are continuous and have negativities [4]; they therefore feature zero-contours in phase-space which, in the case of the harmonic oscillator, because of the form (9) of J, become zero-lines for both components, J_x and J_p, simultaneously, giving rise to lines of stagnation of the current, see fig. 2. It is well known from quantum optics that the eigenstate Wigner distributions W_{n,n}(x, p) of the harmonic-oscillator Fock states ψ_n resemble Mexican hats centred on the origin, with concentric fringes of alternating polarity.
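The orientation winding number (7) can be evaluated numerically by accumulating the angle of the current along a discretized loop. A minimal sketch (the vortex field J = (−p, x) is a hypothetical stand-in with a single ω = +1 stagnation point at the origin):

```python
import numpy as np

# Winding number omega = (1/2pi) * closed-loop integral of dphi for a
# current field J = (Jx, Jp) along a loop L, following eq. (7).
# Illustrative only: the vortex field J = (-p, x) is an assumed example.

def winding_number(Jx, Jp, loop_x, loop_p):
    # Angle of J along the loop, unwrapped to avoid 2*pi jumps.
    phi = np.unwrap(np.arctan2(Jp(loop_x, loop_p), Jx(loop_x, loop_p)))
    return round((phi[-1] - phi[0]) / (2 * np.pi))

t = np.linspace(0.0, 2 * np.pi, 400)
cx, cp = np.cos(t), np.sin(t)          # unit circle around the origin

# Vortex around the origin carries charge +1 ...
assert winding_number(lambda x, p: -p, lambda x, p: x, cx, cp) == 1
# ... while a loop enclosing no stagnation point gives omega = 0.
assert winding_number(lambda x, p: -p, lambda x, p: x, cx + 3, cp) == 0
```

Because ω is integer-valued and continuous under deformations of L that cross no stagnation point, the numerical result is robust to the loop discretization.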
Their zero-contours thus form concentric circles [20]. In the case of superposition states, we primarily investigate superpositions of ground and first excited state Ψ_{0,1}(t; θ) (1), in which case W's circular zero-contour remains a circle with constant radius R_1 = 1/√2 but with shifted centre position C_01. The centre is displaced further from the origin the larger the groundstate contribution (θ ↓ 0 in Ψ_{0,1}(θ) (1)) and rotates around the origin with frequency Ω = 1, according to

C_01(t) = (cot(θ)/√2) (cos(Ωt), −sin(Ωt)), (10)

compare figs. 2 and 9.

The three classes of weakly-anharmonic potentials

Weakly-anharmonic potentials V(x) that admit a Taylor expansion in x are characterized by their leading anharmonic term α_ν x^ν in what we will refer to as their truncation V^A_ν of order ν, and representative A, namely

V^A(x) = x²/2 + α^A_ν x^ν + ⋯ ≈ V^A_ν(x) = x²/2 + α^A_ν x^ν. (11)

The precise order ν of a truncation's leading anharmonic term α^A_ν x^ν is quite unimportant for our discussion, as it is the qualitative class of the potential that determines the qualitative dynamic features we are primarily interested in. With respect to qualitative features of Wigner current for weakly-excited bound state systems, just as for the associated phase portraits in the classical case, see fig. 3, only three classes of anharmonic potentials exist: hard, soft, and odd potentials. We checked this numerically for several potentials, and it is plausible from our discussion below. All potentials with a leading positive anharmonic term of even order have qualitatively similar classical phase-space profiles. They correspond to springs harder than their Hookian reference (8), see left column of fig. 3. Soft potentials have a negative leading term of even order, fig. 3, middle column. For potentials with a leading term of odd order we always set α_ν < 0, making the odd potentials soft for x > 0 and hard for x < 0, fig. 3, right column [26].
For each class a representative exists for which all bound-state eigenfunctions and eigenenergies are known in simple closed form [26]. As such representatives we choose the hard Eckart potential V_E, the soft Rosen-Morse potential V_R, and the odd Morse potential V_M; see fig. 3 and table 1. For the Morse case all bound-state Wigner distributions are known [24] and were used to cross-check some of our numerical calculations. Anharmonic potentials V^A which, based on their truncation V^A_ν, are classed as even or odd can contain higher-order Taylor terms which are not necessarily only even or odd. The influence of such higher terms can be neglected since we limit our investigation to weakly excited systems. If we were to regard the truncated right-hand side of eq. (11) as the full potential, soft and odd potentials would obviously have no bound eigenstates; we exclude such cases. With these provisions, studying one representative of each class allows us to cover the qualitative features of Wigner current of the bound states of all weakly-excited weakly-anharmonic potentials.

(Compare fig. 2.) Green and blue ellipses represent the zero-circles, see sect. 3.1, deformed by the anharmonicity of the potential. They are colored green when J_x changes sign and blue when J_p does; this also applies to the x- and p-axes. To "fill the plane" we track the orientation of the current while moving across phase-space along the sequence of arrows with ever darker shades of grey, which eventually wraps around the rightmost "+1"-stagnation point. Whenever this sequence crosses a zero-line, where J_x = 0 or J_p = 0, the arrows are framed green or blue, respectively. We can, similarly, track the current's orientation around the boundaries of the deformed zero-circles and along the x- and p-axes. Green arrows with blue fringe are orientated horizontally (J_p = 0) and invert direction whenever the blue line they are pinned to crosses a green line.
Blue arrows with green fringe are tied to green lines, are vertically aligned, and behave analogously. At every crossing of a green with a blue line a stagnation point exists, but nowhere else. Having "filled the plane" we can work out the topological charge of the stagnation points, labelled with the symbols of fig. 1. The quantitative plots in the top row of fig. 5 confirm this qualitative analysis.

Wigner current patterns for eigenstates of anharmonic potentials

The degeneracy of eq. (9) leads to the formation of lines of stagnation [18]. An intuitive understanding of the ensuing Wigner current patterns is discussed next.

Existence of distinct stagnation points of Wigner current

The anharmonicity of the potential deforms the zero-contours of W, shifting the zero-lines of the J_x-component (J_x = (p/M) W). The J_p-zero-lines get shifted differently, due to the additional presence of the quantum correction terms in eq. (5): anharmonic quantum-mechanical systems form discrete current stagnation points in phase-space whenever ℏ > 0. In short, weakly-anharmonic systems are fundamentally non-classical [3,12]. The Wigner distributions for energy eigenfunctions of a weakly-anharmonic system converge pointwise towards those of the harmonic oscillator, but J and its fieldlines do not. In this sense there cannot be a smooth transition from the quantum to the classical case in either the limit ℏ → 0 or that of vanishing non-linearity in the potential. This is at variance with published statements such as "Trajectory methods [. . . ] are not reliable in general, being restricted to interaction potentials which do not deviate too much from an harmonic potential." [27], or: "the first step toward a systematic and general Wigner description is to consider a system whose potential differs only slightly from a harmonic potential" [28]. Instead, we find that very weakly anharmonic quantum systems develop quantum coherences essentially just like more strongly anharmonic systems, only more slowly.
Qualitative effects of anharmonicities: features of eigenstates' current stagnation points

Heisenberg's uncertainty principle Δx · Δp ≥ ℏ/2 implies constancy of the size of an uncertainty domain in phase-space [29] (note that this argument must not be taken too far [17]). Hard potentials squash phase-space fieldlines in position, thus elliptically expanding them in momentum, see bottom row of fig. 3. This observation can be applied to the zero-lines of the current. The x-axis is colored green to mark the vanishing of the component J_x, yielding two stagnation points for all (blue) J_p-zero-circles intersecting it. Similarly, the p-axis is a blue line in the harmonic case, and, for symmetry reasons, also for even potentials. For odd potentials these J_p zeros do not lie on the p-axis but are displaced to the right, see bottom row in fig. 5. In sect. 5.3 we confirm these statements through a mathematical analysis.

Can an alternative to the break-up of the J_p zero-lines for odd potentials, see bottom row of fig. 5, exist? The answer is that it cannot: to the left of the p-axis an odd potential is hard and therefore has to yield the characteristic pattern displayed in the top row of fig. 5; to the right it is soft, yielding the middle-row pattern. Near the p-axis both patterns meet but cannot be connected, due to the continuity of J_x and J_p as functions of x and p. The only option respecting continuity is the cut-and-reconnect pattern we see realised in the bottom row of fig. 5.

In the limit of vanishing anharmonicity, four stagnation points form on the diagonals |x| = |p| per zero-circle. These positions can be understood from the above observations. The elliptic squashing and expansion of the zero-circles of J_x and J_p is weak, leading to deformation of a zero-circle into two ellipses with small, slightly different eccentricities, common centres and equal area, which are aligned with the coordinate axes of phase-space.
In the limit of vanishing eccentricities these intersect at odd multiples of 45 degrees, forming the diagonal stagnation points we observe in figs. 4 and 5. To summarize this qualitative discussion: weakly-anharmonic even potentials have 8n + 1 stagnation points for all low-lying eigenstates ψ_n: one near the origin and, per zero-circle, 4 diagonal stagnation points and 4 axis stagnation points: 2 where the J_p-zero-lines cross the x-axis, and 2 where the J_x-zero-lines cross the p-axis. Weakly-anharmonic odd potentials have 6n + 1 stagnation points per eigenstate, since the p-axis stagnation points are avoided by the cut-and-reconnect mechanism mentioned above. For very large anharmonicities some of our results are approximations, see the top right-most panel of fig. 5.

For eigenstates, the displacement of J_p = 0 on the x-axis is less than that of J_x = 0

Numerically, we see that the zero lines of J_x and J_p shift differently. We now confirm analytically our qualitative discussion in sect. 5.2. For the displacement analysis of weakly-anharmonic potentials we use J up to first order in l in eq. (5). Because

J_x = (p/M) W (13)

and

J_p ≈ −W ∂_x V + (ℏ²/24) (∂_p² W) ∂_x³ V, (14)

we determine the displacement δx_{J_p} of the zeros of J_p on the x-axis using the Newton gradient method at (x, p) = (X̄, 0), where X̄ denotes the point where the Wigner distribution is zero, i.e., where J_x = 0. Thus, δx_{J_p}|_{(X̄,0)} ≈ −J_p|_{(X̄,0)} / ∂_x J_p|_{(X̄,0)}, which yields

δx_{J_p}|_{(X̄,0)} ≈ (ℏ²/24) (∂_x³ V)(∂_p² W) / ((∂_x V)(∂_x W)) |_{(X̄,0)}. (15)

Now, the Mexican hat profiles of the harmonic oscillator's Wigner distributions (see insets of fig. 2) imply that ∂_p² W_{n,n}/∂_x W_{n,n}|_{(X̄,0)} is positive for X̄ > 0 and negative for X̄ < 0. For example, if the potential is stiff and symmetric (α₄ > 0), we know that the contours of W are squeezed inward on the x-axis, due to the uncertainty principle; in other words, the magnitudes of the zeros of the Wigner distributions on the x-axis shrink. In this case δx_{J_p}|_{(X̄,0)} > 0, which counteracts the inward movement of the zeros of J_x; the zero line of J_p is less deformed than that of J_x.
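The single Newton step used for the displacement estimate, δx ≈ −f(x₀)/f′(x₀) evaluated at the unperturbed zero x₀, can be checked on a toy function (f = sin(x) with a small perturbation is an assumed example, not the paper's J_p):

```python
import numpy as np

# One Newton step estimates how far a known zero x0 of f moves under a
# small perturbation: delta_x ~ -f(x0)/f'(x0).  Toy check with
# f(x) = sin(x) (zero at x0 = pi) perturbed by eps * x**2; sketch only.

def newton_shift(f, df, x0):
    return -f(x0) / df(x0)

eps = 1e-3
f = lambda x: np.sin(x) + eps * x ** 2
df = lambda x: np.cos(x) + 2 * eps * x
x0 = np.pi

dx = newton_shift(f, df, x0)

# Compare against the actual perturbed zero found by iterating Newton.
x = x0
for _ in range(20):
    x -= f(x) / df(x)
assert abs((x0 + dx) - x) < 1e-4
```

The one-step estimate is accurate to second order in the perturbation, which is exactly the regime of the weak-anharmonicity analysis above.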
The same logic can be applied to soft weakly-anharmonic potentials. Although for odd potentials a higher order in l of eq. (5) has to be used, this discussion confirms that an odd potential's behaviour constitutes a hybrid of stiff and soft potentials' behaviour, see sect. 5.2 and fig. 5. Additionally, the discussion above shows that quantum dynamics in phase space, in the case of vanishing Planck constant or vanishing anharmonicity, does not pointwise converge to classical dynamics.

Wigner current patterns for two-state superpositions

We now consider superpositions of energy eigenstates. Note that the associated fieldline patterns presented in figs. 7, 8 and 10 are integrated lines of J at one moment in time only. They therefore do not represent the time-evolution of J, but an illustrative, albeit somewhat unphysical, momentary snapshot.

Displacement of the minimum vortex

Similarly to eq. (15), with the Newton gradient method we determine the x-shift of the zero of J_p at the origin, and find that the minimum vortex's shift is

δx_{J_p}|_{(0,0)} ≈ (ℏ²/24) (∂_x³ V)(∂_p² W / W)|_{(0,0)}. (16)

For even potentials the stagnation point of J near its minimum does not shift at all, because ∂_x^{(2l+1)} V|_{(0)} = 0. This result conforms with our expectation (sect. 5.2) that, for symmetry reasons, the vortex at the origin of eigenstates of even potentials does not shift. This can be confirmed, to all orders in α, using (5). The stagnation point of J near the minimum of the potential only shifts for odd potentials. If the potential is anharmonic with its leading term α_ν of higher than third order, a higher-order expansion has to be performed. With a leading third-order anharmonicity (α₃ < 0) the Mexican hat profiles of the harmonic oscillator's Wigner distributions (see insets of fig. 2) imply that ∂_p² W_{n,n}/W_{n,n}|_{(0,0)} < 0. Therefore, according to eq. (16), with ν = 3, δx_{J_p}|_{(0,0)} > 0. This confirms the shift to the right, in the direction of the potential's opening, as predicted in the qualitative discussion in sect.
5.2 and visible in the bottom row of fig. 5. For a superposition state's time-dependent displacement of the vortex near the minimum of the potential, δx_{J_p}(t), eq. (16) provides a reasonably good approximation. This is depicted in fig. 6. Note that in the classical case the minimum does not shift at all, compare fig. 3; the shift of the minimum vortex is a pure quantum effect.

The Ferris wheel effect: alignment with x- and p-axes

According to the discussion in sect. 5.2, four diagonal stagnation points form per zero-circle of every eigenstate. If we "perturb" an eigenstate by, say, mixing in a little bit of groundstate (Ψ_{m,0}(θ) of eq. (1) with θ ≪ 1), the zero-circle gets displaced from the origin (see eq. (10)). Yet, for small values of θ the four diagonal stagnation points remain pinned to the zero-circle while it rotates around the origin as time progresses. They do this such that they maintain their relative orientation with respect to the axes of phase-space, as seen from the zero-circle's centre. In other words, while they travel through phase-space they behave somewhat like markers on a Ferris wheel cabin, where the zero-line, J_x = 0, depicts the cabin's outline, see fig. 7.

Rabi scenario: modified two-state dynamics

To investigate a simple system in which the weighting angle θ of the superposition state (1) changes considerably while the dynamics progresses, we study a resonantly driven Rabi system. Its solution for a superposition of ground and first excited state is

Ψ^R_{0,1}(x, t) = sin(Ω_R t/2) ψ_0(x) + cos(Ω_R t/2) e^{−iΔEt/ℏ} ψ_1(x), (17)

where Ω_R is the Rabi frequency [29] and the rotating wave approximation has been used.

[Fig. 5 caption (excerpt): Note that the Morse potential is odd, i.e. it is hard on the left (x < 0) and soft on the right side (x > 0); accordingly, the current patterns on the left resemble those of the Eckart potential depicted in the top row and those on the right resemble those for the Rosen-Morse potential in the middle row.]
In accord with this approximation we assume that the perturbation is so small that we can neglect the time-dependence of the Hamiltonian when determining the fieldlines of J.

[Figure caption (excerpt): The zero-circle (10) is displaced from the origin such that, over time, it sweeps out a helix with varying width. This is displayed as a helical tube whose rainbow coloring depicts the flow of time. Every full period (T = 1) is denoted by a dashed black zero-circle. Stagnation points are depicted by red lines when carrying charge ω = +1 and yellow if ω = −1, compare fig. 1. The stagnation point positions are additionally projected along the x-axis onto the blue wall in the back and along the p-axis downward onto the green floor. Winding number conservation implies that positively and negatively charged stagnation points originate and annihilate together; this is seen as red and yellow lines forming loops which are reminiscent of the formation of the torus reported in fig. 4 of ref. [18]. As mentioned in fig. 5 above, the bottom panel, for the odd Morse potential, inherits features of the Rosen-Morse and Eckart potentials.]

The Rabi state (17) displays Wigner current patterns associated with the system's (fast) intrinsic dynamics while (slowly) shifting the weighting of the superposition state: for the ratio of these two system frequencies we choose Ω_R/Ω = 1/8 in fig. 9. To monitor the effects of the slow shift of θ by itself, we keep time fixed and change θ "by hand". The topological nature of the stagnation points conserves the current winding number in this case as well, see fig. 8. For the full time-dependence we choose Ω_R/Ω = 1/8 in fig. 9. It shows plots with zero-circles (10) tied to a spiral centred on t = 0 (since Ψ^R_{0,1}(t = 0) = ψ_1) which expands outward as more of the groundstate gets mixed in with increasing values of |t|. We notice that the Ferris-wheel effect tends to keep the orientation of the stagnation points on the zero-circle aligned with the x- and p-axes.
With our choice of Ω_R/Ω = 1/8, around |t| = 2T the mixing angle is roughly |θ| = π/4. At this stage the zero-circle gets displaced by its radius, and stagnation points on the circle interact with those on the x- and p-axes, see fig. 9, displaying repulsion, attraction, coalescence and splitting of stagnation points, all constrained by conservation of topological charge.

Other superpositions

Other superposition states, such as Ψ_{1,2}, can show symmetric flower-petal arrangements, see insets in fig. 10, which have recently been observed experimentally [30]. Figure 10 shows how the three different types of weakly-anharmonic potentials give rise to current patterns which generalise our previous discussions in sects. 5.2 and 6.2.

Motivation and conclusion

Our investigations of Wigner current J and its fieldlines show that they give us insights into quantum phase-space dynamics: J-fieldlines provide visualisation at a glance; in this sense, their collections across phase-space are quantum analogs of classical phase-space trajectories. J and collections of its fieldlines reveal subtle patterns in phase-space dynamics, such as contracting and expanding regions of phase-space, current stagnation points, loops, separatrices and saddles, similar to classical phase portraits [2,31]. In contrast to the classical phase-space current, J is non-Liouvillian [12], compare e.g. fig. 8. J can be characterised by the distribution and Poincaré-Hopf indices of its stagnation points. J-fieldlines follow neither energy contours nor Wigner distribution contours [3]. They allow us to check concepts such as Wigner "trajectories", and to dismiss them [3,12]. Phase-space quantum mechanics is useful for approximate numerical modelling of quantum dynamics using semiclassical approximations, particularly in theoretical quantum chemistry; for a good and brief recent overview see [32] and references therein.
Wigner current and its fieldlines allow us to benchmark approximate propagation schemes [19,33,34] against the full theory.
Event Based Surveillance using WiMAX

WiMAX provides high-bit-rate wireless connectivity, which makes it a potential infrastructure for wireless camera surveillance. However, real cases show that existing camera resolutions do not help much when investigations go into detail: captured images are often too blurry to identify a person's ID. Higher camera resolution is therefore needed, but this raises a problem, as higher resolution means a higher bandwidth requirement. This paper proposes event-based video transmission to tackle the bandwidth shortage. Video is sent only when a special event occurs, such as fast movement or object detection; otherwise, the video is only recorded locally. The NS-2 simulator was employed to examine the idea. Initially, low-bit-rate video was sent from the surveillance cameras to the server, and the video quality was measured. The camera codec was then replaced with higher-coding-rate video. As a result, the bandwidth was drained, producing high packet losses and poor video quality. Event-based surveillance was then applied, allowing at most a 25% probability that a video is sent to the server. As a result, the system is able to maintain the high quality of the video received by the server.

Introduction

Video surveillance cameras have been widely installed to help monitor places or objects. Infrastructures have been built using cable, optical fiber and radio. Video resolutions have also been improved to enhance image quality; for the latter, higher-bandwidth infrastructure such as optical networks and high-speed microwave links such as WiMAX is needed. However, when events demand detail in surveillance images, quality is an issue: a high-resolution camera able to identify persons or objects in detail is required. Meanwhile, video processing technologies are in progress, including motion detection, searching, tracking and behavior analysis [1].
Those processing techniques are able to identify whether video needs special attention or not. Or, at least, time stamps can be added to mark video segments that can be revisited when an event occurs or details are requested. Video processing technologies can thus be used to support efficient surveillance systems [2]. An efficient surveillance system can be approached by various techniques. The WiMAX radio standard provides an architecture and QoS levels that enable application variations [3]. Beyond this, researchers have provided further enhancements such as enhanced bandwidth request mechanisms [4,5], transmission scheduling [6,7] and cross-layer schemes [8-10]. This paper proposes a surveillance enhancement that employs high-resolution cameras able to provide sufficient information when details of the images are demanded. In order to overcome bandwidth degradation, video processing techniques are applied to decide whether an important event has occurred or not. Video is sent to the server only when an important event has occurred; otherwise, the video is just recorded at the surveillance node, since storage is inexpensive. In order to examine the proposed method, WiMAX simulations were run, and video quality was measured based on packet losses and peak signal-to-noise ratio (PSNR).

Methodologies

In order to observe the proposed method, the NS-2 simulator was set up with the NIST WiMAX module, covering 1000 m and serving four subscriber stations in various locations, as depicted in Figure 1. The radio system uses 64-QAM modulation with a two-ray ground propagation model and a 30% downlink ratio, so that most of the bandwidth is available for uplink traffic. Image sequences coded in MPEG-4 at 500 kbps are simulated for the low-resolution camera, while MPEG-4 at a bit rate of 700 kbps represents the high-resolution video. These videos were generated from the akiyo_cif.yuv video trace.
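The event-gated decision described above (send on important events, record locally otherwise) can be sketched with a simple frame-differencing trigger. The threshold value and function names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sketch of the event-gated transmission policy: a frame is uploaded
# only when a simple motion score (mean absolute frame difference)
# exceeds a threshold; otherwise it is stored locally.  The threshold
# and names are hypothetical.

MOTION_THRESHOLD = 10.0  # assumed value, in per-pixel intensity units

def motion_score(prev_frame, frame):
    return float(np.mean(np.abs(frame.astype(float) - prev_frame.astype(float))))

def should_transmit(prev_frame, frame, threshold=MOTION_THRESHOLD):
    return motion_score(prev_frame, frame) > threshold

rng = np.random.default_rng(0)
still = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
noisy = np.clip(still.astype(int) + rng.integers(-3, 4, still.shape), 0, 255)
moved = np.roll(still, 40, axis=1)  # large horizontal shift = "event"

assert not should_transmit(still, noisy)   # small change: record locally
assert should_transmit(still, moved)       # large change: send to server
```

A real deployment would replace this score with the paper's motion detection or object detection triggers; the gating logic stays the same.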
The video is chopped into packets of 1024 bytes each and sent through the WiMAX link using the user datagram protocol (UDP) [11]. The packets received at the server are then reconstructed into video traces, and with the EvalVid framework the traces are reconstructed into the received video file. Losses during this reconstruction are noted, and the quality is compared against the originally transmitted video file to obtain PSNR values. Packet delay and PSNR are the main measured parameters.

[Figure 1. Simulation configuration]

The WiMAX architecture is set to be flat, and all traffic is set to have best-effort (BE) service. Figure 2a shows the original architecture and Figure 2b the flat architecture. BE is a simple QoS class that does not involve negotiation or parameter enforcement; it also does not perform polling. The original architecture provides UGS, ertPS, nrtPS and BE [3].

[Figure 2. WiMAX architectures: (a) original architecture [12], (b) flat architecture]

The evaluated system uses the packet-aware bandwidth request mechanism [13], which manages requests using a reduced contention window in the truncated binary exponential backoff (TBEB) technique and piggybacks the next P-frame transmission request. The scheduler is packet-aware and prioritizes important frames [13]. Figure 3 shows delay comparisons between conventional surveillance with low and with high camera resolution. The high camera resolution experiences a 107.88% higher average delay than the low resolution, reaching 59.9 ms per packet. Although this seems acceptable, one frame may be sent in more than four packets, which makes the delay four times higher and unacceptable for real-time applications. Meanwhile, by implementing the proposed method for high-resolution video, the delay decreases by about 31.25%, with an average delay of 41.1 ms, which is much lower than the conventional method (Figure 4). In conventional surveillance, high-resolution video causes the PSNR to drop from 35.96 dB to 32.6 dB.
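The PSNR figures reported here follow the standard definition for 8-bit video, PSNR = 10 log₁₀(255²/MSE). A generic sketch (not the EvalVid implementation itself):

```python
import numpy as np

# PSNR between an original and a reconstructed 8-bit frame, the quality
# metric reported in the experiments.  Generic sketch, not EvalVid code.

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(288, 352), dtype=np.uint8)  # CIF frame
degraded = np.clip(frame.astype(int) + rng.integers(-5, 6, frame.shape), 0, 255)

assert psnr(frame, frame) == float("inf")
assert 30.0 < psnr(frame, degraded) < 50.0
```

For a video sequence, EvalVid-style tools average such per-frame values over all decodable frames; frames lost to packet drops pull the average down, which is the effect seen in Figures 3 to 5.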
However, by applying the event-based transmission, the PSNR increases, as video is sent only when an event is detected. Simulating the events results in a 75% traffic decrease, which gives the video transmission sufficient bandwidth. Figure 5 compares high-bit-rate video sent by the conventional method and by the proposed method. Overall, the video performance (PSNR) increases significantly compared to the conventional method, by 19.6% on average. The video transmitted using event-based transmission even has better quality than the low-resolution video: 38.99 dB versus 35.96 dB on average.

Conclusions

This paper has proposed event-based surveillance using WiMAX, where high-resolution video is used to maximize the monitoring benefit. Instead of sending video continuously to the server, video is sent only if a predetermined important event is detected; otherwise, it is saved locally. In this way the overall bandwidth requirement decreases and the quality of the transmitted video improves. Simulation using NS-2 shows that the high-resolution surveillance application with the proposed method performed better than the conventional method: the video quality (PSNR) increases by about 19.6%, with lower transmission delay. Future work may deal with techniques for the event trigger in the proposed method and with solutions to minimize its drawbacks.
BRST-Lagrangian Double Copy of Yang–Mills Theory Leron Borsten, ∗ Branislav Jurčo, † Hyungrok Kim, ‡ Tommaso Macrelli, § Christian Saemann, ¶ and Martin Wolf ∗∗ Maxwell Institute for Mathematical Sciences Department of Mathematics, Heriot–Watt University Edinburgh EH14 4AS, United Kingdom Charles University Prague Faculty of Mathematics and Physics, Mathematical Institute Prague 186 75, Czech Republic Department of Mathematics, University of Surrey Guildford GU2 7XH, United Kingdom (Dated: July 29, 2020) INTRODUCTION AND SUMMARY Yang-Mills scattering amplitudes have been conjectured to satisfy a color-kinematic (CK) duality [1][2][3]: each amplitude can be written as a sum over purely trivalent graphs such that the kinematical numerators satisfy the same antisymmetry/Jacobi identities as the color contributions. CK duality has been shown to hold at tree level [4][5][6][7][8][9][10][11][12]. If it holds, replacing the color contributions of a Yang-Mills amplitude with another copy of the kinematical contributions yields a gravity amplitude [3]. This is known as the double copy prescription. For reviews and references see [13][14][15]. We make the crucial observation that CK duality violations due to longitudinal gluon modes can be compensated by harmless field redefinitions of the Nakanishi-Lautrup (NL) field. The Ward identities of the BRST symmetry then allow us to transfer CK duality from gluon amplitudes to those involving ghosts. Finally, onshell tree-level CK duality on the BRST-extended field space turns out to suffice to show that the BRST-Lagrangian double copied theory provides the loop integrands of a consistent perturbative quantization of N = 0 supergravity. A longer paper explaining the origin of the double copy in terms of homotopy algebras and giving explicit expressions for many of the steps discussed only abstractly in the following is in preparation [51]. THE BRST-LAGRANGIAN DOUBLE COPY We start with an abstract perspective on the double copy. 
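The color side of CK duality, which the kinematic numerators must mirror, consists of the total antisymmetry of the structure constants f^{abc} and the Jacobi identity f^{abe} f^{ecd} + f^{bce} f^{ead} + f^{cae} f^{ebd} = 0. A small sketch verifying both relations for su(2), where f^{abc} is the Levi-Civita symbol (an illustrative choice, not tied to the paper's setup):

```python
import numpy as np
from itertools import product

# Relations the color factors satisfy, and which CK-dual kinematic
# numerators must copy: total antisymmetry and the Jacobi identity
#   f^{abe} f^{ecd} + f^{bce} f^{ead} + f^{cae} f^{ebd} = 0.
# Checked for su(2), where f^{abc} is the Levi-Civita symbol.

def levi_civita():
    f = np.zeros((3, 3, 3))
    for a, b, c in product(range(3), repeat=3):
        f[a, b, c] = (a - b) * (b - c) * (c - a) / 2
    return f

f = levi_civita()

# Total antisymmetry in the first index pair (and hence all pairs).
assert np.allclose(f, -np.swapaxes(f, 0, 1))

# Jacobi identity, contracted over the internal index e.
jac = (np.einsum("abe,ecd->abcd", f, f)
       + np.einsum("bce,ead->abcd", f, f)
       + np.einsum("cae,ebd->abcd", f, f))
assert np.allclose(jac, 0)
```

In the double copy prescription, amplitudes written over trivalent graphs with numerators obeying exactly these identities allow the color factors to be swapped for a second set of kinematic numerators.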
Any Lagrangian field theory is equivalent to a field theory with exclusively cubic interaction terms, obtained by blowing up higher-order vertices using auxiliary fields, cf. also [52,53]. A generic such action is

S = ½ g_{IJ} Φ^I Φ^J + (1/3!) f_{IJK} Φ^I Φ^J Φ^K, (1)

where the fields Φ^I are elements of some field space F and the DeWitt index I encodes all field labels (including position x). Summation and space-time integration over repeated indices are understood. We are interested in theories invariant under a gauge symmetry described by a BRST operator Q. Applying the strictification procedure also to the Batalin-Vilkovisky (BV) action before gauge fixing, it is not hard to see that one can always reduce the gauge transformations of all fields to be at most cubic in the fields. We further require that fields split into left and right components (with independent left and right ghost numbers), but over a common space-time point. Consequently, we write a DeWitt index I as (α, ᾱ, x) and assume that g_{IJ} and f_{IJK} factorize accordingly, with g_{αβ} and ḡ_{ᾱβ̄} graded (with respect to the ghost numbers) symmetric, and f^{δA}_{βγ}, etc., differential operators with constant coefficients. The indices A and Ā range over the summands in f_{IJK}. To simplify notation, and suppressing the position dependence, the Lagrangian of the theory takes the form (4), where we use the shorthand f_{αβγ} f̄_{ᾱβ̄γ̄} for the evident expression in (3c). Analogously, we want the BRST operator to act on left and right indices separately. Splitting the BRST operator Q into Q_L + Q_R, we require that Q_L acts only on the left sector, where f̄^ᾱ_{β̄γ̄δ̄} = 3 f̄^ᾱ_{ε̄δ̄} f̄^ε̄_{β̄γ̄}, and similarly for Q_R Φ. To double copy means to replace the left (or right) sector with a copy of the right (or left) sector of some, not necessarily the same, theory written in the form (4). If the resulting action S and BRST operator Q satisfy again the relations Q² = 0 and QS = 0, we obtain a consistently gauge-fixed theory ready for quantization. It is not hard to see that Q²_{L/R} = 0 holds for the double copy iff it holds for the original theories; the mixed condition Q_L Q_R + Q_R Q_L = 0 may induce further conditions.
For Yang-Mills theory, one readily computes that CK duality suffices for the condition QS = 0. If CK duality fails to hold up to certain terms, then QS = 0 also fails to hold up to the same terms, possibly multiplied by other fields and their derivatives. (Mathematically, the terms describing the failure of CK duality generate an ideal in the algebra of fields and their derivatives. The expressions QS and Q² take values in this ideal.)

PRELIMINARY OBSERVATIONS

We start with some general, field-theoretic observations. We are interested in perturbative aspects and omit any non-perturbative issues. Also, we are interested in n-point amplitudes up to ℓ loops for n and ℓ finite. Thus, there is always a number N ∈ ℕ such that monomials of degree m > N can be neglected in the Lagrangian. We always use the term "amplitude" for onshell states and the term "correlator" for offshell states.

Observation 1. If two field theories have the same tree amplitudes, then the minimal models of their L∞-algebras coincide, cf. [52,53]. If they have the same field content and kinetic parts, then they are related by a local (invertible) field redefinition.

Observation 2. Two field theories are quantum equivalent if all their correlators agree. Since correlators can be glued together from tree-level correlators (up to regularization issues), it suffices if the latter agree.

Observation 3. A shift of a field by products of fields and their derivatives which do not involve the field itself does not change the path integral measure. Local field redefinitions that are trivial at linear order produce a Jacobian that is regulated to unity in dimensional regularization [54], see also [55]. Therefore, they preserve quantum equivalence.

We now turn to the BRST symmetry of Yang-Mills theory, starting from the BV form [56] of the Yang-Mills Lagrangian on Minkowski space, using canonical notation for all fields, with g the Yang-Mills coupling constant.
We use the gauge fixing fermion Ψ : where ψ a is of ghost number 0 and depends at least quadratically on the fields and their derivatives. We obtain the gauge-fixed Lagrangian For ψ a = 0, we recover the R ξ -gauges. The BRST transformations are satisfying Q 2 YM = 0 offshell. The non-physical fields enlarge the one-particle field space of asymptotic onshell states by four types of states: the two unphysical polarizations of the gluon, called forward and backward and denoted by A ↑ and A ↓ , and the ghost and antighost states [36]. All amplitudes will be built from this BRST-extended onshell n-particle field space, which carries an action of the linearization of (9) denoted by Q lin YM . The physical polarizations are singlets, Q lin YM A phys = 0, and we have two more doublets: where the ellipsis indicates terms that arise from Ψ 1 . Performing shifts b a → b a + X a and Ψ 1 → Ψ 1 + Ξ 1 with Ξ 1 := d d xc a Y a induces a shift of (8) by If X a is independent of the NL field b a , this modification preserves the theory at the quantum level by Observation 3. Furthermore, if X a is at least quadratic in the fields, this transformation preserves the action of Q lin YM on the BRST-extended onshell field space. Consider now the special case ψ a = 0 and X a independent of b a and fix Y a iteratively such that the linear terms in b a of (11) vanish: This leads to the following observation: Observation 4. Terms in the Lagrangian of the form (∂ µ A µ ) a X a with X a at least quadratic in the fields and their derivatives but independent of the NL field can be removed in R ξ -gauges by shifting the NL field. This creates additional terms (11) which are at least of fourth order and preserve the amplitudes by Observation 3. Observation 5. Terms in the action that are proportional to a NL field can be absorbed by choosing a suitable term ψ a . This leaves the physical sector invariant but it may modify the ghost sector. 
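For reference, the standard R_ξ-type gauge-fixing fermion and BRST transformations consistent with the description above are (our reconstruction in common conventions, shown for ψ_a = 0):

```latex
\Psi \;=\; \int \mathrm{d}^{d}x\; \bar c_{a}\Big(\partial^{\mu}A^{a}_{\mu}
          + \tfrac{\xi}{2}\,b^{a} + \psi^{a}\Big),
\qquad
L_{\mathrm{gf}}\big|_{\psi=0} \;=\; -\tfrac14 F^{a}_{\mu\nu}F^{a\,\mu\nu}
  + b_{a}\,\partial^{\mu}A^{a}_{\mu} + \tfrac{\xi}{2}\,b_{a}b^{a}
  - \bar c_{a}\,\partial^{\mu}(\nabla_{\mu}c)^{a},
```

```latex
Q_{\mathrm{YM}} A^{a}_{\mu} = (\nabla_{\mu}c)^{a},\quad
Q_{\mathrm{YM}} c^{a} = -\tfrac{g}{2}\, f^{abc}\, c^{b} c^{c},\quad
Q_{\mathrm{YM}} \bar c^{\,a} = b^{a},\quad
Q_{\mathrm{YM}} b^{a} = 0,
```

which indeed satisfies Q²_YM = 0 offshell, as stated in the text.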
Because NL fields appear as trivial pairs in the BV action, it is not hard to see that this extends to general gauge theories, e.g. with several NL fields and ghosts-for-ghosts. Observation 6. The set of connected correlation functions is BRST-invariant because the connected correlation functions can be written as linear combinations of products of correlation functions. Crucial to our discussion are Ward identities. Consider first the supersymmetric onshell Ward identities (see e.g. [57,58]) for the supersymmetry generated by the BRST operator. Since the free vacuum is invariant under the action of Q^lin_YM, we have the following onshell Ward identities: We now apply the onshell Ward identity to O_1 ⋯ O_n = A^↑ c̄ (c c̄)^k A_phys^{n−2k−2} and obtain ⟨0|(c c̄)^{k+1} A_phys^{n−2k−2}|0⟩ ∼ ⟨0|A^↑ (c c̄)^k b A_phys^{n−2k−2}|0⟩. (14) Thus: Observation 7. Any amplitude with k + 1 ghost-antighost pairs and all gluons transversely polarized is given by a sum of amplitudes with k ghost pairs. From the construction of amplitudes via Feynman diagrams, it follows that we also have the following onshell Ward identity for an approximate BRST symmetry. Observation 8. Suppose that QS = 0 and Q² = 0 only onshell. Then we still have (13), together with a corresponding identification of amplitudes with k + 1 ghost-antighost pairs and all gluons transversely polarized with a sum of amplitudes with k ghost pairs. We shall also need the offshell Ward identities for the BRST symmetry, where j^μ is the BRST current. The left-hand side vanishes after integration over x, and using Observation 6, we can restrict to connected correlators at a particular order in the coupling constant g and then further to lowest order in ℏ, i.e. to tree level. Consider now operators O_i(x_i) for those restricted Ward identities which are linear in the fields. Observation 9. The onshell relations between tree amplitudes from Observation 7 induced by (13) extend to (offshell) tree-level connected correlators.
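The onshell Ward identities labelled (13) were lost in extraction; since the free vacuum is invariant under Q^lin_YM, they take the standard form (signs fixed by the grading; our reconstruction):

```latex
0 \;=\; \big\langle 0 \big|\; Q^{\mathrm{lin}}_{\mathrm{YM}}
        \big( O_{1}\cdots O_{n} \big) \;\big| 0 \big\rangle
  \;=\; \sum_{i=1}^{n} \pm\,
        \big\langle 0 \big|\; O_{1}\cdots
        \big(Q^{\mathrm{lin}}_{\mathrm{YM}} O_{i}\big)\cdots O_{n}
        \;\big| 0 \big\rangle .
```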
For example, We also make the following three observations regarding the double copy. Observation 11. For amplitudes in CK-dual form, there is a corresponding Lagrangian whose partial amplitudes produce the kinematical numerators [46]. Observation 12. Double copying the Yang-Mills tree amplitudes in CK-dual form yields the tree amplitudes of N = 0 supergravity [1][2][3]. CK-DUAL YANG-MILLS THEORY In order to BRST-Lagrangian double copy Yang-Mills theory, we first must bring its action into the normalized form (4). Our goal will be to construct abstractly a Lagrangian which guarantees tree-level CK duality for the BRST-extended onshell field space. CK duality of the Feynman diagrams for the field space of physical gluons can be guaranteed by adding terms to the Lagrangian [3,46] and subsequently strictifying these, i.e. introducing a set of auxiliary fields such that all interaction vertices are cubic. This strictification is mostly determined by the color and momentum structure of the additional terms in the Lagrangian. It remains to ensure CK duality for tree amplitudes involving ghosts or backward polarized gluon states, which we do by introducing compensating terms, preserving quantum equivalence. (Forward polarized gluons can be absorbed by residual gauge transformations and therefore do not appear in the Lagrangian. Thus, they cannot contribute to CK duality violations.) We implement the necessary changes iteratively for npoint amplitudes, starting with n = 4, and within each n iteratively for the number k of ghost-antighost pairs. We start at n = 4, k = 0. First, we compensate for CK duality violations due to backward polarized gluons, which can be done by introducing terms of the form (∂ µ A µ ) a X a . By Observation 4, we can produce such terms, preserving quantum equivalence. 
This shift also produces terms − ξ 2 (X a ) 2 , which does not affect CK duality of higher n-point amplitudes since it preserves the gluon amplitudes (and thus their strictification). We then increase k by 1 and consider connected treelevel correlators of the form ccA 2 phys . Each of these correlators is determined by 4-gluon correlators with a forward-backward gluon pair and a pair of physical gluons by Observation 9. We use the strictification of the 4-gluon amplitudes to derive a CK-dual description of the amplitude with one ghost-antighost pair. Using Observation 11, we then construct new ghost terms in the Lagrangian, manifestly preserving tree-level correlators and thus quantum equivalence, cf. Observation 2. Finally, we again compensate for CK duality violations in amplitudes due to backward polarized gluons in the same manner as for the 4-gluon amplitudes. The iteration should then be evident: for each n, iterate over the possible numbers k of ghost-antighost pairs, and create new ghost terms with subsequent compensation for contributions of backward polarized gluons. Once completed, set k = 0 and increase n by one. Perform the compensation for contributions of backward polarized gluons to (n + 1)-point gluon amplitudes; then start increasing the ghost number again. We iterate this prescription until we reach the highest point tree-level correlator that can contribute to the loop order in which we are interested. The resulting Lagrangian L CK YM is of the form (4) and quantum equivalent to the Lagrangian L YM given in (8). THE BRST-LAGRANGIAN DOUBLE COPY OF YANG-MILLS THEORY We now turn to the N = 0 supergravity side. The gauge-fixed BRST Lagrangian L N =0 of this theory is readily constructed. 
The following two diagrams concisely summarize the theory's field content, describing the symmetrized and antisymmetrized tensor products of Yang-Mills fields: Here, the physical fields of ghost number 0 are h µν (the metric perturbation about the Minkowski vacuum), B µν (the Kalb-Ramond two-form), and depending on frame, π or δ (the dilaton). Ghost number increases by column from left to right, and all vector/form indices are made explicit. Many fields come with a triad of ghost, antighost, and NL fields as indicated counterclockwise around the field by arrows. In addition to the expected BRST field content, we have two trivial BV pairs (δ, β) and (β, π) due to the presence of the dilaton. See [51] for more details, as well as [37][38][39][40]43]. The double copy of Q YM and L CK YM yields a BRST operator Q which satisfies Q 2 = 0 onshell and a Lagrangian L for the field content (17). The latter is quantum equivalent to N = 0 supergravity, as we now argue. (i) Kinematic equivalence: The two kinematic Lagrangians are equivalent and linked by evident suitable field redefinitions [51]. The existence of such a field redefinition is ensured by the linear double copy BRST operator Q lin [39,43], which is equivalent to the linear BRST operator Q lin N =0 of N = 0 supergravity and annihilates the quadratic double copy Lagrangian [51]. We implement the field redefinition on L N =0 , obtaining L ′ N =0 . (ii) Equivalence of physical Lagrangian: Since the classical Yang-Mills action was written in a form with purely cubic, local interactions with manifest CK duality to all points, the tree amplitudes of L for physical fields match those of L N =0 , cf. Observation 12. The difference between L and L N =0 after integrating out all auxiliary fields and putting all unphysical fields to zero therefore consists of interaction terms proportional to Φ, for Φ a physical field. 
This difference can be absorbed in a local field redefinition (which can be shown not to involve derivatives), preserving quantum equivalence by Observation 3. Thus, the two theories have the same tree-level correlators for physical fields. We implement the field redefinition on L ′ N =0 , obtaining L ′′ N =0 . (iii) Gauge fixing sector: The difference between L, after integrating out all auxiliary fields, and L ′′ N =0 proportional to any of the NL fields (β,β, ̟ µ , π, γ, α µγ ) can be absorbed in a choice of gauge fixing which will only create new terms in the ghost sector, cf. Observation 5. We implement this new gauge fixing, and take over the strictification from L, obtaining L ′′′ N =0 together with Q ′′′ N =0 . (iv) Ghost sector: We now proceed in the same way as for Yang-Mills theory: reconstruct a ghost sector via the offshell Ward identities of Observation 9, leading to a Lagrangian L CK N =0 . Since the tree-level correlators are preserved by definition, the strictified Lagrangian L CK N =0 is quantum equivalent to L ′′′ N =0 , and BRST symmetry is preserved (with induced BRST action on auxiliary fields arising from strictification). Both L and L CK N =0 are local and have the same field content. The tree-level correlators involving physical and NL fields agree. Using the approximate Ward identities, cf. Observation 8, and the fact that Q lin and Q CK,lin N =0 agree, we deduce that all tree amplitudes involving ghost-antighost pairs agree, too. By construction, this agreement extends to individual onshell Feynman diagrams, between the strictifications L and L CK N =0 , even for auxiliary fields: we can iteratively split off external vertices with two external legs, exposing Feynman diagrams with onshell external but offshell auxiliary fields. Up to a field redefinition of the auxiliaries, these also must agree. The only potential remaining difference between L and L CK N =0 is then interaction terms containing Γ and Γ terms for Γ a ghost field. 
Going through the construction, one can argue that such terms, if they are there, have to appear in the same way in L and L CK N =0 . Alternatively, one can show that both theories satisfy the same Ward identities for tree-level correlators, rendering them quantum equivalent by Observation 2. The simplest argument, however, is to use Observation 1 to note that both theories are related by a local field redefinition. Observation 3 then implies that both theories are quantum equivalent. Data Management. No additional research data beyond the data presented and cited in this work are needed to validate the research findings in this work.
Increasing stability for the inverse source scattering problem with multi-frequencies Consider the scattering of the two- or three-dimensional Helmholtz equation where the source of the electric current density is assumed to be compactly supported in a ball. This paper concerns the stability analysis of the inverse source scattering problem, which is to reconstruct the source function. Our results show that increasing stability can be obtained for the inverse problem by using only the Dirichlet boundary data with multi-frequencies. 1. Introduction and problem statement. In this paper, we consider the following Helmholtz equation: (1) Δu(x) + κ²u(x) = f(x), x ∈ ℝ^d, where d = 2 or 3, the wavenumber κ > 0 is a constant, u is the radiated wave field, and f is the source of the electric current density, which is assumed to have a compact support contained in B_r = {x ∈ ℝ^d : |x| < r}, where r > 0 is a constant. Let R > r be a constant such that supp f ⊂ B_r ⊂ B_R. Let ∂B_R be the boundary of B_R. The problem geometry is shown in Figure 1. The usual Sommerfeld radiation condition is imposed to ensure the uniqueness of the wave field: (2) lim_{r→∞} r^{(d−1)/2} (∂_r u − iκu) = 0 uniformly in all directions x̂ = x/|x|. It is known that the scattering problem (1)-(2) has a unique solution. Similarly, we introduce the DtN operator T. Here H_n^{(1)} is the Hankel function of the first kind with order n, h_n^{(1)} is the spherical Hankel function of the first kind with order n, Y_n^m are the spherical harmonics of order n, and the bar denotes the complex conjugate. Using the DtN operator, we can reformulate the Sommerfeld radiation condition into a transparent boundary condition (TBC): ∂_ν u = T u on ∂B_R, where ν is the unit outward normal vector on ∂B_R. Hence the Neumann data ∂_ν u on ∂B_R can be obtained once the Dirichlet data u is available on ∂B_R. Remark 1.
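The explicit solution formula for (1)-(2) dropped out in extraction; with the outgoing fundamental solution of the Helmholtz equation (our reconstruction, sign convention ΔΦ_κ + κ²Φ_κ = −δ), the standard expression is:

```latex
u(x) \;=\; -\int_{\mathbb{R}^{d}} \Phi_{\kappa}(x,y)\, f(y)\, \mathrm{d}y,
\qquad
\Phi_{\kappa}(x,y) \;=\;
\begin{cases}
\dfrac{\mathrm{i}}{4}\, H^{(1)}_{0}\!\big(\kappa |x-y|\big), & d = 2,\\[2ex]
\dfrac{e^{\mathrm{i}\kappa |x-y|}}{4\pi |x-y|}, & d = 3.
\end{cases}
```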
Consider the following well-posed exterior problem: The DtN operator is based on solving the above problem analytically in polar (d = 2) or spherical (d = 3) coordinates and then taking the normal derivative of the solution. Now we are in a position to discuss our inverse source problem: IP. Let f be a complex function with a compact support contained in B_R. The inverse problem is to determine f by using the boundary data u(x, κ)|_{∂B_R} with κ ∈ (0, K), where K > 1 is a positive constant. The inverse source problem has significant applications in many scientific areas, such as antenna synthesis [2], medical and biomedical imaging [11], and various tomography problems [1,16]. Another important example of the inverse problem is the recovery of acoustic sources from boundary measurements of the pressure. (Figure 1. Problem geometry of the inverse source scattering problem.) In this paper, we study the stability of the above inverse problem. As is known, the inverse source problem does not have a unique solution at a single frequency [8,10,12]. Our goal is to establish increasing stability of the inverse problem with multi-frequencies. We refer to [4,7,15] for increasing stability analysis of the inverse source scattering problem. In [7], the authors discussed increasing stability of the inverse source problem for the three-dimensional Helmholtz equation in a general domain Ω by using the Huygens principle. The observation data were u(x, κ)|_{∂Ω} and ∇u(x, κ)|_{∂Ω}, κ ∈ (0, K). In [4], the authors studied the stability of the two- and three-dimensional Helmholtz equations via Green's functions. We point out that the stabilities in [4] are different from the stability in this paper, where only the Dirichlet data is required. An initial attempt was made in [15] to study the stability of an inverse random source scattering problem for the one-dimensional Helmholtz equation.
Related results can be found in [13,14] on increasing stability of determining potentials and in the unique continuation for the Helmholtz equation. We refer to [9,5,6] for a uniqueness result and numerical study of the inverse source scattering problem. A topical review of some general inverse scattering problems with multi-frequencies can be found in [3]. We point out that the approach can be used to deal with geometries other than the circular domain. For example, a DtN map can also be obtained via the boundary integral equation relating the Neumann data to the Dirichlet data on any smooth curve which encloses the compact support of the source. The rest of the paper is organized as follows. The main result is presented in section 2. Section 3 is devoted to the proof of the result. The paper is concluded in section 4 with general remarks and possible future work. 2. Main result. Define a complex-valued functional space: Throughout the paper, a ≲ b stands for a ≤ Cb, where C > 0 is a constant independent of n, κ, K, M. Now we introduce the main stability result of this paper. Remark 2. First, it is clear that the stability estimate (4) implies the uniqueness of the inverse problem, i.e., f = 0 if the data discrepancy vanishes. Second, there are two parts in the stability estimate: the first part is the data discrepancy and the second part is the high-frequency tail of the source function. Clearly, the stability increases as K increases, i.e., the problem is more stable as data with more frequencies are used. We can also see that when n < K 2 9 | ln | 1 12 2 , the stability increases as n increases, i.e., the problem is more stable when the source function has suitably higher regularity. Remark 3. The idea was first proposed in [7] by separating the stability into the data discrepancy and the high-frequency tail, where the latter was estimated by the unique continuation for the three-dimensional inverse source scattering problem.
Our stability result in this work is consistent with the one in [7] for both the twoand three-dimensional inverse scattering problems. 3. Proof of Theorem 2.1. First we present several useful lemmas. . Multiplying e −iξ·x on both sides of (1) and integrating over B R , we obtain Hence, When d = 2, we obtain by using the polar coordinates that It follows from the Plancherel theorem that When d = 3, we obtain by using the spherical coordinates that It follows from the Plancherel theorem again that which completes the proof. For d = 2, let For d = 3, let Denote The integrands in (6)-(9) are analytic functions of κ in S. The integrals with respect to κ can be taken over any path joining points 0 and s in S. Thus I 1 (s) and I 2 (s) are analytic functions of s = s 1 + is 2 ∈ S, s 1 , s 2 ∈ R. For any s = s 1 + is 2 ∈ S, we have: 2. for d = 3, Proof. First we consider d = 3. Let κ = st, t ∈ (0, 1). It follows from the change of variables that Noting we have from the Cauchy-Schwarz inequality that where B 2R (x) is the ball with a radius 2R and center at x. Using the spherical coordinates (ρ, θ, ϕ) with respect to y where ρ = |x − y|, we get which shows (12). Next we prove (13). Let κ = st, t ∈ (0, 1). It follows from the change of variables again that we have from the integration by parts that where we have used the Cauchy-Schwarz inequality, the fact that e ist|x−y| ≤ e 2R|s2| for all x ∈ ∂B R , y ∈ B R , and the change of the Cartesian coordinates to the spherical coordinates. Second we consider d = 2. Letting κ = st, t ∈ (0, 1), we have from the change of variables that The Hankel function can be expressed by the following integral when Rez > 0 (e.g., [17], Chapter VI): Consequently, . Hence we have from the Cauchy-Schwarz inequality that Using the polar coordinates (ρ, θ) with respect to y with ρ = |x − y| and the fact that |s| for all s ∈ S, we obtain , which shows (10). 
, we can prove (11) in a similar way by taking the integration by parts, which completes the proof. Proof. It is easy to note that We estimate L 1 and L 2 , respectively. First we consider d = 3. Using (3) yields Using the spherical coordinates ρ = |x − y| originated at x with respect to y, we have Using the integration by parts and noting suppf ⊂ B r ⊂ B R , we obtain Consequently, Changing back to the Cartesian coordinates with respect to y, we have where we have used the fact that n ≥ d = 3. Next we estimate L 2 for d = 3. It follows from (3) again that Noting that ∇ y Following a similar argument as that for the proof of (15), we obtain Combining (15)-(16) and noting s > 1, we obtain (14) for d = 3. Second we consider d = 2. Similarly we have The Hankel function can also be expressed by the following integral when t > 0 (e.g., [17], Chapter VI): Using the polar coordinates ρ = |y − x| originated at x with respect to y, we have It is clear to note that H 0 (t) = H Using the integration by parts and noting suppf ⊂ B r ⊂ B R , we obtain Consequently, we have Noting (17), we see that there exists a constant C > 0 such that |H n (κρ)| ≤ C for n ≥ 1. Hence, Changing back to the Cartesian coordinates with respect to y, we have Inverse Problems and Imaging Volume 11, No. 4 (2017), 745-759 Next we estimate L 2 for d = 2. A simple calculation yields Noting that ∇ y H 0 (k|x − y|) and suppf ⊂ B r ⊂ B R , we have from the integration by parts that Following a similar argument as the proof of (18), we can obtain Combining (18) and (19) completes the proof of (14) for d = 2.
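The key identity at the start of the proof, obtained by multiplying (1) by e^{−iξ·x} and integrating by parts twice over B_R, is garbled above; reconstructed, it reads:

```latex
\int_{B_R} f(x)\, e^{-\mathrm{i}\xi\cdot x}\,\mathrm{d}x
\;=\;
\int_{\partial B_R} \big( \partial_{\nu}u + \mathrm{i}\,\xi\cdot\nu\; u \big)\,
   e^{-\mathrm{i}\xi\cdot x}\,\mathrm{d}s(x)
\;+\;
\big( \kappa^{2} - |\xi|^{2} \big)
\int_{B_R} u(x)\, e^{-\mathrm{i}\xi\cdot x}\,\mathrm{d}x .
```

On the sphere |ξ| = κ the volume term drops out, so the Fourier transform of f is determined there by the Cauchy data (u, ∂_ν u) on ∂B_R, which is what the polar/spherical-coordinate computations and the Plancherel theorem then exploit.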
Bursty emission of whistler waves in association with plasmoid collision A new mechanism to generate whistler waves in the course of collisionless magnetic reconnection is proposed. It is found that intense whistler emissions occur in association with plasmoid collisions. The key processes are strong perpendicular heating of the electrons through a secondary magnetic reconnection during plasmoid collision and the subsequent compression of the ambient magnetic field, leading to whistler instability due to the electron temperature anisotropy. The emissions have a bursty nature, completing in a short time within the ion timescales, as has often been observed in the Earth’s magnetosphere. The whistler waves can accelerate the electrons in the parallel direction, contributing to the generation of high-energy electrons. The present study suggests that the bursty emission of whistler waves could be an indicator of plasmoid collisions and the associated particle energization during collisionless magnetic reconnection. 
Introduction Whistler waves are fundamental plasma waves frequently observed in space in association with transient phenomena, such as collisionless shocks (e.g., Olson et al., 1969; Rodriguez and Gurnett, 1975; Lengyel-Frey et al., 1996; Zhang et al., 1999; Hull et al., 2012) and magnetic reconnection (e.g., Deng and Matsumoto, 2001; Wei et al., 2007; Tang et al., 2013; Graham et al., 2016; Huang et al., 2016; Zhao et al., 2016; Uchino et al., 2017). The waves have a right-hand polarization with respect to the ambient magnetic field, so they can couple with the electrons through the cyclotron resonance and give rise to pitch-angle scattering and parallel acceleration (e.g., Kennel and Petschek, 1966; Gary and Wang, 1996; Schreiner et al., 2017). Such microscopic wave-particle interactions can cause anomalous transport in momentum and energy, resulting in anomalous magnetic dissipation in collisionless plasma, where classical Coulomb collisions are negligibly weak. Despite the potential importance of whistler waves in collisionless plasma, their generation mechanism in transient configurations has been poorly understood.
Satellite observations have shown that whistler emissions usually have bursty properties continuing for a very short time compared to the dynamical timescales that are usually much longer than the ion timescales, such as the Alfvén transit time across the boundary layers.There are two explanations for these observations: the first is that the emissions are very localized in space, and the second is that they are intermittent within a short timescale.So far, a number of kinetic simulations have been carried out to investigate how whistler waves can be generated and what their roles are in the dynamical processes that control macroscopic configurations (e.g., Hellinger et al., 1996;Scholer et al., 2003;Fujimoto and Sydora, 2008;Fujimoto, 2014;Goldman et al., 2014;Burgess and Scholer, 2015).In particular, collisionless magnetic reconnection enables an explosive release of magnetic field energy into plasma kinetic energy.The associated plasma jets and pressure anisotropy can boost the activities of a variety of plasma waves, including whistler waves, which can in turn have an impact on the reconnection process through magnetic dissipation and plasma heating (Fujimoto et al., 2011). Published by Copernicus Publications on behalf of the European Geosciences Union. 
Previous kinetic simulations of collisionless reconnection have suggested that whistler waves are generated in the magnetic field pileup region downstream of the x-line (Fujimoto and Sydora, 2008) and in the separatrix region separating the inflow and outflow regions (Fujimoto, 2014;Goldman et al., 2014).In the pileup region, the electrons are heated preferentially in the perpendicular direction due to betatron acceleration and have a temperature anisotropy leading to the whistler emission.Meanwhile, in the separatrix region, the electrons are strongly accelerated in the parallel direction due to a double layer formed locally in the separatrix region.The intense electron beam can be coupled with the stationary background electrons, which directly (Fujimoto, 2014) or indirectly (Goldman et al., 2014) triggers obliquely propagating whistlers.In both regions, the whistler emissions occur within a very narrow extent across the field lines, which explains the bursty nature in the observations.However, these models were based on a steady-state x-line configuration formed around the dissipation region relatively at the beginning of the reconnection process. 
Recent large-scale and long-term particle-in-cell (PIC) simulations have shown the dynamical evolution of the reconnection current layer, repeating the elongation in the downstream direction (Fujimoto, 2006) and the formation of plasmoids (Daughton and Karimabadi, 2007).In fact, a chain of plasmoids and flux ropes has often been observed in association with reconnection and accompanied by energetic particles and intense wave activities, including whistlers (Chen et al., 2008;Wang et al., 2016;Zhao et al., 2016).In this paper, a new generation mechanism for whistler waves is proposed in association with plasmoid collision through the use of large-scale PIC simulation.Contrary to previous scenarios in the magnetic field pileup region and separatrix region, the present model provides a short-term emission only after the plasmoid collision has been completed.The resultant whistler emission is bursty and has characteristics different from other emissions, so it can work as an indicator of plasmoid and flux rope collisions. Simulation model The simulations are carried out through the use of a twodimensional (2-D) electromagnetic PIC model with the adaptive mesh refinement (AMR) and particle splittingcoalescence method (Fujimoto, 2011).The refinement criteria for the AMR are provided by the local electron Debye length λ De and the out-of-plane electron flow velocity V ey defined at the center of each computational cell.Each cell is subdivided when L ≥ 2.0λ De or V ey ≥ 2.0V A is satisfied; otherwise the child cells are removed when the computational cells are squares with a size of L , and V A is the Alfvén velocity based on the asymptotic upstream magnetic field B 0 and the plasma density n 0 at the center of the initial current sheet.The system boundaries are "open" in both the inflow and outflow directions (Fujimoto, 2014). 
The initial current sheet profile employs a Harris-type equilibrium with the magnetic field B x (z) = −B 0 tanh(z/δ) and the number density n(z) = n 0 sech 2 (z/δ)+n b tanh 2 (z/δ), where δ is the half width of the current sheet and the subscript b indicates the background parameter.We choose δ = λ i and n b = 0.044n 0 with λ i as the ion inertia length based on n 0 .Note that the background plasma is removed in the center of the current sheet in order to avoid artificial streaming instabilities (e.g., a two-stream instability between the plasma sheet and background components).Although such a density profile causes a weak pressure imbalance, the equilibrium is quickly established without any significant modification to the current sheet structure.The ion-to-electron mass ratio and velocity of light are m i /m e = 100 and c/V A = 33, respectively, corresponding to ω pe /ω ce = 3.3, where ω pe and ω ce are the electron plasma frequency and the cyclotron frequency based on n 0 and B 0 , respectively.The temperature ratios are T 0i /T 0e = 3.0, T bi /T be = 1.0, and T be /T 0e = 1.0 so that the background ions are colder than the plasma sheet ions.The system size is which is entirely covered by base-level cells (coarsest cells) with L b = 0.08λ i and can be subdivided locally into finer cells up to the dynamic range level with L D = 0.02λ i .Therefore, the highest spatial resolution is 32 768 × 16 384 (the effective number of the finest cells when they cover the whole domain) and the maximum number of particles used is ∼ 10 10 for each species.The normalization parameters in the current study are m i for mass, e for charge, λ i for length, and V A for velocity, unless otherwise noted. 
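The quoted plasma parameters are internally consistent; a quick check in the paper's normalized units (nothing assumed beyond the quoted numbers; variable names are ours) is:

```python
import math

# Quoted simulation parameters
mass_ratio = 100.0   # m_i / m_e
c_over_va = 33.0     # c / V_A

# omega_pe / omega_ce = (c / V_A) * sqrt(m_e / m_i)
wpe_over_wce = c_over_va * math.sqrt(1.0 / mass_ratio)
print(wpe_over_wce)  # 3.3, matching the quoted value

# Electron inertia length in units of lambda_i
lambda_e = math.sqrt(1.0 / mass_ratio)
print(lambda_e)      # 0.1 lambda_i
```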
Results The present study uses a small magnetic perturbation in order to initiate magnetic reconnection so that the x-line is generated at the center of the system (Fujimoto, 2006).Once the simulation has started, a fast reconnection is soon achieved with a rate of E R ≈ 0.12, where E R is evaluated by E y at the most dominant x-line normalized to the upstream values.In the quasi-steady phase, the electron current layer formed around the x-line is elongated in the outflow direction and is subject to plasmoid formations.By repeating the electron layer elongation and plasmoid ejection, the reconnection exhaust expands to a distance far downstream of the x-line.Figure 1a-c show the evolution of a plasmoid indicated by the black arrow in the exhaust.This plasmoid has been ejected in the −x direction and eventually merged with the initial current sheet at the downstream edge of the exhaust.In association with the collision with the downstream current sheet, a secondary reconnection takes place between the field lines directing downward (−z) in the plasmoid and upward (+z) in the downstream current sheet.During the secondary reconnection, the electrons are strongly accelerated in the vicinity of the x-line in the out-of-plane (−y) direction (see Fig. 1a and b).Note that this direction is opposite to that in the main reconnection layer. The localized structure of the secondary reconnection region is shown in Fig. 1d-f.The reconnection outflow jets associated with the plasmoid collision are ejected mainly in the z direction (Fig. 1d).Because the plasmoid scale is comparable to the local ion gyroradius, the electrons are more effectively accelerated than the ions.The ejected electrons are partially magnetized in the upper (+z) and lower (−z) regions of the secondary x-line, so the electrons are selectively heated in the perpendicular direction, resulting in T e⊥ /T e > 1 (Fig. 
1e).The perpendicular heating is also significant in the pileup region further downstream of the exhaust, as described in Fujimoto and Sydora (2008).In Fig. 1g-i, the localized structure is presented after the secondary reconnection has ended.Since reconnection is over, the outflow jets do not exist (Fig. 1g).However, the intense temperature anisotropy of the electrons with T e⊥ /T e > 1 still remains off the Equator around the downstream edge of the exhaust (Fig. 1h).This is because the perpendicularly heated electrons, due to the reconnection process, move further downstream almost together with the field lines.At this time, an intense emission of microscopic waves with the wavelength λ ≈ 7λ * e occurs off the Equator plane, where λ * e is the local electron inertia length.The waves typically appear in E y , the out-of-plane electric field (Fig. 1i), which indicates that they are dominated by the electromagnetic properties.The emission is also evident in Movie S1 in the Supplement.It is interesting to note that the waves are not clearly excited during the secondary reconnection (Fig. 1f), regardless of the strong electron acceleration in the vicinity of the x-line. Figure 2 shows the wave properties arising in E y .One can see from Fig. 2a that the microscopic waves are excited at l/λ i ≈ ±10 and propagate along the field line away from the Equator point of l = 0. Here, l represents the coordinate along a field line traced from (x, z) = (0, −18.5), which passes through the wave active region.Since the starting point of the field line is far away from the downstream edge of the exhaust (x ≈ 150 at t = 114), the field line at each time step is considered to be almost identical.The examples of the field line at tω ci = 106 and 114 are indicated with dashed curves in Fig. 1f and i, respectively.It is found that the wave emission is bursty: the emission of microscopic waves with k λ * e > 0.5 is only significant around 112 < tω ci < 116 (Fig. 
2b), which is a much shorter time compared to the dynamical scale, for instance, the transit time of the plasmoid across the exhaust (∼30 ω_ci⁻¹ for an exhaust with a size of ∼100λ_i in the x direction).

To identify the generation mechanism of the waves, we compare the wave spectrum in the simulation with the dispersion relation derived from linear theory. The wave spectrum presented in Fig. 2c (color contour) shows a clear peak at ω ≈ 0.5ω*_ce and |k∥λ*_e| ≈ 0.9 for the electromagnetic waves, which is consistent with the typical characteristics of whistler waves, where ω*_ce is the local electron cyclotron frequency. The electric field data for calculating the wave spectrum are taken from the base-level cells of the AMR hierarchy. It is known that the wave dispersion depends on the grid size of the numerical simulation (Birdsall and Langdon, 1991), i.e., on the cell level of the AMR. However, this dependency is only remarkable for grid-scale waves with λ ≈ L_b. The dominant waves in the current analysis have a scale of λ ≈ 7λ*_e ≈ 23 L_b based on the local density n_e ≈ 0.15 n_0. This indicates that the waves of interest have a much larger scale than the base-level cell size, so the gridding effect on the wave dispersion is negligible.

The dispersion curve for the whistler waves excited by the temperature anisotropy is superposed on the simulation result in Fig. 2c. The wave dispersion is obtained from the linearized Vlasov equation for electrons with temperature anisotropy (Gary, 1993), yielding the dispersion relation Eq. (1), where β_e∥ is the plasma beta based on the electron parallel pressure. The parameters used for calculating the dispersion curve are T_e⊥/T_e∥ = 2.5, β_e∥ = 0.22, and a third parameter equal to 1.9, which are taken at the wave active region in the simulation (i.e., inside the black dashed box in Fig. 2a).
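As a rough cross-check of the reported spectral peak, one can use the textbook cold-plasma whistler dispersion for parallel propagation rather than the full kinetic relation of Gary (1993) used in the paper; this simplified sketch already places the peak near the observed frequency:

```python
def whistler_omega(k_de, wce=1.0):
    """Cold-plasma whistler dispersion for parallel propagation:
    omega = wce * k^2 d_e^2 / (1 + k^2 d_e^2), with d_e = c/omega_pe
    the electron inertia length. This neglects thermal/anisotropy
    corrections, so it is only an order-of-magnitude check."""
    return wce * k_de**2 / (1.0 + k_de**2)

# Peak reported in the simulation: |k d_e| ~ 0.9
w = whistler_omega(0.9)
print(f"omega/omega_ce at k*d_e = 0.9: {w:.2f}")  # ~0.45, close to the observed ~0.5
```

The cold-plasma estimate gives ω ≈ 0.45 ω_ce at kλ*_e ≈ 0.9, consistent with the peak at ω ≈ 0.5 ω*_ce seen in Fig. 2c.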
Figure 2c demonstrates that the simulation result is consistent with the linear dispersion, suggesting that the microscopic wave burst observed in the simulation is due to the whistler instability driven by the electron temperature anisotropy.

The temperature anisotropy is generated through the secondary reconnection in association with the plasmoid collision. As shown in Fig. 3a, the key process leading to the temperature anisotropy is the intense perpendicular heating in the downstream region of the electron jets (indicated by sky blue arrows). This process is similar to that in the previous study (Fujimoto and Sydora, 2008), in which whistler waves are excited in the pileup region formed downstream of the main reconnection x-line. However, in the current simulation, we find that the whistler emission is not clear at this time. This is partly because the pileup of the field lines is weak in the region downstream of the secondary x-line, so the magnetic field intensity remains low and insufficient for the high-energy electrons to be completely magnetized. Furthermore, the field line structure, including the out-of-plane Hall field (not shown), is highly nonuniform in this region on scales below the ion inertia length. Therefore, the dispersion relation Eq. (1), derived for magnetized electrons in a uniform magnetic field, is no longer valid in this region.
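The quoted local parameters also sit above the usual whistler anisotropy-instability threshold. A sketch using an empirical marginal-stability fit of the form T_⊥/T_∥ − 1 = S/β_∥^a (the fit constants S ≈ 0.36 and a ≈ 0.55 are assumed values of a Gary-type parametrization, not numbers taken from this paper):

```python
def anisotropy_threshold(beta_par, S=0.36, a=0.55):
    """Empirical marginal-stability condition for the whistler anisotropy
    instability: T_perp/T_par - 1 = S / beta_par**a.
    S and a are assumed fit constants, not values from the paper."""
    return 1.0 + S / beta_par ** a

A_sim = 2.5      # T_e_perp / T_e_par at the wave active region (from the text)
beta_par = 0.22  # electron parallel beta there (from the text)

A_th = anisotropy_threshold(beta_par)
print(f"threshold anisotropy ~ {A_th:.2f}")
assert A_sim > A_th  # the simulated plasma lies in the whistler-unstable regime
```

With these assumed fit constants the threshold anisotropy is roughly 1.8, well below the simulated value of 2.5, so the burst of whistler growth reported above is expected.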
After the plasmoid merging and the associated reconnection process have been completed, the electrons with temperature anisotropy move further downstream of the main reconnection jet. The temperature anisotropy is almost preserved during convection because the whistler emission that could relax the anisotropy is absent. The electrons finally reach the edge of the downstream current sheet and are compressed at the off-Equator regions (Fig.
3b) due to the pileup of the magnetic field. The whistler waves are triggered at this moment and propagate along the field line. Once the whistler waves are excited, the temperature anisotropy is quickly decreased due to pitch-angle scattering, and the wave emissions are suppressed. The timescale of the isotropization is estimated as tω*_pe ∼ 40 (Ossakow et al., 1972), which corresponds to tω_ci ∼ 0.3 for the local parameters (n = 0.15 n_0 and B = 0.67 B_0) in the current simulation. Therefore, the whistler emissions associated with the plasmoid collision are very bursty, with a timescale smaller than the typical ion timescales. During the whistler emissions, the electrons can be accelerated in the parallel direction. Figure 4 shows the electron distribution functions averaged over the field line (−50 ≤ l/λ_i ≤ 0) traced from the point (x, z) = (0, −18.5), far away from the wave active region. At each time, the thermal spread is much larger than in the upstream electrons (dashed curves) in both the parallel and perpendicular directions. This is due to the energization process in the reconnection exhaust. In particular, a flat-top type of distribution is produced in the parallel direction due to the counter-streaming components along the field line, which is in good agreement with observations typical of the boundary region between the current sheet and the magnetospheric lobe (Asano et al., 2008). At tω_ci = 106 (green curves), before the whistler emissions and during the secondary reconnection, the electrons typically have a temperature anisotropy of T_e⊥/T_e∥ < 1 in this region. The high-energy electrons seen at v∥/V_A ≈ −20 are generated in the course of the secondary reconnection. On the other hand, at tω_ci = 114 (red curves), after the secondary reconnection and during the whistler emissions, the electron temperature is increased in the perpendicular direction because of the convection of the perpendicularly heated electrons with T_e⊥/T_e∥ > 1. The whistler waves
triggered by the temperature anisotropy scatter the electrons in pitch angle. As a result, the high-energy electrons with v∥/V_A ≈ −20 are drastically increased, even though the secondary reconnection is over. This high-energy component quickly disappears at tω_ci = 120 (blue curves), after the whistler activities are suppressed.

Conclusions

The present study proposes a new mechanism for the generation of whistler waves in the course of collisionless magnetic reconnection. The present model can explain the bursty nature of whistler emissions as observed in the Earth's magnetosphere (e.g., Deng and Matsumoto, 2001; Wei et al., 2007; Tang et al., 2013; Graham et al., 2016; Huang et al., 2016; Zhao et al., 2016). The key processes generating the whistlers are intense perpendicular heating of the electrons in association with plasmoid collision and the subsequent compression of the ambient magnetic field. The perpendicular electron heating occurs during a secondary reconnection forced by the plasmoid collision with the current sheet formed downstream of the main reconnection x-line. During the secondary reconnection, the electrons are strongly accelerated in the vicinity of the secondary x-line and ejected downstream, leading to perpendicular heating. However, the whistler instability due to the temperature anisotropy is not excited at this moment because of the weak intensity and complicated structure of the magnetic field. Instead, the whistler emissions occur when further compression of the ambient magnetic field is produced after the secondary reconnection has ended. Once the whistler waves are excited, the temperature anisotropy is quickly decreased due to pitch-angle scattering, so the wave emissions are suppressed.
The present study has focused on the collision between a plasmoid ejected from the reconnection current layer and the current sheet located at the downstream edge of the reconnection exhaust. However, a similar process for the whistler emissions is also expected in a collision between two plasmoids if the plasmoid size is comparable with or larger than the typical ion scales. During such a collision, the electrons are efficiently accelerated through the secondary reconnection, and a large temperature anisotropy is formed downstream of the electron jets. Subsequent magnetic compression occurs due to the pressure enhancement in the core region of the merged plasmoid, leading to the whistler emissions. In fact, a recent satellite observation shows whistler waves associated with the collision of two plasmoids (Zhao et al., 2016). The whistler waves triggered by plasmoid collision have several characteristics different from the other whistlers in reconnection (Fujimoto and Sydora, 2008; Fujimoto, 2014; Goldman et al., 2014). The plasmoid-induced whistlers are produced in a short-term burst, propagate away from the Equator plane, and have a peak intensity off the Equator plane. In other words, the detection of bursty whistler emissions with these characteristics could be an indicator of plasmoid collisions and associated particle energization during collisionless magnetic reconnection.
Figure 1. Time evolution of the main reconnection exhaust. (a–c) Two-dimensional snapshots of the out-of-plane electron bulk velocity at (a) tω_ci = 90, (b) 106, and (c) 114. (d–f) Localized profiles at tω_ci = 106 for (d) V_ez, (e) T_e⊥/T_e∥, and (f) E_y + (V_i × B)_y. (g–i) The same profiles as in (d–f) but in a different area at tω_ci = 114. The black solid curves (a–c) and gray solid curves (d–i) represent the magnetic field lines, and the gray dashed boxes in (b) and (c) indicate the areas shown in (d–f) and (g–i), respectively. The black dashed curves in (f) and (i) are the field lines traced at each time step from (x, z) = (0, −18.5), a point far away from the areas in (f) and (i).

Figure 2. Properties of the electromagnetic waves propagating along the field lines. (a) Time evolution of E_y + (V_i × B)_y along the field line traced from (x, z) = (0, −18.5) at each time step. The vertical axis is the field-aligned coordinate with l = 0 at the Equator point where B_x = 0 is satisfied. (b) Fourier amplitude calculated along the field line and averaged over k∥λ*_e > 0.5 at each time step, where λ*_e is the local electron inertia length. (c) The wave spectrum (color contour) of E_y + (V_i × B)_y in the ω_r–k∥ space for the area indicated by the black dashed box in (a). The theoretical curves of the dispersion relation Eq. (1) for the whistler instability driven by the electron temperature anisotropy are superposed: a solid curve for ω_r, the real frequency, and a dashed curve for γ, the growth rate.

Figure 3.
Schematic diagram showing the generation mechanism of a bursty emission of whistler waves in association with plasmoid collision. (a) When plasmoid collision occurs at the downstream edge of the main reconnection exhaust, a secondary reconnection takes place. The electron outflow jets (sky blue arrows) are generated in the z direction, leading to the perpendicular heating (red filled circles) of the electrons in the regions downstream of the secondary x-line. (b) As the plasmoid merging proceeds, the perpendicularly heated electrons move to the edge of the larger plasmoid (current sheet) and are compressed due to the magnetic tension force. As a result, the electrons become strongly magnetized, so that a favorable condition for the whistler emission (blue arrows) is produced.
Bounds on tower mass scales in the presence of throats of different warping

In Type IIB flux compactification realizing the metastable de Sitter (dS) vacuum, the uplift potential can be generated by $\overline{\rm D3}$-branes at the tip of a Klebanov-Strassler throat. Then the uplift potential obeys a scaling law with respect to the tower mass scale $m_{\rm sc}$, which can be the Kaluza-Klein (KK) mass scale associated with the throat containing the $\overline{\rm D3}$-branes or the bulk tower mass scales, depending on the warping of the throat. On the other hand, in the presence of another throat of stronger warping, the KK mass scale associated with this throat is lower than $m_{\rm sc}$. Nevertheless, the Higuchi bound and the condition that the tower mass scale is higher than the gravitino mass provide an upper bound on $m_{\rm sc}$ determined by the lowest tower mass scale (or the gravitino mass). This bound can also be interpreted as a lower bound on the lowest tower mass scale determined by $m_{\rm sc}$. We investigate this bound in detail when the throat containing the $\overline{\rm D3}$-branes is strongly and weakly warped, respectively.
Introduction

The swampland program [1] has provided a set of conjectured constraints that a low energy effective field theory (EFT) must satisfy in order to have a UV completion in quantum gravity (for reviews, see, e.g., [2,3,4,5,6,7]). Among various proposals in the program, the instability of de Sitter (dS) space formulated by the dS swampland conjecture is of particular interest [8,9,10] (see also [11,12,13]), as string realization of the metastable dS vacuum [14,15] requires a tuning between several ingredients such as flux compactification, non-perturbative effects, and uplift, which has led to debate on the consistency of the model. In the justification of the dS swampland conjecture, the distance conjecture plays a crucial role [13]. It states that the infinite distance limit of the moduli space corresponds to the corner of the landscape at which the EFT becomes invalid, as the mass scale of a tower of states decreases rapidly [16]. Such a descent of a tower of states implies a rapid increase in the number of low energy degrees of freedom, hence their production in dS space can violate the covariant entropy bound [17] given by (horizon area)/4 (see also [18,19,20,21,22]).
The dS swampland conjecture raised the suspicion that our universe, well described by a positive cosmological constant Λ = 3m_Pl² H², where H is the Hubble parameter (the inverse of the horizon radius), is close to the swampland in the moduli space. In this regard, there have been attempts to formulate the closeness of our universe to the swampland in the form of a scaling law, reflecting the distance conjecture. The anti-dS (AdS)/dS distance conjecture focuses on the smallness of Λ ∼ 10⁻¹²³ m_Pl⁴ and suggests the relation m ∼ |Λ|^α, where m is some tower mass scale [23]. If m is the Kaluza-Klein (KK) mass scale, the lower bound on α is given by 1/4 [24], which can be obtained from the observational bound on the size of extra dimensions [25,26]. Moreover, any nonzero mass of a state in dS space is required to be larger than the Higuchi bound [27] (see [28] for a review) given by √(s(s − 1)) H ∼ Λ^{1/2}, where s is the spin of the state, indicating that α is smaller than 1/2. On the other hand, in the effective supergravity description of string theory, |Λ| of the AdS vacuum (for the metastable dS vacuum, the size of the AdS vacuum energy density before uplift, which will be denoted by |V_AdS|) is smaller than 3m_Pl² m_{3/2}², where m_{3/2} is the gravitino mass and the inequality is saturated in the supersymmetric case. The gravitino distance conjecture claims that it may be m_{3/2}, rather than |Λ|, which obeys the scaling law [29,30,31].
In the string model realizing the metastable dS vacuum based on Type IIB flux compactification, when the uplift potential V_up is generated by $\overline{\rm D3}$-branes at the tip of the Klebanov-Strassler throat [32], it turns out that the size of V_up obeys a scaling law with respect to the tower mass scale [33]. If the throat containing the $\overline{\rm D3}$-branes is strongly warped, the mass scale of the KK modes localized in this throat region satisfies m_KK^throat ∼ V_up^{1/4} [34]. We note that in the presence of a number of throats [35], the KK mass scale associated with the throat of the strongest warping is the lowest tower mass scale. Thus, if the warping of the throat containing the $\overline{\rm D3}$-branes is the strongest, the scaling law relates V_up and the lowest tower mass scale. Moreover, the exponent 1/4 is nothing more than the inverse of the number of noncompact dimensions over which the $\overline{\rm D3}$-branes are extended. In contrast, when the warping is extremely weak, both the bulk tower mass scales and V_up scale with respect to the size of the internal volume, so we can find a scaling law between them [33]. More concretely, the string scale m_s satisfies m_s ∼ V_up^{1/4}, where the exponent is the inverse of the number of noncompact dimensions. In addition, the bulk KK mass scale, the lowest tower mass scale in the absence of a throat of stronger warping, also obeys a scaling law, but the exponent in this case is given by 1/3: m_KK^bulk ∼ V_up^{1/3}. Since the uplift potential given by V_up = Λ + |V_AdS| is connected to the tower mass scale m_sc through the scaling law m_sc ∼ V_up^α with α = 1/4 or 1/3, we find the inequality m_sc > Λ^α for positive Λ. Hence, unlike the AdS/dS distance conjecture suggesting the equality, m_sc may not be as light as ∼ Λ^α. Meanwhile, if there exists a throat whose warping is stronger than that of the throat containing the $\overline{\rm D3}$-branes, the KK mass scale associated with it is lower than m_sc, the tower mass scale satisfying the scaling law with respect to V_up. Nevertheless, for the EFT based on the four-dimensional supergravity
description to be valid, the lowest tower mass scale is required to be larger than m_{3/2}. Combining this with the Higuchi bound and the fact that |V_AdS| is smaller than 3m_Pl² m_{3/2}², we obtain an inequality obeyed by the lowest tower mass scale as well as by m_sc. This will be explored in detail in this article. As we will see, in the presence of the lower tower mass scale, m_sc has an upper bound determined by the lower tower mass scale, or equivalently, the lowest tower mass scale has a lower bound determined by m_sc. We expect that our results may be useful in phenomenological studies on the structure of extra dimensions.

This article is organized as follows. Section 2 is devoted to brief reviews on the Higuchi bound and the connection between tower mass scales and V_up, which provide the background for our discussion. Based on them, in Section 3, we obtain the upper bound on m_sc in terms of the lower tower mass scale when the throat containing the $\overline{\rm D3}$-branes is strongly and weakly warped, respectively. Then we conclude.

2 Reviews on Higuchi bound, tower mass scale and uplift

Higuchi bound

We begin our review on the Higuchi bound with a discussion of the unitary irreducible representations (UIRs) of the SO(1,4) dS isometry group and their masses. For details, we refer the reader to [28] and references therein. The 'mass' of a state in a UIR is determined by the quadratic Casimir Q, where L_AB (A, B = 0, 1, ..., 4) are the SO(1,4) generators, as the field equation of motion is reduced to its eigenvalue equation. Here the field K(x) carries the tensor or spinor indices corresponding to the UIR to which the field belongs. The eigenvalue of Q is given in [36] (see also Section 12 of [28]) in terms of s, which is interpreted as a spin in the H → 0 limit, and q ∈ C, which is determined by the type of the representation.
The types of SO(1,4) UIRs are classified in [36],¹ and it turns out that under the Poincaré contraction, i.e., in the H → 0 limit, a particular set of UIRs can be reduced to wavefunctions in Minkowski space in a sensible way (the positive and negative frequency modes are well separated) [37] (see also Section 14 of [28]):

• The principal series of representations: In this case, q = 1/2 + iν, where ν ≥ 0 for an integer s and ν > 0 for a half-integer s. The representation of this type is also called the massive representation since, in the H → 0 limit, νH is reduced to the mass of the field.

• The complementary series of representations: The value of s in this case is only an integer. Moreover, q = 1/2 + ν, where ν ∈ R. The H → 0 limit of the representation of this type is sensible when s = 0 and ν = 1/2 (thus q = 1), which corresponds to the conformally coupled massless spin-0 field.

• The discrete series of representations: In this case, q = 0, 1, ..., s − 1, s for an integer s and q = 1/2, ..., s − 1, s for a half-integer s. The sensible representation in the H → 0 limit requires s = q > 0, which is reduced to the massless spin-s field.

Moreover, the quadratic Casimir of the dS isometry is not exactly identified with the Laplacian: the quadratic Casimir eigenvalue equation given by (2) can be rewritten accordingly [38]. On the other hand, from the fact that the representation belonging to the discrete series satisfying s = q > 0 becomes the massless spin-s field in the H → 0 limit, it was suggested to define the 'mass' of the state in dS space by [38]

m² = (s − q)(s + q − 1) H²,

such that m² = 0 for s = q > 0. For the representations in the principal series, the mass defined in this way can be written as m² = [ν² + (s − 1/2)²] H², so for a finite value of s, m² is reduced to ν²H² in the H → 0 limit. In terms of m, (2) takes the form of the equation of motion used in the previous literature, e.g., [27,39,40,41].

It is remarkable that, apart from the issue of the sensible H → 0 limit, regarding (8) as the mass of any state in a UIR, one finds that a nonzero value of the mass has a lower bound called the Higuchi bound. More concretely, the value of m² turns out to be either zero or larger than s(s − 1)H², which is meaningful for s > 1. This bound is saturated by the representations belonging to the complementary series with ν = ±1/2 (q = 1, 0) and the discrete series with q = 0. For instance, for s = 2, the lower bound on the nonzero value of m² is given by 2H² [27] (see also [39] for s > 2).

¹ A pair of numbers (s, q) labelling the UIR can be obtained by observing the representation of the SO(4) isometry subgroup. Since SO(4) ≅ SU(2)×SU(2) and the quadratic Casimirs of the two SU(2)s have eigenvalues j_ℓ(j_ℓ + 1) and j_r(j_r + 1) (j_ℓ, j_r ∈ Z/2), respectively, the irreducible representation of SO(4) is characterized by (j_ℓ, j_r). Then s is the infimum (greatest lower bound) of j_ℓ + j_r, which is interpreted as a spin in the H → 0 limit. On the other hand, the SO(1,4)/SO(4) generators raise/lower the j_ℓ and j_r values, and their contributions to the SO(1,4) quadratic Casimir Q determine the value of q. For example, in the discrete series of representations, we can find two dual representations in which the values of (max(j_r − j_ℓ), min(j_r − j_ℓ)) for the states satisfying j_r + j_ℓ = s are given by (q, s) and (−s, −q), respectively.
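The dS 'mass' definition whose displayed equation was lost in extraction appears to be the standard one, m² = (s − q)(s + q − 1)H². This is an assumption on my part, but it reproduces both properties quoted in the text: m² = 0 for s = q > 0, and m² = [ν² + (s − 1/2)²]H² in the principal series. A minimal numerical check:

```python
def ds_mass_sq(s, q, H=1.0):
    """Squared dS 'mass' in the convention m^2 = (s - q)(s + q - 1) H^2
    (assumed form, chosen so that the discrete series with s = q is massless)."""
    return (s - q) * (s + q - 1) * H**2

# Principal series, q = 1/2 + i*nu: m^2 = nu^2 + (s - 1/2)^2 (in units of H^2)
s, nu = 2, 1.7
m2 = ds_mass_sq(s, complex(0.5, nu))
assert abs(m2 - (nu**2 + (s - 0.5)**2)) < 1e-12  # imaginary part cancels

# Discrete series with s = q > 0: massless spin-s field
assert ds_mass_sq(3, 3) == 0.0

# Higuchi bound for s = 2: nonzero m^2 in the principal series exceeds s(s-1)H^2 = 2H^2
assert ds_mass_sq(2, complex(0.5, 0.01)).real > 2.0
```

For s = 2 the principal-series mass is m² = (ν² + 9/4)H² ≥ 9/4 H² > 2H², illustrating why the bound 2H² is saturated only in the complementary and discrete series.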
Tower mass scales in the presence of throats and the uplift potential

Throughout this article, we consider Type IIB Calabi-Yau orientifold compactifications containing a number of Klebanov-Strassler throats, in which the dilaton and complex structure moduli are stabilized by fluxes [42]. The Kähler moduli are stabilized by a non-perturbative effect [14] and possibly additional α′ corrections [15], and the potential stabilizing all the moduli is uplifted by $\overline{\rm D3}$-branes at the tip of one of the throats. The string scale m_s, the mass scale of the tower of string excitations, is given by 1/(2π√α′). Since the ten-dimensional gravitational coupling is given by κ²₁₀ = g_s²/(4π m_s⁸), denoting the volume of the internal manifold by V/m_s⁶, we obtain the relation between m_s and the four-dimensional Planck scale m_Pl: m_Pl² = (4πV/g_s²) m_s². Moreover, under the compactification there can be various KK mass scales, depending on where the KK modes are localized. The mass scale of the KK modes in the bulk is given by m_KK^bulk ∼ m_s/V^{1/6}. On the other hand, the mass scale of the KK modes localized in a throat region is determined by how strong the warping of the throat is. To see this, we note that the metric near the tip of the throat takes the warped form with e^{2Ω₄(x,y)} = e^{2A(y)} e^{2Ω(x)} and e^{2Ω₆(x,y)} = e^{−2A(y)} σ(x). This throat geometry is supported by fluxes of F₃ and H₃, the flux quanta of which in string units will be denoted by M and K, respectively. In the metric, σ(x) is the scalar part of the volume modulus, the vacuum expectation value of which satisfies V = σ^{3/2} under the normalization ∫d⁶y √g₆ e^{−4A} = m_s⁻⁶. The Weyl factor e^{2Ω(x)} is then fixed such that ⟨e^{2Ω(x)}⟩ = 1. The warping of the throat is typically parametrized by the 'warp factor' e^{−4A}, which can be written in terms of the conifold modulus z and the flux quanta. Then the throat KK mass scale is given by m_KK^throat = e^{Ω₄}/(e^{Ω₆} R) = e^{2A}/(R V^{1/6}), where R is the typical length scale of the throat: R ∼ |z|^{1/3}/m_s for the A-cycle and R ∼ η_UV |z|^{1/3}/m_s for the
B-cycle, where η_UV measures the length of the throat. When the throat is strongly warped, e^{−4A} ≫ 1 is satisfied, and the throat KK mass scale is highly redshifted by the warp factor [43] (see also [44,45] for recent discussions). Moreover, for the KK modes localized along the B-cycle of the throat, the corresponding KK mass scale is additionally suppressed by η_UV [44]. Therefore, the KK mass scale associated with the throat of the strongest warping (that is, the smallest |z| and the largest η_UV) is typically the lowest tower mass scale. In contrast, when the throat is extremely weakly warped, i.e., e^{−4A} ≃ 1, the throat KK mass scale is not redshifted; comparing this with (11), one finds that for the extremely weakly warped throat, the bulk KK mass scale m_KK^bulk is the lowest tower mass scale.

Now we move on to the uplift potential V_up. When $\overline{\rm D3}$-branes at the tip of the throat are extended over the noncompact four-dimensional spacetime, the induced metric is given by ds²_D3 = e^{2Ω₄(x,y)} g_μν dx^μ dx^ν = e^{2A(y)} e^{2Ω(x)} g_μν dx^μ dx^ν, from which V_up is obtained, where T₃ = 2π m_s⁴ is the brane tension and p is the number of $\overline{\rm D3}$-branes. For the strongly warped throat (e^{−4A} ≫ 1), comparing the resulting V_up with (17), one finds the scaling law m_KK^throat ∼ V_up^{1/4}, where the exponent 1/4 is the inverse of the number of noncompact dimensions over which the $\overline{\rm D3}$-branes are extended. Meanwhile, when the throat is weakly warped (e^{−4A} ≃ 1), one finds from the uplift potential two scaling laws with respect to m_s and m_KK^bulk, given by (10) and (11), respectively: m_s ∼ V_up^{1/4} and m_KK^bulk ∼ V_up^{1/3}. We note that the exponent in the scaling law with respect to m_s is 1/4, the inverse of the number of noncompact dimensions. Moreover, m_KK^bulk is the lowest tower mass scale.
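As a rough numerical illustration of these scale relations: the formulas m_Pl² = (4πV/g_s²)m_s² and m_KK^bulk ∼ m_s/V^{1/6} used below are the standard Type IIB and isotropic-compactification estimates, assumed here since the displayed equations were lost in extraction:

```python
import math

def string_scale(V, g_s, m_pl=1.0):
    # m_Pl^2 = (4*pi*V/g_s^2) * m_s^2  =>  m_s = g_s * m_Pl / sqrt(4*pi*V)
    # (V is the internal volume in string units; assumed standard relation)
    return g_s * m_pl / math.sqrt(4.0 * math.pi * V)

def bulk_kk_scale(V, g_s, m_pl=1.0):
    # Isotropic estimate m_KK ~ m_s / V^(1/6), an assumption for illustration
    return string_scale(V, g_s, m_pl) / V ** (1.0 / 6.0)

V, g_s = 1.0e3, 0.1
ms, mkk = string_scale(V, g_s), bulk_kk_scale(V, g_s)
# larger volume => lower string and KK scales, with m_KK < m_s < m_Pl
assert mkk < ms < 1.0
assert string_scale(1.0e4, g_s) < ms
```

The sketch makes the hierarchy m_KK^bulk < m_s < m_Pl explicit and shows how both scales fall as the internal volume grows, which is the origin of the scaling laws quoted above.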
Relation between two tower mass scales

In the string model, the metastable dS vacuum is realized by uplift of the AdS vacuum, indicating that the positive cosmological constant can be written as Λ = V_up − |V_AdS|. As reviewed in the previous section, V_up obeys the scaling law with respect to some tower mass scale, which will be denoted by m_sc: m_sc ∼ V_up^α. For the strongly warped throat, m_sc is identified with m_KK^throat given by (17) and α = 1/4. For the extremely weakly warped throat, m_sc corresponds to m_s given by (10) (α = 1/4) or m_KK^bulk given by (11) (α = 1/3).

We now consider another throat of stronger warping, such that the mass scale m_0 of the KK modes localized in this throat is lower than m_sc. Denoting the conifold modulus and the flux quanta of F₃ associated with the throat of the stronger warping by z_ℓ and M_ℓ, respectively, we obtain the analogue of (17) for m_0. The Higuchi bound imposes that a nonzero m_0 is larger than √(s(s − 1)) H, or equivalently, H < m_0/√(s(s − 1)). Combining this with the fact that |V_AdS| is smaller than 3m_Pl² m_{3/2}², (25) provides an inequality relating the three scales m_sc, m_0, and m_{3/2}. Noting that m_{3/2} is the characteristic mass scale of the four-dimensional supergravity formalism, we expect it to be lower than m_0, the KK mass scale implying the existence of extra dimensions, hence 3m_Pl² m_0² > 3m_Pl² m_{3/2}². Then we obtain an inequality which indicates that m_sc (m_0) has an upper (lower) bound determined by m_0 (m_sc). We investigate this inequality in detail when the throat containing the $\overline{\rm D3}$-branes is strongly and weakly warped, respectively.
Strong warping case (e^{−4A} ≫ 1)

When the throat containing the $\overline{\rm D3}$-branes is strongly warped, the scaling law (22) is satisfied, indicating m_sc = m_KK^throat (given by (17)), α = 1/4, and v_0 ≃ g_s p M². Then (28) can be rewritten accordingly, and (29) becomes the bound m_KK^throat ≲ m_Pl^{1/2} m_0^{1/2}. We may rewrite this bound in terms of the explicit expressions (17) for m_KK^throat and (27) for m_0 to obtain a constraint on the warping of the throats. The bound (31) shows that even if the throat containing the $\overline{\rm D3}$-branes is not of the strongest warping, so the associated KK mass scale is not the lowest tower mass scale, it cannot be arbitrarily higher than the lowest KK mass scale m_0, but is bounded by m_Pl^{1/2} m_0^{1/2}. This in turn means that m_0 cannot be arbitrarily small, but has a lower bound determined by m_KK^throat. Indeed, the size of the deformation at the tip of the throat is given by (g_s M)^{1/2}/(2π m_s),² which is required to be much larger than the string length m_s⁻¹ for the effective supergravity description to be valid. This indicates that g_s M ≫ 1. Moreover, we can further impose g_s M² ≫ p because otherwise the conifold modulus z is stabilized at 0 [47] (see, however, [48] for a recent counterargument). Both of these constraints impose that g_s p M² ≫ 1, so our bound is somewhat stronger than the corresponding simple inequality. In fact, (31) as the upper bound on m_KK^throat is useful when the size of Λ is not negligibly small, which may be realized in inflationary cosmology. In contrast, Λ in our universe is as small as 10^{−123} m_Pl⁴, so it is reasonable to take V_up to be much larger than Λ. Since V_up ≃ |V_AdS| < 3m_Pl² m_{3/2}² in this case, replacing m_0 by m_{3/2} gives a more stringent bound. But still, the bound (31) is valid as the lower bound on m_0 determined by m_KK^throat. In any case, it is remarkable that an intermediate new-physics scale has an upper bound determined by another, much lower scale.
We can also compare the bound (31) with the bound obtained from the species scale Λ_sp [49,50], above which gravity is no longer weakly coupled to matter. Here N_tot is the number of low energy degrees of freedom below Λ_sp. For m_0 ≪ m_KK, the resulting bound is satisfied, but it is less stringent than (31).

Weak warping case (e^{−4A} ≃ 1)

We first consider the case m_sc = m_KK^bulk, for which the inequality (28) can be rewritten accordingly. In the presence of an additional throat which is strongly warped, the KK modes localized in this throat provide the lowest tower mass scale, given by (27). Requiring m_0 > m_{3/2}, we obtain the bound m_KK^bulk ≲ m_Pl^{1/3} m_0^{2/3}. Thus, m_KK^bulk has an upper bound depending on m_0, with the exponent given by 2/3. At the same time, this inequality may be interpreted as a lower bound on m_0 as well. Putting the explicit expressions for m_0 and m_KK^bulk into the inequality gives a constraint on the strongest warp factor. Just like the previous case, in which the throat containing the $\overline{\rm D3}$-branes is strongly warped, we have a more stringent upper bound on m_KK^bulk when m_0 is replaced by m_{3/2}. On the other hand, when m_sc = m_s, the inequality (28) can again be rewritten, and (29) reads m_s ≲ m_Pl^{1/2} m_0^{1/2}. Putting the explicit expression for m_0 given by (27) into the inequality, we obtain a bound on the strongest warp factor. For |V_AdS| ≃ V_up ≫ Λ, we have a more stringent bound on m_s. We can also compare our lower bound on m_s given by (43) with the lower bound on m_s considered in [51,52]. This comes from the observation that the mass and spin of string excitations satisfy the Regge trajectory relation, m² ≃ s m_s² for large s. This relation, however, violates the Higuchi bound m² > s(s − 1)H² ≃ s²H² when the spin is larger than s_max = (m_s/H)², or roughly when H < m_0, which is more or less equivalent to the Higuchi bound. A similar conclusion can be drawn by combining these bounds. Meanwhile, if the cutoff scale is given by the species scale, combining this with (44) gives, roughly, m_0 > (H/m_Pl)^{3/2} m_Pl, while combining it with
(48) gives a trivial bound, since the rightmost term is negative. We close this section by pointing out that the spin s in the Regge trajectory relation (46) is identified with the level of string excitations in the Minkowski background. Moreover, the massive field in Minkowski space is obtained by the Poincaré contraction (taking the H → 0 limit) of the representation in the principal series, in which the squared dS mass is given by [ν² + (s − 1/2)²]H². Since νH is the mass of the field in Minkowski space, one may be tempted to identify νH, rather than the dS mass, with the mass √(s − 1) m_s in (46). In this case, the additional term (s − 1/2)²H² in the squared dS mass is regarded as the effect of interaction with the background geometry. Then the condition s m_s² > s²H² can be interpreted as follows. So far as the model for dS space based on the four-dimensional particle description is concerned, m_s as well as m_KK is larger than H. That is, for the string excitations, ν²H² ≃ s m_s² in the squared dS mass is typically dominant over (s − 1/2)²H², such that if H ≪ m_s, the dS mass is approximated by the mass in the Minkowski background. This approximation breaks down when s > s_max ≃ (m_s/H)², i.e., when the condition s m_s² > s²H² is violated. In this case, the dS mass can be approximated by (s − 1/2)H, implying that neglecting H is no longer a good approximation even if H ≪ m_s.
Conclusions In this article, we investigate the particular case of Type IIB orientifold compactification with fluxes in which the internal manifold contains a number of throats and the warping of the throat containing D3-branes is not the strongest.Then the tower mass scale m sc satisfying the scaling law with respect to the uplift potential generated by D3-branes is not the lowest, but has the upper bound determined by the lowest tower mass scale m 0 , typically given by the KK mass scale associated with the throat of the strongest warping.This also may be interpreted as the lower bound on m 0 determined by m sc .When the exponent α in the scaling law m sc ∼ V α up is 1/4, the inverse of the number of noncompact spacetime dimensions over which D3-branes are extended, the upper bound on m sc is given by ∼ m 1/2 Pl m 1/2 0 .This shows that if m 0 is about 10 TeV, just above the scale accessible at the LHC search, m sc cannot be higher than the intermediated scale, ∼ 10 11 GeV.This bound is applied to m sc = m throat KK when the throat containing D3-branes is strongly warped and m sc = m s when the throat containing D3-branes is extremely weakly warped.On the other hand, when the throat containing D3branes is extremely weakly warped, α = 1/3 is allowed for m sc = m bulk KK .In this case, the upper bound on m sc is given by m 1/3 Pl m 2/3 0 which is about 5 × 10 8 GeV when m 0 ≃ 10 TeV.We also point out that the cosmological constant in our universe can be much smaller than the uplift potential, which allows the stronger upper bound on m sc in which m 0 is replaced by the lower scale, m 3/2 .These bounds tell us how the structure of the internal manifold is reflected in the relations between different tower mass scales.In particular, our setup in which the internal manifold contains a number of throats and the warping of the throat associated with uplift is not the strongest predicts that the evidences of the extra dimensions as well as the string may be found under 
the intermediate scale depending on the value of m 0 . 2, which implies that the cutoff scale is lower than √ s max m s = m 2 s /H.If we identify the cutoff scale with m Pl , we obtain the inequality m s > H 1/2 m
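The cutoff argument combining the Regge trajectory with the Higuchi bound can be laid out step by step, using only the relations stated above:

```latex
\begin{align*}
m^2 &\simeq s\, m_s^2 && \text{(Regge trajectory, large } s\text{)} \\
m^2 &> s(s-1)H^2 \simeq s^2 H^2 && \text{(Higuchi bound)} \\
s\, m_s^2 &> s^2 H^2
  \;\Longrightarrow\; s < s_{\max} = \left(\frac{m_s}{H}\right)^2
  && \text{(bound on the spin)} \\
\Lambda &\lesssim \sqrt{s_{\max}}\, m_s = \frac{m_s^2}{H}
  && \text{(cutoff from the highest healthy level)} \\
\Lambda = m_{\rm Pl} &\;\Longrightarrow\; m_s > H^{1/2}\, m_{\rm Pl}^{1/2}
  && \text{(identifying the cutoff with } m_{\rm Pl}\text{)}
\end{align*}
```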
Accelerated linear algebra compiler for computationally efficient numerical models: Success and potential area of improvement The recent dramatic progress in machine learning is partially attributed to the availability of high-performant computers and development tools. The accelerated linear algebra (XLA) compiler is one such tool that automatically optimises array operations (mostly fusion to reduce memory operations) and compiles the optimised operations into high-performant programs specific to target computing platforms. Like machine-learning models, numerical models are often expressed in array operations, and thus their performance can be boosted by XLA. This study is the first of its kind to examine the efficiency of XLA for numerical models, and the efficiency is examined stringently by comparing its performance with that of optimal implementations. Two shared-memory computing platforms are examined: the CPU platform and the GPU platform. To obtain optimal implementations, the computing speed and its optimisation are rigorously studied by considering different workloads and the corresponding computer performance. Two simple equations are found to faithfully model the computing speed of numerical models with very few easily measurable parameters. Regarding operation optimisation within XLA, results show that models expressed in low-level operations (e.g., slice, concatenation, and arithmetic operations) are successfully fused while high-level operations (e.g., convolution and roll) are not. Regarding compilation within XLA, results show that for the CPU platform of certain computers, and for certain simple numerical models on the GPU platform, XLA achieves high efficiency (> 80%) for large problems and acceptable efficiency (10%~80%) for medium-size problems; the gap is from the overhead cost of Python.
Unsatisfactory performance is found for the CPU platform of other computers (operations are compiled in a non-optimal way) and for high-dimensional complex models on the GPU platform, where each GPU thread in XLA handles 4 (single precision) or 2 (double precision) output elements, hoping to exploit the high-performant instructions that can read/write 4 or 2 floating-point numbers with one instruction. However, these instructions are rarely used in the generated code for complex models and performance is negatively affected. Therefore, flags should be added to control the compilation for these non-optimal scenarios.

Introduction

The pressing problems in many mathematically oriented scientific fields are ubiquitously modelled with partial differential equations, e.g., the elastoplastic deformation of jammed granular materials and their subsequent fluid-like flow after unjamming [1,2], turbulent air flow around jets [3], the dynamics of financial markets of derivative investment instruments [4], etc. A vast majority of such research relies on numerical models to find approximate solutions to the differential equations and to make reliable predictions. In the last century or so, we have seen significant advances in numerical modelling, including multi-physics coupling with many variables and the use of fine grids/meshes with massive numbers of nodes and cells [2] to achieve realistic simulations. Consequently, the demand for efficiently solving numerical models with many variables and on very fine meshes is increasing. Traditionally, efficient numerical models are studied in computational complexity theory [5], in which the amount of time, storage, or other resources required to perform numerical simulations is examined theoretically. A classic example is the complexity analysis of the different iterative methods for large systems of linear equations.
Analysis [6] shows that if the number of non-zero matrix entries is N and the condition number is κ, the steepest descent method has a time complexity of O(κN) while the conjugate gradient method has a time complexity of O(√κ N), which is thus more efficient. This kind of theoretical analysis is often not enough to model the computing speed of numerical models on modern computers because these computers are all designed with parallel computing capabilities, and efficient implementations must account for the different features of the computers. Consequently, much research has been carried out on the efficient implementation of numerical models on specific computing platforms, including the CPU platform (a shared-memory system with a multi-core central processing unit and the associated main memory) [7,8], the GPU platform (a shared-memory system with a graphics processing unit and the associated GPU memory) [9], distributed systems [10], and even quantum computing [11,12]. With the rapid development of new modelling techniques, the demand for rapid prototyping of new numerical models increases in addition to the demand for computationally efficient numerical models. The accelerated linear algebra (XLA) compiler [13] is one of the tools that aim to achieve both (i.e., computational efficiency and rapid prototyping). The XLA compiler (simply referred to as XLA) takes HLO IR (high-level operation intermediate representation, simply referred to as HLO) as input (Fig 1), conducts several rounds of optimisation, and compiles the HLOs into highly efficient computer programs specific to the target computing platform. The optimisation and compilation in XLA happen automatically such that modellers effortlessly get efficient programs without needing to know the optimisation details.
The HLO inputs to XLA are a broad category of array operations, and thus most Python packages supporting array programming are XLA frontends (e.g., Tensorflow, JAX, PyTorch, etc.) [13]. The target-independent optimisation includes common subexpression elimination, operation fusion, and buffer analysis of memory allocation [13]. Target-dependent optimisation is conducted by considering target-specific features. The target computing platforms for XLA include the x64 CPU platform and the GPU platform (NVIDIA GPUs) in the source tree, and new backends can always be added [13]. For the CPU and GPU backends, the LLVM compiler [14] is used to compile the LLVM intermediate representation into efficient computer programs. XLA was originally designed to boost the performance of machine-learning models, and the performance gain has been widely demonstrated. For example, Google's submissions to MLPerf (an industry standard for measuring machine learning performance) [15] demonstrate a sevenfold performance gain on the training throughput of XLA-boosted BERT (a transformer-based model for natural language processing). Chadha and Siddagangaiah [16] conducted tests on several different models (e.g., two-layer convolutional network, saxpy, matrix multiplication, softmax, and long short-term memory), and found that XLA successfully conducted optimisations for some models, but areas for further improvement were also identified. XLA has also been used to boost the performance of other scientific computing. For example, Lewis et al. [17] demonstrated the efficiency of XLA in matrix multiplication, QR factorisation, linear solution, and application of matrix functions on tensor processing units (TPUs). Sankaran et al. [18] examined the performance of XLA in conducting linear algebra optimisations. Nevertheless, there has been little if any research in the literature about the efficiency of XLA for numerical models.
This study is the first of its kind to examine the efficiency of XLA for a general category of numerical models, and the efficiency is examined in a stringent way by comparing the performance of numerical models implemented using XLA on a computing platform with the maximum possible performance achieved on that computing platform. This study is not meant to be comprehensive, so numerical models defined on regular grids are mainly examined. The scope of the numerical models and some examined examples are explained in Section 2; they belong to two categories (i.e., element-wise operations and finite-difference models). JAX [19] is used as the XLA frontend, and two widely used computing platforms (backends) are examined: the CPU platform and the GPU platform. To obtain the maximum performance on these computing platforms, the computing speed of the numerical models is rigorously studied in Section 4 by considering the different workloads and the corresponding computer performance. Optimal implementations are explored in Section 5, and the computing speeds of element-wise operations and finite-difference models are found to be faithfully modelled by two simple equations with very few easily measurable parameters. The efficiency of XLA is examined by an index of relative efficiency: the ratio of effective bandwidth between XLA implementations and optimal implementations. XLA is found efficient for some numerical models and for some computing platforms but not for others, and potential areas of improvement are suggested for the non-optimal scenarios.

Array programming of numerical models

XLA takes array operations defined in HLO as inputs. An array (or tensor) is a collection of homogeneous elements. Matrices are a special case of 2D arrays, and vectors of 1D arrays. Fig 2 illustrates a 3D array (X) with the shape of (3,2,4). Each cell represents an element of the array. Inside each cell is the index of that element, which is denoted by square-bracket tuples, i.e., X[i, j, k].
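The shape, indexing, and memory-layout conventions just described can be checked directly in NumPy (a small sketch; the array contents are arbitrary illustrative values):

```python
import numpy as np

# A 3D array with shape (3, 2, 4), matching the Fig 2 illustration.
X = np.arange(24, dtype=np.float32).reshape(3, 2, 4)

# A single element is addressed by a square-bracket tuple X[i, j, k].
element = X[1, 0, 2]

# The slice X[1:, 0, 1::2] keeps the last two blocks along axis 0,
# fixes axis 1 at index 0, and takes every second element of axis 2
# starting from index 1, giving a (2, 2) sub-array.
sliced = X[1:, 0, 1::2]

# Row-major (C) order: consecutive elements of the LAST axis are
# contiguous in memory, so the last-axis stride equals the item size.
assert X.strides[-1] == X.itemsize
```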
All the indexing conventions of NumPy [20] are adopted here, so the first element is indexed by 0, and the indexing X[1:,0,1::2] represents the highlighted sliced elements in Fig 2. Array elements are stored in computer memory contiguously, and a row-major order is assumed in this study: consecutive elements in the last index are contiguous in memory (indicated by the arrows in Fig 2).

Scope of examined numerical models

This study focuses on numerical models on structured meshes or regular grids. Hence, after discretisation, all the variables (either solution or auxiliary) can be conveniently expressed as arrays: N-dimensional arrays for variables in N-dimensional problems. Arrays and variables are thus used interchangeably in further discussions. This already represents a broad category of numerical models that can solve many problems expressed in linear and non-linear differential equations [2,8,21]. Some examples are the finite-difference models on regular grids [22], finite-volume/finite-element models on structured meshes [2,8], lattice Boltzmann methods [23], etc. Most numerical models are formulated such that in each step, the solution variables X^1_{t+1}, X^2_{t+1}, . . . at the next "timestep" (t + 1) are updated from the variables X^1_t, X^2_t, . . . at "timestep" t. The "timesteps" not only mean the marching of solution variables in time (as in most explicit models) but can also represent the update of solution variables in iterative methods. For complicated models, auxiliary variables must be introduced, and their corresponding arrays allocated in computer memory. Therefore, a substep of a numerical model is defined as a function like Eq 1 such that its implementation is possible without introducing new variables/arrays except for the input and output variables/arrays (Eq 1). Here, N_o is the number of output arrays, N_i is the number of input arrays, and p indicates all model parameters (non-arrays).
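As a concrete, hypothetical illustration of the substep signature in Eq 1, an AXPY-like substep with N_i = 2 input arrays, N_o = 1 output array, and one non-array parameter p can be written as a plain function of arrays (the function name is ours, not from the paper):

```python
import numpy as np

def axpy_substep(x, y, p):
    """A substep in the sense of Eq 1: (Y1,) = f(X1, X2; p).

    No auxiliary arrays are introduced beyond the inputs and the output.
    """
    return p * x + y

x = np.full(8, 2.0)
y = np.ones(8)
out = axpy_substep(x, y, 3.0)  # every element is 3*2 + 1 = 7
```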
The total number of elements for each input/output array depends on the mesh size and is identical for all input and output arrays of the substeps examined in this study (denoted as N_e).

Maximally fused substeps

Each model step is often fulfilled by several or many substeps. Substeps can have different complexities, and complex ones can always be broken into simpler ones until a handful of very basic ones are obtained, like the NumPy basic array operations [20]. However, the performance of many simple substeps is always poorer than that of a fused substep due to the additional memory operations involved (demonstrated in Section 5.3). So, to have computationally efficient numerical models, we want maximally fused substeps, for which no further fusion is possible, such that each model step is fulfilled by only a few of these maximally fused substeps. The detailed specification of Eq 1 is often called a numerical scheme. It depends on the differential equations and discretisation techniques. In general, it takes the form that the elements of output arrays are only locally related to neighbouring elements of input arrays in a fixed pattern (the stencil). Common stencils are the von Neumann neighbourhood and the Moore neighbourhood (Fig 2). If we denote X[i, j, k]_{<r} as all the elements in the array X that have a Manhattan distance smaller than r with respect to the element X[i, j, k], then a substep is often specified by an equation like Eq 2. Eq 1 and Eq 2 are the general representations of model substeps and numerical schemes. The implementations of the substeps as computer programs are interchangeably called functions/operations/kernels/ops. In this paper, "operations" is simply used. Some examined operations in this study are explained in the next subsections, and a summary of them is given in Table 1.
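The memory-operation argument for fusion can be made concrete with the bookkeeping β = N_i + N_o used later in the paper for element-wise operations. As a hypothetical example, computing w = a·x + b·y element-wise as a chain of three basic substeps moves more data than one fused substep (a sketch; the counts follow the element-wise rule only):

```python
def beta(n_inputs, n_outputs):
    # Element-wise rule from the text: beta = N_i + N_o arrays are
    # read/written once per element (scalars a, b are non-array params).
    return n_inputs + n_outputs

# Unfused chain: t1 = a*x (1 in, 1 out), t2 = b*y (1 in, 1 out),
# w = t1 + t2 (2 in, 1 out) -- two temporary arrays are materialised.
beta_unfused = beta(1, 1) + beta(1, 1) + beta(2, 1)

# Fused: w = a*x + b*y computed in a single pass (2 in, 1 out).
beta_fused = beta(2, 1)
```

The fused form touches 3·N_e·s_f bytes instead of 7·N_e·s_f, which is why fusion dominates performance for memory-bound operations.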
Element-wise operations

If the output array elements are only related to input array elements at the same place (i.e., the Manhattan distance of neighbouring elements = 0), these kinds of operations are element-wise operations. Some basic ones are COPY, SCALE, and AXPY (Table 1). Both the vector version and the matrix version of these operations are examined. To examine how the number of floating-point calculations affects the computing speed, an element-wise operation as in Eq 3 is examined and labelled as XPXPYN, where N is a variable integer that controls the number of floating-point calculations. Array programming of these operations is straightforward (JAX implementations in Table 1).

Finite-difference model to the 1D heat equation (HEAT1D)

The heat equation is a parabolic partial differential equation modelling the variation of temperature under thermal conduction. The following numerical scheme with only one substep (Eq 4) is obtained if a 1D problem (from 0 to L) is discretised into N_e − 1 equally spaced segments, the derivative in time is approximated with a forward finite difference, and the second derivative in space is approximated with a central finite difference. The discretised solution on the nodes is a 1D array T_t of size N_e. The parameter r = aΔt/(Δx)^2, where a is the thermal diffusivity, Δt is the time step size, and Δx = L/(N_e − 1) is the segment size. This explicit scheme is stable only if r < 0.5. Fig 3A shows the solution to Eq 4. Three implementations of the 1D heat model with array programming are examined (Table 2): (1) Slice and concatenation. The inner elements of T_{t+1} (of size N_e − 2) are defined by Eq 4, so we can use slice operations to select three sub-arrays of T_t (of size N_e − 2) first and use basic vector arithmetic operations to calculate the inner elements.
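In NumPy-compatible form (the JAX version in Table 2 would use jax.numpy instead; the fixed boundary temperatures here are hypothetical), the slice-and-concatenation method reads:

```python
import numpy as np

def heat1d_step(T, r, left=0.0, right=0.0):
    # Slice three shifted views of T (each of size N_e - 2) and combine
    # them with basic vector arithmetic (Eq 4 for the inner nodes).
    inner = r * T[:-2] + (1.0 - 2.0 * r) * T[1:-1] + r * T[2:]
    # Concatenate the boundary values back on to obtain the full T_{t+1}.
    return np.concatenate(([left], inner, [right]))

T = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
T_next = heat1d_step(T, r=0.2)
```

The roll method produces identical interior values; it differs only in how the boundary nodes are patched afterwards.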
Then concatenation is used to combine the inner elements with the boundary values to obtain the final T_{t+1}. (2) Convolution. The heat equation is a linear differential equation, so the inner elements of T_{t+1} can be obtained by applying a convolution operation to T_t with a filter of [r, (1 − 2.0 × r), r]. Concatenation is still required to obtain the final T_{t+1}. (3) Roll. We can ignore the boundary conditions first and use the roll operation to obtain arrays of T_t (of size N_e) that are shifted by one position to the left or to the right. Basic vector arithmetic operations are then used to calculate T_{t+1}, and the boundary values are corrected at the end.

Finite-difference model to the 2D heat equation (HEAT2D)

If a 2D square domain (side length = L) is discretised into a regular grid with a mesh size Δx = Δy = L/(N_x − 1), and the same finite-difference approximations are used for the 2D heat equation, the following numerical scheme (Eq 5) is obtained with only one substep. The discretised solution on the nodes is a 2D array T_t with the number of elements N_e = N_x × N_x. Similarly, r = aΔt/(Δx)^2 and Δx = L/(N_x − 1). Fig 3B shows the solution to Eq 5 within a square domain (from −1.0 to 1.0 in both directions and L = 2.0) subjected to the initial condition of Eq 6 and zero-temperature boundary conditions. Parameters are a = 1.0, r = 0.2, N_x = 512 and t_0 = 0.001. The analytical solution to this problem is also shown in Eq 6. With similar techniques as in Table 2, three methods are available to implement this 2D model with array programming; the details are omitted here.

MacCormack scheme to 2D Navier-Stokes equation with artificial compressibility (NS2D)

A 2D rectangular domain (with side lengths L_x and L_y) is discretised into a regular grid with mesh sizes Δx = L_x/(N_x − 1) and Δy = L_y/(N_y − 1).
The solution variables/arrays of the 2D Navier-Stokes equations are the horizontal component of velocity (U_t), the vertical component of velocity (V_t) and the pressure (P_t), all with the shape of (N_x, N_y). The total number of elements for each array is N_e = N_x × N_y. If the artificial compressibility method is used and the equations are discretised using the MacCormack scheme, the model is made of two maximally fused substeps [24]. For the first substep (often called the predictor substep), the input arrays are U_t, V_t, P_t and the output arrays are the "provisional" variables/arrays U_t*, V_t*, P_t*. The detailed numerical scheme is in Eq 7, in which all the first-order derivatives in space are approximated with a forward finite difference and second-order derivatives are approximated with a central finite difference. In Eq 7, ν is the kinematic viscosity of the fluid, c is an artificial constant representing the speed of sound, and Δt is the timestep, which must meet the stability criteria (Δt < C_max Δx/c and Δt < C_max (Δx)^2/ν). For the second substep (often called the corrector substep), the input arrays are the solution arrays from the previous timestep U_t, V_t, P_t and the "provisional" arrays U_t*, V_t*, P_t*. The output arrays are the solution arrays for the next timestep U_{t+1}, V_{t+1}, P_{t+1}. In the corrector substep, the first-order derivatives in space are approximated with a backward finite difference; the detailed numerical scheme is in Eq 8. A particular application of this numerical scheme is the cavity flow problem (Fig 3C and 3D). A square domain (side length = 1 m) is filled with fluid, which is driven by the top lid. So, the top boundary has a constant velocity (U = u_0 = 1 m/s and V = 0), and the other boundaries have zero velocities. A Neumann boundary condition is used for the pressure field (i.e., ∂P/∂n = 0 → P_{t+1}[0,:] = P_{t+1}[1,:], . . .).
Simulations start with homogeneous variables (U = 0, V = 0, and P = 0), and the final steady-state velocity field depends on the Reynolds number Re = u_0 L_x/ν [24]. Fig 3C shows the streamlines for Re = 5000, which clearly show a primary vortex and three small vortices at the corners. Fig 3D shows the velocities along the mid lines from this study and results from Ghia et al. [21]. The speed of sound is chosen as c = 0.1 u_0, and a steady state is achieved after 2 × 10^5, 7 × 10^5, and 35 × 10^5 increments for Re = 100, 1000, and 5000, respectively. Similarly, this model can be implemented with array programming. However, because this model is non-linear, the method of using convolution operations is not possible.

Computer performance

In this paper, the optimal implementations of the substeps/operations on both the CPU platform and the GPU platform are studied. The performance of such optimal implementations is then examined, modelled, and compared with the performance of XLA implementations. These tests are conducted on two computers: one personal computer (PC) and one high-performance computing (HPC) workstation. Computation is possible on both the CPU and GPU platforms of these two computers. The PC has an Intel Core i7-9850H CPU and an Nvidia Quadro P620 GPU running on Ubuntu 20.04.5 LTS. The HPC has an Intel Xeon Gold 6238R CPU and an Nvidia Quadro RTX 5000 GPU running on Red Hat Enterprise Linux Workstation 7.9. The technical specifications of the two computers are given in Table 3 (CPU platforms) and Table 4 (GPU platforms). The execution of the operations costs computer resources, and the execution time depends on computer performance. The two relevant measurements of computer performance for numerical models are floating-point operations per second (FLOPS) and bandwidth. The two most common floating-point formats are single-precision floating-point numbers (each number occupies 32 bits or 4 bytes) and double-precision floating-point numbers (64 bits or 8 bytes).
They are conveniently denoted as f32 and f64 in later discussions.

Floating-point operations per second (FLOPS)

As the name indicates, FLOPS measures the number of floating-point operations (FLOPs) a processor can execute within a second. A typical unit is GFLOPS (i.e., gigaFLOPS, or 10^9 FLOPS). On the CPU platform, it can be calculated as FLOPS = No. of sockets × No. of cores per socket × CPU frequency × No. of FLOPs per cycle [25]. The number of FLOPs per cycle depends on the instruction set. For example, 128-bit SSE (streaming SIMD extensions) is one of the SIMD (single instruction, multiple data) instruction sets; it can execute 8 FLOPs per cycle in single precision and 4 FLOPs per cycle in double precision (Table 3). The 256-bit AVX (advanced vector extensions) doubles the FLOPs per cycle (Table 3). The high FLOPS of the CPUs is achieved by these SIMD instruction sets, and the corresponding values are calculated from the equation and shown in Table 3. It is shown that the FLOPS of 256-bit AVX is twice that of 128-bit SSE, and the FLOPS of single-precision calculation is twice that of double precision. The FLOPS can also be benchmarked by computer programs. Table 3 includes the FLOPS benchmarked by an open-source program called Flops [26], which is slightly higher than the theoretical values. The FLOPS of the GPU platforms is provided by Nvidia [27] and is listed in Table 4. For the two GPUs, the FLOPS of single-precision calculation is significantly higher than that of double precision. In summary, the FLOPS for each processor can be written as a function of two arguments, FLOPS(s_f, x_is), where s_f is the size of a floating-point number (i.e., 4 bytes for f32 and 8 bytes for f64) and x_is denotes the instruction set.

Bandwidth (BW)

The typical architecture of the CPU and GPU computing platforms is illustrated in Fig 4. On both platforms, the calculations (i.e., FLOPs) are performed by the arithmetic logic unit.
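The FLOPS formula above can be evaluated directly. The numbers below are hypothetical placeholders (not the Table 3 entries): one socket, six cores, a 2.6 GHz clock, and 256-bit AVX at 16 single-precision FLOPs per cycle (twice the 8 FLOPs per cycle of 128-bit SSE, as stated above):

```python
def peak_gflops(sockets, cores_per_socket, freq_ghz, flops_per_cycle):
    # FLOPS = sockets x cores per socket x frequency x FLOPs per cycle [25]
    return sockets * cores_per_socket * freq_ghz * flops_per_cycle

f32_avx = peak_gflops(1, 6, 2.6, 16)  # single precision, 256-bit AVX
f64_avx = peak_gflops(1, 6, 2.6, 8)   # double precision is half
```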
The data to be operated on by the arithmetic logic unit (called operands) reside in registers, and so do the operation results. However, the register file is very small, so most data are stored in the memory or cache. So, to finish FLOPs, operands must be read into registers, and the operation results written back to the memory or cache after calculation. Bandwidth is a measure of the rate at which data is read or written. A typical unit is GB/s (i.e., gigabytes per second). For many applications, the processor (CPU or GPU) tends to access the same set of memory locations repetitively over a short period (called the locality of reference), so both computing platforms are optimised with a hierarchical memory system (Fig 4): from the L1 cache, L2 cache, and L3 cache to the memory, with increasing storage size but decreasing bandwidth. The theoretical bandwidth of the memory can be calculated as BW = memory frequency × No. of data transfers per cycle × No. of channels × bus width [28]. Here the number of data transfers per cycle is two for double-data-rate memory (i.e., DDR, DDR2, . . .). This theoretical bandwidth is often referred to as the "burst rate" (calculated and listed in Tables 3 and 4) because it is not sustainable. It is more realistic to benchmark the bandwidth with computer programs. Fig 5A gives some results from the open-source software bandwidth-benchmark [29] for the CPU platform of the PC. The core routines of this software are written in low-level assembly language, so the benchmarked result is a good measure of the hardware performance and does not depend on the compiler version or options. The following five observations are typical for CPU platforms: 1. The bandwidth depends on how much data is read/written with each instruction. Sequential reads of 256 bits (i.e., 256-bit AVX; black filled circles in Fig 5A) are faster than reads of 128 bits (i.e., 128-bit SSE; black hollow triangles), and the slowest are reads of 64 bits (black hollow circles). 2.
The bandwidth of reading is slightly faster than that of writing, but the gap is small (black filled circles vs. black filled triangles in Fig 5A). 3. The bandwidth depends on how much total memory is required. When the required memory size is smaller than the L1 cache size, the source and/or the destination can reside in the L1 cache, so the bandwidth is the fastest (~250 GB/s for reading in 256-bit AVX in Fig 5A). With an increased size of required memory, the bandwidth decreases, and Fig 5A clearly shows the abrupt drop of bandwidth at three critical positions (i.e., the L1, L2 and L3 cache sizes). When the required memory size is larger than the L3 cache size, the copy operation cannot resort to the caches, so the benchmarked bandwidth is the sustained bandwidth of the main memory (about 20 GB/s), which is significantly smaller than its "burst" rate (88 GB/s). Moreover, the bandwidth of the L1 cache is more than 10 times that of the main memory (i.e., 250 GB/s vs. 20 GB/s). Because of this dependence on required memory size, in later discussions a numerical model is roughly classified as small if the required memory size is smaller than the L1 cache size, medium if between the L1 and L3 cache sizes, and large if larger than the L3 cache size. 4. The bandwidth depends on the memory access pattern. CPU threads are independent and may execute at their own pace. So, CPU threads prefer cached access (Fig 6A): if a thread's current access is at a specific position, its preferred next access should be at the sequentially next position in memory. From the comparison of the black lines and red lines in Fig 5A, it is seen that the bandwidth of sequential read/write (cached access) is significantly faster than that of random read/write (uncached access). 5. The bandwidth depends on the number of threads used. The bandwidth of reading in 256-bit AVX for both the PC and the HPC is shown in Fig 5B.
Because the benchmark tool uses only one thread, the measured bandwidth of the HPC is even slower than that of the PC. It is shown in Section 5.1 that using multiple threads can increase the bandwidth. The bandwidth of the GPU platform shows five similar observations: 1. The bandwidth of the GPU platform also depends on how much data is read/written with each instruction. It is shown in Section 5.2 that using the CUDA built-in types float4 and double2 (i.e., reading/writing four f32 or two f64 numbers with one instruction) gains only a minor improvement compared with using f32 or f64 (i.e., reading/writing one f32 or f64 number with one instruction). 2. There is no evidence of a noticeable difference between the bandwidth of reading and that of writing on the GPU platform, so reading and writing are assumed to have equal bandwidth. 3. The bandwidth of the GPU platform also depends on how much total memory is required. The red lines in Fig 5B are results from bandwidthTest (with option "shmoo") from the CUDA samples [30]. When the required memory size is small (< 64 kB), the bandwidth is very small (< 5 GB/s). The bandwidth gradually increases with the increase of the required memory size from 64 kB to about 16 MB. There is a peak bandwidth of about 900 GB/s for the HPC (at about 4 MB required memory size). When the required memory size is larger than 16 MB, the bandwidth is constant (about 82.4 and 372.3 GB/s for the PC and HPC, respectively) and is slightly smaller than the "burst" rate (96.1 and 448.0 GB/s, respectively). Similarly, on the GPU platform, a numerical model is roughly classified as small if the required memory size is smaller than 64 kB, medium if between 64 kB and 16 MB, and large if larger than 16 MB. 4. The bandwidth of the GPU platform also depends on the memory access pattern. CPU threads are independent and may execute at their own pace.
In contrast, the GPU threads execute synchronously (i.e., threads in groups must execute instructions together), and all threads in a group (warp) must finish their work before any thread can move on [31]. Therefore, the GPU threads prefer coalesced access (Fig 6B): if a thread's current access is at a specific position, the next thread's preferred current access should be at the sequentially next position in memory. It is shown in Section 5.6 that the bandwidth of coalesced access is significantly faster than that of uncoalesced access. 5. The GPU platform is designed to run multiple threads synchronously, and the number of threads is often a multiple of the warp size (32 for both the PC and the HPC). So, the commonly used setting is running with 128, 256, 512, or 1024 threads concurrently. In contrast to the CPU platform, the bandwidth of the GPU platform is not affected by the number of threads (at least for the commonly used settings), as shown in Section 5. In summary, if the small bandwidth gap between reading and writing is ignored, the bandwidth can be written as a function of four arguments, BW(s_t, x_is, x_ap, x_th), where s_t is the total size of required memory, x_is indicates the instruction set, x_ap indicates the memory access pattern, and x_th indicates the use of parallel threads.

Computing latency of operations

The total number of FLOPs for one execution of an operation can be roughly estimated. If there are α_q FLOPs for the qth equation in Eq 2, then approximately α_q N_e FLOPs are needed to calculate Y_q, ignoring the special equations for the boundary conditions. Here, N_e is the number of elements in an array. The total number of FLOPs is then α_1 N_e + α_2 N_e + . . . + α_{N_o} N_e = αN_e. α is a dimensionless constant determined by the numerical scheme (Eq 2). For the operations in this study, the α values are presented in Table 1.
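The per-element FLOP count α can even be measured mechanically. The sketch below counts the operations in one HEAT1D inner-node update (Eq 4) with a tiny tracing scalar of our own; it yields 5 FLOPs per element when (1 − 2r) is precomputed, which may differ from the bookkeeping convention of Table 1:

```python
class Traced:
    """A scalar that counts the floating-point operations applied to it."""
    flops = 0

    def __init__(self, value):
        self.value = value

    def _binop(self, other, fn):
        Traced.flops += 1
        v = other.value if isinstance(other, Traced) else other
        return Traced(fn(self.value, v))

    def __add__(self, other):
        return self._binop(other, lambda a, b: a + b)
    __radd__ = __add__

    def __mul__(self, other):
        return self._binop(other, lambda a, b: a * b)
    __rmul__ = __mul__

r = 0.2
c = 1.0 - 2.0 * r            # plain-float constant, precomputed once
Tm, Tc, Tp = Traced(0.0), Traced(1.0), Traced(0.0)
out = r * Tm + c * Tc + r * Tp   # one inner-node update of Eq 4
alpha = Traced.flops             # 3 multiplications + 2 additions
```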
The time spent on FLOPs is then approximately t_flop = αN_e/FLOPS. The required memory size is also approximately proportional to the number of elements if the non-array parameters are ignored, i.e., the required memory size is γN_e s_f. Here s_f is the size of a floating-point number (i.e., 4 bytes for f32 and 8 bytes for f64) and γ is a dimensionless constant. In most applications, operations are implemented as in-place updates to reduce the required memory size. For example, for the NS2D operation, six arrays (γ = 6) are allocated in memory. In the corrector step, the output arrays U_t+1, V_t+1, P_t+1 and the input arrays U_t, V_t, P_t use the same arrays in computer memory, and the outputs are updated in place. Similarly, in-place updates are used for AXPY and XPXPYN, so γ = 2 for them. The total number of memory operations can also be estimated. For all the N_o equations like Eq 2, the number of writing operations (i.e., writes of variables from the register to the memory or cache) is N_o, and the number of reading operations N_r equals the number of non-duplicate array elements on the right-hand side of the equations (non-duplicate because an element only needs to be read into the register once). Therefore, the total amount of memory operations is approximately N_o N_e s_f + N_r N_e s_f = βN_e s_f. For element-wise operations, the number of non-duplicate elements on the right-hand side equals the number of input arrays N_i, so β = N_i + N_o. In implementations, it is not always possible to conduct all these memory operations at the same bandwidth, so the workload is split into two parts: one part operated at a lower bandwidth (β_lo N_e s_f) and the other part at a higher bandwidth (β_hi N_e s_f), with β = β_lo + β_hi.
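The workload accounting above (β = N_i + N_o for element-wise operations, memory footprint γN_e s_f) can be sketched as follows; the AXPY figures in the usage example (N_i = 2, N_o = 1, γ = 2 with an in-place update) follow the text, while the mesh size is an arbitrary illustration.

```python
def memory_workload(n_in, n_out, n_e, s_f=8):
    """Bytes moved per execution of an element-wise operation: beta*N_e*s_f,
    with beta = N_i + N_o (each input read once, each output written once)."""
    beta = n_in + n_out
    return beta * n_e * s_f

def required_memory(gamma, n_e, s_f=8):
    """Approximate memory footprint gamma*N_e*s_f (non-array parameters ignored)."""
    return gamma * n_e * s_f

# AXPY with an in-place update: N_i = 2 (X and Y_t), N_o = 1, gamma = 2, f64.
print(memory_workload(2, 1, 10**6))  # 24000000 bytes moved per execution
print(required_memory(2, 10**6))     # 16000000 bytes of footprint
```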
The time spent on memory operations is then approximately t_m = t_mlo + t_mhi = β_lo N_e s_f/BW_lo + β_hi N_e s_f/BW_hi. An operation is called memory-bound if the time spent on memory operations (mostly on low-bandwidth operations) is larger than that on FLOPs (i.e., t_mlo > t_flop). Otherwise, it is FLOP-bound. Therefore, an operation is memory-bound if α/β_lo < s_f FLOPS/BW_lo. The left-hand side of the inequality is a ratio of workloads between FLOPs and memory operations, which is determined by the numerical scheme. The right-hand side is a hardware parameter, and a new symbol is used for it: (α/β_lo)_c. This hardware parameter represents a critical ratio, and an operation is memory-bound if α/β_lo < (α/β_lo)_c. Additionally, the ratio of time spent on FLOPs and low-bandwidth memory operations is t_flop/t_mlo = (α/β_lo)/(α/β_lo)_c. Similarly, the time spent on memory operations is mostly due to low-bandwidth reads/writes (i.e., t_mlo > t_mhi) when β_hi/β_lo < BW_hi/BW_lo. Again, the left-hand side is a ratio of workloads, the right-hand side is a hardware parameter, and a new symbol is used for it: (β_hi/β_lo)_c. The ratio of time spent on high- and low-bandwidth memory operations is t_mhi/t_mlo = (β_hi/β_lo)/(β_hi/β_lo)_c. In parallel computing, the coordination of all the parallel threads costs computer resources and thus time. Additionally, reading the non-array parameters costs bandwidth. These and many other tasks do not scale with the mesh/grid size (N_e), and the time spent on them is collectively denoted as the overhead time t_oh. Because of this non-scaling, the overhead time is often negligible compared with the time spent on low-bandwidth memory operations (t_oh/t_mlo << 1) when the mesh size N_e is large. The computing speed of an operation can be measured by latency (LT; the time in seconds needed for one execution of the operation) and throughput (THP; the number of executions completed within a second).
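The memory-bound test above is a one-line comparison against the critical ratio. A minimal sketch, with assumed (illustrative) hardware numbers rather than the measured ones:

```python
def is_memory_bound(alpha, beta_lo, s_f, flops_rate, bw_lo):
    """An operation is memory-bound when alpha/beta_lo < s_f*FLOPS/BW_lo,
    i.e., its workload ratio is below the hardware's critical ratio
    (alpha/beta_lo)_c = s_f*FLOPS/BW_lo."""
    critical_ratio = s_f * flops_rate / bw_lo
    return alpha / beta_lo < critical_ratio

# Illustrative hardware: 500 GFLOPS, BW_lo = 28 GB/s, f64 (s_f = 8).
# XPXPY20_1D has alpha = 20 and beta_lo = 3, so alpha/beta_lo ~ 6.7,
# far below the critical ratio ~ 143: the operation is memory-bound.
print(is_memory_bound(20, 3, 8, 500e9, 28e9))  # True
```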
In most scenarios of the present study, the computing time is mostly spent on low-bandwidth memory operations, i.e., t_mlo = max(t_flop, t_mlo, t_mhi, t_oh), and LT ≈ t_mlo. Additionally, these workloads may run in parallel (e.g., the processor may conduct FLOPs and at the same time read the array elements needed for the next FLOPs), so the LT is smaller than the sum of these estimated times, i.e., LT ≤ t_flop + t_mlo + t_mhi + t_oh. The following equation (Eq 9) is therefore obtained: t_mlo ≤ LT ≤ t_flop + t_mlo + t_mhi + t_oh.
Optimisation strategies
The size N_e is determined by the discretised mesh/grid and is thus fixed. The memory size of a floating-point number s_f (4 bytes for f32 and 8 bytes for f64) is determined by the accuracy requirements. So, the required memory size of a numerical model is fixed (γN_e s_f), and so is its relative size (e.g., small, medium, or large). From Eq 9, to reduce latency and to increase throughput, there are two categories of approaches: optimise the numerical scheme (the coefficients α and β are determined by the numerical scheme) and optimise the implementation on the target computing platform (the FLOPS and BW are hardware parameters that depend on implementation details). The optimisation of the numerical scheme is similar to the target-independent optimisation in XLA and includes the following strategies: (1) Optimisations to have maximally fused substeps, as explained in Section 2. Fused operations can reduce memory operations and are one of the best ways to improve performance (demonstrated in Section 5.3). (2) Optimisations to reduce the number of FLOPs, i.e., reduce α. (3) Optimisations to reduce the number of memory operations, i.e., reduce β. The numerical models examined in this study are already optimised with maximally fused substeps, and Eq 4~Eq 8 are already expressed in the optimal format with the smallest values of α and β. The element-wise operations are also optimal except for XPXPYN.
After optimisation, XPXPYN would reduce to Y_t+1 = Y_t, which has smaller α (0) and β (2). However, XPXPYN is designed to examine how the number of FLOPs influences the computing speed. So, in this study, the numerical scheme equations are assumed to be already optimised, and α and β simply denote the smallest values of the optimised numerical schemes. The optimisation of the implementation depends on the target computing platform and includes the following strategies: (1) Optimisations to increase FLOPS. It is shown in Section 3.1 that the FLOPS is a function of two arguments, FLOPS(s_f, x_is). The size of the floating-point number s_f is determined by the accuracy requirements, so the optimal implementation should choose the appropriate instruction sets to have FLOPS_max(s_f) = max{FLOPS(s_f, x_is)}. (2) Optimisations to increase bandwidth. Similarly, it is shown in Section 3.2 that the bandwidth is a function of four arguments, BW(s_t = γN_e s_f, x_is, x_ap, x_th). The total required memory size s_t = γN_e s_f is fixed by the problem. So the optimal implementation should choose the appropriate methods to have BW_lo_max(s_t = γN_e s_f) = max{BW_lo(s_t = γN_e s_f, x_is, x_ap, x_th)} and BW_hi_max(s_t = γN_e s_f). (3) Optimisations to reduce low-bandwidth memory operations. After the optimisation of the numerical scheme, the total number of memory operations (i.e., β) is fixed. Higher computing speed can still be achieved by conducting more of the memory operations at higher bandwidth (i.e., reducing β_lo and increasing β_hi). (4) Optimisations to reduce the overhead time. After these optimisations, the latency is minimised and the throughput is maximised. They can be estimated by substituting the optimal hardware parameters (e.g., FLOPS_max, BW_lo_max, and BW_hi_max) into Eq 9.
Effective bandwidth
The input arrays are read at least once at low bandwidth and the output arrays are written at least once, so the minimal possible value of the coefficient β_lo is N_i + N_o.
It is shown in Section 5 that optimal implementations do have β_lo = N_i + N_o and β_hi = β - (N_i + N_o). Therefore, for a fixed mesh size (N_e) with a fixed accuracy requirement (s_f), the minimum workload of low-bandwidth memory operations is fixed at β_lo N_e s_f = (N_i + N_o)N_e s_f. The computing speed can then be measured by another quantity, the effective bandwidth BW_e = β_lo N_e s_f/LT = (N_i + N_o)N_e s_f/LT = (N_i + N_o)N_e s_f × THP, which is a measure of throughput in terms of memory operations. From this equation, the latency, throughput, and effective bandwidth are all equivalent, and it is easy to calculate one from another. However, the effective bandwidth has the advantage that it can be compared with some reference values (e.g., benchmarked bandwidth, theoretical burst rate, etc.). With these definitions, the following equation (Eq 10) regarding the effective bandwidth BW_e is obtained from Eq 9: z·BW_lo ≤ BW_e ≤ BW_lo, with z = 1/[1 + (α/β_lo)/(α/β_lo)_c + (β_hi/β_lo)/(β_hi/β_lo)_c + t_oh/t_mlo], where the definitions of (α/β_lo)_c and (β_hi/β_lo)_c have been substituted in. Therefore, the effective bandwidth lies between the low bandwidth BW_lo and a fraction of it. After optimisations, the effective bandwidth is maximised (BW_e_max), and it can be estimated by substituting the optimal hardware parameters (e.g., FLOPS_max, BW_lo_max, and BW_hi_max) into Eq 10.
Optimal implementations and XLA performance
In this section, the optimal implementations of the operations on the various platforms and computers are examined. The computing speed is modelled and compared with that of the XLA implementations. The codes used to reproduce the results of the present study are available in the GitHub repository (https://github.com/xuzhen-he/XLA_numerical_models), and the pseudocodes are shown in Table 5. On the CPU platform, parallel computing is fulfilled by compiling the source codes with the GNU C/C++ compiler (version 9.4.0 for the PC and 10.2.1 for the HPC), using GCC's implementation of OpenMP.
Shared-memory parallelism is achieved by simply using OpenMP directives on the C/C++ loops. Executables are compiled with the highest level of optimisation (i.e., with option -O3) and optimised with all the instruction sets of the local machine (i.e., with option -march=native). Additionally, auto-vectorisation is enabled by default at this level of optimisation; information about vectorisation is dumped with the option -fopt-info-vec. On the GPU platform, parallel computing is fulfilled by compiling the source codes with the CUDA C/C++ compiler (version 11.7 for the PC and 11.6 for the HPC). Pseudocodes of the operations (called kernels in CUDA) are shown in Table 5. In CUDA, the kernels are invoked with grids of thread blocks, which mimics how GPU processors are physically grouped. Grids and thread blocks can be 1D, 2D, or 3D. It is natural to use 1D grids and thread blocks for 1D problems and 2D grids and thread blocks for 2D problems. Within a CUDA kernel, the index of the thread, the dimension of the thread block, and the index of a thread block within the grid are accessed through the built-in variables threadIdx, blockDim, and blockIdx, respectively (Table 5). The latency is measured by running each operation for at least 5 seconds and at least 20 times, and the average latency is recorded. Array programming of the operations is achieved with the Python package JAX (version 0.3.17 for both the PC and HPC) and JAXLIB (version 0.3.15 for both the PC and HPC). The first run of a JAX implementation is not timed because it contains the optimisation and compilation with XLA. On the CPU platform, two types of implementations are examined. The first is a single-thread SIMD (256-bit AVX) implementation. This is achieved by using the OpenMP directive omp simd. Equivalently, without the OpenMP directive, the highest level of optimisation (-O3) will automatically vectorise (256-bit AVX) the loop.
With one thread, the 256-bit AVX implementation can achieve the highest FLOPS and bandwidth considering observations (1)~(4) of Section 3.2, and the benchmarked bandwidth from the tool bandwidth-benchmark is shown as black solid lines in Fig 7(A) and 7(B). The effective bandwidth of COPY1D with this implementation (black hollow triangles) is very close to the benchmarked bandwidth, except for a relatively larger gap when the required memory size is small (γN_e s_f < L1 cache size). This gap arises because the benchmark tool is written in a low-level assembly language, whereas the present single-thread SIMD implementation uses a high-level programming language (C/C++), and thus more overhead is involved. The results also show that this overhead is negligible (t_oh/t_mlo << 1) for medium and larger problems (γN_e s_f > L1 cache size).
Element-wise vector operations: Optimal implementations on the CPU platform
The second implementation uses the OpenMP directive omp parallel for with N threads. In this method, the array is divided into N sub-arrays (each still contiguous), and 256-bit AVX is used by each thread on its sub-array. When only one thread is used, this implementation should be identical to the single-thread SIMD implementation. However, the tested effective bandwidth (hollow circles) is slightly below that of the single-thread SIMD implementation (hollow triangles), which implies that using the omp parallel for directive involves more overhead. When the maximal number of threads is used (12 for the PC and 26 for the HPC), the two CPU platforms show different features. When the problem (γN_e s_f) is small, the overhead involved in using many threads is large (i.e., t_oh/t_mlo is large), so the effective bandwidth (filled circles) is very small. In particular, it is smaller than that of using a single thread (hollow triangles). The effective bandwidth with the maximal number of threads starts to surpass that of a single thread when approximately γN_e s_f > L1 cache size on the PC and γN_e s_f > L3 cache size on the HPC.
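The "omp parallel for" strategy described above (one contiguous sub-array per thread) can be sketched in Python, with a thread pool standing in for the OpenMP threads; the operation, thread count, and slice bounds are illustrative, not taken from the paper's code.

```python
from concurrent.futures import ThreadPoolExecutor

def axpy_parallel(a, x, y, n_threads=4):
    """Sketch of the 'omp parallel for' strategy: split the arrays into
    n_threads contiguous sub-arrays and let each thread update its slice
    in place (y <- a*x + y), mirroring the per-thread AVX loop in the text."""
    n = len(x)
    bounds = [(t * n // n_threads, (t + 1) * n // n_threads)
              for t in range(n_threads)]

    def work(lo_hi):
        lo, hi = lo_hi
        for i in range(lo, hi):      # each slice is contiguous in memory,
            y[i] = a * x[i] + y[i]   # like a static OpenMP schedule

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(work, bounds))
    return y

print(axpy_parallel(2.0, [1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0, 1.0]))
# [3.0, 5.0, 7.0, 9.0]
```

Python threads do not give real memory-bandwidth parallelism here; the sketch only illustrates the partitioning scheme.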
The overhead cost of using multiple threads does not scale with the problem size (γN_e s_f) but rather scales with the number of threads. So, on the HPC with more threads, a larger problem size (γN_e s_f) is required to make the overhead cost negligible (t_oh/t_mlo << 1). Another interesting feature is that, for the PC, the effective bandwidth shows an abrupt drop for extremely large problems (γN_e s_f > 16 GB), which is likely because the main memory (32 GB) is made of two modules (each of 16 GB). When γN_e s_f is larger than 16 GB, the 1D array is no longer physically contiguous. The discussions below mainly concern medium (L1 cache size < γN_e s_f < L3 cache size) and large problems (L3 cache size < γN_e s_f < 16 GB) on the PC and large problems (γN_e s_f > L3 cache size) on the HPC, because (1) the latency of small problems is often small and their computing speed is less of a concern, and (2) the overhead cost for small problems is not negligible, so the analysis in this study does not apply to them. So, for medium and large problems on the PC and large problems on the HPC, the optimal implementation is parallel computing with the maximum number of threads, where each thread handles a contiguous sub-array with SIMD instruction sets (256-bit AVX for the tested CPUs). Element-wise operations have β_lo = N_i + N_o and β_hi = 0. Additionally, the COPY1D operation has α = 0. Hence, the time spent on COPY1D consists only of the low-bandwidth memory operations (t_mlo) and the overhead cost (t_oh). For medium and large problems on the PC and for large problems on the HPC, the overhead cost is negligible (t_oh/t_mlo << 1), so z = 1 in Eq 10 and BW_e_max = BW_lo_max for COPY1D, which means that the measured effective bandwidth of the optimal implementation of COPY1D equals the maximum bandwidth. From Fig 7, for medium-size problems on the PC, the maximum bandwidth BW_lo_max is variable but reaches a peak value of about 350 GB/s at γN_e s_f ≈ 2 MB.
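The crossover between single-thread and multi-thread performance can be illustrated with a toy model of BW_e = workload/(t_mlo + t_oh). All numbers below (bandwidths, per-configuration overheads) are assumed for illustration and are not the measured values; the model only reproduces the qualitative behaviour that overhead penalises small problems while bandwidth wins for large ones.

```python
def effective_bandwidth(n_e, bw_lo, t_oh, s_f=8, beta_lo=2):
    """Toy model: BW_e = beta_lo*N_e*s_f / (t_mlo + t_oh),
    with t_mlo = beta_lo*N_e*s_f / BW_lo and a fixed overhead t_oh."""
    work = beta_lo * n_e * s_f  # bytes moved at low bandwidth
    return work / (work / bw_lo + t_oh)

# Assumed numbers: one thread reaches 20 GB/s with ~1 us overhead;
# many threads reach 50 GB/s but pay ~50 us of coordination overhead.
single = lambda n_e: effective_bandwidth(n_e, 20e9, 1e-6)
multi = lambda n_e: effective_bandwidth(n_e, 50e9, 50e-6)

print(single(10**4) > multi(10**4))  # True: overhead dominates small problems
print(multi(10**7) > single(10**7))  # True: bandwidth wins for large problems
```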
It is constant for large problems on both the PC and HPC: about 28 and 55 GB/s, respectively. These data are summarised in Table 6. The maximum bandwidth BW_lo_max is much higher for medium-size problems than for large problems (350 vs. 28 GB/s on the PC) because the CPU platform can take advantage of the fast L1/L2/L3 caches for medium-size problems.
Element-wise vector operations: Optimal implementations on the GPU platform
On the GPU platform, a straightforward implementation allocates contiguous memory as arrays of floating-point numbers (f32 or f64), and each array element is handled by a GPU thread (as in the pseudocode in Table 5). Another implementation is to exploit the CUDA built-in types float4 or double2. In the CUDA kernels, the f32 and f64 arrays are converted (reinterpret_cast) to arrays of float4 and double2, respectively. Each GPU thread then conducts the calculation of four (f32) or two (f64) output elements. With these built-in types, the compiled program can take advantage of the instructions ld.global.v4.f32 and ld.global.v2.f64, which read/write four f32 numbers or two f64 numbers with one instruction. The obtained effective bandwidth is shown in Fig 8 as hollow triangles. The use of these CUDA built-in types gives minor improvements for medium-size problems (64 kB < γN_e s_f < 16 MB). However, for large problems (γN_e s_f > 16 MB), the effective bandwidth is the same, except for some minor gains for COPY1D and SCALE1D on the PC (79 vs. 72 GB/s). For the other operations on the PC, both implementations give similar results (Fig 8 only shows XPXPY20_1D). Because the gain is minor but the complexity of programming is significantly increased, the first, straightforward implementation is considered optimal in this study. Similarly, the discussions below mainly concern medium and large problems, for which the overhead cost is negligible (t_oh/t_mlo << 1).
So the measured effective bandwidth of the optimal implementation of COPY1D equals the maximum bandwidth (i.e., z = 1 and BW_e_max = BW_lo_max for COPY1D). It should be noted that the effective bandwidth of the optimal COPY1D implementation is very close to the benchmarked bandwidth (black solid lines).
Element-wise vector operations: Fused operations vs. simple operations
How fused operations can improve performance is illustrated with two vector operations (AXPY1D and XPXPY6_1D). First consider AXPY1D. For the fused operation, the scaling of X[i] by the parameter a and the addition of the result onto Y_t[i] are finished within one operation (i.e., in the same for-loop on the CPU and in the same kernel on the GPU). The AXPY1D operation can also be achieved by conducting two simple operations: an in-place scaling of X[i] first, followed by adding the result onto Y_t[i] with a vector-addition operation. For the fused AXPY1D, the array X is read once, and the array Y (updated in place, so covering both Y_t and Y_t+1) is read once and written once. However, if two simple operations are conducted, X is read twice and written once, and the array Y is read once and written once. So, the fused operation needs fewer memory operations (3N_e s_f vs. 5N_e s_f). Fig 9 shows the measured effective bandwidth. For large problems, the effective bandwidth of the fused operation (black filled circles) is about 22.8 GB/s (CPU) and about 79 GB/s (GPU). However, the simple operations achieve only about 14 GB/s (CPU) and 45 GB/s (GPU), about 60% of that of the fused operation and thus 67% more latency, which exactly matches the ratio of memory operations (3:5). A similar reduction of effective bandwidth is observed for medium-size problems. VEC XPXPY6 is examined next, and the results are shown as red circles in Fig 9. In this case, the ratio of memory operations is 1:6. For large problems, the effective bandwidth is 3.8 GB/s vs.
26.7 GB/s on the CPU platform, and 13.1 GB/s vs. 79 GB/s on the GPU platform: the effective bandwidth of the simple operations is only 16.7% of that of the fused operation, i.e., 500% more latency. These results demonstrate that fused operations can reduce memory operations and significantly improve the computing speed.
Element-wise vector operations: Modelling the optimal computing speed
Element-wise operations have β_lo = N_i + N_o and β_hi = 0. So, for medium or large problems, for which the overhead cost is negligible, the execution time is only spent on FLOPs (t_flop) and low-bandwidth memory operations (t_mlo). On the CPU platform, for large problems, the effective bandwidth of XPXPY20_1D with many FLOPs (α = 20) is almost the same as that of COPY1D with no FLOPs (α = 0) in Fig 7, but a visible reduction of effective bandwidth is observed for medium-size problems. Similar observations are found for the f32 calculations shown in Fig 10A, in which the effective bandwidth of six operations (with various α/β_lo) is presented. Fig 8 shows that, with f32 calculations on the GPU platform, the effective bandwidth of XPXPY20_1D is almost the same as that of COPY_1D for both medium and large problems. Recall that the critical ratio (α/β_lo)_c = s_f FLOPS/BW_lo is a hardware parameter measuring the relative performance of FLOPS and bandwidth; it also differentiates whether an operation is memory-bound or FLOP-bound. The critical ratio (α/β_lo)_c is calculated for the optimal implementations and listed in Table 6. For some scenarios, the maximum bandwidth BW_lo_max is not constant but variable; a typical value at a fixed γN_e s_f is then used. For the CPU platform, the FLOPS_max of single precision is twice that of double precision, but s_f is 4 bytes for f32 and 8 bytes for f64. So, the critical ratio (α/β_lo)_c is independent of which floating-point format is used.
However, for the GPU platforms, the FLOPS of f32 calculations is significantly higher than that of f64, so the critical ratio (α/β_lo)_c is highly sensitive to the choice of floating-point format. It can be seen that (α/β_lo)_c is very large (> 40) for large problems on the CPU platform and for f32 calculations on the GPU platform. So, for these cases, the time spent on FLOPs is almost negligible for the examined vector operations (t_flop/t_mlo = (α/β_lo)/(α/β_lo)_c < (20/3)/40 ≈ 0.167). In contrast, the critical ratio (α/β_lo)_c is relatively small (3~8) for medium-size problems on the CPU platform and for f64 calculations on the GPU platform. That is why, in these cases, a clear reduction of effective bandwidth is observed for operations with larger workload ratios α/β_lo. The element-wise operations in this study all have at most two operands. The FLOPs cannot start until all operands are read into the registers, and the writing operation cannot start until the FLOPs are finished. Hence, the time spent on FLOPs and memory operations cannot overlap, and the latency is the sum of the time spent on FLOPs, memory operations, and overhead (i.e., LT = t_flop + t_mlo + t_oh). For medium or large problems, where t_oh/t_mlo << 1, the following model (Eq 11) regarding the effective bandwidth is then obtained: BW_e = BW_lo/[1 + (α/β_lo)/(α/β_lo)_c]. If the hardware parameters of the optimal implementations (e.g., FLOPS_max and BW_lo_max) are used, the maximum effective bandwidth BW_e_max is obtained from Eq 11. The relationship between the effective bandwidth and the workload ratio α/β_lo is shown in Fig 11. The measured maximum effective bandwidth of various element-wise operations is shown as symbols, which include tests on various platforms and computers and with both f32 and f64 calculations. The solid lines are the model predictions by Eq 11, which agree well with the measured results.
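The effective-bandwidth model for element-wise operations (derived from LT = t_flop + t_mlo) can be sketched as follows; the 500 GFLOPS and 28 GB/s figures in the usage example are assumed, illustrative hardware numbers.

```python
def bw_e_model(alpha, beta_lo, flops_max, bw_lo_max, s_f):
    """Sketch of the element-wise model: BW_e = BW_lo_max / (1 + r), where
    r = (alpha/beta_lo) / (alpha/beta_lo)_c and the critical ratio is
    (alpha/beta_lo)_c = s_f*FLOPS_max/BW_lo_max. Follows from
    LT = t_flop + t_mlo when beta_hi = 0 and overhead is negligible."""
    critical = s_f * flops_max / bw_lo_max
    return bw_lo_max / (1 + (alpha / beta_lo) / critical)

# COPY (alpha = 0) recovers the full low bandwidth; a larger workload
# ratio alpha/beta_lo predicts a reduced effective bandwidth.
print(bw_e_model(0, 2, 500e9, 28e9, 8))          # 28 GB/s, no reduction
print(bw_e_model(20, 3, 500e9, 28e9, 8) < 28e9)  # True
```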
A steeper slope of the lines corresponds to a smaller critical ratio (α/β_lo)_c, and the lines are almost horizontal for cases with larger critical ratios, i.e., (α/β_lo)_c > 40. The model (Eq 11) therefore suggests a useful method to roughly predict the maximum computing speed of element-wise operations, which is valid across platforms and computers. The ratio α/β_lo is easily found from the numerical scheme equations. The FLOPS_max(s_f) can be estimated from hardware specifications or benchmarked. The maximum bandwidth BW_lo_max(s_t = γN_e s_f) can be benchmarked by running the optimal implementation of COPY1D (on the GPU platform, the result from bandwidthTest is the same).
Element-wise vector operations: The performance of XLA
With the JAX Just-In-Time (JIT) compilation, JAX Python functions are optimised and compiled by XLA into optimal implementations specific to the target platform (e.g., CPU or GPU). If the JIT transformation is not used (without XLA), the JAX Python functions are executed with many built-in high-performance simple operations. With XLA, the element-wise operations expressed in array operations are fused into one optimal operation, which is confirmed by checking the generated HLOs in the debug mode. In Fig 9, with AXPY1D and XPXPY6_1D as examples, it is shown that, compared with the effective bandwidth without XLA (hollow squares), a significant performance gain is observed with XLA (filled squares). Additionally, the performance with XLA closely resembles the maximum performance of fused operations compiled with C/C++ (OpenMP or CUDA), and the performance without XLA is very close to that of using simple operations compiled with C/C++. Another observation is that the effective bandwidth of the JAX Python implementations (with or without XLA) is lower than that of the corresponding ones compiled with C/C++, which is largely due to the relatively higher overhead cost in Python.
The performance of XLA is examined in a stringent way by comparing it with the optimal implementations discovered in the previous subsections. The effective bandwidth of six element-wise operations (COPY, SCALE, AXPY, XPXPY6, XPXPY12, XPXPY20) is measured on the various platforms and computers, with both f64 and f32 calculations. The relative efficiency, the ratio of effective bandwidth between XLA and the optimal implementations, is presented in Fig 12. Some individual results regarding the effective bandwidth of XLA are also shown in Figs 7 and 8. Some observations are: 1. On the CPU platform of the PC, the optimised HLOs are compiled with LLVM in an optimal way. For large problems (γN_e s_f > L3 cache), the effective bandwidth of XLA is only slightly below that of the optimal implementation (e.g., 22.1 vs. 28.5 GB/s for XPXPY20_1D in Fig 7A). However, for medium-size problems (L1 cache < γN_e s_f < L3 cache), XLA is less efficient (e.g., 34.0 vs. 90.0 GB/s for XPXPY20_1D in Fig 7A). Fig 12 shows that the relative efficiency (black symbols) is over 80% for large problems, but low (10%~80%) for medium-size problems. This is likely because the overhead cost of Python programs is higher than that of C/C++ programs; this cost is still relatively negligible for large problems, but not so for medium-size problems. 2. On the CPU platform of the HPC, the operations are successfully fused in the XLA optimisation (confirmed by checking the HLOs), but the optimised HLOs are not optimally compiled with LLVM. Fig 7B shows that the effective bandwidth of XLA closely resembles that of the single-thread SIMD implementation. Therefore, XLA performs well for small and medium problems, but is very inefficient for large problems (XLA ≈ 7 GB/s vs. optimal ≈ 50 GB/s for XPXPY20_1D in Fig 7B), resulting in a very low relative efficiency (< 20%; blue symbols in Fig 12). 3.
On the GPU platform (either the PC or HPC), the performance of XLA is very close to that of the optimal implementations for large problems (relative efficiency > 90%; red and magenta symbols in Fig 12), but not very efficient for medium-size problems (5%~90%), which is similarly due to the relatively higher overhead cost in Python. Some data in Fig 12 indicate a relative efficiency of over 100% on the GPU platform, but not more than 120%. An examination of the generated ptx files (a low-level assembly language) reveals that XLA takes advantage of the instructions ld.global.v4.f32 and ld.global.v2.f64, which can read/write four f32 numbers or two f64 numbers with one instruction. Recall that the use of these instructions does give a minor improvement, but is not considered optimal here due to the complexity of programming.
Element-wise matrix operations
All the matrices tested in this study are square matrices. On the CPU platform, two nested loops are used for 2D problems. How the two loops are mapped to the two indices of the matrices is critical. A row-major order is assumed, so consecutive elements in the last index are contiguous in memory. If the last index is mapped to the inner loop, the first index is mapped to the outer loop, and the outer loop is decorated with the OpenMP directive omp parallel for, then cached memory access is achieved. In this implementation, the array is divided into N_x sub-arrays (each still contiguous), and each CPU thread uses SIMD instruction sets on its sub-array. Here N_x is the size of the first index. With the maximal number of threads, the measured effective bandwidth of COPY2D and XPXPY20_2D is presented in Fig 13 as filled circles.
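The row-major index mapping above can be sketched numerically: mapping the last index to the inner loop gives unit-stride (cached) access, while swapping the loops gives strided (uncached) access. The functions below are illustrative helpers, not code from the paper.

```python
def flat_index(i, j, n_y):
    """Row-major flat index: consecutive values of the last index j
    are contiguous in memory."""
    return i * n_y + j

def inner_loop_stride(inner_is_last_index, n_y):
    """Stride between successive memory accesses of the inner loop.
    Inner loop over the LAST index -> stride 1 (cached, vectorisable);
    inner loop over the FIRST index -> stride n_y (uncached)."""
    return 1 if inner_is_last_index else n_y

print(flat_index(2, 3, 1000))          # 2003
print(inner_loop_stride(True, 1000))   # 1    -> cached access
print(inner_loop_stride(False, 1000))  # 1000 -> strided, uncached access
```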
The performance is the same as the optimal performance of the corresponding vector operations (i.e., COPY1D and XPXPY20_1D; filled diamonds in Fig 13) because the two implementations are similar; the only difference is that the array is divided by the number of threads for vector operations but by N_x for matrix operations. If the loops and indices are not correctly mapped (i.e., the last index is mapped to the outer loop), the memory access pattern is uncached (Fig 6), and a significant reduction of effective bandwidth is observed (hollow triangles in Fig 13). Fig 13B shows that the cached and uncached implementations perform the same for small and medium problems on the HPC, because the overhead cost dominates for them. On the GPU platform, CUDA kernels for matrix operations are invoked with 2D grids and thread blocks. The shape of the thread block (B_x × B_y) affects the memory access pattern and thus the effective bandwidth. Here, B_x and B_y are the sizes of the thread block along the first and last indices, respectively. When B_x is 1, the memory access by the GPU threads is coalesced and thus optimal. When B_x is larger than 1, it is uncoalesced. Fig 14 shows that the effective bandwidth is maximised when B_x is 1 (black circles in Fig 14; B_x = 2 also works fine) and decreases with increasing B_x (from black to red, blue, magenta, and green). Additionally, Fig 14 shows that, with an optimal memory-access pattern (coalesced access for GPUs), the maximum effective bandwidth of matrix operations (black filled circles) is the same as that of the corresponding vector operations (filled diamonds).
The results above show that, on both the CPU and GPU platforms, the maximum effective bandwidth of matrix operations is the same as that of the corresponding vector operations. Comparing the XLA results for matrix operations with the filled squares in Figs 7 and 8, which are for the corresponding vector operations, it is found that the performance of XLA for matrix operations is the same as that for the corresponding vector operations: the simple operations are successfully fused by XLA, and the optimised HLOs are compiled optimally for the CPU platform of the PC and for the GPU platforms, but in a non-optimal way for the CPU platform of the HPC. Therefore, the relative efficiency is the same for both vector and matrix operations, which is supported by the data (the filled symbols in Fig 12 are from matrix operations, and they collapse onto the hollow symbols from vector operations).
1D finite-difference model: Optimal implementations
The optimal implementation of the 1D model (HEAT1D) is examined in this section. On the CPU platform, two implementations are assessed: (1) a single-thread 256-bit AVX implementation, and (2) the use of the OpenMP directive omp parallel for with multiple threads. Similar to the vector operations, using omp parallel for with one thread (hollow circles in Fig 15) is almost the same as the single-thread 256-bit AVX implementation (hollow triangles), but involves slightly higher overhead. Using omp parallel for with the maximum number of threads (filled circles in Fig 15) is optimal for medium and large problems on the PC and for large problems on the HPC. HEAT1D and COPY1D both have β_lo = 2 (β_lo = N_i + N_o with N_i = N_o = 1 for both, as in Table 1). So, for a fixed mesh size (N_e) and floating-point accuracy (s_f), the workload of low-bandwidth memory operations (β_lo N_e s_f) is the same for them. Fig 15 shows that the effective bandwidth of the optimal implementations of the two operations is almost the same (red filled circles vs.
black filled circles), which means that the computing time for them is almost the same. However, HEAT1D involves many more FLOPs (6 vs. 0 for α) and memory operations (4 vs. 2 for β). The reason is that the optimal implementation achieves locality of reference and uses the much faster CPU caches for part of the memory operations (Fig 16A). For HEAT1D, each inner element needs to be read into the register three times. For example, T_t[i] is loaded three times, to calculate T_t+1[i-1], T_t+1[i], and T_t+1[i+1], respectively. In the optimal implementation, these three output elements are processed by the same thread (CPU thread 1 in Fig 16A) one after another within a very short time. When the required memory size γN_e s_f is larger than the L3 cache size, the input and output arrays cannot always reside in the caches, and T_t[i] needs to be read from the main memory only once, when calculating T_t+1[i-1]. Afterwards, T_t[i] is kept in the caches, and the subsequent readings when calculating T_t+1[i] and T_t+1[i+1] are from the much faster caches. So, of the three reading operations, one is conducted at low bandwidth from the main memory and two at high bandwidth from the caches, which leads to β_lo = 2 and β_hi = 2. When the required memory size γN_e s_f is smaller than the L3 cache size, the input and output arrays can both reside in the caches, and the three readings are all conducted at high bandwidth. For the convenience of discussion, we still write β_lo = 2 and β_hi = 2 in this case, but set BW_lo_max ≈ BW_hi_max. On the GPU platform, two types of implementations are examined. In the first one, each output element is processed by a GPU thread, as in the pseudocode for 1D problems in Table 5. The measured effective bandwidth is shown as filled circles in Fig 17. Similarly, the effective bandwidth of HEAT1D (black filled circles) is only slightly smaller than that of COPY1D (red filled circles), and they have almost the same computing time.
So, this implementation also benefits from the locality of reference (Fig 16B). T t [i] needs to be loaded into a register three times, to calculate T t+1 [i-1], T t+1 [i], and T t+1 [i+1], respectively. In this GPU implementation, the three output elements are processed by three GPU threads in the same thread block (i.e., T t [i] is requested at the same time by three threads). So, it only needs to be read from the GPU memory once at low bandwidth; the other two readings are from the L2 cache at high bandwidth, leading to β lo = 2 and β hi = 2. Another implementation is to exploit the GPU shared memory, with the same code structure as the pseudocode in Table 5. Suppose the output elements processed by a thread block are from T t+1 [i] to T t+1 [i+blockDim-1]; input elements with the size of blockDim + 2 are then read into the shared memory (i.e., from T t [i-1] to T t [i+blockDim]), and subsequent reading of the input elements is from the shared memory, which is much faster than reading from the GPU memory (Fig 4B). With this implementation, the effective bandwidth is shown as hollow triangles in Fig 17, which is slightly smaller than that of the first implementation utilising the L2 cache. The reason is that some input elements lying at the boundaries of thread blocks may be read into the shared memory twice, and the number of high-bandwidth memory operations is also larger (β hi = 3 from the shared memory vs. β hi = 2 from the L2 cache). Therefore, the first implementation, utilising the L2 cache, is considered optimal. Besides, the complexity of coding is substantial when using shared memory.

2D finite-difference models: Optimal implementations

The optimal implementations of the two 2D models (HEAT2D and NS2D) are examined in this section. NS2D has two substeps. In this section, an overall latency (LT = LT predictor + LT corrector) and an overall effective bandwidth (BW e = 15 N e s f / LT) of the two substeps are reported.
On the CPU platform, two implementations are assessed, which differ only in how the two indices are mapped to the two C/C++ loops. Similar to the findings in Section 5.5 about matrix operations, the implementation with cached access (filled circles in Fig 18) is optimal and performs better than that with uncached access (hollow triangles). Comparing Fig 18 with Fig 13 shows that the effective bandwidth of the optimal implementations for HEAT2D and NS2D is only slightly lower than that of optimal COPY2D, even though the two 2D models involve many more FLOPs and memory operations. Similarly, the optimal implementation benefits from the locality of reference, which is illustrated in Fig 16C (the output elements T t+1 [i-1,j], T t+1 [i,j], and T t+1 [i+1,j] are processed by neighbouring CPU threads). Because all threads start with the elements whose second index is 0, even though the CPU threads work independently, they will reach the three elements T t+1 [i,j], T t+1 [i+1,j], and T t+1 [i-1,j] (their second indices are all j) at approximately the same time. Therefore, even though T t [i,j] is requested five times, the five readings happen within a very short period, leading to β lo = 2 and β hi = 4. On the GPU platform, two implementations like those for the 1D model are examined. In the first one, each output element is processed by a GPU thread, and the CUDA kernel is invoked with 2D grids and thread blocks (B x = 1 and B y = 512). The choice of B x = 1 ensures the optimal memory access pattern for GPU threads (i.e., coalesced access, as demonstrated in Section 5.5). The effective bandwidth is shown as filled circles in Fig 19 (black = HEAT2D and red = NS2D), which is only slightly below that of optimal COPY2D (which equals the benchmarked bandwidth). This implementation also benefits from the locality of reference, as demonstrated in Fig 16D. The arrows indicate the order of output elements in memory and also the execution order of thread blocks for 2D models.
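The cached-access mapping of indices to loops discussed above can be sketched in pure Python (row-major nested lists standing in for C arrays; the scheme constants are the standard five-point explicit stencil, which may differ from the paper's exact equations):

```python
def heat2d_step_cached(T, r):
    """One HEAT2D step with the contiguous index in the inner loop.

    T is a list of rows (row-major storage). Iterating j, the index
    that is contiguous in memory, in the inner loop gives the cached
    access pattern identified as optimal on the CPU platform; swapping
    the loops would stride through memory and lose cache locality.
    """
    ny, nx = len(T), len(T[0])
    T_new = [row[:] for row in T]      # boundaries stay fixed
    for i in range(1, ny - 1):         # outer loop: slow-varying index
        for j in range(1, nx - 1):     # inner loop: contiguous index
            T_new[i][j] = T[i][j] + r * (
                T[i - 1][j] + T[i + 1][j]
                + T[i][j - 1] + T[i][j + 1]
                - 4.0 * T[i][j]
            )
    return T_new
```

Each input element T[i][j] appears in the five updates of its stencil neighbours, which is the five-fold reuse (β lo = 2, β hi = 4) quantified in the text.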
Similarly, each input element T t [i,j] is requested five times, once for each of the five output elements in its stencil. For the largest problem tested on the GPU platforms, N y is 8 × 1024 for the PC and N y is 32 × 1024 for the HPC, which requires 64 kB (PC) and 256 kB (HPC) to store a whole row of T t in f64. However, the L2 cache is much larger: 512 kB for the PC and 4 MB for the HPC. Therefore, this optimal implementation also has β lo = 2 and β hi = 4. Another implementation on the GPU platform is to exploit the shared memory. For 2D problems, the size of the thread blocks is B x = 1 and B y = 512 to have an optimal memory access pattern. Input elements with the shape of (B x + 2) × (B y + 2) are required to be loaded into the shared memory, and subsequent reading of the input elements is from the shared memory. Fig 19 shows that the performance of this implementation (hollow triangles) is slightly poorer than that of the first one utilising the L2 cache (filled circles), so the first one is optimal.

5.9 Finite-difference models: Modelling the optimal computing speed

Sections 5.6 and 5.7 show that for the optimal implementations of the numerical models on either the CPU or the GPU platform, the computing time is mostly spent on low-bandwidth memory operations, and the effective bandwidth is only slightly lower than that of the COPY operations. An examination of the generated computer instructions (in assembly language: asm for GNU C/C++ and ptx for CUDA C/C++) reveals that FLOPs and memory operations are executed alternately for these numerical models. This is unlike the element-wise operations examined in this study, where FLOPs cannot start until all operands are read into registers and the writing of operation results cannot start until the calculations are done (Section 5.4). Therefore, the execution of FLOPs and memory operations may overlap for numerical models.
Moreover, memory operations often take longer than FLOPs, so the time spent on FLOPs is less important for numerical models. This can be demonstrated with the data. HEAT2D and XPXPY12_2D both have α/β lo = 4, but HEAT2D involves more high-bandwidth memory operations (β hi /β lo = 2 vs. 0). Nevertheless, the effective bandwidth of HEAT2D (black filled circles in Fig 19) is even higher than that of XPXPY12_2D (i.e., HEAT2D is faster). With the findings above, the latency for numerical models can be assumed to be the sum of the time spent on low-bandwidth memory operations, high-bandwidth memory operations, and overhead (i.e., LT = t mlo + t mhi + t oh). For medium or large problems where t oh / t mlo << 1, the following model is obtained. For large problems on the CPU platform, the low-bandwidth memory operations are from/to the main memory and the high-bandwidth memory operations are from/to the caches; the critical ratio (β hi /β lo) c = BW hi max / BW lo max is therefore estimated to be about 10. For medium-size problems, BW hi max ≈ BW lo max (Section 5.6), and so (β hi /β lo) c is only about 1. On the GPU platform, the low-bandwidth memory operations are from/to the GPU memory and the high-bandwidth memory operations are from/to the L2 cache, so (β hi /β lo) c = BW hi max / BW lo max is larger than 1. A value of about 5 is assumed to fit the data in Fig 20. These values of (β hi /β lo) c are listed in Table 6. Fig 20 shows the relationship between the effective bandwidth and the workload ratio β hi /β lo for numerical models. The measured maximum effective bandwidth is shown as symbols, which include tests on various platforms and computers, with both f32 and f64 calculations. The solid lines are the model predictions by Eq 12, which agree fairly well with the measured results. A noticeable discrepancy is that, for large problems on the CPU platform, the model predictions (red lines) are always larger than the measured effective bandwidth (red symbols).
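Eq 12 itself is not reproduced in this excerpt, but from the stated assumption LT ≈ t mlo + t mhi (overhead neglected), with t mlo = β lo N e s f / BW lo max, t mhi = β hi N e s f / BW hi max, and BW e = β N e s f / LT, it can be reconstructed as BW e = β / (β lo / BW lo max + β hi / BW hi max) with β = β lo + β hi; the mesh size and float size cancel. The sketch below implements this reconstructed form (a sketch under that assumption, not the paper's code):

```python
def effective_bandwidth(beta_lo, beta_hi, bw_lo_max, bw_hi_max):
    """Predicted effective bandwidth of a numerical model, assuming
    the latency is the sum of the time spent on low- and high-bandwidth
    memory operations (overhead neglected, t_oh/t_mlo << 1):

        BW_e = (beta_lo + beta_hi) / (beta_lo/BW_lo + beta_hi/BW_hi)

    N_e and s_f drop out, so only the workload counts and the two
    benchmarked bandwidths are needed.
    """
    beta = beta_lo + beta_hi
    latency_per_element = beta_lo / bw_lo_max + beta_hi / bw_hi_max
    return beta / latency_per_element
```

With beta_hi = 0 the model reduces to BW e = BW lo max, i.e. the COPY-like limit; increasing β hi /β lo raises BW e as long as the high-bandwidth operations are sufficiently faster than the low-bandwidth ones, which is where the critical ratio (β hi /β lo) c enters.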
The model of Eq 12, therefore, suggests a useful method to roughly predict the maximum computing speed of numerical models. The ratio β hi /β lo is easily found from the numerical scheme equations, and the hardware parameters can be easily found or benchmarked.

Finite-difference models: The performance of XLA

It is shown in Section 2 that three methods are available for the array programming of HEAT1D and HEAT2D. With XLA, the measured effective bandwidth of the three methods is shown in Fig 15A (HEAT1D; CPU), Fig 17A (HEAT1D; GPU), Fig 18A (HEAT2D; CPU), and Fig 19A (HEAT2D; GPU). Results show that the method using slice and concatenation (filled squares) has the highest effective bandwidth in all tests, while the methods using convolution (hollow diamonds) and roll (hollow pentagons) operations are less efficient. An examination of the generated HLOs reveals that when using slice and concatenation, the simple operations are correctly fused into one optimal operation, but high-level operations like convolution and roll are not fused with other operations in an optimal way. In terms of compilation with LLVM in XLA, it is shown that the optimised HLOs (from the slice-and-concatenation method) are optimally compiled for the CPU platform of the PC (Figs 15A and 18A), such that the performance of XLA (filled squares) closely resembles that of the optimal implementations (filled circles). However, on the CPU platform of the HPC (Figs 15B and 18B), the optimised HLOs are compiled in a non-optimal way, and the performance of XLA (filled squares) is close to that of the single-thread SIMD implementation (hollow triangles). On the GPU platforms, the performance of XLA depends on the numerical models. For HEAT1D, the performance (filled squares in Fig 17) matches that of the optimal implementation (filled circles) and, on the PC (Fig 17A), is even better.
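Two of the three array-programming formulations compared above can be sketched in pure Python, with list slicing standing in for XLA's slice, concatenation, and roll operations (in practice these would be jax.numpy or TensorFlow ops on device arrays):

```python
def heat1d_slice(T, r):
    """HEAT1D step written with slicing, mirroring the
    slice-and-concatenation formulation that XLA fuses best.
    The fixed boundary values are re-attached at the end
    (the concatenation step)."""
    left, mid, right = T[:-2], T[1:-1], T[2:]
    inner = [m + r * (l - 2.0 * m + rt)
             for l, m, rt in zip(left, mid, right)]
    return [T[0]] + inner + [T[-1]]


def heat1d_roll(T, r):
    """The same step written with a roll (circular shift) primitive,
    here emulated in pure Python; the boundary is fixed afterwards."""
    roll = lambda a, k: a[-k:] + a[:-k]  # stand-in for an array roll op
    up, down = roll(T, 1), roll(T, -1)
    out = [T[i] + r * (up[i] - 2.0 * T[i] + down[i])
           for i in range(len(T))]
    out[0], out[-1] = T[0], T[-1]        # restore fixed boundaries
    return out
```

Both formulations compute the same stencil; the performance difference reported in the text comes entirely from how well XLA fuses the underlying operations, not from the arithmetic.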
An examination of the generated ptx files for HEAT1D shows that, in XLA, each GPU thread processes four (f32) or two (f64) output elements, taking advantage of the instructions ld.global.v4.f32 and ld.global.v2.f64, whereas each GPU thread processes only one output element in the optimal implementation. For HEAT2D and NS2D, the effective bandwidth of XLA (filled squares in Fig 19) is lower than that of the optimal implementations (filled circles), and the gap for NS2D is larger than that for HEAT2D. The only difference between XLA and the optimal implementations is the number of output elements processed by a GPU thread, i.e., four (f32) or two (f64) in XLA, but one in the optimal implementation. From the generated ptx files for HEAT2D and NS2D, it is found that even though four (f32) or two (f64) output elements are processed by a GPU thread in XLA, the instructions ld.global.v4.f32 and ld.global.v2.f64 are rarely used due to the complexity of these 2D numerical models. For 1D models with a limited number of inputs and outputs (e.g., HEAT1D), the relative efficiency is high (> 90%, and can even reach 120%) for large problems, and lower for medium-size problems. For 2D models, particularly the ones with many input and output arrays (NS2D), the relative efficiency is unsatisfactory even for large problems (70%~80% for HEAT2D and as low as 20%~40% for NS2D).

Conclusions

This paper studies the efficiency of XLA in implementing computationally efficient numerical models. XLA is a compiler that automatically conducts optimisations (most importantly, fusion to reduce memory operations) for array operations and compiles the optimised operations into target-specific programs. While speed-up is often easy to demonstrate, this study stringently examines the efficiency of XLA by comparing the performance of XLA implementations with that of optimal implementations.
Examined models include element-wise operations (e.g., COPY, SCALE, AXPY, and XPXPYN) and numerical models (e.g., HEAT1D, HEAT2D, and NS2D). These models and operations represent a broad category of numerical models commonly encountered in the scientific computing community, hence the conclusions should extend readily to other similar models. Two computing platforms (backends in XLA) are examined: the shared-memory CPU platform and the shared-memory GPU platform. To obtain optimal implementations of the models on these platforms, the computing speed and its optimisation are rigorously studied by considering the different workloads and the corresponding computer performance. On the CPU platform, the optimal implementations use parallel computing with the maximum number of threads, where each thread handles a contiguous sub-array of output elements with SIMD instruction sets. On the GPU platform, the optimal implementations use multiple concurrent GPU threads to calculate output elements that form a contiguous array, with each output element processed by its own GPU thread. All optimal implementations for numerical models achieve locality of reference, such that certain memory operations are operated at high bandwidth via the caches. Two models are proposed to estimate the computing speed of element-wise operations (Eq 11) and numerical models (Eq 12), and both are supported by the data. In terms of optimisation with XLA, an examination of the generated HLOs in debug mode reveals that models expressed in low-level operations such as slice, concatenation, and array arithmetic operations are successfully fused into optimal operations, while high-level operations such as convolution and roll cannot be fused with other operations optimally.
Regarding compilation within XLA, the results show that, for all examined models on the CPU platforms of certain computers (e.g., the PC), and for certain simple numerical models on the GPU platform of all computers, XLA achieves a very high efficiency (> 80%) for large problems and an acceptable efficiency (10%~80%) for medium-size problems; the gap is mainly due to the larger overhead cost of Python. XLA obtains unsatisfactory performance for (1) all models compiled for the CPU platform of certain computers (e.g., the HPC), where the optimised operations are compiled in a non-optimal way; and (2) high-dimensional models with many input and output arrays on the GPU platform of all computers, where XLA adopts the strategy of processing four (single-precision) or two (double-precision) output elements per GPU thread, hoping to use the instructions that can read/write four or two floating-point numbers in one instruction. However, these instructions are rarely used in the generated computer instructions due to the complexity of the models, and the performance is negatively affected. Therefore, a potential improvement would be to add more flags to control the compilation for these non-optimal scenarios.
Distributed Generation of Power Using Simulation Technique - Distributed generation (DG) is attracting much attention as a viable option to large centralized generation plants, driven by the fast-evolving liberalization and deregulation environment. This interest is also motivated by the need to eliminate needless transmission and distribution costs, reduce greenhouse gas emissions, defer capital costs, and improve the availability and reliability of the electrical network. Therefore, distributed generation is expected to play an increasingly vital role in meeting future power generation requirements and to provide consumers with flexible and cost-effective solutions for many of their energy needs. However, the integration of these sources into the electrical network can cause some challenges regarding their expected impact on the safety and the dynamic behavior of the full network. It is vital to study these issues and to analyze the performance of the expected future systems to guarantee satisfactory operation and to maximize the benefits of utilizing the distributed resources. I. INTRODUCTION DG is defined as the integrated or stand-alone utilization of small, modular electric generation near the end-user terminals. Another generic definition assigns the DG term to any generation deployed near consumers regardless of the size or the type of the unit. According to the latter definition, DG may include any generation integrated into the distribution system, commercial and residential back-up generation, stand-alone on-site generators, and generators installed by the utility for voltage support or other reliability purposes. In many applications, DG technology can provide valuable benefits for both the consumers and the electric distribution systems. The small size and the modularity of DG units encourage their utilization in a broad range of applications.
The downstream location of DG units in distribution systems reduces energy losses and allows utilities to postpone upgrades to transmission and distribution facilities. A. Impact of DG on Power Systems The utilization of large numbers of DG units within distribution systems impacts the steady state and the dynamics of the power network. Some issues of critical importance are: voltage regulation, power quality, and protection coordination in the distribution network. In the following, the potential impact of DG units on the power utility is discussed with regard to these three main points. B. Voltage Regulation Generally, DG units provide voltage support due to their proximity to the end user. The voltages in distribution systems, which commonly have a radial structure, are regulated using tap-changing transformers at substations and/or switched capacitors on feeders. In addition, supplementary line regulators can also be used on feeders [4]. Since the voltage regulation practice depends on radial power flow from the substation to the loads, the utilization of DG units, which inject electrical power in various directions, may disturb this practice. Feeding power from DG units can negatively impact the voltage regulation if a DG unit is placed just downstream of a load tap-changing transformer [5]. In this case, the regulators will not correctly measure the feeder loads. Rather, they will see lower values, since DG units reduce the observed load due to the on-site power generation. This will lead to setting the voltages at lower values than those necessary to maintain adequate levels at the tail ends of the feeders [5]. However, optimal siting of DG units near the end-user terminals can provide the necessary voltage support at the feeder nodes. C. Power Quality High power quality requires adequate voltage and frequency levels at the customer side. This may require voltage and reactive power support to achieve an acceptable level of voltage regulation.
In the stand-alone mode, DG units have to include effective controllers to maintain both voltage and frequency within standard levels. In addition to the level itself, the voltage content of flicker and harmonics has to be kept as low as possible. The impact of DG units on these two vital indices is discussed below. D. Harmonic Distortion DG technology usually depends on an inverter interface and, as a result, connecting DG units to power systems will contribute to harmonics. Since harmonic distortion is an additive effect, the utilization of many DG units can increase the overall harmonic distortion at some sites in the utility, even if the harmonic contribution from one DG unit is negligible [5]. The type and severity of these harmonics depend on the power converter technology, the interface configuration, and the mode of operation [2]. Fortunately, most new inverters are based on the Insulated Gate Bipolar Transistor (IGBT), which uses Pulse Width Modulation (PWM) to produce a quasi-sine wave [4]. Recent improvements in semiconductor technology enable the use of higher frequencies for the carrier wave, which results in quite pure waveforms [2]. In all cases, the total harmonic distortion must be kept within standard levels as measured at the load terminals. II. PROTECTION SYSTEM OF THE DISTRIBUTION NETWORK Distribution networks have customarily been designed for unidirectional power flow from higher voltage levels down to customers located along radial feeders. This has enabled a relatively straightforward protection approach depending on well-known practice and experience. Large-scale deployment of DG will change simple systems into complex networks, which demands vital modifications in protection systems [3]. Customary protection schemes may become useless, and proper coordination among the protection devices of the network and the DG units is extremely vital for safe operation of the network.
Generally, synchronous generators are able to feed large sustained fault currents, while currents from inverter-based sources can be limited to lower values. A. False Tripping of Feeders False tripping is typically caused by synchronous generators, which can feed sustained short-circuit currents. Without proper coordination among protection devices, there is a danger of unnecessary disconnection of DG units and/or feeders when faults happen on neighbouring feeders fed from the common substation. The basic principle of false tripping is shown. If the total current fed by the DG units as a result of the fault in feeder 1 is high enough, the relay on feeder 2 will trip and the whole feeder will be disconnected. False tripping of healthy feeders can likely be solved by using directional over-current relays. Using conventional relays with proper relay settings is also possible, conditioned on adequate coordination among the protection devices of the DG units and the distribution system [3]. B. Preventing the Operation of Feeder Protection When a large DG unit or many small ones are connected in the distribution network, the fault current observed by the feeder protection relay may be lower than the genuine fault current, as seen in the figure. This may prevent the operation of the feeder protection relay within the desirable time. C. Principle of Operation of Fuel Cells A fuel cell consists of two electrodes with an electrolyte between them. The principle of operation of fuel cells is based on the reactions of hydrogen gas (H2), which is supplied at the anode, and oxygen gas (O2), which is supplied at the cathode, to produce water, heat and electricity: H2 + 1/2 O2 = H2O. The process is attributed to the movement of charged particles towards regions of lower electrochemical energy. The charged particles in hydrogen and oxygen migrate towards each other and bond together, since the final product has lower electrochemical energy.
It is vital to separate electrons from protons and to regulate the movement of electrons. This can be accomplished by separating the hydrogen and oxygen with an electrolyte, which completely blocks electrons and allows protons from the hydrogen atoms to move through it. An external path is created for the electrons using an electrical load to generate useful electrical energy. In fact, the actual reaction happens in two steps: the oxidation reaction at the anode and the reduction reaction at the cathode. The oxidation reaction is the separation of hydrogen atoms into protons and electrons. The reduction reaction happens when oxygen atoms dissociate and bond with the protons coming through the membrane and the electrons from the external circuit, producing water. The reactions of the Alkaline (AFC), Proton Exchange Membrane (PEMFC), Phosphoric Acid (PAFC), Molten Carbonate (MCFC) and Solid Oxide (SOFC) types are summarized. At small currents (region 1), the steep drop in the voltage is caused by the activation energy associated with the chemical reaction. At relatively higher currents (region 2), the voltage drop is dominated by the losses in the electrode structure and the electrolyte, which are almost constant. At very high currents (region 3), the voltage drop is determined by the rates of reactant diffusion. Due to the limitation caused by the diffusion process, the current reaches a maximum value called the limiting current. Therefore, fuel cells cannot supply currents that exceed their limiting currents. An FC is said to have better characteristics if it has a flatter curve and a higher limiting current. Fuel cells are promising sources for stationary applications, including distributed power generation for utilities, backup power generation for businesses, and cogeneration applications. As DG units, fuel cells are rewarding due to their high efficiency, modularity, and low environmental impact.
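The three regions of the polarization curve described above can be captured by a standard static fuel-cell voltage model; the sketch below uses the common activation/ohmic/concentration decomposition with purely illustrative parameter values, not the model or parameters of this paper:

```python
import math

def fc_cell_voltage(i, E0=1.0, A=0.06, i0=1e-4, R=2e-4, i_lim=1.4, B=0.05):
    """Static polarization model of a fuel cell (illustrative parameters).

    V = E0 - activation loss - ohmic loss - concentration loss.
    Region 1: the activation loss A*ln(i/i0) dominates at small currents.
    Region 2: the ohmic loss i*R dominates at medium currents.
    Region 3: the concentration loss -B*ln(1 - i/i_lim) grows without
    bound as i approaches the limiting current i_lim, so the cell
    cannot supply currents beyond it.
    """
    if i <= 0 or i >= i_lim:
        raise ValueError("current must lie in (0, i_lim)")
    activation = A * math.log(i / i0)
    ohmic = i * R
    concentration = -B * math.log(1.0 - i / i_lim)
    return E0 - activation - ohmic - concentration
```

A flatter curve (small A, R, B) and a larger i_lim correspond to the "better features" criterion stated in the text.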
Also, they can be beneficial and attractive sources in remote areas to resolve many problems in congested distribution systems. In these cases, it would be much more economical to add a new decentralized source near the load than to upgrade the utility grid. For large-scale stationary applications, fuel cells will operate continuously and, hence, the long time needed to reach the operating temperature from a cold start, which characterizes high-temperature fuel cells, will not represent an important drawback. Thus, MCFC and SOFC systems can be considered, where the energy content in the exhaust gas can also be exploited to drive a downstream turbine producing additional useful electrical energy. For small-scale stationary applications, fuel cells can be used near the end users to provide power, and in most cases heat, to residential homes and small businesses. During the summer, fuel cells serving a residence can provide electricity while supplying heat energy for heating water. In the winter, they can meet the electrical demand and supply heat energy for space and water heating. In this case, the produced heat energy can offset some of the electricity-production cost, reducing the total energy costs. III. CONSTRUCTION OF MICRO-TURBINES The main components of the micro-turbine unit include an air compressor, a combustor, a recuperator, a turbine, and a generator. Clean air at atmospheric pressure and temperature is compressed in the compressor before entering the combustor. A controlled amount of injected fuel is mixed with the compressed air in the combustor, and the mixture is ignited. The combustion products at high temperature and pressure flow and expand over the turbine vanes to produce mechanical energy. Most constructions of micro-turbines depend on a single shaft designed to rotate at high speeds in the range of 50000 to 120000 rpm.
Hence, a high-speed Permanent Magnet Synchronous Generator (PMSG) is used to generate variable-voltage AC power at high angular frequencies up to 10000 rad/s. A part of the extracted horsepower in the turbine is used for driving the air compressor. A. Proposed Dynamic Equivalent Circuit for Fuel Cells The FC generating unit consists of three main parts: the reformer, the stack, and the power conditioner. The task of the reformer is to process the raw fuel to obtain a hydrogen-rich gas. The reformed fuel and the oxidant are combined in an electrochemical process through the stack (the power section). As a result of this combination, DC power is generated, and heat and water are produced. A power conditioner is necessary for DC/AC power conversion, where the AC power can then be used for either utility or stand-alone applications. These processes are accomplished at high efficiency since the FC has no moving parts. A DC/AC Pulse-Width Modulation (PWM) inverter is used to convert the stack DC power to AC power, using the general terminology S for the inverter switches. The carrier and modulating waves used to control the turn-on and turn-off of the inverter switches are also illustrated in the figure. During the conversion to AC power through the inverter, both the frequency and the voltage (or reactive power) from the FC are regulated. The AC voltage is calculated for a balanced three-phase system based on the DC value and assuming that the ratio of the carrier-wave frequency to the modulating-wave frequency is greater than 9. The RMS value of the fundamental component is obtained accordingly. An FC unit is simulated in the stand-alone mode to supply an isolated constant-resistance load. The parameters used in this model are given in Appendix A. However, these values can be changed depending on the capacity and the type of the unit. The figure illustrates the voltage response of the FC to a 20% step decrease in the load resistance.
VOL. 6, ISSUE 2, FEBRUARY 2020 www.ijoscience.com
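For a three-phase sinusoidal-PWM inverter operating in the linear modulation region, the textbook relation between the DC-link voltage and the RMS of the fundamental line-to-line AC voltage can be sketched as below; the paper's exact expression (which assumes a carrier-to-modulating frequency ratio greater than 9) is not reproduced in this excerpt, so this is the standard formula, not necessarily the one used here:

```python
import math

def pwm_fundamental_rms(v_dc, m_a):
    """RMS of the fundamental line-to-line voltage of a three-phase
    sinusoidal-PWM inverter in the linear modulation region
    (0 < m_a <= 1):

        V_LL = sqrt(3) / (2 * sqrt(2)) * m_a * V_dc

    derived from the peak fundamental phase voltage m_a * V_dc / 2.
    """
    if not 0.0 < m_a <= 1.0:
        raise ValueError("m_a must be in (0, 1] for linear modulation")
    return math.sqrt(3.0) / (2.0 * math.sqrt(2.0)) * m_a * v_dc
```

For example, a 100 V DC link at full modulation (m_a = 1) yields roughly 61 V RMS line-to-line, which is why a boost stage or transformer is often needed between a low-voltage stack and the grid.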
The input fuel rate is held constant and, hence, the open-circuit reversible potential is also unchanged. However, the increase of the supplied current results in a larger voltage drop and, as a result, the steady-state terminal voltage decreases. Another disturbance is a 20% step decrease in the input fuel rate, while the load resistance is maintained constant. At each time interval, the electrical power generated in the FC is used to calculate the heat power according to the equation. Then, the heat model is activated to calculate all thermal variables, including the stack temperature, using the equation. The stack temperature is used, according to empirical formulas such as the equation, to determine the change in the reversible potential as given by the equation. A PI controller is used to regulate the terminal AC voltage of the unit, and the input fuel rate is also controlled to supply the required load demand. Results related to two disturbances are illustrated: the first disturbance is a 15% step increase in the load impedance, while the second one is a 10% step decrease in the load impedance. B. Simulation of a Large Number of DG Units Integrated into a Multi-Machine Network The PST16 network is a test network developed for stability and dynamic performance studies. The system consists of three main areas with relatively weak ties among them to enable the simulation of natural phenomena like the inter-area oscillations occurring in real power systems. Area A is considered the main generating part and, hence, it is a power-exporting area. On the other hand, area C is a load-dominated area and, therefore, it imports power from area A directly and indirectly through area B. The load demand in area B exceeds the generation by about 450 MW and, as a result, it also imports power from area A. IV.
SIMULATION RESULTS AND DISCUSSION After simulating the whole network and implementing the DG units with the above-mentioned design in the PSD simulation package, the dynamic performance of the network is calculated. Firstly, a power flow calculation is carried out to determine the initial operating condition of the network. Various disturbances are then simulated in both the high-voltage and the low-voltage areas of the network, including load switching and three-phase short circuits. The micro-turbine unit succeeded in restoring the original values within 20 seconds. Since the heat load demand for both the FC and the micro-turbine units is assumed to be always constant, the set points of the active power controllers are also maintained constant. As a result, the active electrical power from the units returned to the initial values at the new steady-state conditions. The change in the active power demand, however, is covered from the high-voltage network itself through the 110/10 kV transformers. The variations in the reactive power are to compensate for the voltage decrease that happened as a result of the disturbance. Thus, the terminal voltage moves back to its initial value. Since the set point of the heat power is kept constant, the heat output power from the micro-turbine unit also has to be maintained constant after the load switching. This requires keeping the turbine active power constant due to the proportionality between electrical and heat power. Thus, the reference angular speed is adjusted using the controller action (see Fig 4-3), forcing the turbine, and hence the PMSG, to operate at a lower angular speed. However, the frequency will not incorporate such a decrease due to the action of the frequency controller with the frequency converter. The previous load switching represents a local disturbance and, hence, the high-voltage parts did not effectively contribute to defining the response of the DG units.
To stress the effect of load switching in the high-voltage areas on the dynamic performance of DG units, Fig 4-8 and Fig 4-9 show, respectively, the response of the same FC and micro-turbine to a 100+j20 MVA load switching in area A at bus "BA", which is a boundary bus between area A and area C (see Fig 4-1). Since the switching point is in the high-voltage area away from the DG units, they are affected to the same degree regardless of their sites within the distribution network. However, the response of the units depends on their own parameters and hence on their capacities. The oscillatory behavior of the DG units reflects the impact of the dynamics of the high-voltage parts on their response. After illustrating and discussing the performance of the DG units as a part of the network, it is appropriate to stress the interaction between the high-voltage parts of the network and the low-voltage distribution system. Also, it is vital to study the effect of introducing the DG units on the dynamic behavior of the low-voltage system. Fig 6 shows the response of the active power transferred to the 110 kV area from the other parts of the high-voltage network to the above-mentioned three-phase short circuit. This power transfer happens through the transformers 380/110 kV "Tr. 1" and 220/110 kV "Tr. 2" shown in Fig 5. Fig 6 raises the question regarding the strong oscillations in the power transfer to the 110 kV network. Even though 30% of the total power demand is produced in the DG units, it is obvious that the observed phenomenon cannot be explained by the reactions of the DG units. Considering that the changes of the active power through the first transformer are similar and in opposition to those through the second one, it becomes clear that the swings are caused by the high-voltage system in the form of inter-area oscillations. The network parts surrounding the 110 kV area oscillate against each other through the 110 kV network.
The effect of the inter-area oscillations does not extend to the low-voltage area and, hence, they are not noticeable in the dynamics of the DG units. To damp the inter-area oscillations appearing in the figure, it is necessary to stabilize the performance of the network as a whole. It is not possible to achieve this objective from the 110 kV area alone. In general, the power transfer is not strongly reduced in the faulted system when the DG units are utilized, which limits the impact of the fault on the consumers. A. Impact of Distributed Generation on the Stability of Electrical Power Systems The electrical network under consideration comprises a high-voltage area with two voltage levels, namely 380 kV and 110 kV. As centralized power plants, two synchronous generators are simulated and connected to the 380 kV nodes via step-up transformers. The 110 kV area and the underlying medium- and low-voltage networks have the same structure as that simulated in the PST16 network. However, the capacities of fuel cells and MICRO TURBINEs as well as the load demands are changed. In addition, reactive power controllers rather than voltage controllers are employed, as is the practical trend in power systems. The total active load in the network is about 250 MW. Typical parameters of thermal units are used to simulate the two synchronous generators using fifth-order models. IEEE standard regulators are used for simulating the speed governors and the excitation systems. Since the required power from the generators varies with the variation of the DG power, the rated MVA of the two generators is also changed in each case, starting from 110 MVA at the 28.3% penetration level up to 150 MVA without DG units. The reserve power of each generator is assumed to be always 10% of the rated value and, hence, proportional to the nominal power. Consequently, the two generators provide higher reserve power when they are used to fully supply the load, due to their higher rating. 
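The bookkeeping in the paragraph above can be checked with a short worked calculation. The sketch below is illustrative only: it assumes "penetration" means the DG share of the roughly 250 MW total load, that the quoted MVA ratings are per generator, and it applies the stated 10% reserve rule.

```python
# Sketch: spinning reserve vs. DG penetration (assumptions in the lead-in).
# Values taken from the text: ~250 MW total load; two generators rated
# 110 MVA each at 28.3% DG penetration, 150 MVA each without DG.

TOTAL_LOAD_MW = 250.0

def generator_reserve(rated_mva_each, n_generators=2, reserve_fraction=0.10):
    """Total reserve if each unit keeps 10% of its rating as reserve."""
    return n_generators * rated_mva_each * reserve_fraction

def dg_power(penetration):
    """Active power supplied by the DG units at a given penetration level."""
    return TOTAL_LOAD_MW * penetration

reserve_no_dg = generator_reserve(150.0)    # 30.0 MVA total reserve
reserve_with_dg = generator_reserve(110.0)  # 22.0 MVA total reserve
print(dg_power(0.283))                      # ~70.75 MW carried by DG
print(reserve_no_dg - reserve_with_dg)      # 8.0 MVA less reserve with DG
```

This makes the paper's qualitative point concrete: downsizing the synchronous units to accommodate DG also shrinks the absolute reserve they can provide.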
In each investigated case, the reactive power of each generator is adjusted to obtain the same power factor as that without DG units. V. CONCLUSION This study addresses the impact of DG with varied penetration levels on the stability of power systems. A hypothetical network with two conventional power plants and many DG units is simulated. Based on the results and discussion, it can be concluded that DG can improve the stability of power systems if suitable types and appropriate sites are selected. Regarding the oscillatory stability, the utilization of DG improves the damping of the electromechanical modes and slightly increases their frequency. This fact is confirmed through the time-domain simulation of some disturbances. The transient stability study shows that the maximum power-angle deviations between the generators are decreased with increasing penetration level of the DG units. However, the disconnection of some DG units when the voltage drops below 80% of the nominal value represents a severe disturbance to the network. With more power from the DG units, the absolute reserve power from the synchronous generators and the network inertia constant are reduced due to the lower rated power of the rotating synchronous generators. As a result, the frequency response shows faster behavior with higher maximum frequency deviations when more DG units are employed. The voltage profiles at load terminals
5,049.8
2020-02-10T00:00:00.000
[ "Engineering" ]
ConPlot: web-based application for the visualization of protein contact maps integrated with other data Abstract Summary Covariance-based predictions of residue contacts and inter-residue distances are an increasingly popular data type in protein bioinformatics. Here we present ConPlot, a web-based application for convenient display and analysis of contact maps and distograms. Integration of predicted contact data with other predictions is often required to facilitate inference of structural features. ConPlot can therefore use the empty space near the contact map diagonal to display multiple coloured tracks representing other sequence-based predictions. Popular file formats are natively read and bespoke data can also be flexibly displayed. This novel visualization will enable easier interpretation of predicted contact maps. Availability and implementation ConPlot is available online at www.conplot.org, along with documentation and examples. Alternatively, ConPlot can be installed and used locally using the docker image from the project's Docker Hub repository. ConPlot is licensed under the BSD 3-Clause licence. Supplementary information Supplementary data are available at Bioinformatics online. Introduction Recent developments in the field of evolutionary covariance have enabled increasingly accurate residue-residue contact predictions (e.g. Kandathil et al., 2019) with wide utility in structural bioinformatics (de Oliveira and Deane, 2017) and structural biology (Simkovic et al., 2017). For example, they enable more accurate protein ab initio modelling (e.g. Zheng et al., 2019), identification of protein domain boundaries (Rigden, 2002; Sadowski, 2013), building search models for molecular replacement (Simkovic et al., 2016) and identification of similar local folds (Ovchinnikov et al., 2017). Classical representations of these predictions consist of two-dimensional binary matrices called contact maps (Godzik et al., 1993). 
These typically omit contacts between sequential near neighbours, resulting in a blank space on and near the diagonal axis of the matrix. A multitude of properties can be predicted by other sequence-based methods, and researchers often need to consider diverse sources of information in order to form a complete and integrated picture. The diagonal of the contact map has been used in the past for secondary structure information (e.g. Taylor, 2016), but we are not aware of a tool to facilitate production of such images and, in any case, the typically empty space off-diagonal offers the possibility to display multiple tracks of data. Furthermore, although other interactive tools to work with contact maps have been developed (Kozma et al., 2012; Pietal et al., 2007; Vehlow et al., 2011), there currently seems to be no web-based application for convenient display and exploration of predicted contact maps and distograms. Here we present ConPlot, the first tool of its kind, which presents sequence-based predictions in the form of multiple coloured data tracks near the diagonal axis of contact maps and distograms. This integration enables researchers to easily analyse a variety of data simultaneously and facilitates discovery of structural features. Materials and methods Written in Python, ConPlot is based on the Dash (Plotly Technologies Inc., 2020) web framework, an open-source Python library focused on the creation of interactive data visualization web sites. For data input, ConPlot has a parser module with functions to process a variety of commonly used sequence predictions and contact map formats, plus the CASP RR RMODE 2 format of binned inter-residue distances. 
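For orientation, a contact record in the CASP RR family of formats is essentially one line of `i j d_lower d_upper probability`. The following is a minimal, hedged sketch of such a parser, not ConPlot's actual parser module: it skips header and sequence lines by field count and does not handle the full RMODE 2 multi-bin distance layout.

```python
# Minimal sketch of a CASP RR-style contact parser (see caveats above).
def parse_rr(lines, prob_cutoff=0.5):
    """Return (i, j, probability) tuples for records above a cutoff."""
    contacts = []
    for line in lines:
        fields = line.split()
        # Contact records carry 5 numeric fields: i j d_lower d_upper prob.
        if len(fields) != 5:
            continue  # skip headers (PFRMAT, TARGET, MODEL, END), sequence
        try:
            i, j = int(fields[0]), int(fields[1])
        except ValueError:
            continue
        prob = float(fields[4])
        if prob >= prob_cutoff:
            contacts.append((i, j, prob))
    return contacts

example = ["PFRMAT RR", "MODEL 1", "1 9 0 8 0.92", "2 11 0 8 0.31", "END"]
print(parse_rr(example))  # [(1, 9, 0.92)]
```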
Upon visiting the web application, users are assigned a unique universal identifier (UUID), which identifies their session until they leave the site and is used as a key to access data in a REDIS database (Redis Labs Inc., 2020). This database is used for the purpose of cache storage, and the assignment of these UUIDs ensures that data can only be accessed by the user who uploaded it. For long-term storage, optional account creation enables use of a persistent database, implemented using POSTGRESQL (Stonebraker et al., 2018), which can also be shared between collaborating registered users. ConPlot is available at www.conplot.org, where it is deployed as a Docker container. Private use is possible using a Docker image from the project's Docker Hub repository. Documentation and a set of tutorials are also accessible at the ConPlot website. ConPlot features ConPlot's plots represent sequence-based predictions as coloured tracks displayed near the diagonal of the contact maps. Up to 9 tracks can be added to the diagonal, numbered from -4 to +4 according to their position relative to the diagonal track 0. These tracks are fully customizable regarding data, colour palettes and optional track mirroring across the diagonal. The two halves of the contact map can be set to display different contact maps of matching sequence for comparison. The user can easily explore and interact with the data, zooming into different areas and hovering over data points to display detailed information. For ease of use, ConPlot parses these popular file formats: PSIPRED secondary structure predictions (McGuffin et al., 2000), IUPRED sequence disorder predictions (Dosztányi, 2018), TOPCONS membrane topology predictions (Tsirigos et al., 2015) and CONSURF sequence conservation predictions (Ashkenazy et al., 2016). A custom data file format allows the user to display any other kind of information as a track. 
These custom files contain all the data required for ConPlot to create a coloured track in the form of a series of instructions with categorical information about the colour of the different sections of the additional track (see Supplementary Fig. S1 and the website help section). Lastly, ConPlot can extract residue contact information from a PDB model and superpose predicted and model-derived contact maps: satisfaction of long-range contact predictions can then be used to infer model quality (de Oliveira et al., 2016; Miller and Eisenberg, 2008; Ovchinnikov et al., 2017). Use case To illustrate protein structural feature visualization from integrated contact and other predictions, we present analysis of a currently uncharacterized archaeal sequence (encoded by locus Mt2055 from Methanolobus tindarius; UniProt code W9DY28) from Pfam entry PF06695. Residue contact predictions were made using DeepMetaPSICOV (Kandathil et al., 2019). Inspection in ConPlot, alongside transmembrane topology predictions from TOPCONS (Tsirigos et al., 2015) and secondary structure predictions from PSIPRED (McGuffin et al., 2000), revealed an unsuspected re-entrant loop structure between residues 16-42 (Fig. 1): a predicted transmembrane region (light red) with a break in the centre that separates two distinct predicted helices (dark red; from residues 16-25 and 28-42) in contact with each other (Yan and Luo, 2010) (Supplementary Fig. S2). CONSURF (Ashkenazy et al., 2016) data (blue gradient) showed that this region is conserved and probably functionally important. Attempts to model the structure using the membrane topology prediction in conjunction with the RosettaMembrane protocol (Alford et al., 2015) were unsuccessful. However, models featuring a re-entrant loop, which could be validated by their satisfaction of long-range contact predictions (Fig. 1), were eventually obtained using DMPfold (Greener et al., 2019) (Supplementary Fig. S3). 
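The model-versus-prediction superposition mentioned above reduces to extracting residue pairs within a distance cutoff from the model and comparing the two contact sets. The sketch below is an illustration, not ConPlot's code: it assumes per-residue C-beta coordinates are already in hand and uses a conventional 8 Å cutoff while skipping near-diagonal neighbours.

```python
from math import dist

# Sketch: contacts from model coordinates, compared against predictions.
def model_contacts(coords, cutoff=8.0, min_separation=5):
    """Residue pairs closer than `cutoff`, skipping sequential neighbours."""
    residues = sorted(coords)
    pairs = set()
    for a in residues:
        for b in residues:
            if b - a >= min_separation and dist(coords[a], coords[b]) < cutoff:
                pairs.add((a, b))
    return pairs

def classify(predicted, model):
    """Three-way split mirroring a predicted-vs-model map overlay."""
    return {
        "matched": predicted & model,        # in both maps
        "model_only": model - predicted,     # in the model, not predicted
        "predicted_only": predicted - model, # predicted, absent from model
    }

# Toy coordinates (hypothetical): residues 1 and 7 are in contact.
coords = {1: (0, 0, 0), 7: (3, 0, 0), 20: (50, 0, 0)}
print(classify({(1, 7), (1, 20)}, model_contacts(coords)))
```

The fraction of predicted long-range contacts falling in the "matched" set is the kind of satisfaction score the cited model-quality work relies on.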
The predicted contact map and models also highlight a second re-entrant loop from residues 105-131, in accordance with weak evidence that the protein family resulted from a tandem duplication (Mesdaghi et al., 2020). Conclusion We present ConPlot, a new web-based application for the visualization of (predicted) protein contact maps alongside sequence annotations such as secondary structure predictions, transmembrane helical topology or sequence conservation. This juxtaposition facilitates structural analysis and prediction in the era of covariance-based contact predictions. Black points indicate matches between the two maps, red points indicate contacts present in the model but not predicted and grey points are contacts predicted but not present in the model. Central track 0 in the diagonal is used for the TOPCONS transmembrane prediction (blue = outside cell, yellow = inside cell, light red = predicted transmembrane helix). The PSIPRED secondary structure prediction is visualized by tracks +1 and -1 adjacent to the centre of the diagonal (red = helix, green = coil). Tracks +2 and -2 represent the CONSURF sequence conservation prediction (blue gradient; darker blue = more conserved, lighter blue = less conserved). The outermost tracks +3, -3, +4 and -4 were added using a custom file in which the location of the suspected re-entrant loops is highlighted in purple: between residues 16-42 and residues 105-131. A companion figure illustrating the use of 'Heatmap mode' (for distograms or to illustrate contact prediction probabilities) is included as Supplementary Figure S4
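Returning to the implementation notes above, the per-session data isolation (a UUID keying each visitor's uploads in the cache) can be sketched in a few lines. This is an illustration only, not ConPlot's actual code, and an in-memory dict stands in for the REDIS backend.

```python
import uuid

# Sketch of per-session cache keying (dict standing in for REDIS).
class SessionCache:
    def __init__(self):
        self._store = {}  # in REDIS this would be SET/GET, typically with a TTL

    def new_session(self):
        """Assign a UUID that keys all data uploaded during this visit."""
        return str(uuid.uuid4())

    def put(self, session_id, name, data):
        self._store.setdefault(session_id, {})[name] = data

    def get(self, session_id, name):
        # A visitor can only reach data stored under their own session key.
        return self._store.get(session_id, {}).get(name)

cache = SessionCache()
sid = cache.new_session()
cache.put(sid, "contact_map", [(1, 9), (2, 11)])
print(cache.get(sid, "contact_map"))                  # [(1, 9), (2, 11)]
print(cache.get(cache.new_session(), "contact_map"))  # None: other session
```

Keying every cache entry by a random UUID is what makes uploads private without requiring a login, while the optional POSTGRESQL account path adds persistence and sharing.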
1,898.4
2021-01-28T00:00:00.000
[ "Computer Science" ]
Quercetin-Rich Ethanolic Extract of Polygonum odoratum var Pakphai Leaves Decreased Gene Expression and Secretion of Pro-Inflammatory Mediators in Lipopolysaccharide-Induced Murine RAW264.7 Macrophages Polygonum odoratum var. Pakphai has been used in traditional Thai medicine for the treatment of flatulence and constipation and to relieve the inflammation caused by insect bites. Quercetin (Q), which is abundant in plant-based foods, has been found to exert anti-inflammatory properties. This study evaluated the anti-inflammatory activity of P. odoratum ethanolic extract in RAW264.7 macrophage cells. Leaves were extracted with 50% ethanol, and phenolics and flavonoids were then analyzed using UHPLC-QTOF-MS and HPLC-DAD. RAW264.7 cells were induced with lipopolysaccharides (LPSs). They were then treated with the extract, and prostaglandin E2 (PGE2), interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNF-α) concentrations were determined. Levels of cyclooxygenase-2 (COX-2), inducible nitric oxide synthase (iNOS), IL-6 and TNF-α mRNAs were analyzed using qRT-PCR. Chemical analysis demonstrated that the extract was abundant in Q while also containing catechin, gallic acid, epicatechin gallate and coumarin. The extract increased the viability of RAW264.7 cells and dose-dependently decreased nitric oxide production and PGE2, IL-6 and TNF-α levels in the medium from the LPS-induced RAW264.7 cell culture. Consistently, COX-2, iNOS, IL-6 and TNF-α mRNA levels were decreased in a concentration-dependent manner (p < 0.05). Thus, the quercetin-rich ethanolic extract derived from P. odoratum var Pakphai leaves can exert anti-inflammatory activity in LPS-induced RAW264.7 cells through a reduction of the pro-inflammatory mediator response. Introduction The current state of inflammation is recognized as an immune process that responds to specific stimuli, such as certain microorganisms, tissue injuries and the presence of certain chemicals [1]. 
Inflammatory processes involve vascular changes and white blood cell responses. Macrophages are the main phagocytotic cells that respond by releasing several pro-inflammatory mediators, such as tumor necrosis factor-alpha (TNF-α), interleukin-1beta (IL-1β), interleukin-6 (IL-6) and other chemo-attractants, to eliminate noxious stimuli and assist in repairing the affected tissue [2]. In addition, TNF-α activates the expression and synthesis of inducible nitric oxide synthase (iNOS), resulting in an increased production of nitric oxide radicals (NO•). During the inflammation process, cyclooxygenase-2 (COX-2) Extraction Yield, Total Phenolic and Flavonoid Contents Interestingly, in terms of the mean ± standard error of the mean (SEM) values, the 50% ethanolic extraction of P. odoratum cv. Pakphai leaves produced an average yield of 18.96% (w/w), for which the total phenolic content (TPC) and total flavonoid content (TFC) values were 52.59 ± 0.58 mg gallic acid equivalent (GAE)/g extract and 19.97 ± 0.11 mg quercetin equivalent (QE)/g extract, respectively. UHPLC-ESI-QTOF-MS/MS Profiling of the Extract MS experiments on the UHPLC-MS system coupled with the ESI source, QTOF and an ion trap mass spectrometer were carried out in order to investigate and identify the presence of different isobaric compounds and to perform a comprehensive qualitative analysis of the phenolic constituents present in the 50% ethanolic extract of P. odoratum Lour. cv. Pakphai leaves. The ESI-MS base peak chromatogram of the extract shows a relatively complex mixture containing peak quantities of phenolics, flavonoids and lignans. Apparently, these two phenolics included epicatechin (EC) at TR 17.312 min and gallocatechin (GC) at TR 18.242 min. 
Two flavonoids included quercetin 3′-O-glucuronide (QG) at the time of retention (TR) at 9.455 min and 5,7,8,3′,4′-pentahydroxyisoflavone or quercetin (Q) at TR 14.787 min, and one lignan included 3-methylellagic acid 8-rhamnoside at TR 11.016 min. These were identified using fragmentation patterns observed in tandem mass spectra (Figure 1). Accordingly, estimated and exact mass formulae and the chemical structures of the possible compounds are presented in Table 1. Interestingly, hesperidin (TR 21.531 min), which is a flavanone glycoside found in citrus fruits whose aglycone form is called hesperetin, was detected in the extract. However, six other peaks eluted at TR values of 0.356, 0.677, 0.913, 2.112, 17.112 and 22.146 min could not be identified herein due to a lack of pertinent information and an established database. 
HPLC-DAD Analysis of Phenolic Compounds in the Extract For the purposes of calibration, different concentrations of mixed standard phenolics (10 µL) were injected into the HPLC system to indicate the relative phenolic compounds present in the extract eluted from the column and to establish the standard curves needed to calculate the appropriate concentrations. In our findings, the standard GA, C, EGCG, EC, ECG, CO and Q quantities came out at specific TR values of 7.02, 12.27, 14.00, 15.40, 17.28, 21.27 and 26.48 min, respectively. Correspondingly, GA, catechin (C), epicatechin 3-gallate (ECG), coumarin (CO) and Q were detected in the ethanolic extract of P. odoratum Lour. cv. Pakphai leaves (Figure 2A), while epigallocatechin 3-gallate (EGCG) and EC were not detected (Figure 2B). Accordingly, concentrations of the five phenolics were determined from the standard curves and have been expressed in Table 2, of which Q was the most abundant and ECG was the second most abundant. 
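Quantification against an external standard curve, as described above, amounts to a linear fit of peak area versus standard concentration followed by inversion of the fitted line. The sketch below uses made-up calibration numbers, not the paper's data, purely to show the arithmetic.

```python
# Sketch: external-standard HPLC quantification (hypothetical calibration).
def fit_line(x, y):
    """Least-squares slope and intercept for a linear standard curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum(
        (a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def concentration(peak_area, slope, intercept):
    """Invert area = slope * conc + intercept to recover concentration."""
    return (peak_area - intercept) / slope

# Hypothetical quercetin standards: concentration (ug/mL) vs. peak area.
std_conc = [5.0, 10.0, 20.0, 40.0]
std_area = [52.0, 101.0, 199.0, 401.0]
slope, intercept = fit_line(std_conc, std_area)
print(concentration(250.0, slope, intercept))  # sample area -> ug/mL
```

In practice the analyst would also check the calibration's linearity (r²) and confirm the sample area falls within the calibrated range.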
Abbreviations: C = catechin, CO = coumarin, EC = epicatechin, ECG = epicatechin gallate, EGCG = epigallocatechin 3-gallate, GA = gallic acid, HPLC-DAD = high-performance liquid chromatography-diode array detector, MW = molecular weight, ND = not detectable, Q = quercetin, TPC = total phenolic content, TR = retention time. Toxic Effect on RAW264.7 Macrophage Cells Clearly, all concentrations of the extracts exerted no cytotoxic effects on RAW264.7 macrophages. Concentrations of 25-200 µg/mL significantly increased cell viability in a concentration-dependent manner, while concentrations of >200 µg/mL did not have any influence (Figure 3). Effect on Nitric Oxide Production and Inducible Nitric Oxide Synthase Gene Expression In the present study, LPS treatment markedly increased levels of NO• production in RAW264.7 macrophages when compared with control cells without treatment (p < 0.05); nevertheless, the increased NO• levels were dose-dependently reduced in macrophages following treatment with the extract (100, 200 and 400 µg/mL), with a half maximal inhibitory concentration (IC50) value of >400 µg/mL (p < 0.05) (Figure 4A). Moreover, iNOS mRNA levels were evidently increased in LPS-induced cells (p < 0.05) when compared with control cells without LPS induction. Consistently, the extract was found to decrease the increased iNOS mRNA expression in a concentration-dependent manner (p < 0.05) when compared with the cells that did not undergo extract treatment (Figure 4B). 
Inhibitory Effects on Cyclooxygenase 2 Gene Expression and PGE2 Production In terms of the relevant pharmacological effects, cyclooxygenases (COXs) comprise constitutive COX or cyclooxygenase 1 (COX-1) and inducible COX (COX-2), which catalyze the conversion of arachidonic acid to prostaglandins, in particular PGE2, resulting in inflammation. Herein, we have investigated whether the ethanolic extract would exert inhibitory effects on COX-2 gene expression and PGE2 production in LPS-induced murine RAW264.7 macrophages, indicating an anti-inflammatory activity. Thus, we have assessed levels of COX-2 mRNA expression in the macrophages with the use of qRT-PCR and PGE2 production in the culture medium using ELISA. According to our findings, COX-2 mRNA expression in LPS-induced macrophages was highly upregulated by LPS stimulation (p < 0.05), while increases in COX-2 mRNA were dose-dependently decreased by treatment with the extract (p < 0.05) (Figure 5). Consistently, the results provide evidence that PGE2 levels were significantly increased in LPS-induced RAW264.7 macrophages when compared with control cells without LPS induction. The treatment with the extract decreased PGE2 production in a concentration-dependent manner with an IC50 value of 231.9 µg/mL (p < 0.05) when compared with LPS-induced cells without the extract treatment (Figure 6). 
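IC50 values like the 231.9 µg/mL quoted above are typically read off the dose-response curve. A minimal sketch follows, using linear interpolation between tested doses; the inhibition percentages below are hypothetical illustration data, not the paper's measurements, and real analyses would usually fit a sigmoidal model instead.

```python
# Sketch: IC50 by linear interpolation between bracketing doses.
def ic50(doses, inhibition_pct):
    """Dose at which inhibition first crosses 50%, by interpolation."""
    points = list(zip(doses, inhibition_pct))
    for (d0, y0), (d1, y1) in zip(points, points[1:]):
        if y0 < 50.0 <= y1:
            return d0 + (50.0 - y0) * (d1 - d0) / (y1 - y0)
    return None  # 50% inhibition never reached in the tested range

doses = [100.0, 200.0, 400.0]    # ug/mL, matching the tested range
inhibition = [20.0, 40.0, 65.0]  # hypothetical % inhibition values
print(ic50(doses, inhibition))   # 280.0
```

This also explains the ">400 µg/mL" notation reported for NO• inhibition: when the highest tested dose stays below 50% inhibition, no crossing exists and only a lower bound can be stated.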
Effects on Production and Gene Expression of Pro-Inflammatory Cytokines LPS stimulation was found to increase levels of IL-6 concentrations and mRNA (p < 0.05) in murine RAW264.7 macrophages when compared with the control cells; however, treatment with the extract significantly lowered the increased IL-6 levels in a concentration-dependent manner (Figure 7A,B). Similarly, LPS also increased levels of TNF-α concentrations and mRNA (p < 0.05) when compared with the control cells, while treatment with the extract significantly decreased the production and gene expression of TNF-α in a concentration-dependent manner with an IC50 value of 285.3 µg/mL (Figure 7C,D). The results indicate the anti-inflammatory effects of the ethanolic extract of P. odoratum Lour. cv. Pakphai leaves in macrophages. Discussion Traditionally, the P. odoratum Lour. 
plant was cultivated in urban gardens by members of Hmong hilltribes in Lao who typically produced edible vegetables with high mineral profiles (e.g., Ca, Mg and Mn), various fish seasonings and natural components of suggested pregnancy and post-partum diets, and formulated a variety of medicinal herb recipes [6]. Figure 7. Levels of IL-6 production (A) and mRNA expression (B), as well as TNF-α production (C) and mRNA expression (D), in the medium obtained from the cultured RAW264.7 macrophages induced by LPS and treated with 50% ethanolic extract of P. odoratum Lour. cv. Pakphai leaves. Data obtained from three independent experiments are expressed as mean ± SEM values. Accordingly, * p < 0.05 was considered significant when compared to the control cells without LPS induction; # p < 0.05 was considered significant when compared to LPS-induced cells without the extract treatment. Abbreviations: LPS = lipopolysaccharide, SEM = standard error of the mean, TNF-α = tumor necrosis factor-alpha. Giving it its aromatic odor, the aerial parts of this plant contain many volatile organic compounds, including (Z)-3-hexenal, (Z)-3-hexenol, decanal, undecanal, dodecanal, 3-sulfanyl-hexanal and 3-sulfanyl-hexan-1-ol [18]. Due to its pungent taste, the leaves of this plant contain various active compounds, such as polygodial. Furthermore, the leaves have been determined to contain 1,4-dialdehyde, which is derived from drimane terpenoids [18]. Additionally, fresh P. 
odoratum leaves were found to contain a range of essential oils comprising twenty-five compounds, of which dodecanal, decanal and anisaldehyde were the most abundant and exhibited an inhibitory effect on tyrosinase activity and Salmonella choleraesuis growth [16,19]. Our previous study demonstrated that only the methanolic extract of P. odoratum var. Pakphai leaves was abundant in phenolics and flavonoids, together with E-15-heptadecenal and 3,7,11,15-tetramethyl-2-hexadecen-1-ol, which exhibited strong free-radical scavenging properties [7]. The chromatogram of the leaf extract shows the chemical composition of QG, 5,7,8,3′,4′-pentahydroxyisoflavone and 3-methylellagic acid 8-rhamnoside, of which QG was found to possess potent anti-cancer [20], anti-aging [21] and antioxidant properties [22]. A number of previous studies have reported on the anti-inflammatory properties of QG, which suppressed the pro-inflammatory mediator response in LPS-stimulated RAW264.7 macrophage cells [23,24] and inhibited vascular permeability in mice [25]. Likewise, 5,7,8,3′,4′-pentahydroxyisoflavone or quercetin is an isoflavone derivative that has been reported to possess remarkable anti-inflammatory effects [26] and anti-proliferative activities [27]. Our present study also produced a polyphenolic profile using HPLC-DAD and UHPLC-ESI-QTOF-MS/MS methods. It has been confirmed that the 50% (v/v) ethanolic extract of the plant leaves contains high levels of quercetin and also contains epicatechin gallate, coumarin, gallic acid and catechin. With regard to the potential health benefits of this plant, the leaves contain high amounts of bioactive ingredients that exhibit antioxidant, anti-hemolytic and anti-bacterial properties. As an alternative form of osteoporosis therapy, the oral administration of Morus alba and P. 
odoratum leaves over a period of 3 months effectively decreased levels of bone oxidative stress markers and osteoclast density but elevated serum calcium, alkaline phosphatase (ALP) and osteocalcin levels. The administration also increased osteoblast density and cortical thickness in treated ovariectomized rats [28]. Consistently, data obtained from a randomized double-blind placebo-controlled clinical trial support the contention that the consumption of M. alba leaf extract (50 mg/day) combined with P. odoratum leaf extract (1500 mg/day) for 8 weeks could elevate serum levels of ALP, osteocalcin and TPC while effectively reducing serum beta-isomerized C-terminal telopeptide levels in menopausal subjects [29]. We have previously demonstrated that the water, dichloromethane and methanolic extracts derived from Pakphai leaves were not toxic to RAW264.7 macrophages and exhibited anti-inflammatory activity in LPS-induced RAW264.7 cells, with the dichloromethane extract inhibiting NO• production with an IC50 of 53.75 ± 0.75 µg/mL [7]. The present study, using a colorimetric MTT test, determined that extract doses of 100-400 µg/mL were not toxic to RAW264.7 cell cultures but seemed to increase cell viability. Similarly, Okonogi and coworkers reported on the nontoxic effect on RAW264.7 cells of ethanolic extract fractions (10-100 µg/mL) that included scutellarein-7-glucoside and quercitrin. Importantly, the doses they administered were much lower than the ones used in our experiments [15]. Although macrophages play essential roles in anti-inflammatory defense mechanisms, the abnormal activation of macrophages has been reported in the development of many inflammatory disorders, including sepsis, rheumatoid arthritis, inflammatory bowel disease and cancer.
Under pathogenic conditions, an excessive amount of pro-inflammatory mediators and cytokines is produced by abnormally activated macrophages, eventually provoking an inflammatory response [30]. Therefore, the inhibition of abnormal macrophage activation might be an invaluable therapeutic goal in the treatment of inflammatory disorders. Several previous studies have reported on the anti-inflammatory activity of quercetin, which was found to be abundant in the ethanolic extracts of various plant-derived products and compounds. The ethanolic extract of Myrsine seguinii is rich in Q, which was found to inhibit inflammatory responses (such as production of NO• and PGE2) in LPS-stimulated RAW264.7 cells and LPS-induced mouse peritonitis by blocking the Src/Syk/nuclear factor kappa B (NF-κB) and the IL-1 receptor-associated kinase 1/activator protein 1 (IRAK-1/AP-1) pathways [31]. Additionally, a 50% ethanolic extract of persimmon leaves potently inhibited the production of NO•, PGE2 and IL-6 in LPS-induced RAW264.7 cells [32]. Likewise, the ethanolic extract of the Euphorbia kansui root, containing ingenane diterpenoids (euphorkans A and B) together with 16 known analogues, exerted an anti-inflammatory activity consistent with that of Q by inhibiting certain inflammation mediators, such as TNF-α and IL-6, in a concentration-dependent manner through the inhibition of NF-κB activity [33]. Moreover, the ethanolic extract of the QingXiaoWuWei decoction, containing quercetin and other ingredients, exhibited synergistic anti-inflammatory activity in LPS-induced RAW264.7 cell cultures via the JUN, MAPK1 and AKT1 targets and the mitogen-activated protein kinase (MAPK)/phosphatidylinositol-3 kinase (PI3K)/Akt serine/threonine kinase pathways [34]. Herein, our findings suggest the potent anti-inflammatory effect of specific major active compounds, particularly quercetin and isoflavone derivatives, in the ethanolic extracts of P. odoratum var.
Pakphai leaves through suppression of the responses of certain pro-inflammatory mediators, such as NO•, PGE2, IL-6 and TNF-α, in LPS-stimulated murine macrophage cells. The underlying mechanism of quercetin's effect on pro-inflammatory mediators in macrophages has been demonstrated, wherein LPS activates MAPKs such as extracellular signal-regulated kinases, c-Jun N-terminal kinase and p38 MAPK [37]. In addition, the transcription factors activated under LPS-stimulated inflammatory conditions are downstream targets of the MAPK pathways, which are known to regulate various genes encoding inflammatory mediators. The ethanolic extract reinforces the anti-inflammatory activity exerted by some of the other extracts used previously, confirming the presence of more lipophilic phenolic compounds. Accordingly, quercetin was found to participate in the potent anti-inflammatory response in the macrophage cell culture in this study and in animals included in other related studies. Cell Culture The murine macrophage (RAW264.7) cell line was purchased from the American Type Culture Collection (ATCC, TIB-71, VA, USA), cultured in DMEM supplemented with 10% (v/v) FBS with 100 U/mL penicillin and 100 µg/mL streptomycin (DMEM+) and maintained at 37 °C in a 5% CO2 incubator. Cells were harvested at 70-80% confluence. Preparation of P. odoratum Ethanolic Extract Polygonum odoratum leaves were freshly harvested from a P. odoratum field located at Mae Fah Luang University, Chiang Rai, Thailand, and identified by Dr. Jantrararuk Tovaranon, PhD, at the School of Science, Mae Fah Luang University, Chiang Rai. The herbarium specimens (Herbarium number: MD2018080001-1) were prepared and deposited in the Medicinal Plant Innovation Center of Mae Fah Luang University. The leaves were dried using a shade drying method and ground into a fine powder with a mechanical milling machine (Wells-Index, Muskegon, MI, USA).
The dried powder (50 g) was extracted with 50% (v/v) ethanol (750 mL) at room temperature for 72 h using an orbital shaker. The supernatant was filtered through Whatman No. 1 filter paper (polyethersulfone type, Whatman International Limited Company, Maidstone, UK). The ethanolic filtrate was evaporated to dryness using a freeze-drying machine (Labconco™, Labconco Corporation, MO, USA). The extract was divided into aliquots in dark vials and kept in a freezer at −20 °C. The aliquots were then reconstituted in 0.1% DMSO solution before being used. Determination of TPC The TPC of the extract was determined using the Folin-Ciocalteu method with slight modifications [38]. Briefly, the extract solution (0.1 mL) was mixed with working Folin-Ciocalteu reagent (1:1 dilution) (1.0 mL) and 7% (w/v) sodium bicarbonate (1.0 mL). The mixture was incubated at room temperature for 30 min, and the absorbance (A) was measured at 765 nm against the reagent blank using a Shimadzu UV-1800 UV/VIS spectrophotometer (Cole-Parmer, Vernon Hills, IL, USA). The TPC was calculated from a standard curve constructed using standard GA at concentrations of 10-100 µg/mL and expressed as mg GAE/g extract. Determination of TFC TFC was determined using an aluminum chloride colorimetric method [39]. Briefly, the extract solution (0.1 mL) was mixed with 10% (v/v) aluminum chloride solution (1.0 mL) and 1 M potassium acetate solution (1.0 mL). The mixture was then incubated at room temperature for 40 min, and the A value was measured at 415 nm against a reagent blank using the same spectrophotometer. The TFC was calculated from a standard curve constructed using standard Q at concentrations of 10-200 µg/mL and expressed as mg QE/g extract.
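Both assays reduce a measured absorbance to an equivalent concentration via a linear standard curve. The sketch below illustrates that calculation for the TPC assay; the calibration points and function names are our own illustrative assumptions, not the paper's data.

```python
# A hedged sketch of the standard-curve step behind the TPC/TFC assays.
# Calibration points below are made-up illustrative values; names are ours.

def fit_line(conc, absorbance):
    """Ordinary least-squares fit: absorbance = slope * conc + intercept."""
    n = len(conc)
    mean_x = sum(conc) / n
    mean_y = sum(absorbance) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, absorbance))
             / sum((x - mean_x) ** 2 for x in conc))
    return slope, mean_y - slope * mean_x

# Hypothetical gallic acid standards spanning the paper's 10-100 ug/mL range
std_conc = [10, 25, 50, 75, 100]          # ug/mL
std_abs = [0.11, 0.26, 0.52, 0.77, 1.02]  # A at 765 nm

slope, intercept = fit_line(std_conc, std_abs)

def tpc_mg_gae_per_g(sample_abs, extract_mg_per_ml):
    """Sample absorbance -> mg gallic acid equivalents per g extract."""
    gae_ug_per_ml = (sample_abs - intercept) / slope
    # ug GA per mg extract is numerically mg GAE per g extract
    return gae_ug_per_ml / extract_mg_per_ml
```

The TFC calculation is identical in form, with quercetin standards and absorbance read at 415 nm.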
UHPLC-ESI-QTOF-MS/MS Analysis The extract (1 mg) was initially dissolved in methanol (1 mL), then diluted 10-fold to a concentration of 1 µg/mL and manually filtered through a syringe membrane filter (Whatman polytetrafluoroethylene type, 0.22 µm pore size, Sigma-Aldrich Chemicals Company, St. Louis, MO, USA). The filtrate was transferred to HPLC autosampler vials, and the phytochemicals were analyzed using the UHPLC-ESI-QTOF-MS/MS technique [27]. The system included a UHPLC machine (Agilent 6500 Series, Agilent Technologies, Santa Clara, CA, USA) equipped with an ESI source employing orthogonal nebulization and a heated counterflow drying gas system to achieve excellent sensitivity and robust, reliable performance. The chromatographic instrument was connected in series to a DAD (Agilent 1260 Infinity II Series, Agilent Technologies, Santa Clara, CA, USA) and an MS detector (Agilent 6550 Series, Agilent Technologies, Santa Clara, CA, USA). The MS detector was set up for full scanning of the mass spectra from m/z 100 to 1000 and employed in positive ion mode with a nebulizer. The gas temperature was set at 350 °C and the gas flow was set to 13 L/min. In the analysis process, the sample (1.0 µL) was automatically injected into the machine, fractionated on the column (Zorbax Eclipse Plus C18, 2.1 mm × 50 mm, 1.7 µm particle size, Agilent Technologies, Santa Clara, CA, USA) and eluted by a mobile phase comprising solvent A (deionized water containing 0.1% formic acid) and solvent B (acetonitrile containing 0.1% formic acid). The gradient elution was performed at a flow rate of 400 µL/min for a total running time of 26 min; the program started at 5% solvent B for 1 min, increased to 17% solvent B within 13 min and then increased to 100% solvent B within 20 min.
The 100% solvent B elution was maintained to wash the column for 2 min before being decreased to 5% solvent B over a period of 3 min. Analysis of possible compounds was performed using Agilent MassHunter B 8.0 software (Qualitative Navigator, Qualitative Workflows) and the PCDL database. Peak identification was performed by comparing retention times, mass spectra and fragmentation patterns with reference compounds obtained from various data libraries and networks (such as ChemSpider). Cell Viability Assay Cell viability was determined using the MTT method [41]. Briefly, RAW264.7 cells (5 × 10³/well) were cultured in DMEM+ and seeded in a 96-well plate until reaching 80% confluence. The cells were treated with the extract (0-100 µg/mL) for 24 h, washed with PBS and incubated with MTT solution (5 mg/mL in PBS) at 37 °C for another 4 h. Finally, 0.1% (w/v) DMSO solution was added to dissolve the formazan product, and the A value was measured at 540/630 nm against the reagent blank using a Shimadzu UV-1800 UV/VIS spectrophotometer (Cole-Parmer, Vernon Hills, IL, USA). The percentage of viable cells after treatment was calculated relative to untreated cells, which represented 100% viability. Assessment of Pro-Inflammatory Mediators RAW264.7 cells were cultured in DMEM+ at 37 °C for 24 h and seeded in 24-well plates at a density of 5 × 10⁴ cells/well. Afterwards, the cells were cultured in DMEM+ with or without LPS (2 µg/mL) at 37 °C for 24 h, treated with the extract (100, 200 and 400 µg/mL) at 37 °C for another 24 h and harvested using trypsin-EDTA solution. After centrifugation at 12,000× g for 10 min, the supernatant was assayed for NO•, PGE2, IL-6 and TNF-α concentrations as described below.
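The two numerical conversions used in these assays — viability as a percentage of untreated controls, and (further below) relative mRNA levels via 2^−ΔΔCT — reduce to a few lines. A minimal sketch with made-up absorbance and Ct values; the function names are ours, not from the paper:

```python
def percent_viability(a_treated, a_blank, a_untreated):
    """MTT viability: blank-corrected absorbance relative to untreated cells (100%)."""
    return 100.0 * (a_treated - a_blank) / (a_untreated - a_blank)

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-(ddCt) fold change: target Ct normalized to a reference gene
    (GAPDH in the paper) and to the untreated control condition."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)
```

For example, a treated well whose blank-corrected absorbance is half that of the untreated control yields 50% viability, and a ΔΔCt of −2 corresponds to a 4-fold higher relative expression.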
Determination of Nitric Oxide Concentrations The supernatant was incubated with Griess reagent at room temperature for 15 min, and the A value was measured at 650 nm against a reagent blank using a Shimadzu UV-1800 UV/VIS spectrophotometer (Cole-Parmer, Vernon Hills, IL, USA) [42]. NO• concentrations were calculated from a standard curve established from different concentrations of sodium nitrite. Quantitative Real-Time Polymerase Chain Reaction Total RNA was extracted using the PureLink™ RNA Mini Kit (Invitrogen Life Sciences, Carlsbad, CA, USA) according to the manufacturer's instructions. Complementary DNA (cDNA) was synthesized from 1 µg of total RNA using a SensiFAST™ cDNA synthesis kit (Bioline Reagent, London, UK). Accordingly, qRT-PCR was performed using a SensiFAST™ SYBR No-ROX kit (Bioline Reagent, London, UK) with a reaction mixture consisting of SYBR Green 2× PCR Master Mix, a cDNA template and the forward and reverse primers. The PCR protocol included an initial hold at 95 °C for 2 min, followed by a 2-step PCR program of 95 °C for 15 s and 58 °C for 30 s for 39 cycles [23]. The relative levels of iNOS, COX-2, IL-6 and TNF-α mRNA expression were normalized to that of glyceraldehyde 3-phosphate dehydrogenase (GAPDH) using the 2^−ΔΔCT method. The primer sequences used in this study are shown in Table 3. Table 3. Primer sequences for pro-inflammatory mediator expression in qRT-PCR. Statistical Analysis Data were analyzed using the IBM SPSS Statistics program version 22 and are expressed as the mean ± SEM values of three independent experiments. Statistical significance was determined using one-way analysis of variance (ANOVA) followed by Tukey's post hoc test, with p < 0.05 considered significantly different. Conclusions This study demonstrated that the ethanolic extract of Polygonum odoratum Lour. cv.
Pakphai leaves contain bioactive phenolics such as gallic acid, catechin and epicatechin 3-gallate, as well as flavonoids with a high content of quercetin and coumarin. These substances are known to increase the viability of RAW264.7 macrophage cells. Importantly, the extract inhibited iNOS gene expression while decreasing nitric oxide production. Furthermore, a decrease in COX-2 expression was observed, along with a decrease in PGE2 production and reduced IL-6 and TNF-α production and gene expression in LPS-induced RAW264.7 cells. The findings indicate the anti-inflammatory activity of this plant through the attenuation of the pro-inflammatory mediator response in macrophage cells. In terms of its potential value-added properties, the ethanolic extract appears to be a nutraceutical product that could be used as an alternative or complementary treatment for inflammatory diseases. In further studies, we will intensively investigate the anti-inflammatory role of the Pakphai ethanolic extract in allergic asthma and its underlying mechanisms in rats challenged by the environmental allergen ovalbumin and LPS. Additionally, the gastric antiulcer activity of the extract will be assessed in ethanol/hydrochloric acid-induced mice and rats.
The Effect of Iron on Gluconic Acid Production by Aureobasidium pullulans New processes have been previously described for the continuous and discontinuous production of gluconic acid by Aureobasidium pullulans (de Bary) Arnaud. Little is known about the regulatory mechanisms of gluconic acid production by A. pullulans. The response of growth and gluconic acid metabolism to a variable profile of iron concentrations was studied with A. pullulans in batch and chemostat cultures. A surprisingly high optimum N-dependent iron concentration in the feed medium, in the range between 0.5 mM and 3.0 mM Fe (optimum 1-2 mM), was found to be a particular requirement for economically profitable continuous production of gluconic acid with 3 g/l NH4Cl. Increased iron concentration promoted growth on defined glucose medium. 223.3 g/l gluconic acid were continuously produced at a volumetric product formation rate (Rj) of 16.8 g/(l h) and a specific gluconic acid productivity (mp) of 2.5 g/(g h) at 13 h residence time (RT) with 1 mM iron, compared with 182 g/l reached at 0.1 mM. The product selectivity (product yield based on glucose) increased continuously with rising iron concentration, following a saturation curve and reaching a maximum of about 98% (mol/mol) at 2 mM Fe and 76.2% conversion, compared with only 84.3% determined at 0.1 mM. The process is not obligatorily growth-limited or growth-related, and residual nitrogen was found in all continuous experiments, e.g. 197 mg/l of nitrogen at 0.1 mM and 201 mg/l at 2 mM of iron.
INTRODUCTION The physiological D-form of gluconic acid is the oxidation product of glucose, usually formed by microbial oxidation. Gluconic acid is a colorless, or nearly colorless, light brown syrupy liquid with a mild acid taste. As a multifunctional carbonic acid of great interest, naturally occurring in plants, fruits, wine (up to 0.25%), honey (up to 1%), rice, meat, vinegar and other natural sources, belonging to the bulk chemicals and owing to its versatile physiological and chemical characteristics, gluconic acid (pentahydroxycaproic acid, C6H12O7) itself, its salts (e.g. alkali metal salts, especially sodium gluconate) and its gluconolactone form have found various uses in the chemical, pharmaceutical, food, animal feed (it improves growth performance), textile, detergent, leather, photographic, construction (it increases cement's resistance against fracture, frost and water) and other biological and chemical industries, as well as for analytical purposes [1-7]. Gluconic acid is a mild, neither caustic nor corrosive, non-toxic and readily biodegradable organic acid (98% after 2 days) with an excellent sequestering power [7]. Essentially only Aspergillus niger (predominantly), based on the process developed by [14], or Gluconobacter oxydans have been applied so far for the industrial production of gluconic acid. Alternatively, new superior fermentation processes using Aureobasidium pullulans have been extensively described for the continuous and discontinuous production of gluconic acid by isolated strains of this yeast-like mold, which reached gluconic acid concentrations of 230-450 g/l at residence times of about 12-20 hours, and 504 g/l in fed-batch mode, offering numerous advantages over the traditional discontinuous fungal processes of the last 100 years. In contrast to A.
pullulans, the multicellular fungus Aspergillus niger is unsuitable for continuous production of gluconic acid by free growing cells, whereas Gluconobacter is sensitive to high osmotic pressures and produces a relatively large quantity of keto-acids, complicating processing and product recovery [4,7,15,16]. Bioconversion of glucose to gluconic acid is a simple dehydrogenation reaction which takes place without any involvement of complex metabolic pathways. However, numerous parameters influence and regulate gluconic acid production, such as oxygen, pH, temperature and medium composition [4,7,17]. Glucose in the medium is oxidized extracellularly in a two-step reaction to gluconic acid, even in the absence of cells, through the action of glucose oxidase and catalase derived from A. niger, whereby nearly 100% of the glucose is converted to gluconic acid under appropriate conditions [18]. Gluconic acid production by A. niger and A. pullulans is a highly oxygen- (>100% air saturation), pH- (above 6.5) and temperature-dependent (30-31 °C) process, also strongly influenced by the composition of the bioreactor medium [3,4,7,13,17,19]. Little is known about the regulatory mechanisms of gluconic acid production in A. pullulans and about the effect of trace elements and other medium compounds on gluconic acid production. The response of growth and gluconic acid metabolism to a variable profile of iron concentrations was studied in batch and chemostat cultures of glucose-grown yeast-like A. pullulans, in order to elucidate the very significant role of iron ions in gluconate metabolism. Pre-Culture Glucose 30 g/l, NH4Cl 3 g/l, KH O 4 mg/l, H3BO3 40 mg/l, CaCO3 5 g/l, thiamine-HCl 2 mg/l, biotin 0.25 mg/l; 24 h at 30 °C and 100 rpm. For further experiments, 10 µM manganese and iron were used for the inoculum, and the concentrations of manganese and iron were varied in the fermentation media between 0.25 and 2 mM manganese and between 50 and 500 µM iron.
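The two-step oxidation described above fixes the theoretical yield: one mole of glucose (180.16 g/mol) gives at most one mole of gluconic acid (196.16 g/mol). A short sketch of that stoichiometric ceiling (the function name is ours, for illustration):

```python
# Stoichiometry of the two-step oxidation described in the text:
#   glucose + O2 --glucose oxidase--> glucono-delta-lactone + H2O2
#   H2O2 --catalase--> H2O + 1/2 O2 ; the lactone hydrolyzes to gluconic acid.
MW_GLUCOSE = 180.16   # g/mol, C6H12O6
MW_GLUCONIC = 196.16  # g/mol, C6H12O7

def max_gluconic_acid(glucose_g_per_l):
    """Theoretical gluconic acid titer (g/l) at 100% molar conversion."""
    return glucose_g_per_l * MW_GLUCONIC / MW_GLUCOSE
```

With the 360 g/l glucose feed used later in the CSTR experiments, this ceiling is about 392 g/l gluconic acid, against which the reported titers of roughly 223-261 g/l at incomplete conversion can be read.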
CSTR Experiments For the CSTRs, cells were grown in a magnetically stirred 1 liter double-glass fermenter as described previously [4]. The medium contained: glucose 360 g/l, NH4Cl 3 g/l, KH2PO4 0.7 g/l, MgSO4·7H2O 0.2 mg/l, thiamine-HCl 2 mg/l, biotin 0.25 mg/l, pyridoxine-HCl 0.625 mg/l, Ca-D-pantothenate 0.625 mg/l, nicotinic acid 0.5 mg/l. The vitamins and NH4Cl were added separately, by sterile filtration (Sartorius, Göttingen, Germany), to the autoclaved medium, which was sterilized for 30-60 min at 121 °C. All chemicals were of the highest purity commercially available. The fermentations were carried out at 30 °C, 1300 rpm, 5 l/h of pure oxygen and pH 6.5, controlled automatically by adding a 45% NaOH solution. Analysis Optical density (OD 660 nm), dry biomass (filter method) and the concentrations of glucose and gluconic acid were determined as described in previous works [17]. RESULTS The effect of iron on growth and gluconic acid production of A. pullulans DSM 7085 (isolate 70) was investigated under batch and chemostat cultivation (CSTRs), applying a constant medium feed rate at RTs of about 13 and 18 hours, carried out in 1 liter magnetically stirred glass fermenters with the media described in Materials and Methods. Iron concentrations between 0.25 and 2 mM were investigated in batch cultures and 0.1-2 mM iron in CSTRs. Batch Experiments Because the gluconic acid process using A. pullulans was still poorly investigated, the effect of varying iron (0 or 10 µM) and manganese concentrations (0-2000 µM) was initially examined in preliminary orientation batch experiments, carried out in a 1 L glass fermenter with 500 ml working volume at 30 °C and pH 6.5, 1000 rpm and 5 l pure oxygen/h, using the defined mineral medium described in Materials and Methods. The pH was controlled automatically by the addition of 4 N NaOH. It is possible to control growth and gluconic acid formation of A.
pullulans by varying the iron and manganese concentrations. The highest product concentration and OD were reached in experiments with a medium containing both iron and manganese, as compared with experiments without either iron or manganese (Table 1). An OD of only 0.2 and 2.3 g/l gluconic acid were reached after 24 h in the experiment without manganese and iron, in comparison to an OD of 0.55 and 11.1 g/l gluconic acid reached with 10 µM iron and 10 µM manganese. Only 18.7 g/l and an OD of 0.52 were reached after 48 h without Fe and Mn, compared with 127.2 g/l and an OD of 6.03 obtained with 10 µM Mn and 10 µM Fe. Very high molar selectivities were obtained under iron and manganese limitation and in experiments with 10 µM Fe and 10 µM Mn, approaching 98% without Fe and Mn due to the very low pellet-forming biomass. 32 g/l gluconic acid were produced after 54 h without Fe and Mn, compared with 147.1 g/l reached with 0 µM Fe and 10 µM Mn and 118.9 g/l obtained with 10 µM iron and 0 µM Mn. A further increase of the manganese concentration accelerated gluconic acid and biomass formation even in the absence of iron, displaying an optimum. Furthermore, the effect of iron on growth and gluconate formation was investigated in batch experiments at variable iron concentrations between 0.05 and 0.5 mM. Whereas the OD increased with increasing Fe, differences in gluconic acid production were insignificant, reaching the highest gluconic acid concentration of 160 g/l (after 100 h) with 0.1 mM Fe at an OD of 7.9. CSTR Experiments The effect of iron on growth and gluconic acid production was investigated in CSTR experiments at 30 °C and pH 6.5 with a medium as described in Materials and Methods containing 3 g/l NH4Cl and 360 g/l glucose. Fig. (1) shows the effect of varying iron concentration on growth and gluconic acid production of A. pullulans at 13 h RT.
More than 190 mg/l residual nitrogen was determined in all CSTR experiments at 13 h RT: 197 mg/l nitrogen at 0.1 mM and 201 mg/l at 2 mM iron. Insignificant differences were observed in biomass formation, ranging between 7.3 g/l biomass at 0.1 mM and 6.9 g/l at 2 mM iron (Fig. 1). No correlation was observed between optical density and biomass concentration at varying iron concentrations, indicating the strong influence of iron concentration on the cell morphology of the dimorphic (single or multicellular) yeast-like fungus A. pullulans, meaning that single cells have a higher optical density. The higher OD/biomass ratio at lower iron concentrations indicates the occurrence of smaller single cells (higher total cell surface, higher OD). At 13 h RT, the highest volumetric product formation rate of 16.8 g/(l h), specific gluconic acid productivity (mp) of 2.5 g/(g h) and gluconic acid concentration of 223.3 g/l were obtained at 1 mM iron. Comparatively, only 182 g/l (81.5%), an Rj of 14.5 g/(l h) and an mp of 1.82 g/(g h) were reached at 0.1 mM, and 215.7 g/l (96.6%), an Rj of 15.66 g/(l h) and an mp of 2.27 g/(g h) at 2 mM iron. Surprisingly, major variations were found in product selectivity as a function of iron concentration at 13 h RT and incomplete conversion of glucose. Product selectivity increased continuously with increasing iron concentration and, showing a saturation effect, reached a maximum of about 98% at 2 mM and 76.2% conversion, compared with only 84.3% at 0.1 mM. At iron concentrations higher than 0.5 mM, selectivity was above 90% (Fig. 2). No significant differences were observed in gluconate concentration at 18 h RT (Fig. 3), in contrast to 13 h, because of the nearly complete consumption of glucose. Optimization CSTR studies are supposed to be operated at about 80% conversion (g converted glucose/g feed glucose). XT diagrams as a function of varying residence time or dilution rate have been published in previous work [16].
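The steady-state quantities quoted above follow from simple definitions: at residence time RT the dilution rate is D = 1/RT, the volumetric productivity is Rj = D · c_p, the specific productivity is mp = Rj/X, and molar selectivity compares moles of gluconate formed with moles of glucose consumed. A minimal sketch with made-up numbers, not the paper's data (the function name is ours):

```python
MW_GLUCOSE, MW_GLUCONIC = 180.16, 196.16  # g/mol

def chemostat_metrics(c_p, rt_h, biomass, s_feed, s_residual):
    """Steady-state chemostat metrics for a gluconic acid fermentation.

    c_p: product titer (g/l), rt_h: residence time (h),
    biomass: dry biomass (g/l), s_feed/s_residual: glucose (g/l).
    """
    rj = c_p / rt_h                              # volumetric productivity, g/(l h)
    mp = rj / biomass                            # specific productivity, g/(g h)
    conversion = (s_feed - s_residual) / s_feed  # fraction of feed glucose used
    selectivity = (c_p / MW_GLUCONIC) / ((s_feed - s_residual) / MW_GLUCOSE)
    return rj, mp, conversion, selectivity
```

For instance, an illustrative run with 200 g/l product at 10 h RT and 8 g/l biomass gives Rj = 20 g/(l h) and mp = 2.5 g/(g h); the reported values at 13 h RT are of the same order.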
The highest biomass of 8 g/l was achieved at 18 h RT with 1 mM iron, compared with 5.95 g/l reached with 0.1 mM. More than 250 g/l gluconic acid was produced with all of the investigated iron concentrations, reaching the highest concentration of 260.9 g/l and an Rj of 14.5 g/(l h) at 0.1 mM iron, compared with 252.4 g/l and 14.02 g/(l h) measured at 2 mM (Fig. 3). Selectivity dropped with increasing iron concentration at 18 h because of intensifying nitrogen limitation, which causes byproduct formation, such as pullulan and other organic acids (e.g. fumaric acid). Fumaric acid concentrations between 0.026 mM (presence of nitrogen, 6.8 h RT) and 15.2 mM (absence of nitrogen, 50.1 h RT) have been found as a byproduct in gluconic acid production by A. pullulans as a function of RT and nitrogen limitation (data not shown here). A molar selectivity of 85.1% (92.7% g/g) was calculated at 1 mM and 82% at 2 mM iron (89.3% g/g). Glucose was not consumed completely, and a residual glucose of 16.5 g/l was measured at 2 mM iron and about 95% conversion. DISCUSSION The development of any new multi-step biotechnological process requires three basic steps, namely: the identification and characterization of a suitable biological system (microorganism, biocatalyst), and the increase of bioreactor productivity. Fig. (2). Iron effect on Rj, mp and selectivity at a residence time of about 13 h (3 g/l NH4Cl, 360 g/l glucose, 5 mM Mn, 30 °C and pH 6.5). Fig. (3).
Iron effect on growth and gluconic acid production at about 18 h RT (3 g/l NH4Cl, 360 g/l glucose, 5 mM Mn, 30 °C and pH 6.5). Chemostat cultures are dynamic systems for sophisticated medium optimization and process development, in which single parameters can be investigated in detail under steady-state conditions, identifying and optimizing various optima for microbial growth and production. Very high glucose concentrations can be applied in the feeding medium because of the wash-out effect of the chemostat due to continuous microbial glucose consumption, resulting in low residual steady-state glucose concentrations. Alternatively, high initial glucose concentrations slow down the growth of A. pullulans in batch cultures. Additionally, chemostat results obtained under steady-state conditions are easily reproducible, and compensation effects between growth and production are detected based on the volumetric and biomass-specific productivity of the generic product [4,16]. Enzymatic glucose conversion to gluconic acid is enabled either by the fungal enzyme glucose oxidase or by bacterial glucose dehydrogenase (e.g. in bacteria such as Gluconobacter and Acetobacter). The enzymatic complex glucose oxidase/catalase is present in several microorganisms belonging, e.g., to the genera Aspergillus and Penicillium, or alternatively in the yeast-like mold Aureobasidium pullulans (Pullularia pullulans). Bacteria like Acetobacter and Gluconobacter use different mechanisms for converting glucose into gluconic acid. G. oxydans contains two types of glucose dehydrogenase which convert glucose into gluconolactone without the formation of hydrogen peroxide. Enzymes derived from cloned genes of GMO organisms may also be involved in gluconic acid production, or alternative economical host systems can be developed [7,15,21,22]. Strain improvement has been reported to be an essential step in developing industrial microbial production processes [23]. The yeast-like mould A.
pullulans is well known for the formation of pullulan, a polysaccharide [24-28]. Gluconic acid production by A. pullulans (formerly known as Dematium or Pullularia pullulans) has been previously reported using various carbon sources [29-32]; however, it had not been considered a potential gluconic acid producer before. No information was available regarding fermentation conditions or process optimization and development, and those pioneering works were mainly restricted to shake-flask experiments [16]. Little is known about the regulatory mechanisms of gluconic acid production by the yeast-like mold A. pullulans, which has been shown to be a superior gluconic acid producer [4,7,15-17,33]. A. pullulans isolate 70 (DSM 7085) was isolated from wild flowers in Germany, and no genetic work was carried out to increase gluconic acid production, in contrast to today's industrial gluconic acid production, which uses improved mutant strains of several generations, predominantly recycled mycelia of Aspergillus niger, or Gluconobacter suboxydans in discontinuous submerged fermentations [34-36]. Today's sodium gluconate fermentation using A. niger in submerged fermentation is based on the process developed by [14]. The response of growth and gluconic acid metabolism to a variable profile of different iron concentrations was studied with glucose-grown batch and chemostat cultures of the yeast-like mold A. pullulans; the emphasis was focused on the physiological parameters of growth and gluconic acid production, demonstrating the tremendous significance of iron ions in this process. In preliminary orientation batch cultures, iron as well as manganese ions were identified as critical nutrients, strongly influencing growth and gluconic acid formation in A.
pullulans, whereby manganese showed a stronger effect than iron. Serving as cofactors of enzymes essential for the production of gluconic acid (glucose oxidase, catalase), iron as well as manganese are significant trace elements for optimal gluconate production in Aspergillus niger and A. pullulans. In commercial fermentations they are added to the bioreactor in undefined amounts with maize steep water and other medium compounds. It is also well known that iron is an integral component of many metalloenzymes, such as aconitate hydratase, catalase, peroxidases and components of the mitochondrial electron transfer chain [37,38], and iron concentration has often been used as a variable factor to control microbial metabolism [39-41]. In accordance with the batch experiments, iron concentration had no relevant effect on the growth of A. pullulans. A very strong iron effect on gluconate production was observed in batch cultures at the lower concentrations of 0-10 µM. Substantial differences in the iron effect on gluconic acid fermentation occurred at the lower RT (lower conversion) of 13 h, whereas no significant differences were observed in gluconate production at 18 h RT because of the nearly complete consumption of glucose. Optimization CSTR studies are supposed to be operated at about 80% conversion (g converted glucose/g feed glucose) for identifying the real effects. A surprisingly high particular requirement for iron ions (0.5-3.0 mM at 3 g/l NH4Cl) was detected for profitably performing gluconic acid fermentation in A.
pullulans, with an optimum iron concentration between 1 and 2 mM for product concentration, product yield, selectivity, R_j, and m_p. With increasing iron concentration from 0.1 to 2 mM, product selectivity increased continuously, showing saturation behavior, and reached almost 100%. More intensive respiration (respiratory chain) as well as byproduct formation (e.g., fumaric acid and pullulan formation) are presumably the main reasons for the low selectivity at lower iron concentrations. For example, [42] reported that iron supply enhances the mycelial growth form of the dimorphic A. pullulans and decreases pullulan formation. In our investigations, the mycelial form of A. pullulans favored gluconic acid formation while suppressing pullulan formation, showing the relationship between morphology and gluconic acid production. The increase of m_p and R_j at almost constant biomass at 13 h RT with increasing iron concentration up to 1 mM indicates that iron stimulates the synthesis and/or activity of glucose-oxidizing enzymes in A. pullulans. [43] reported stimulation of glucose oxidase synthesis in A. niger by the supply of iron sulfate and KCl. Similar effects were found by [44,45] in Penicillium strains. On the other hand, [45] as well as [46] reported that supply of 0.001% FeSO4·7H2O or of 2.1-40.4 μM iron did not stimulate production of calcium gluconate by P. chrysogenum or A. niger, respectively. Furthermore, the presence of iron has been reported to favor, in addition to sodium salts, the accumulation of oxalic acid and of yellow-green pigments in the mycelia of A. niger, affecting product separation [47]. The discrepancy between biomass and OD can be explained by the influence of iron on the dimorphic growth behavior of the yeast-like A. pullulans. Single-cell growth (higher total cell surface, higher OD) promotes pullulan formation instead of gluconic acid. Nitrogen and iron limitation have been reported to enhance pullulan production in A.
pullulans. Iron limitation possibly promotes fumaric acid byproduction as well. RT is also of major importance for optimum production and must be taken into consideration, since gluconic acid is partly utilized at long RTs because of almost complete glucose consumption (100% conversion). This is also confirmed by the reduction of selectivity with increasing iron concentration at 21 h RT, due to almost complete glucose consumption and progressive product redirection or reconsumption. The very high gluconic acid concentrations reached at very low RT (13 h) by freely growing cells of A. pullulans presented here are the best that have been published in the international literature, encouraging the use of A. pullulans processes for future industrial applications in the gluconic acid business as innovative alternatives to discontinuous fungal processes.

CONCLUSIONS

The yeast-like A. pullulans offers the positive characteristics of both eukaryotic (fungi and yeasts) and prokaryotic (bacterial) microorganisms at once, enabling continuous process operation with freely growing cells running at very high glucose and product titers. Medium optimization carried out in the chemostat resulted in a strong increase of yield, selectivity, productivity, and product concentration at very short residence times. The very high gluconic acid concentrations reached at very low RT (13 h) by freely growing cells of A. pullulans without any biomass retention are the best that have been published in the international literature, encouraging the use of new Aureobasidium pullulans processes [4,5,16,17,20,33] for future industrial applications in the gluconic acid business as innovative alternatives to the discontinuous fungal processes of the last 100 years. Based on our in-depth investigations of the influence of important fermentation parameters on gluconic acid production by A.
pullulans, such as pH, temperature, air saturation, medium composition, residence time, biomass retention, cascading of two fermenters, etc., further investigations would accelerate the continuous and discontinuous production of gluconic acid, decreasing production costs to minimal levels. The present results show nature's latent potential in still-unknown, high-producing microbial wild strains, as a counterpoint to the extensive publicity given to genetic engineering research and development, for future applications in gluconic acid research.

R_j = formation rate of the generic product, g gluconic acid/(l·h) (volumetric productivity)
m_p = specific gluconic acid productivity, g gluconic acid/(g biomass·h) (biomass-specific productivity)

Fig. (2). Iron effect on R_j, m_p, and selectivity at a residence time of about 13 h (3 g/l NH4Cl, 360 g/l glucose, 5 mM Mn, 30°C and pH 6.5).

[...]ity by sophisticated media optimization and adaptation of fermentation technology to the developed process and downstream processing (cell separation by centrifugation or ultrafiltration, product separation, evaporation, and drying).

Selectivity = g product/g converted glucose (%)
Yield = g product/g initial glucose (%)
Conversion = g converted glucose/g initial glucose (%)
XT diagram = parameters as a function of residence time
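The performance measures defined above follow directly from simple mass balances on a steady-state chemostat. As a minimal sketch (all input numbers are hypothetical examples, not measured values from this study):

```python
# Illustrative calculation of the fermentation metrics defined above.
# All input values are hypothetical, not data from this study.

def fermentation_metrics(glucose_in, glucose_out, product, biomass, residence_time):
    """Concentrations in g/l, residence_time in h (steady-state chemostat)."""
    converted = glucose_in - glucose_out
    return {
        "conversion": converted / glucose_in,        # g converted / g initial glucose
        "yield": product / glucose_in,               # g product / g initial glucose
        "selectivity": product / converted,          # g product / g converted glucose
        "R_j": product / residence_time,             # volumetric productivity, g/(l*h)
        "m_p": product / residence_time / biomass,   # specific productivity, g/(g*h)
    }

# Hypothetical steady state at 13 h residence time and 360 g/l feed glucose:
m = fermentation_metrics(glucose_in=360, glucose_out=72,
                         product=280, biomass=20, residence_time=13)
# conversion = 0.80, selectivity ~ 0.97, R_j ~ 21.5 g/(l*h)
```

Operating at roughly 80% conversion, as recommended above, keeps residual glucose available so that iron effects on selectivity remain visible.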
Anomalous Spectral Shift of Near- and Far-Field Plasmonic Resonances in Nanogaps The near-field and far-field spectral responses of plasmonic systems are often assumed to be identical, due to the lack of methods that can directly compare and correlate both responses under similar environmental conditions. We develop a widely tunable optical technique to probe the near-field resonances within individual plasmonic nanostructures that can be directly compared to the corresponding far-field response. In tightly coupled nanoparticle-on-mirror constructs with nanometer-sized gaps we find >40 meV blue-shifts of the near-field compared to the dark-field scattering peak, which agrees with full electromagnetic simulations. Using a transformation optics approach, we show such shifts arise from the different spectral interference between gap modes in the near- and far-field. The control and tuning of near-field and far-field responses demonstrated here is of paramount importance in the design of optical nanostructures for field-enhanced spectroscopy, as well as to control near-field activity monitored through the far-field of nano-optical devices. The interaction of light with noble metal nanostructures excites collective electron oscillations in the form of localized plasmonic resonances. As a result, such plasmonic nanostructures are able to confine light within extremely small volumes, millions of times smaller than a wavelength-sized box. Squeezing light into such small regions creates near-thousandfold field enhancements, which are ideal for intense surface-enhanced Raman scattering (SERS), thus allowing only a few atoms, molecules, or nano-objects to be directly tracked. 1 So far, researchers have typically assumed that both the localized near-field and radiated far-field support their resonant behavior (i.e., strongest field enhancements) at similar spectral wavelengths.
As a result, optimization of SERS has depended on measurements of the far-field scattering spectrum. Here we show that when the optical field is tightly confined by nanoscale gaps, the resulting multiple-order plasmon resonances supported at different wavelengths interfere with each other differently in building up the near- and far-field signals. As a result, significant spectral shifts are observed. We demonstrate this experimentally using a spectral-scanning technique that simultaneously records dark-field scattering spectra and tunable-pump SERS measurements on each nanostructure individually. For this we utilize plasmonic constructs that provide extremely robust nanoscale gaps, 2 using Au nanoparticles separated from a bulk Au film by an ultrathin molecular spacer, known as the nanoparticle-on-mirror (NPoM) geometry. 3−5 In contrast to the red-shifts always found in isolated nanoparticles, 6 the near-field NPoM resonance from SERS is found to be always blue-shifted from the scattering peak. We explain this through a transformation optics model that allows the decomposition of the total signal into individual modes with different radiative properties. In particular, the n = 1 and 2 modes interfere constructively in the far-field, but destructively in the near-field. From this understanding, our experiments also allow us to show that the SERS background arises from a completely different process than the SERS vibrational signal, as it instead follows the far-field spectral enhancement. Our insights provide a solid intuition to predict how near-fields behave within a wide variety of plasmonic nanoconstructs. Direct measurements of the near-field plasmonic enhancement spectra either rely on probe-based techniques or must exploit nonlinear processes, since only then do the evanescent fields contribute most strongly.
Probe-based techniques are not suitable for single NPoM measurements, and second-harmonic generation is not very reliable for this task, as it possesses both bulk and surface contributions and is thus very sensitive to many additional aspects of nanoscale geometry. Third-harmonic generation techniques 7 are also possible but so far are primarily single-wavelength studies. 8−12 The other favorable process for this task is SERS, but this has also been challenging because of the requirement for wide tuning of the Raman pump laser, while ensuring high-contrast tunable filtering of the scattered light from the background Rayleigh pump scatter. As a result, most experiments work with arrays of nanoparticles 13 or use a limited number of excitation wavelengths on individual nanostructures. 14−18 Alternative approaches with fixed excitation wavelength that attempt to tune the plasmon resonance suffer uncontrolled changes in confinement and enhancement. 8−12 Recent experiments 15 have managed to deliver wavelength-scanned Raman and dark-field measurements on lithographically defined plasmonic dimers in order to ascertain how quantum tunneling affects the SERS amplitude. Lithography, however, generates considerable uncertainty in the gap sizes and morphologies. Compared to such nanoparticle dimers, the NPoM geometry guarantees much better control of the gap size between gold film and nanoparticle, higher reproducibility, and a much simpler and more robust nanoassembly procedure, and has thus been recently utilized in many experimental studies. 3−5 The well-defined architecture also precisely defines the orientation of the optical fields and of the molecules studied in SERS, and thus allows precise comparison of the near- and far-field response. Our experimental setup is optimized to realize both dark-field microscopy and broadband-tunable SERS measurements on the same single nanoparticle at the same time (Figure 1a).
For dark-field scattering spectroscopy, white light is focused on the sample through a high numerical aperture (NA = 0.8) 100× objective, and a cooled spectrometer detects the scattering of single nanoparticles, which are kept well spatially separated (coverage <1 μm −2 ). To realize broadband-tunable SERS, a sub-nanometer-linewidth tunable laser source is required. To create this, a 200 fs Ti:sapphire oscillator pumps a femtosecond optical parametric oscillator to give tunable output over visible and near-infrared wavelengths from 500 to 1040 nm (see Methods). The output is spectrally narrowed below 1 nm using an acousto-optic programmable dispersive broadband filter (AOPDF), 19 yielding fully automated tuning with multi-milliwatt output powers. This Gaussian beam is focused in an inverted microscope to a diffraction-limited spot using the same 100× objective. For each excitation wavelength, Rayleigh-scattered light is filtered out using a computer-controlled, translatable, custom-built array of multiple linear variable long-wave-pass (LP) filters with an overall optical density (OD) of 11, maximum transmission of 80%, and cutoff spectral width of 10 nm. Stokes Raman signals are recorded with a spectrometer and cooled EMCCD camera. Calibration on bulk solids confirms this system is capable of Raman measurements across the entire visible and near-infrared range. For the near-field measurements here, a p-terphenylthiol (TPT) self-assembled molecular monolayer (SAM) is used as a nanoscale spacer between the flat gold surface and 60 nm gold nanoparticles placed on top. The gap thickness, which depends on the orientation angle of the molecules on the gold surface, is found to be d = 1.4 ± 0.1 nm through phase-modulated ellipsometry measurements, in good agreement with previous work.
1 Individual nanoparticles are first optically characterized by dark-field spectroscopy (Figure 1b), which shows for all spectra a strong coupled plasmon mode resonance in the near-infrared around 730 ± 20 nm. The <20 nm (fwhm) variation in spectral peak arises from the 10% distribution of Au nanoparticle diameters. The tightly confined hot spot created within the gap (lateral size (Rd) 1/2 ≈ 6.5 nm) is then an ideal situation in which to compare near- and far-field spectra. When the laser (at pump wavelength λ p = 630 nm in Figure 1) is focused away from any Au nanoparticle, the Raman scattering from the molecular SAM is below our detection noise level (Figure 1c, black). However, focusing on an NPoM elicits hundreds of times stronger optical fields, greatly enhancing the SERS signal of TPT molecules confined within the plasmonic nanogap (Figure 1c, red). Three dominant vibrational lines are seen, corresponding to a C−H rocking mode (1080 cm −1 ) and to in-plane stretching of the benzene rings (1256 and 1585 cm −1 ). The average laser power at the sample is kept below 1 μW, which is needed to avoid any shifts in the NPoM dark-field spectra or any changes in SERS over time (Supporting Information). Similar SERS signals are obtained from each NPoM. By scanning the laser wavelength, we measure the plasmonic-induced SERS enhancement from TPT to access the near-field spectrum of the NPoM (Figure 2a). The vibrational peaks do not shift with pump λ p , but their amplitudes vary strongly. Plotting the extracted experimental SERS enhancements I SERS (ν, λ p ) of each of the three main TPT peaks against the outgoing wavelength (Figure 2b) shows they reach their maxima close to the coupled mode observed in far-field scattering, but blue-shifted by ∼22 meV. This is contrary to the behavior for isolated plasmonic nanoparticles, in which case the near-field resonance is found to be red-shifted compared to the scattering. 6,20−22 This red-shift is associated with the damping of a plasmonic resonance. 6,22 A localized surface plasmon can be interpreted in terms of a driven damped oscillator. When damping is present, the maximum oscillation amplitude occurs at a lower energy than the natural frequency of the oscillator, while maximum dissipation occurs at the natural frequency, giving a spectral red-shift. A more complete description of the oscillator model describing plasmonic resonances was discussed by Kats et al. 23 Blue-shifted near-field spectra were previously reported 24 for large nanoparticle arrays. Using single nanostructures here confirms this behavior does not originate from any sample inhomogeneity or periodic effect. Finite difference time domain (FDTD) simulations of the maximum near-field and the far-field scattering response of a single NPoM confirm this behavior (Figure 2c) and show it is a general property of all closely coupled plasmonic resonators, such as dimers (Supporting Information). We note that the overall spectral shifts between simulation and experiment here probably arise from faceting of the nanoparticle, which increases the coupling and red-shifts the coupled plasmon resonance. 25,26 Results on a range of NPoMs show that the near-field resonance is always blue-shifted from the far-field resonance by 4 to 55 meV, depending on the nanoparticle (Figure 3a). Different-sized nanoparticles tune the coupled mode, but in all cases I SERS follows a Gaussian spectral profile with a ∼2-fold reduction in line width compared to the corresponding resonant plasmonic mode in scattering. This is unexpected, since both dark-field and Raman scattering require each photon to couple in and to couple back out, both resonantly enhanced by the plasmonic antenna.
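The driven damped oscillator argument above can be checked numerically: the amplitude response (the near-field analogue) peaks below the natural frequency, while the cycle-averaged dissipated power peaks exactly at it. A minimal sketch in arbitrary units, with an illustrative damping value:

```python
import numpy as np

# Driven damped oscillator: x'' + gamma*x' + w0^2*x = F*cos(w*t).
w0, gamma = 1.0, 0.3                 # natural frequency and damping (arbitrary units)
w = np.linspace(0.5, 1.5, 200001)    # drive frequencies

# Steady-state amplitude (per unit force) and cycle-averaged dissipation
amplitude = 1.0 / np.sqrt((w0**2 - w**2)**2 + (gamma * w)**2)
power = gamma * w**2 * amplitude**2  # dissipated power, up to constant prefactors

w_amp = w[np.argmax(amplitude)]      # amplitude (near-field-like) peak
w_pow = w[np.argmax(power)]          # dissipation peak

# Amplitude peaks at sqrt(w0^2 - gamma^2/2), i.e. red-shifted below w0,
# while dissipation peaks exactly at w0.
```

The gap between the two maxima grows with damping, which is why lossy isolated nanoparticles show a clear near-field red-shift relative to scattering.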
To understand the origin of this effect and the spectral narrowing of the near-field resonance, it is useful to separate out the contributions from each plasmonic resonance supported in the gap, which requires further theoretical insights. Contrary to an isolated nanoparticle, when two plasmonic resonators (here a nanoparticle and mirror) are coupled, the near-field enhancement is heavily dependent on the physical geometry of the nanogap and on the excited modes within it. By implementing a transformation optics technique 27 (Supporting Information), we model the optical response of the gap modes, both in the near-field and in their radiative emission. The transformation optics technique is developed here in a two-dimensional system to provide the key intuition to understand the nature and composition of the fundamental modes. We find that the dipole-localized surface plasmon polariton resonance (n = 1) strongly interferes with the quadrupole mode (n = 2) (corresponding field distributions in Figure S6 of the Supporting Information). This leads to similar field patterns around most of the nanoparticle, but pronounced differences inside the gap (Figure 4a−d). Critically, they have opposite phase, ϕ n , in the near-field (i.e., destructive superposition, Figure 4e arrow), but they radiate coherently to the far-field (i.e., constructive interference, Figure 4f). Hence, the second gap mode shifts the far-field (σ scat ) resonance to longer wavelengths (Figure 4d) and the near-field (SERS) to shorter wavelengths (Figure 4c). Even higher-order modes become more significant for extremely small separations (<0.3 nm for the NPoM discussed here), where the nanoparticle couples even more strongly with the mirror. Their phase shifts alternate with even/odd n in the near-field, but all modes add coherently in the far-field, shifting σ scat to even longer wavelengths.
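The sign argument above can be illustrated with a toy two-mode model: representing the n = 1 and n = 2 gap modes as complex Lorentzians of equal weight, their difference (opposite phase in the gap) peaks at a higher frequency than their sum (coherent radiation to the far-field). All parameters here are illustrative, not fitted to the NPoM system:

```python
import numpy as np

# Toy model of two interfering gap modes (illustrative parameters only).
def mode(w, w_res, gamma):
    """Complex Lorentzian response of a damped mode."""
    return 1.0 / (w_res**2 - w**2 - 1j * gamma * w)

w = np.linspace(1.2, 2.2, 200001)        # frequency axis (arbitrary units)
L1 = mode(w, w_res=1.7, gamma=0.15)      # n = 1 (dipolar) gap mode
L2 = mode(w, w_res=2.3, gamma=0.15)      # n = 2 (quadrupolar) gap mode

near = np.abs(L1 - L2)  # opposite phase in the gap -> destructive superposition
far  = np.abs(L1 + L2)  # in phase in radiation     -> constructive superposition

w_near = w[np.argmax(near)]
w_far  = w[np.argmax(far)]
# w_near > w_far: the near-field resonance sits at higher frequency
# (shorter wavelength) than the far-field one, as observed for the NPoM.
```

Because the n = 2 mode contributes a nearly real, positive background around the n = 1 resonance, subtracting it pushes the combined peak up in frequency while adding it pulls the peak down, reproducing the opposite near-/far-field shifts.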
Both experiment and simulations confirm this, showing an increased blue-shift for coupled modes that are located further in the infrared. As the nanoparticle moves away from the mirror, the blue-shift observed in the coupled regime decreases until we reach the decoupled regime (isolated nanoparticle), where a red-shift is observed instead (Figure 3b), in agreement with previous results. On the other hand, it has been reported 28,29 that for geometrically more complex structures (such as trimers), the field enhancement in the near-field shows a maximum "in regions where there is no hint of a resonance in the absorption/extinction". 29 Such structures support multiple modes at nearby frequencies, which commonly result in complex spectral shifts in the far-field response. Our NPoM system provides a unique opportunity to isolate a precise modal structure and perform a well-defined modal analysis on a robust spectral composition, which is not typical of other SERS systems. It should be noted that all theoretical calculations are performed purely classically, ignoring nonlocality and electron spill-out from the plasmonic metals; the basic concepts here will be little altered by quantum effects for this range of particle−surface separation distances. Further insights can be extracted from the dependence of the SERS background on λ p . This background contribution has been highly debated in the literature, 30,31 with competing explanations including plasmon luminescence, image molecules, inelastic electron scattering, and contamination; however, clarifying data are still lacking. For each NPoM we extract the background in the vicinity of the 1585 cm −1 peak and plot it as a function of the emitted wavelength (Figure 3a, open circles). This clearly shows the SERS background does not match the spectral shape of the near-field enhancement in the gap but closely follows the far-field optical scattering response of the plasmonic NPoM, in both spectral position and line width.
Recent work 30,31 shows that much of the SERS background must come from optical penetration inside the metal, where it can induce inelastic scattering of the electrons. In the experiments here, molecules are placed only in the gap (hence they probe only the near-fields), while the n = 1, 2 modes localize the light around the entire nanoparticle surface. Our spectrally tuned measurements thus prove that the SERS peaks and background must arise from different sources, as also suggested by super-resolution imaging studies. 32 As a result, we prove that the SERS background observed here has a component that tracks the far-field enhancement, as well as an equally intense, spectrally constant component arising from the surrounding planar substrate. We have thus shown that the clear identification and spectral separation of the near- and far-field resonances can be achieved using precision spectrally tuned SERS measurements on single nanoparticles. Both experiment and theory agree in the resonance shifts and spectral widths, which are found to be controlled by the coherent superposition of different plasmon gap modes. In the spherical nanoparticle-on-mirror geometry the dominant modes are n = 1 and 2, which have opposite phase in the near-field but the same phase in the far-field, resulting in a blue-shift of the SERS peak compared to the dark-field scattering and a 2-fold smaller resonance line width. This intuitive understanding of how the resonance positions are determined is generally applicable to coupled plasmonic systems. It also shows that the ever-present SERS background does not come from the same spatial locations as the near-field-controlled SERS peaks. METHODS Sample Preparation. Gold substrates are prepared by evaporating 100 nm gold (Kurt J. Lesker Company, PVD 200) on a silicon (100) wafer (Si-Mat, Germany) at a rate of 1 Å/s.
To obtain atomically smooth films, a standard template-stripping method is used: silicon substrates are glued onto the freshly evaporated gold using an epoxy glue (EpoTek 377), 33 and the resulting gold/epoxy/silicon sandwich is peeled off the silicon wafer. Self-assembled monolayers of 1,1′,4′,1″-terphenyl-4-thiol (Sigma-Aldrich, 97%) are formed by submerging the freshly template-stripped substrates into a 1 mM solution in water-free ethanol (Sigma-Aldrich, reagent grade, anhydrous) for 24 h. The samples are subsequently thoroughly rinsed with ethanol and blown dry. Gold nanoparticles (BBI Solutions, UK) are deposited by drop casting from the as-received solution. The deposition time is adjusted in order to obtain the desired nanoparticle coverage. The samples are rinsed with Milli-Q water in order to remove any salt residues. Ellipsometry. The thickness of the self-assembled monolayers is measured using both ellipsometry (Jobin-Yvon UVISEL spectroscopic ellipsometer) and normalizing plasmon resonance spectroscopy. 34 For the ellipsometry measurements an angle of incidence of 70° is used. The data are modeled and fitted using a three-layer model. A thickness of 1.5 nm is determined with a refractive index of n = 1.45. Dark-Field Spectroscopy. Optical dark-field images are recorded on a custom Olympus GX51 inverted microscope. Samples are illuminated with a focused white light source (halogen lamp). The scattered light is collected through a 100× dark-field objective (LMPLFLN-BD, NA = 0.8) and analyzed with a fiber-coupled (50 μm optical fiber) Ocean Optics QE65000 cooled spectrometer. We use a standard diffuser as a reference to normalize white light scattering. For each sample, we record optical spectra from 20 randomly selected isolated nanoparticles. Tunable SERS.
An ultrafast laser system based on a 200 fs Ti:sapphire oscillator (Spectra Physics MaiTai, delivering 200 fs pulses, fwhm 10 nm, at 80 MHz repetition rate) pumps a femtosecond optical parametric oscillator (Spectra Physics Inspire). This light source provides a tunable output over a wide range of visible and near-infrared wavelengths, from 500 to 1040 nm. The spectral bandwidth of the output beam is reduced to below 1 nm using an acousto-optic programmable dispersive broadband filter (AOPDF, Dazzler, Fastlite). Relying on interactions between a polychromatic acoustic wave and a polychromatic optical wave in the bulk of a birefringent crystal, it is fully automated across a wide wavelength range (500−900 nm), yielding average output powers of several milliwatts. SERS experiments are performed on the same modified Olympus GX51 inverted microscope used for dark-field spectroscopy. A monochromatic wavelength-tunable laser beam is focused on the sample using a 100× objective (NA = 0.8). Raman scattering is collected through the center of the objective and analyzed with a Shamrock SR-303i fully automated spectrometer coupled to an EMCCD camera water-cooled to −85 °C. For the current experiments we use a 600 l/mm grating blazed at 650 nm. Rayleigh scattering is filtered out with a set of three long-pass linear variable filters (DELTA); this filtering system allows the detection of a minimum Raman shift of about 400 cm −1 over the studied spectral range. The system is calibrated using a silicon substrate as a reference. Spectral acquisitions are taken using an integration time of 10 s and an average laser power on the sample below 1 μW.
The enhancement factor per molecule (EF) is calculated for each nanoparticle by integrating the Raman peak areas and taking the ratio between the SERS signal at 1585 cm −1 (I SERS ) and the corresponding unenhanced signal from the bulk powder (I R ): EF = (I SERS /N SERS )/(I R /N R ), where N SERS and N R are the estimated numbers of molecules contributing to the SERS and Raman signals, respectively (Supporting Information). From a spot size of 0.4 μm and assuming that N SERS = 200 molecules are confined in each hot spot, we estimate the measured EF to be ∼10 8 for this excitation wavelength. We compare this to predictions from numerical simulations of this geometry, which suggest EF = |E p | 2 |E SERS | 2 = 10 6 −10 7 , where |E p | is the field amplitude enhancement at the incident laser wavelength and |E SERS | is the field amplitude enhancement at the outgoing wavelength (Stokes emission). By fitting Lorentzian lines to each vibrational peak and subtracting the SERS backgrounds, the spectral evolution I SERS (ν, λ p ) of the three main TPT peaks is extracted. Normalizing these to the incident laser power (we separately confirm all signals are linear in pump power), they are plotted as a function of the excitation wavelength and directly compared to the dark-field spectrum of the same nanoparticle (Figure 2b). FDTD Simulation. The electromagnetic response of the nanoparticle-on-mirror geometry has been simulated by three-dimensional FDTD calculations using Lumerical FDTD Solutions v8.9. The structure has been modeled as a gold sphere of 60 nm diameter on top of a 200 nm thick gold layer, with a 1 nm thick dielectric sheet in between. For the gold, we used the dielectric constants reported by Johnson and Christy. 35 The gold nanoparticle was illuminated with p-polarized plane waves at an angle of incidence of θ i = 55°. The scattered light was then collected within a cone of half-angle θ c = 53°, based on the numerical aperture of the objective.
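The per-molecule enhancement-factor ratio described above is straightforward to evaluate once the peak areas and molecule counts are known. In this sketch, only N_SERS = 200 comes from the text; the intensities and the bulk molecule count are hypothetical placeholders:

```python
# Per-molecule SERS enhancement factor, EF = (I_SERS/N_SERS) / (I_R/N_R).
# Intensities and N_R are illustrative placeholders; N_SERS = 200 follows
# the assumption stated in the text.

def enhancement_factor(i_sers, n_sers, i_raman, n_raman):
    """Signal-per-molecule ratio between SERS and unenhanced bulk Raman."""
    return (i_sers / n_sers) / (i_raman / n_raman)

ef = enhancement_factor(i_sers=2.0e3,    # integrated SERS peak area (hypothetical)
                        n_sers=200,      # molecules in the hot spot
                        i_raman=1.0e4,   # bulk-powder Raman peak area (hypothetical)
                        n_raman=1.0e11)  # molecules in the bulk probe volume (hypothetical)
# ef = (2e3/200) / (1e4/1e11) = 10 / 1e-7 = 1e8
```

Because N_R exceeds N_SERS by many orders of magnitude, even a modest raw intensity ratio translates into an EF of ~10^8, consistent with the estimate quoted above.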
Supporting Information: The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acsphotonics.5b00707: laser power effects on the optical response of single nanoparticles, terphenylthiol powder absorption measurements, biphenyl-4-thiol tunable SERS measurements on different-size nanoparticles, FDTD simulations for a dimer geometry, transformation optics technique, and scanning electron microscopy correlation (PDF).
Phenol Contents and Antioxidant Activity of Sonokeling (Dalbergia latifolia Roxb) Wood Dalbergia latifolia, or sonokeling, is a species native to Java, Indonesia, used as an important wood for furniture and building materials due to its high durability and beautiful color. This study therefore aimed to investigate the phenol composition, represented by total phenolic, flavonoid, and flavanol contents, as well as the antioxidant activity, determined by the DPPH (1,1-diphenyl-2-picrylhydrazyl) method, of D. latifolia wood. The sample was extracted using ethanol-toluene solvent in a Soxhlet apparatus and subsequently subjected to column chromatography. This treatment yielded 12 fractions, which were then evaluated for phenol contents and antioxidant activity. The results showed high antioxidant activity and total phenolic content in Fr.1-Fr.3, in which latifolin was detected and characterized by GC-MS and a literature comparison. It was therefore established that the antioxidant activity of D. latifolia wood extractives correlated well with the total phenolic content, but not with the total flavonoid and flavanol contents. Introduction Dalbergia latifolia, called sonokeling (Javanese) or Java palisander (English), is a species native to Indonesia, known for its beautiful wood with a brown to dark brown color (Orwa et al. 2009). In addition, the wood is classified as highly resistant and naturally durable (Kalynasundaran and Ganti 1975), is placed in strength class II (Dwianto et al. 2019), and also delivers good acoustical properties (Karlinasari et al. 2012). Hence, the wood is commonly used in the manufacture of furniture and building materials. Regarding the chemical properties, Sekine et al. (2009) isolated some neoflavonoid compounds from the heartwood of D. latifolia, characterized as latifolin and its derivatives, which were then tested for antitermite and antifungal activities (Sekine et al. 2009).
The wood, bark, and leaf extracts were also reported to confer anticancer and antioxidant effects (Khalid et al. 2011; Niraimathi and Sundaraganapathy 2014; Liu et al. 2018; Tripathi 2018). Other investigations performed on the genus Dalbergia demonstrated the propensity of the leaf extract of D. saxatilis to increase kidney toxicity (Ismail et al. 2015), of D. sissoo to function as a photoprotective and DNA-protective agent (Yasmeen and Gupta 2016), and of D. parviflora to contain antioxidant isoflavonoids (Castellano and Torrens 2015). Antioxidant activity is associated with the protection of body cells from free radicals, which are produced internally and continuously, and whose excess quantities are responsible for various disease manifestations (Young and Woodside 2001). Numerous radicals are known to be highly reactive with other molecules; DPPH (1,1-diphenyl-2-picrylhydrazyl), in contrast, is a stable free radical with a dark purple color. Phenolic compounds acting as antioxidants donate protons to reduce DPPH to its nonradical form DPPH-H, an activity typical of plant polyphenols (Ku et al. 2007; Gan et al. 2010). This study therefore investigated the extractives obtained from D. latifolia wood in order to determine the phenol contents and antioxidant activity. Sample Collection and Extraction The sample of D. latifolia wood was purchased and collected from a wood industry in Bantul, Yogyakarta, Indonesia. Ten grams of mixed heartwood and sapwood were milled to powder, oven-dried at 40°C for a week, and then extracted using ethanol-toluene (2/1, v/v) in a Soxhlet apparatus for 6 h. Column Chromatography and Gas Chromatography-Mass Spectrometry (GC-MS) Analysis Silica gel 60 with a particle size of 63-210 μm (Kanto Chemical Co., Inc., Japan) was used for column chromatography, with n-hexane, ethyl acetate (EtOAc), acetone, and methanol (MeOH) loaded as eluents.
A GC-MS-QP 2010 (Shimadzu, Japan) instrument was used to detect the compounds: 1 µl of the sample (1 mg/ml) was directly injected, with the column temperature programmed from 100°C (1 min) to 320°C at 5°C/min, while the injection and detection temperatures were 250°C and 320°C, respectively. A DB-1 capillary column (30 m x 0.25 mm I.D., 0.25 μm film; GL Sciences, Tokyo, Japan) was used with helium as the carrier gas, and the acquisition mass range was set to 50-800 amu. Subsequently, the mass spectrum obtained for each sample was compared with data from the NIST library and the literature (Sekine et al. 2009). Total Phenolic Content (TPC) The Folin-Ciocalteu method of Diouf et al. (2009) was used as a reference for the determination of TPC. Approximately 2.5 ml of Folin-Ciocalteu phenol reagent (diluted 10-fold) was mixed with 0.5 ml of the sample (0.25 mg/ml) and incubated for 2 min; then 2 ml of 7.5% aqueous sodium carbonate was added and the mixture incubated again for 30 min. Finally, the absorbance was read at 765 nm, and the TPC results were expressed as gallic acid equivalents (mg GAE/g extract). Total Flavonoid Content (TFC) TFC evaluation involved the AlCl3 method (Brighente et al. 2007), in which 2 ml of the sample prepared at 1 mg/ml concentration was reacted with 2% AlCl3·6H2O solution (2 ml). This mixture was incubated for 1 h at 20°C, the absorbance was read at 415 nm, and the results were expressed in quercetin equivalents (mg QE/g extract). Total Flavanol Content (TVC) The TVC was determined using the vanillin-HCl method (Richard et al. 1978), in which 0.5 ml of the sample (1 mg/ml concentration) was mixed with 3 ml of 4% vanillin reagent and 1.5 ml of HCl. The reaction proceeded for 15 min at ambient temperature, the absorbance was read at 500 nm, and (+)-catechin was used for the standard calibration (mg CE/g extract).
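As an illustrative sketch, the Folin-Ciocalteu readout described above is converted to gallic acid equivalents via a standard curve; the calibration points, sample absorbance, and helper name below are hypothetical, not values from this study:

```python
import numpy as np

# Hypothetical gallic acid calibration data (absorbance at 765 nm);
# real values depend on the instrument and reagent batch.
std_conc = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])  # mg/ml
std_abs = np.array([0.00, 0.11, 0.22, 0.33, 0.44, 0.55])

# Fit the linear standard curve: absorbance = slope * conc + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def tpc_mg_gae_per_g(sample_abs, extract_conc_mg_ml=0.25):
    """Convert a sample absorbance to mg GAE per g of extract,
    given the extract concentration in the assayed solution."""
    gae_mg_ml = (sample_abs - intercept) / slope       # read off the line
    return gae_mg_ml / extract_conc_mg_ml * 1000.0     # per g of extract

tpc = tpc_mg_gae_per_g(0.258)  # ~187.6 mg GAE/g for this hypothetical reading
```

The 0.25 mg/ml divisor mirrors the sample concentration stated in the TPC protocol above; with a different assay dilution the factor would change accordingly.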
Determination of DPPH Radical Scavenging Activity The DPPH radical scavenging (antioxidant) activity was determined according to Gao et al. (2006): 0.1 ml of methanolic extract at different concentrations was mixed with 5 ml of 0.004% DPPH in methanol and incubated for 30 minutes. The sample absorbance was then read at 517 nm using a UV-Vis spectrophotometer, and the antioxidant activity was calculated as: antioxidant activity (%) = ((A0 − A1)/A0) × 100, where A0 is the absorbance of the blank and A1 is the absorbance of the sample. The antioxidant activity was also represented as the IC50, the concentration required to inhibit 50% of the DPPH radicals. Extraction and Isolation The yield of the D. latifolia ethanol-toluene extract was not mentioned in this report, although Table 1 shows the result of sample fractionation. The isolation process involved column chromatography with n-hexane as the starting solvent, whose polarity was increased with EtOAc, acetone, MeOH, and water. Initially, Fr. 1 and Fr. 2 gave the highest yields of 0.74 g and 0.34 g (Table 1), collected in the eluents of 100% n-hexane and n-hexane/EtOAc (80%), respectively. It is therefore established that the D. latifolia wood extractives are dominated by apolar compounds, although Fr. 12 yielded 0.59 g in the MeOH-water-soluble fraction (polar compounds). This fraction possesses a comparably higher content and is predicted to comprise more polar components, including tannins. Characterization of Fr.1-Fr.12 The 12 fractions were analyzed by GC-MS with direct injection; only Fr. 1, Fr. 2, and Fr. 3 showed a compound with a high peak at a similar retention time of 41.5 min (Figure 1), suggesting similar components. Meanwhile, nothing was detected from Fr. 4-Fr. 12, as the polar compounds present possibly require further processing by silylation or methylation. Further discussions were performed to characterize Fr. 1-Fr.
3, through a comparison with the mass spectra of latifolin, as demonstrated in Table 2. The molecular weight obtained for Fr. 1-Fr. 3 was at m/z 286, alongside a base peak at m/z 154. A similarity was therefore established between these fragmentations and those of latifolin, as reported by Sekine et al. (2009), making it the main compound based on literature comparisons. This was also demonstrated in previous investigations on the genus Dalbergia, as an isolate from D. parviflora (Muangnoicharoen and Frahm 1982), with the molecular structure displayed in Figure 2. Phenol Contents and Antioxidant Activity The phenol contents and antioxidant activity of Fr. 1-Fr. 12 are displayed in Table 3: the highest TPC was observed in Fr. 1 and Fr. 2, Fr. 1 and Fr. 4 demonstrated the most significant levels of TFC, and high TVC values were identified in Fr. 7 and Fr. 9. A comparison with previous works showed a markedly lower amount of TPC, especially in Fr. 1 (469.8 mg/g) and Fr. 2 (415.5 mg/g), than in the bark of D. latifolia at 641.8 mg/g (Khalid et al. 2011), although higher than reported by Tripathi (2018) in the leaves (29.1 mg/g). Conversely, the TFC values for Fr. 1 (171.6 mg/g), Fr. 4 (173.2 mg/g), and Fr. 9 (170.5 mg/g) were higher than previously reported for the bark of D. latifolia, at 46 µg/ml (Khalid et al. 2011). The DPPH test demonstrated a higher level of antioxidant activity in Fr. 1-Fr. 3 (Table 3), which was attributed to the presence of latifolin as the main compound. The high values in these fractions were therefore assumed to be affected by the neoflavonoids, despite the comparably higher activity of the positive controls, namely catechin, quercetin, and gallic acid. This study also demonstrated lower antioxidant activity in Fr. 1-Fr. 3 when compared to the bark of D. latifolia (Khalid et al. 2011).
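The DPPH scavenging percentage and IC50 used throughout these results follow the relation given in the Methods, antioxidant activity (%) = ((A0 − A1)/A0) × 100; a minimal sketch, with a hypothetical dose-response series (not data from this study):

```python
import numpy as np

def inhibition_pct(a0, a1):
    """DPPH scavenging activity (%) = (A0 - A1)/A0 * 100,
    where A0 is the blank absorbance and A1 the sample absorbance."""
    return (a0 - a1) / a0 * 100.0

# Hypothetical dose-response data: extract concentration (ug/ml)
# vs. absorbance of the reaction mixture at 517 nm.
a_blank = 0.800
conc = np.array([25.0, 50.0, 100.0, 200.0])
a_sample = np.array([0.70, 0.60, 0.40, 0.10])

inhib = inhibition_pct(a_blank, a_sample)  # 12.5, 25.0, 50.0, 87.5 %

# IC50: concentration giving 50% inhibition, by linear interpolation
# along the (monotonically increasing) inhibition curve.
ic50 = np.interp(50.0, inhib, conc)  # 100.0 ug/ml here
```

A lower IC50 indicates stronger scavenging; in practice IC50 is often read from a fitted regression rather than interpolation, which this sketch simplifies.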
However, the values recorded were higher than those of D. saxatilis woody roots (Isyaka et al. 2015). (Masendra, Denny Irawati, Almaratush Shoolichah Ridlo, and Ganis Lukmandaru) Figure 3 shows the plots of antioxidant activity against the phenol contents; a good pattern was displayed against TPC, suggesting that the effectiveness of D. latifolia depends on TPC (Figure 3a), while Figures 3b and 3c resulted in random plots. This relationship is in agreement with several prior studies (Eddebbagh et al. 2016; Guedes et al. 2017; Amamra et al. 2018; Hossain et al. 2019). Correlation between Phenol Contents and Antioxidant Activity The measurement of TPC indicates the presence of phenols in general; hence, a better understanding of the particular compounds responsible for the antioxidant activity requires more specific phenol evaluations, through TFC and TVC. Figure 3b showed a low correlation with TFC, although the fractions demonstrating high effectivity were generally observed to possess high concentrations. Furthermore, the GC-MS data in the current study and a previous work (Sekine et al. 2009) reported the presence of neoflavonoids in the extracts of D. latifolia, as the fractions with more significant activity also possessed higher total flavonoids. The neoflavonoids identified in this research, including latifolin, probably do not correspond to the total flavonoids measured against the standard, quercetin (Figure 4a). Based on the regression analysis, the direct correlation against TFC was also weak, which is inconsistent with the previous reports by Eddebbagh et al. (2016) and Amamra et al. (2018), although in agreement with the study by Ghasemi et al. (2009) on the peels and tissues of 13 citrus species. A further determination of more specific phenols was also conducted for TVC, and the values obtained were compared with the antioxidant activities (Figure 3c).
As with TFC, it was impossible to establish a good correlation against TVC, due to the generally opposing values recorded, as shown in Table 3. This outcome suggests a weak dependence of the antioxidant activity on TVC, which was expressed in catechin units, whereas latifolin, the main compound, is a different type of flavonoid. Flavanols, or catechins, are 3-hydroxy derivatives of flavanones, also referred to as flavan-3-ols, due to the hydroxyl group bound to the C ring at position 3 (Figure 4b). Latifolin, in contrast, is classified as a neoflavonoid, possessing the 4-phenylchromone skeleton, which is different from the 2-phenylchromen-4-one backbone (Phance et al. 2016). The varying concentrations of catechins and of latifolin as the main compound in D. latifolia wood possibly led to the reduced TVC values in Fr. 1-Fr. 12, and are also associated with the absence of a good correlation against the antioxidant activity. Meanwhile, the flavanols responsible for proton donation in the sample were not detected, which differs from the study of Henning et al. (2003) on green tea extract, but agrees with the research on tea extracts by Gao et al. (2013). Conclusions Based on the results and discussion, it was established that Fr. 1-Fr. 3 showed comparably higher TPC and antioxidant activity than the other fractions. Conversely, more significant levels of TFC were observed in Fr. 1 and Fr. 4, while Fr. 7 and Fr. 9 demonstrated relatively better TVC concentrations. The GC-MS analysis detected latifolin in Fr. 1-Fr. 3, which was assumed to be responsible for the antioxidant activity. Furthermore, the differences observed in the correlations against TPC, TFC, and TVC suggest that the effectiveness of D. latifolia wood depends on TPC.
2,931.6
2020-12-18T00:00:00.000
[ "Materials Science", "Environmental Science" ]
Half-Life Volatility Measure of the Returns of Some Cryptocurrencies This paper explores the half-life volatility measure of three cryptocurrencies (Bitcoin, Litecoin and Ripple). Two GARCH-family models were used (PGARCH(1,1) and GARCH(1,1)) with the student-t distribution. It was realised that the PGARCH(1,1) was the more appropriate model; therefore, it was used in determining the half-life of the three returns series. The results revealed that the half-life was 3 days, 6 days and 4 days for Bitcoin, Litecoin and Ripple respectively. This shows that the three coins have strong mean reversion and short half-lives, and that it takes the respective number of days for volatility in each coin to return half way back without further volatility. Introduction In recent years, a new type of currency, a synthetic one, emerged. This new type of currency is named "synthetic" because it is not the decision of a nation or state, nor does it represent any underlying asset or tangible wealth source. It appears as a new tradable asset resulting from a private agreement and facilitated by the anonymity of the internet. Among these synthetic currencies, Bitcoin (BTC) emerges as the most important one, with a market capitalization of 17 billion as of June 2018. There are other cryptocurrencies based on blockchain technology, such as Litecoin (LTC), Ethereum (ETH), and Ripple (XRP), among others. Cryptocurrency is an asset derived from mathematical cryptography; it is based on a new technology called the blockchain (Bradbury, 2013; Ali et al., 2014). Its other fundamental characteristics are being decentralised and having a fixed total supply. One open question today is whether cryptocurrencies are in fact, or may be considered as, a currency. Until now, we cannot observe that cryptocurrencies fulfil the main properties of a standard currency. They are barely accepted as a medium of exchange (e.g.
to buy some products online); they are not used as a unit of account (there are no financial statements valued in any cryptocurrency); and, given the great swings in price, we can hardly believe that anyone considers any cryptocurrency a suitable option to store value. Given these characteristics, cryptocurrencies could instead be considered assets for speculative purposes: there are no underlying assets to relate their value to, and there is an open platform to operate round the clock. Most of the existing studies focus on Bitcoin returns. For example, Baur et al. (2017) show that Bitcoin returns are essentially uncorrelated with traditional asset classes such as stocks or bonds, which points to diversification possibilities. Others investigated the determinants of Bitcoin returns. The findings of Li and Wang (2017), among others, suggest that measures of financial and macroeconomic activity are drivers of Bitcoin returns. Kristoufek (2015) considered financial uncertainty, Bitcoin trading volume in Chinese Yuan, and Google Trends as potential drivers of Bitcoin returns. The inclusion of Google Trends as a proxy for sentiment or interest is fairly common within the literature (see, for example, Polasik et al. (2015)). A recurrent theme in the literature is the question of which asset class Bitcoin belongs to, with many comparing it to gold, and others to precious metals or to speculative assets (see, among others, Baur et al. (2017) or Bouri et al. (2017)). Some have classified Bitcoin as something in between a currency and a commodity (see, for example, Dyhrberg (2016)). For other recent contributions, see Cheah et al. Some studies try to model Bitcoin volatility. Among the first papers was Balcilar et al. (2017), who analysed the causal relation between trading volume and Bitcoin returns and volatility. They found that volume cannot help predict the volatility of Bitcoin returns.
Dyhrberg (2016) explored Bitcoin volatility using GARCH models. More recently, Conrad and Kleen (2018) used the GARCH-MIDAS model to extract the long- and short-term volatility components of cryptocurrencies. They considered measures of volatility and risk in the US stock market as well as a measure of global activity. It was realized that S&P 500 volatility had a negative and highly significant effect on long-term Bitcoin volatility. They also found that the S&P 500 volatility risk premium had a significantly positive effect on long-term Bitcoin volatility, and that there is a positive association between the Baltic Dry Index and long-term Bitcoin volatility. Salisu et al. (2018) exploited several conditional heteroskedasticity models with various supported distributions in order to find the best distribution, as well as the best GARCH-type model, for modelling the volatility of Bitcoin returns. They established that pre-testing the residuals of the Bitcoin returns for the best distribution could help to identify the appropriate distribution when modelling with GARCH-type models, regardless of the data frequency. The purpose of this paper is to investigate the half-life volatility measure of some cryptocurrencies, so as to provide players on the cryptocurrency market with information pertaining to the half-life measure and volatility persistence of some of the cryptocurrencies, enabling an informed choice on their investments. Methods of Data Analysis The daily closing prices were converted into compound returns given by r_t = ln(p_t / p_{t-1}), where r_t is the continuously compounded return at time t, p_t is the current closing coin price at time t, and p_{t-1} is the previous closing coin price. Volatility Modelling The error term of the GARCH-type model is fitted with the following distributions: Gaussian, Student's-t and Generalized Error.
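The compound-return transformation r_t = ln(p_t / p_{t-1}) described above can be sketched as follows; the price series is hypothetical:

```python
import math

def log_returns(prices):
    """Continuously compounded returns r_t = ln(p_t / p_{t-1}),
    one return per consecutive pair of closing prices."""
    return [math.log(p / q) for q, p in zip(prices, prices[1:])]

# Hypothetical daily closing prices for one coin
prices = [100.0, 105.0, 99.75]
r = log_returns(prices)  # two returns: ln(1.05), ln(0.95)
```

A series of n prices yields n − 1 returns, which is the series the GARCH-type models below are fitted to.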
For flexibility, the first lag is allowed in the relevant variables in both the mean and variance equations. An ARMA(1,1) mean equation is common to the GARCH-type models, given by r_t = β_0 + β_1 r_{t-1} + θ_1 ε_{t-1} + ε_t, where β_0 ≠ 0. The respective variance equations differ between the two GARCH-type models (GARCH(1,1) and PGARCH(1,1)) but share a common lag combination (1,1) for the ARCH and GARCH components respectively. The GARCH-type models considered are: 1) the Generalized ARCH (GARCH) model, GARCH(1,1): σ²_t = α_0 + α_1 ε²_{t-1} + β_1 σ²_{t-1}; 2) the Power GARCH (PGARCH) model, PGARCH(1,1): σ^δ_t = α_0 + α_1 (|ε_{t-1}| − γ_1 ε_{t-1})^δ + β_1 σ^δ_{t-1}, where α_0 is a constant, α_1 and β_1 are the standard ARCH and GARCH parameters, γ_1 is the leverage parameter, δ is the parameter for the power term, δ > 0, and |γ_1| ≤ 1. When δ = 2, the above equation becomes a classic GARCH model that allows for leverage effects, and when δ = 1, the conditional standard deviation is estimated. The flexibility of the PGARCH model can also be increased by treating δ as another coefficient to be estimated. Mean Reversion Mean reversion means that current information has no influence on the long-run forecast of the volatility. Persistence dynamics in volatility is generally captured in the GARCH coefficient(s) of a stationary GARCH-type model. In stationary GARCH-type models, the volatility mean reverts to its long-run level at a rate given by the sum of the ARCH and GARCH coefficients, which is usually close to one (1) for financial time series. The average number of time periods for the volatility to revert to its long-run level is measured by the half-life of the volatility shock. The mean-reverting form of the basic GARCH(1,1) model is given by (ε²_t − σ̄²) = (α_1 + β_1)(ε²_{t-1} − σ̄²) + u_t − β_1 u_{t-1}, where σ̄² = α_0/(1 − α_1 − β_1) is the unconditional long-run variance and u_t = ε²_t − σ²_t. Half-Life Measure of Volatility One measure of volatility persistence is the volatility half-life τ. Engle and Patton (2001) defined the half-life as the time required for the volatility to move half way back towards its unconditional mean.
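A minimal sketch of the GARCH(1,1) conditional-variance recursion σ²_t = α_0 + α_1 ε²_{t-1} + β_1 σ²_{t-1}; the parameter values below are hypothetical illustrations, not estimates from this paper:

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1):
    sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1},
    initialised at the unconditional variance omega/(1-alpha-beta).
    Requires alpha + beta < 1 (stationarity)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for eps in returns[:-1]:
        sigma2.append(omega + alpha * eps ** 2 + beta * sigma2[-1])
    return sigma2

# Hypothetical demeaned returns and parameters
sig2 = garch11_variance([0.02, -0.05, 0.01],
                        omega=1e-5, alpha=0.1, beta=0.8)
```

The large shock of −0.05 on day two visibly raises the next day's conditional variance, which then decays back toward the unconditional level at rate α_1 + β_1.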
More precisely, τ is the smallest k such that |σ²_{t+k|t} − σ²| ≤ (1/2)|σ²_{t+1|t} − σ²|, where k is the number of days, σ²_{t+k|t} is the conditional expected value of the volatility k days into the future, and σ² is the unconditional long-run level of volatility (the mean level to which the unconditional variance eventually reverts). Also, the GARCH(1,1) process is mean reverting if (α_1 + β_1) < 1. Thus, the forecast conditional variance reverts to the unconditional variance as the forecast horizon increases. For k ≥ 2 and a GARCH(1,1) process, the value of σ²_{t+k|t} is given by σ²_{t+k|t} = σ² + (α_1 + β_1)^{k−1}(σ²_{t+1|t} − σ²). From Equation (6) and Equation (7), the number of days k for a GARCH(1,1) process satisfies (α_1 + β_1)^{k−1} = 1/2. Therefore, the half-life of a GARCH(1,1) process is given by τ = 1 + ln(1/2)/ln(α_1 + β_1). The three returns series recorded kurtosis values ranging from 11.1786 to 29.9421, meaning the three returns series are leptokurtic and thus highly volatile. It could be seen that there is evidence of volatility clustering and long memory; that is, high returns tend to be followed by high returns and low returns by low returns, and the volatility of the returns decays slowly. Table 2 shows the normality and autocorrelation tests of the return series. It was realized that the Jarque-Bera test for normality was significant at the 5% level of significance for the three return series, meaning the return series were not normally distributed. The Ljung-Box Q-statistics for the returns and squared returns show evidence of autocorrelation in both series, since Q(30) and Q²(30) were significant at the 5% level of significance. Further Analysis To confirm that the returns series were not normally distributed, the Quantile-Quantile plot was examined. In testing for stationarity, the ADF test was employed. It is evident from Table 3 that the three returns series showed evidence of stationarity at the 5% level of significance. In volatility modelling, it is very important to check for the presence of ARCH effects in the returns series.
To examine the returns series for ARCH effects, the ARCH-LM test was employed. It is evident from Table 4 that all three returns series exhibited ARCH effects at the 5% level of significance at lags 10, 20 and 30, since the P-values are all less than the 0.05 significance level. It is always necessary to know which error distribution to use when performing volatility modelling (Gaussian, Student's-t or Generalized Error Distribution (GED)). Table 5 reports the AIC and BIC after fitting the residuals to a GARCH model. It was realized that all three returns series favoured the student's-t distribution, in line with Salisu et al. (2018), who used the pre-testing of residuals in selecting the appropriate error distribution. A symmetric univariate GARCH model was employed to handle the magnitude of the returns. From Table 6 it is evident that all the returns series were not stationary, since the sum of their α and β coefficients is greater than one, meaning that volatility in these returns series under the GARCH(1,1) model is an explosive process and will not mean revert. Nevertheless, even though the model exhibited non-stationarity, the ARCH-LM test was not significant for all the returns series, meaning further conditional heteroscedasticity was removed from the returns series. Since the GARCH(1,1) exhibited non-stationarity, there was the need to employ another volatility model; hence an asymmetric volatility model, the PGARCH, was considered. The asymmetric GARCH model was employed to model both the magnitude and the sign of volatility in the returns series, and also because the symmetric GARCH model was not stationary for any of the returns series. From Table 7 it could be seen that the PGARCH(1,1) was stationary for all three returns series. Some systematic trends were observed in them, with Bitcoin exhibiting the most systematic trend, followed by Litecoin and then Ripple. The PGARCH(1,1) model was then employed to investigate the half-life volatility measure of the returns series.
From Table 8 it is evident that all three returns series exhibited volatility persistence and long memory, since all three had the sum of their ARCH and GARCH components less than 1. The half-lives of Bitcoin, Litecoin and Ripple were 3 days, 6 days and 4 days respectively. This means that they all have a strong mean-reverting rate and a short half-life. A shock in Bitcoin will take 3 days to return half way back to its mean volatility. A shock in the returns of Litecoin will take 6 days to mean revert without any further volatility. Volatility in Ripple will last for 4 days, after which it will return half way back to its mean without further volatility. For investors on the coin market, it is prudent to stay with coins with strong mean reversion, and thus a short half-life, so as not to suffer much volatility in their portfolios. Therefore, from this paper, it is advisable to stay with Bitcoin, since it has the strongest mean reversion rate and the shortest half-life, followed by Ripple and then Litecoin. Conclusion This paper determined the half-life volatility measure of three cryptocurrencies (Bitcoin, Litecoin and Ripple). Before the volatility modelling, a pre-testing of the residuals was done, in which the student's-t distribution was selected. A symmetric GARCH(1,1) model was employed, but it was realized that it was non-stationary for all the returns series. Therefore, another GARCH-family model was employed: the PGARCH(1,1), which exhibited stationarity for all three returns series. The PGARCH(1,1) was then considered in determining the half-life of the three returns series. The results revealed that the half-life was 3 days, 6 days and 4 days for Bitcoin, Litecoin and Ripple respectively. This shows that the three coins had strong mean reversion and short half-lives. It is therefore prudent for investors to stay with Bitcoin, since it has the shortest half-life.
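Half-lives of the kind reported above follow from the Engle and Patton (2001) relation, where the smallest horizon k with (α_1 + β_1)^(k−1) = 1/2 gives k = 1 + ln(0.5)/ln(α_1 + β_1). A sketch with hypothetical persistence levels (not the paper's estimates):

```python
import math

def half_life(alpha, beta):
    """Volatility half-life of a GARCH(1,1): the horizon k with
    (alpha + beta)**(k - 1) = 1/2, i.e. k = 1 + ln(0.5)/ln(alpha + beta).
    Requires 0 < alpha + beta < 1 (mean reversion)."""
    persistence = alpha + beta
    assert 0.0 < persistence < 1.0, "process must be mean reverting"
    return 1.0 + math.log(0.5) / math.log(persistence)

# Hypothetical parameter pairs: lower persistence -> shorter half-life
hl_fast = half_life(0.10, 0.60)  # persistence 0.70 -> ~2.94 days
hl_slow = half_life(0.10, 0.80)  # persistence 0.90 -> ~7.58 days
```

This is why persistence values close to one, as is typical for financial series, imply long half-lives, while the short 3-6 day half-lives found here indicate unusually fast mean reversion.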
3,099.6
2019-03-13T00:00:00.000
[ "Economics", "Business" ]
Magnetic Microrobotic Swarms in Fluid Suspensions Microrobotic swarms have attracted extensive attention due to their potential in medical and bioengineering applications. Because of the small sizes of swarm agents, integrating actuators, sensors, and control circuits is difficult. Microrobotic swarms in different fluid environments must therefore be actuated and navigated by external physical fields, chemical fuels, or biological power. Magnetic fields have advantages including real-time control, programmability, and high penetrability, and thus they are widely used to actuate magnetic microrobotic swarms. This review summarizes the recent remarkable progress in the magnetic actuation and navigation of magnetic microrobotic swarms. After development and evolution, the design of magnetic agents and the techniques of magnetic actuation and automatic control are now in place. Magnetic microrobotic swarms formed by different agents have been proposed, such as nanoparticles, artificial bacterial flagella, and bacteria. By tuning the applied fields, the morphology, orientation, and position of the swarms can be adjusted on demand, endowing the microrobotic swarms with reconfigurability and motion dexterity. The wireless magnetic actuation systems for microrobotic swarms are introduced, and the characteristics of microrobotic swarms actuated by different customized magnetic fields are described, such as rotating, oscillating, and hybrid fields. The results show that swarm intelligence has been enhanced. Finally, the current challenges and opportunities in this field are discussed. Developments in materials, actuation methods, control strategies, and imaging modalities will transform magnetic microrobotic swarms from the lab to the practical clinic. Introduction Over the past decade, there has been increasing interest in microrobotic swarms at small scales [1]. The swarms are formed by thousands or even millions of small individual agents [2].
Tasks that are challenging for a single agent can be accomplished collectively by swarms, such as targeted delivery and hyperthermia [3]. The imaging contrast is enhanced by using swarms, benefiting image-guided navigation and manipulation [4]. The swarms also have high environmental adaptability, regulating their morphology in response to environmental stimuli [5]. Due to the small sizes of the agents, traditional internal energy storage devices cannot be integrated, and external physical fields have been used to energize the agents wirelessly, such as magnetic, optical, acoustic, and electric fields [6,7]. Chemical fuels and biological power are also exploited to propel the swarms [8]. Magnetic fields, serving as one of the actuation methods, can control the agents in real time, endowing the microrobotic swarms with dexterous and precise control capabilities [1]. Meanwhile, magnetic fields cause negligible effects on human bodies and can penetrate deep tissues, which further expands the applications of microrobotic swarms. Recently, microrobotic swarms driven by magnetic fields have been investigated in different aspects, ranging from the design of actuation methods and motion control algorithms to biomedical applications [9][10][11]. Using biocompatible and biodegradable materials, the cytotoxicity of the swarms is reduced. By tuning the parameters of the applied magnetic fields, the swarm patterns are adjusted to adapt to tortuous environments, and thus hard-to-reach regions become accessible [12]. With effective automatic control methods, the swarms can be propelled to follow desired trajectories [13]. Different imaging modalities, such as magnetic resonance imaging (MRI) [14], fluorescence imaging [15], and ultrasound (US) imaging [16], allow the swarms to be detected in vivo, so that biomedical applications can be accomplished [3].
This review presents the recent progress of microrobotic swarms energized by magnetic fields. The magnetic actuation systems for microrobotic swarms are introduced. Then, we focus on the characteristics of swarms actuated by different magnetic fields, such as rotating, oscillating, and hybrid fields. Finally, we present the existing challenges and future opportunities of microrobotic swarms. Magnetic Actuation Magnetic fields have been widely employed as external power sources for actuating microrobotic swarms. Herein, the mechanisms of magnetic actuation are discussed, and different magnetic actuation systems are presented. Magnetic Actuation Mechanisms When magnetic agents are placed in an external magnetic field, a magnetic torque τ = m × B or a magnetic force F = ∇(m · B) is imparted on them [17, 18•], where m is the magnetic moment of the agents and B is the magnetic flux density at the position of the agents. If the external magnetic field is uniform, the agents experience a magnetic torque when their easy magnetization axes are not aligned with the field direction. Therefore, the agents rotate homogeneously, converting magnetic energy to kinetic energy. When the magnetic agents are subject to a magnetic field gradient, they are attracted by the magnetic force to a region of high magnetic flux density; this force is influenced by multiple factors, including the magnetization and size of the agents and the gradient of the field. Magnetic Actuation Systems Magnetic actuation systems consisting of electromagnetic coils and permanent magnets have been investigated for generating uniform and gradient fields. The magnetic fields generated by electromagnetic coils can be adjusted quickly in both magnitude and direction [19]. Meanwhile, magnetic fields in any direction can be realized by superposing the fields from different coils. As shown in Fig.
1(a), the Helmholtz coil is composed of three pairs of coaxial coils, where the currents flowing in each coaxial pair have the same handedness. The fields generated at the center of the Helmholtz coil are almost uniform, and thus the Helmholtz coil is suitable for magnetic torque control [20]. Apart from static coils, an actuation system consisting of three mobile electromagnetic coils has been developed, named RoboMag (Fig. 1(b)) [21]. The working space of the mobile coils is larger than that of static coils, as the position of the swarms can be tracked, and collisions among the three coils are avoided with optimization algorithms. With permanent magnets as the source of the magnetic fields, actuation systems can provide a large working space and high field strength [19]. A 50-mm-diameter spherical permanent magnet was used to gather magnetic nanoparticles into microrobotic swarms, which were navigated in a porcine coronary artery with flowing blood [22•]. Permanent magnets were also integrated into a rotational platform for actuating swarms, as shown in Fig. 1(c) [23]; the vibration was reduced by containing the magnets in a 3D-printed case. Meanwhile, magnet arrays have been developed with discrete ferromagnets and wooden jigs, operated wirelessly using a motorized platform (Fig. 1(d)) [24]. The desired magnetic potential energy maps are generated by adjusting the external magnet arrays, so that magnetic agents distributed at the fluid-air interface can be attracted to the convergent fields, forming swarms with specific patterns. Although several magnetic actuation systems have been developed, actuation of microrobotic swarms in deep regions is still a challenge, as the field strength and gradient attenuate rapidly with distance [25]. In this case, larger external electromagnets and coil currents are needed, and cooling systems are required to dissipate the heat.
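A minimal numerical sketch of the torque and gradient-force relations τ = m × B and F = ∇(m · B) from the mechanisms discussion; the moment, field, and gradient values below are hypothetical, chosen only to illustrate the directions involved:

```python
import numpy as np

# Hypothetical agent: magnetic moment along +z (A*m^2)
mu = np.array([0.0, 0.0, 1e-12])
# Uniform flux density along +x (T)
B = np.array([5e-3, 0.0, 0.0])

# tau = m x B: the torque (here along +y) tends to rotate m toward B
torque = np.cross(mu, B)

# In a gradient field, F = grad(m . B). With a constant moment m_x and a
# field whose x-component varies only along x, this reduces to
# F_x = m_x * dBx/dx (hypothetical gradient value below, in T/m).
m_x, dBx_dx = 1e-12, 2.0
force_x = m_x * dBx_dx  # force toward the region of higher flux density
```

The sketch mirrors the text: a uniform field exerts only a torque (no net force), while a gradient pulls the agent toward higher flux density.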
Gradient Fields Gradient fields are inhomogeneous fields in which the field strength varies with position, as shown in Fig. 2(a). When magnetic agents are placed in a gradient field, they are driven by the magnetic force. Gradient fields have been used to separate microparticles of 0.2 μm and 1 μm in size in fluids [26]. The 0.2-μm particles, represented by green fluorescence, moved to the lower outlets of a microchannel, while the larger (1 μm) particles, labeled in red, were mainly concentrated at the upper outlets (Fig. 2(b)). In de-ionized water, iron oxide nanoparticles have been assembled into chain-like structures by applying an external gradient field (Fig. 2(c)) [27]. Meanwhile, Hwang et al. gathered magnetic nanoparticles into a microrobotic swarm using a permanent magnet [28]; the suspended nanoparticles were attracted into the swarm by controlling the swarm to follow a preprogrammed trajectory, as shown in Fig. 2(d). Furthermore, multiple magnets can be programmed to generate desired magnetic potential energy maps spatially and temporally. Dong et al. realized static and time-varying generation of microrobotic swarms by programming the external magnetic force distribution [24]. The swarms can pass through tortuous environments by changing their morphology adaptively, and non-magnetic objects of different shapes can be transported with suitable swarm patterns (Fig. 2(e)). Apart from swarms consisting of artificial magnetic agents, living organisms can also be assembled into swarms. (Figure 1 caption, continued: b adapted with permission from [21], Copyright 2020 IEEE; c rotational permanent magnets, adapted with permission from [23], Copyright 2021 IEEE; d permanent magnet arrays, adapted with permission from [24], Copyright 2020 SAGE.) Lanauze et al. achieved the aggregation of the magnetotactic bacteria MC-1 (MTB) based on their internal iron oxide nanoparticle chains [29].
When the magnetic field lines are adjusted toward a convergence point in space, the moving direction of the bacteria aligns with the field lines and their flagella propel them to the convergence regions. Therefore, the MTB swarms can move to different parts of a Petri dish in a controlled manner (Fig. 2(f)).

Rotating Fields

A rotating field is a controllable homogeneous magnetic field with time-varying field direction (Fig. 3(a)). Agents in rotating fields experience a magnetic torque that attempts to align their magnetic components with the field lines, inducing rotational motion [18•]. As shown in Fig. 3(b), Janus particles have been assembled into a swarm by applying rotating magnetic fields [30]. Using the same fields, Yu et al. investigated the generation and locomotion of vortex-like microrobotic swarms consisting of paramagnetic nanoparticles [31]. When the distance between two independent swarms was less than a critical value, coaxial rotation and merging of the two swarms were triggered (Fig. 3(c)). Meanwhile, the vortex-like swarms could be navigated from the left original reservoir to the right target reservoir by passing through a confined channel. Apart from spherical agents, nanoparticles with other shapes have also been investigated for generating microrobotic swarms. As shown in Fig. 3(d), circular swarms formed by peanut-shaped hematite colloidal particles have been reported using external rotating fields [32]. It was demonstrated that the swarms can be controlled to follow a desired trajectory, and that a nonmagnetic microsphere can be manipulated by the swarms collectively. In addition, screw-like artificial bacterial flagella (ABFs) have also been investigated for generating microrobotic swarms. Under rotating fields, rotation about the longitudinal axis of the ABFs is triggered, inducing movement of the ABFs.

(Fig. 2 caption, continued: b Adapted with permission from [26]. Copyright 2021 The Royal Society of Chemistry. c Formation of chain-like structures. Adapted with permission from [27]. Copyright 2008 AIP. d Aggregation and locomotion of a circular swarm using a permanent magnet. Adapted with permission from [28]. Copyright 2019 AAAS. e Collective navigation and cargo transportation of microrobotic swarms. Adapted with permission from [24]. Copyright 2020 SAGE. f Locomotion of MTB swarms in a Petri dish. Adapted with permission from [29]. Copyright 2014 SAGE.)

Microrobotic swarms consisting of ABFs have been reported with the capability of swimming within the intraperitoneal cavity of mice [33]. Wang et al. fabricated surface-modified ABFs with different wettabilities, resulting in changes of the step-out frequency (Fig. 3(e)) [34]. Specifically, when the surfaces of the ABFs were hydrophobic, the step-out frequencies and maximum forward speeds were larger than those of ABFs with hydrophilic surfaces. Therefore, by adjusting the frequency of the applied rotating fields, the swarms can be steered into different branches of a Y-shaped microchannel. Meanwhile, microrobotic swarms have also been formed at the fluid-air interface, apart from fluid-fluid environments. As shown in Fig. 3(f), magnetic propellers were actuated to move upward and reach the fluid-air interface, generating a circular swarm with a distinct boundary [35]. Rotating fields with varying field strength have also been investigated. Yu et al. reported microrobotic swarms formed and actuated by an elliptical rotating field whose field strength changed periodically [36••]. Under the applied elliptical rotating fields, the swarms shrank along the long axis of the field and elongated along the perpendicular direction, forming an elliptical pattern (Fig. 3(g)). The elliptical swarms could be transformed back into vortex-like swarms by adjusting the elliptical fields to circular rotating fields.
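The step-out behavior exploited in the Y-channel experiment can be illustrated with the standard model of a magnetic rotor in a viscous fluid: the rotor follows the field synchronously up to a step-out frequency set by the torque-to-drag ratio, and slips above it. This is a generic sketch with assumed parameters, not the authors' model:

```python
import math

def mean_rotor_speed(omega_field, omega_stepout, t_end=100.0, dt=1e-3):
    """Integrate d(theta)/dt = omega_stepout * sin(omega_field*t - theta)
    (magnetic torque balanced by viscous drag) and return the rotor's
    average angular speed (rad/s) over the simulation window."""
    theta = 0.0
    for i in range(int(t_end / dt)):
        theta += omega_stepout * math.sin(omega_field * i * dt - theta) * dt
    return theta / t_end

OMEGA_SO = 10.0  # assumed step-out frequency (rad/s)
print(mean_rotor_speed(5.0, OMEGA_SO))   # below step-out: rotor keeps up (~5 rad/s)
print(mean_rotor_speed(20.0, OMEGA_SO))  # above step-out: rotor slips, far below 20 rad/s
```

Because surface wettability changes the effective drag, and hence the step-out frequency, driving the field just above one swarm's step-out frequency but below another's allows the two to be steered independently.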
In addition, rotating fields with two angles, a precession angle and a tilt angle, have been developed, named precessing fields [37].

(Fig. 3 Microrobotic swarms driven by rotating fields. a The schematic illustration of a rotating field. b A circular swarm formed by Janus particles. Adapted with permission from [30]. Copyright 2015 Wiley-VCH GmbH. c Merging of vortex-like swarms and navigation of the swarm in a curved channel. Adapted with permission from [31]. Copyright 2018 SAGE. d Swarm formation using peanut-shaped agents and cargo manipulation using the swarms. Adapted with permission from [32]. Copyright 2019 AAAS. e Artificial bacterial flagella swarms moving into different branches of a Y-shaped microchannel. Adapted with permission from [34]. Copyright 2018 American Chemical Society. f A circular swarm at the fluid-air interface. Adapted with permission from [35]. Copyright 2017 IOP. g Reconfiguration between elliptical swarms and vortex-like swarms. Adapted with permission from [36••]. Copyright 2021 IEEE. h Navigation of microrobotic swarms actuated by precessing magnetic fields. Adapted with permission from [37]. Copyright 2020 The Royal Society of Chemistry.)

Using the precessing fields, the microparticle chains in swarms were actuated to rotate around the precession axis [37]. By tuning the tilt angle and the precession angle, the swarms showed different locomotion behaviors with trochoidal trajectories, and trajectory tracking of the swarms was achieved (Fig. 3(h)). Using the same precessing fields, the dynamic assembly of magnetic droplets containing microparticle chains was accomplished at the fluid-air interface [38]. Expansion and shrinking of the swarms were accomplished, and the swarms were capable of moving with cargoes. Meanwhile, peanut-shaped microparticles were energized to tumble by applying precessing fields, forming ribbon-like swarms subject to hydrodynamic and magnetic interactions [32].
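The field geometries in this section, a circular rotating field and a precessing field rotating on a cone, can be written down compactly, together with the magnetic torque τ = m × B that drives the agents. This is an illustrative parameterization only; the cited works may define the angles differently:

```python
import math

def rotating_field(b0, omega, t):
    """Circular rotating field of constant magnitude b0 in the x-y plane."""
    return (b0 * math.cos(omega * t), b0 * math.sin(omega * t), 0.0)

def precessing_field(b0, omega, t, cone_angle):
    """Field of magnitude b0 rotating on a cone of half-angle cone_angle about z."""
    s, c = math.sin(cone_angle), math.cos(cone_angle)
    return (b0 * s * math.cos(omega * t), b0 * s * math.sin(omega * t), b0 * c)

def torque(m, b):
    """Magnetic torque tau = m x B acting on a dipole moment m."""
    return (m[1] * b[2] - m[2] * b[1],
            m[2] * b[0] - m[0] * b[2],
            m[0] * b[1] - m[1] * b[0])

# A moment along x in a field along y feels a torque about +z, which is
# what continuously re-aligns agents with a rotating field.
print(torque((1.0, 0.0, 0.0), rotating_field(1.0, 1.0, math.pi / 2)))
```

Note that both field types keep the magnitude constant; only the direction (and, for the precessing field, the cone geometry) changes in time.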
Multiple microspheres were manipulated by the swarms over a large area synchronously.

Oscillating Fields

Oscillating magnetic fields are programmed with a periodic change of field strength or direction. The 1-D oscillating magnetic field is shown in Fig. 4(a); its field strength changes along one direction. Using the 1-D oscillating field, a snake-like microrobotic swarm was formed at the fluid-air interface with the capability of swimming along a straight line (Fig. 4(b)) [39]. The 2-D oscillating magnetic field is designed in a plane with simultaneously changing field strength and direction, as shown in Fig. 4(c). Between two immiscible liquids, i.e., a saturated aqueous solution of Na2SO4 and silicone oil, nickel microparticles were assembled into aster-like swarms by applying 2-D oscillating fields [40]. It was demonstrated that the size of the swarms increased when the applied frequency decreased.

(Fig. 4 caption, continued: b Adapted with permission from [39]. Copyright 2009 APS. c The schematic illustration of a 2-D oscillating field. d Elongation, shrinking, and locomotion of ribbon-like swarms actuated by the 2-D oscillating fields. Adapted with permission from [41]. Copyright 2018 Springer Nature. e Pattern reconfiguration of nanorod swarms compared with nanoparticle swarms actuated by the 2-D oscillating fields. Adapted with permission from [42]. Copyright 2021 American Chemical Society. f The schematic illustration of a 3-D oscillating field. g Locomotion of a carpet-like swarm actuated by the 3-D oscillating fields. Adapted with permission from [43]. Copyright 2019 Springer Nature. h Swarm spreading actuated by the 3-D oscillating fields. Adapted with permission from [44]. Copyright 2020 Elsevier. i Pattern transformation between a ribbon-like swarm and a vortex-like swarm for passing through a narrow channel. Adapted with permission from [45]. Copyright 2021 Wiley-VCH GmbH.)

Yu et al.
showed the reversible elongation and shrinking of ribbon-like swarms consisting of nanoparticles driven by the 2-D oscillating fields (Fig. 4(d)) [41]. Locomotion of the swarms was induced when the field plane was tilted by a small pitch angle relative to the substrate, and splitting of the swarms could be triggered on demand. Using the same 2-D oscillating fields, a swarm of nickel nanorods was proposed and compared with the ribbon-like swarms formed by spherical nanoparticles [42]. When the field ratio was increased, the nanorod swarms elongated, while the aspect ratios of the nanoparticle swarms were reduced (Fig. 4(e)). Furthermore, the 3-D oscillating magnetic field is created by the combination of rotating and oscillating fields (Fig. 4(f)). Massana et al. investigated the generation and movement of carpet-like swarms formed by colloidal rollers using a 3-D oscillating field [43]. As the agents were coupled to the substrate, their rotational motion was transformed into translation, endowing the swarms with locomotion capability (Fig. 4(g)). Apart from gathering agents, it has also been demonstrated that the 3-D oscillating field can be designed to disassemble and spread microrobotic swarms based on hydrodynamic drag and magnetic interactions (Fig. 4(h)) [44]. Meanwhile, microrobotic swarms can be reconfigured by switching the applied fields. When moving in a confined environment and encountering a narrow channel, a vortex-like swarm was transformed into a ribbon-like swarm to adapt to the environment by changing the rotating fields to oscillating fields (Fig. 4(i)) [45]. After passing the narrow region, the swarm pattern was transformed back to the circular shape for smooth steering motion.

Hybrid Fields

Hybrid fields are combinations of magnetic fields and other fields, such as optical fields and acoustic fields.
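The three oscillating-field types described above differ only in how their components vary in time. A sketch of one possible parameterization for each; these functional forms are assumptions for illustration, with the "field ratio" taken here as the oscillating-to-static amplitude ratio:

```python
import math

def field_1d(b0, omega, t):
    """1-D oscillating field: strength varies sinusoidally along a fixed axis."""
    return (b0 * math.sin(omega * t), 0.0, 0.0)

def field_2d(b_osc, b_static, omega, t):
    """2-D oscillating field: an in-plane sinusoid plus a constant perpendicular
    component, so field strength and direction change simultaneously."""
    return (b_osc * math.sin(omega * t), b_static, 0.0)

def field_3d(b_rot, b_osc, omega_rot, omega_osc, t):
    """3-D oscillating field: in-plane rotation combined with an out-of-plane
    oscillation (rotating field + oscillating field)."""
    return (b_rot * math.cos(omega_rot * t),
            b_rot * math.sin(omega_rot * t),
            b_osc * math.sin(omega_osc * t))
```

With field_2d, increasing b_osc relative to b_static swings the field vector through a wider angle each cycle, which is the knob varied in the nanorod-versus-nanoparticle comparison.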
One power source is typically used to induce swarm generation, while the other drives locomotion and reconfiguration according to task requirements. As shown in Fig. 5(a), paramagnetic nanoparticles were energized by magnetic fields, forming a microrobotic swarm on the substrate; the swarm then swirled up and was transformed into a tornado-like swarm upon exposure to light [46]. Palacci et al. realized the formation of swarms consisting of photoactivated colloidal particles when a blue light was turned on (Fig. 5(b)) [47]. After magnetic fields were applied, the swarms were propelled to move along the direction of the magnetic field with stable patterns (Fig. 5(b)), and the swarms dissolved when only the magnetic fields were applied. Furthermore, the combination of magnetic and acoustic fields has also been investigated for actuating microrobotic swarms. Ahmed et al. reported the aggregation of superparamagnetic particles under applied magnetic fields [48], and rolling motion of the swarms near and far away from boundaries by using acoustic fields (Fig. 5(c)).

(Fig. 5 caption, continued: Copyright 2020 American Chemical Society. b Formation and locomotion of polymer microparticles driven by magnetic and optical fields. Adapted with permission from [47]. Copyright 2013 AAAS. c Assembly of magnetic microparticles and swarm locomotion actuated by magnetic and acoustic fields. The right image shows the disassembly of swarms when turning off the magnetic field. Adapted with permission from [48]. Copyright 2017 Springer Nature. d Microrobotic swarms actuated by magnetic and acoustic fields. The left image shows the movement of the agents in blood, where the blue and red lines are the trajectories of the agents by applying magnetic fields and acoustic fields, respectively. The right three images show the formation and locomotion of the swarms driven by hybrid fields. Adapted with permission from [49]. Copyright 2015 American Chemical Society.)
It was also demonstrated that the swarms dissociated without the magnetic fields (Fig. 5(c)). Combining magnetic and acoustic fields, agents containing a helical structure and a nanorod were also actuated in blood (Fig. 5(d)) [49]. Directional motion of the swarms was observed when the acoustic fields were turned off, and the swarms re-formed after applying the acoustic fields again (Fig. 5(d)). Despite the remarkable achievements of microrobotic swarms driven by magnetic fields, swarm intelligence should be enhanced to expand their working dimensions. The design of magnetic agents, the formation of swarms, and magnetic actuation systems require further investigation.

Conclusion

To date, the research progress of magnetic microrobotic swarms has demonstrated that they can perform complex missions efficiently. This review summarizes the recent developments of microrobotic swarms driven by magnetic fields in fluid suspensions. Different wireless magnetic actuation systems are introduced, and the swarms actuated by different magnetic fields are summarized in Table 1. The swarms have been endowed with different capabilities, such as active locomotion, pattern reconfiguration, and image-guided navigation. However, there are still restrictions hindering further applications of microrobotic swarms, and breakthroughs are required. Aiming at potential biomedical applications, precisely controlled locomotion of microrobotic swarms in biological fluids is needed [50]. Physical features of bio-fluids, such as macromolecules and environmental boundaries, will influence swarm behaviors and shall be taken into account when the swarm actuation method is designed [51]. Another key issue that should be solved is the actuation and navigation of microrobotic swarms in 3-D environments, in order to guarantee controllable mobility in in vivo environments.
The influence of gravity on the generation and locomotion of swarms should be overcome. In addition, the cytotoxicity of microrobotic swarms is an important concern. Materials with promising biocompatibility should be chosen [52] in order to reduce cytotoxicity toward normal cells. The swarms should also be biodegradable to weaken negative effects on human health after accomplishing the demanded tasks [53,54]. The in vivo localization of microrobotic swarms through real-time imaging is critical for the implementation of precise feedback control. Although swarms have been navigated in vivo with vision feedback [23,55], the current imaging modalities, such as fluorescence imaging, photoacoustic imaging, MRI, ultrasound (US), and X-ray fluoroscopy, still have limitations. Specifically, fluorescence imaging and photoacoustic imaging can only be used for swarm localization in shallow tissues [3]. When using MRI, image artifacts caused by magnetic materials have negative impacts on the accurate position tracking of swarms [56]. The imaging quality of US is influenced by unrelated objects in living bodies, such as bone and air pockets [57]. X-ray fluoroscopy is hazardous for living bodies under long-period observation [1]. Therefore, the spatiotemporal resolution of imaging methods needs to be enhanced. Combining multiple imaging modalities and integrating machine learning techniques could provide better imaging results. Microrobotic swarms have also been exploited for environmental remediation and pollution control [58]. Compared with the conventional passive diffusion of solutes, controllable swarms with locomotion capability bring more effective solutions to decontamination [59]. For instance, by triggering larger flow velocities and rotating fluid flows, magnetic biohybrid swarms can adsorb and remove heavy metal ions from contaminated water with enhanced efficiency [60].
Under rotating magnetic fields, organic pollutants were also degraded faster using helical swarms owing to the increased interaction volume [61]. Although different swarms have been proposed for environmental protection, some important issues remain to be addressed. The swarms could themselves become sources of pollution by releasing organic molecules and inorganic ions into the environment [62]. Batch fabrication of agents can be hindered by complicated fabrication and post-processing procedures. Meanwhile, swarms should be recyclable to decrease destructive impacts on the environment. With the development of multidisciplinary technologies, magnetically driven microrobotic swarms have become promising platforms for biomedical applications. Efforts on material design, magnetic actuation methods, and control strategies are still required in order to improve the controllability, reconfigurability, and functional versatility of swarms. We envision that microrobotic swarms are marching from labs ever closer to practical applications.
The Bcl-2 Family: Ancient Origins, Conserved Structures, and Divergent Mechanisms

Intrinsic apoptosis, the response to intracellular cell death stimuli, is regulated by the interplay of the B-cell lymphoma 2 (Bcl-2) family and their membrane interactions. Bcl-2 proteins mediate a number of processes including development, homeostasis, autophagy, and innate and adaptive immune responses, and their dysregulation underpins a host of diseases including cancer. The Bcl-2 family is characterized by the presence of conserved sequence motifs called Bcl-2 homology motifs, as well as a transmembrane region, which form the interaction sites and the intracellular location mechanism, respectively. Bcl-2 proteins have been recognized in the earliest metazoans including Porifera (sponges), Placozoa, and Cnidaria (e.g., Hydra). A number of viruses have gained Bcl-2 homologs and subvert innate immunity and cellular apoptosis for their replication, but they frequently have very different sequences from their host Bcl-2 analogs. Though most mechanisms of apoptosis initiation converge on activation of caspases that destroy the cell from within, the numerous gene insertions, deletions, and duplications during evolution have led to a divergence in mechanisms of intrinsic apoptosis. Currently, the action of the Bcl-2 family is best understood in vertebrates and nematodes, but new insights are emerging from evolutionarily earlier organisms. This review focuses on the mechanisms underpinning the activity of Bcl-2 proteins, including their structures and interactions, and how they have changed over the course of evolution.

Introduction

Apoptosis, or programmed death of cells, has played a significant role in metazoan evolution and prioritizes the organism over individual cells [1,2]. One form of apoptosis, intrinsic or mitochondrially regulated apoptosis, is initiated by a range of intra- and extracellular stimuli to regulate developmental and homeostatic processes [3].
The genes most closely associated with intrinsic apoptosis are the B-cell lymphoma 2 (Bcl-2) family and have been identified in the basal clades of metazoans, including Porifera (sponges), Cnidaria (anemones, corals, jellyfish), and Placozoa [4]. In mammals, these genes regulate the integrity of mitochondria, where they either initiate the release of apoptogenic factors or prevent this process from occurring (Figure 1). The threshold for cell fate is mediated by antagonism between prosurvival and proapoptotic members of the Bcl-2 family [5], and this fundamental interaction is conserved from sponges [6] to man [7]. Evolutionary gene losses have led to simplification of this process in some organisms, such as insects and nematodes (Figure 1b). Viruses have also acquired Bcl-2 homologs.

The Bcl-2 proteins fold to form a distinct helical bundle structure where the core of the α-helical bundle is composed of a central hydrophobic helix (helix α5) that forms a scaffold for packing up to eight α-helices (Figures 2 and 3). In a feature maintained from sponges to man [9], the Bcl-2 fold brings the BH regions into close proximity to assemble the canonical BH3-binding groove where an antagonist BH3 motif binds (Figure 3b,c). Whilst this "in-groove" interaction mechanism appears to be the primary mode of interaction for Bcl-2-mediated control of apoptosis, alternative modes have been proposed, including a site spanning helices α1 and α6 [87] and the BH4 motif [88]. Furthermore, nonapoptotic roles including modulation of NF-κB signaling are also not mediated via an in-groove mechanism [89]. BH motifs are recognizable from the earliest metazoan Bcl-2 proteins but may be absent in viral proteins. The sequence signatures of each of the four BH motifs (BH1-BH4) differ (Figure 2c); the motifs are found in the order BH4, BH3, BH1, BH2 from the N-terminus (Figure 2a), and for prosurvival proteins they are normally located on the same exon, while the gene structure of the proapoptotic proteins is more complex.
The presence of a BH3 motif is a key feature of the proapoptotic proteins and is required for their proapoptotic activity [9,90], whereas some of the prosurvival proteins do not feature the BH3 motif. In addition to the BH motifs, many Bcl-2 proteins bear a C-terminal transmembrane (TM) region that is located on a separate exon. The TM region targets these proteins to intracellular membranes including the nuclear envelope, mitochondrial inner and outer membranes, Golgi apparatus, lysosomes, ER, and peroxisomes [91]. However, it is Bcl-2 family action at the mitochondrial outer membrane (MOM) that is the most central mechanistic feature of intrinsic apoptosis. Both the gene structure and the synteny of Bcl-2 proteins are well conserved across phyla [92]. Our current understanding of intrinsic apoptosis is mainly derived from investigations of mouse, human, nematode (Caenorhabditis elegans), and fly (Drosophila melanogaster) apoptosis. These studies have shown that the Bcl-2 family consists of two phylogenetically distinct groups of proteins: those that share the Bcl-2 fold [90,93], which are either prosurvival or proapoptotic, and the proapoptotic intrinsically disordered 'BH3-only' proteins, which bear only the BH3 motif [9]. The BH3-only proteins are upregulated in response to diverse apoptotic stimuli [94] and their principal role is to antagonize the prosurvival proteins [9], but apoptosis also occurs in their absence [95]. Notwithstanding the conservation in the Bcl-2 family, there are substantial differences in Bcl-2-regulated apoptosis mechanisms. In mammals, there are nine multimotif Bcl-2 paralogs (six prosurvival: Bcl-2, Bcl-xL, Bcl-w, Mcl-1, A1, and Bcl-B; three proapoptotic: Bax, Bak, and Bok) and eight BH3-only proteins (Bim, Bad, Bmf, Bid, Bik, Noxa, Puma, Hrk) (Table 1) that regulate intrinsic apoptosis through a network of specific binding interactions that control the integrity of the MOM [7] (Figure 1).
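As a toy illustration of how a BH3 motif's sequence signature can be recognized, the sketch below scans protein fragments for a deliberately simplified BH3 core pattern: a conserved leucine with a small residue plus aspartate four positions downstream. This pattern is an assumption for demonstration, not a validated motif definition, and it misses some genuine BH3 motifs (e.g., Bad's):

```python
import re

# Simplified BH3 core: Leu, any three residues, then a small residue and Asp.
# Toy pattern for illustration only -- real BH3 definitions are richer
# (hydrophobic positions h1-h4 plus the conserved Asp).
BH3_CORE = re.compile(r"L.{3}[GAS]D")

def find_bh3_like(sequence):
    """Return (start_index, matched_core) for the first hit, or None."""
    m = BH3_CORE.search(sequence)
    return (m.start(), m.group()) if m else None

# Fragments spanning the published BH3 regions of human Bak and Bax
print(find_bh3_like("GQVGRQLAIIGDDINR"))  # Bak: matches 'LAIIGD'
print(find_bh3_like("KKLSECLKRIGDELDS"))  # Bax: matches 'LKRIGD'
```

Real motif annotation relies on profile-based methods and structural context rather than a single regular expression, but the sketch conveys why the conserved leucine and aspartate make BH3 motifs recognizable across such divergent sequences.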
A key step is Bax, Bak, or Bok oligomerization at the mitochondrial membrane, which results in the formation of membrane pores that release cytochrome c to activate the caspase cascade. In contrast to mammals, MOM permeabilization (MOMP) and cytochrome c do not play a role in the initiation of apoptosis in the ecdysozoans C. elegans and D. melanogaster [7,42,96] (Figure 1). Combined, these results indicate that the interactions of the Bcl-2 proteins have been maintained over the course of evolution. The role of prosurvival Bcl-2 proteins is not limited to the regulation of apoptosis; other functions have been proposed in processes as divergent as autophagy, calcium homeostasis regulation, and metabolism [97][98][99]. The most well-understood of these nonapoptotic roles at a molecular level is that of autophagy. Both Bcl-2 and Bcl-xL are able to bind the autophagy regulator Beclin-1 via a mechanism closely mimicking the canonical in-groove interaction with BH3-only proteins [49,100,101]. Beclin-1 has significant differences from the BH3-only proteins. In addition to being much longer (450 residues) than a typical human BH3-only protein, Beclin-1 has its BH3-like motif located in an unstructured N-terminal region [102], with the BH3 motif spread over the junction of two exons. In addition to its unstructured N-terminal region, Beclin-1 bears a coiled-coil domain and a folded evolutionarily conserved domain [103]. The spread of the BH3 motif over two exons differentiates Beclin-1 from the BH3-only proteins, where, apart from Bid, the BH3 occurs in the second-to-last exon. The molecular basis of Bcl-2 protein involvement in nonapoptotic functions remains to be delineated.
(Figure 3 caption fragment: Monomeric N1 is shown in the same orientation as in (a), and the functionally relevant dimer is shown rotated by 90° around the vertical axis. In (a), the extent of the BH motifs is indicated as ribbon colored as in Figure 2a, and the helices α1-α8 are also indicated. The structures were aligned on human Bcl-xL, and the orientation for all structures is the same as that in (a). The N and C termini are indicated.)

Dysfunctional apoptosis is one of the hallmarks of diseases such as cancer [104], and metazoans have coevolved with this disease [105]. While neoplasms in many vertebrates are well known [105], they have also been discovered in the early metazoans H. vulgaris [106], the coral Acropora palmata [107], and molluscs [108], and, in the case of hydra, occur as a result of dysregulation of apoptosis [106]. While some cancers are unique to a species, others occur across multiple species [109], and resistance to apoptosis is likely to be a central feature. There is strong interest in developing a molecular understanding of Bcl-2 function, interactions, and structures [7], but currently there are only a limited number of studies on early metazoan Bcl-2-regulated apoptosis [6,110]. However, these initial studies strongly suggest conservation of structures and mechanisms across metazoan history. Exactly how intrinsic apoptosis is manifested at a molecular level varies according to the organism, but all mechanisms rely on loss of prosurvival activity to initiate apoptosis. Consequently, there has been a drive to explore the interactions of this family and elucidate the network of functions they regulate.
Virus-Encoded Bcl-2 Homologs

The importance of the Bcl-2 family in homeostatic regulation has been exploited by viruses, with many viral genomes containing a Bcl-2 protein, and in some instances multiple Bcl-2 proteins [8]. Sequence, structural, and functional homologs of Bcl-2 are found in Herpesviridae as well as in Nucleocytoplasmic Large DNA Viruses (NCLDVs) such as Asfarviridae and Iridoviridae [8]. Many of these virus-encoded Bcl-2 family members display substantial differences in sequence (Figure 2a,b), interaction profile, and overall structure compared with their mammalian proapoptotic Bcl-2 family counterparts, owing to the more rapid pace at which these proteins have evolved as part of a host-pathogen interface [8,111]. The first viral Bcl-2 homologs were identified in adenovirus [112] and the γ-herpesvirus Epstein-Barr virus (EBV) [44]. Adenoviral E1B19K was shown to be a potent inhibitor of apoptosis and could be interchanged with Bcl-2 during cellular transformation [112]. Whilst the vast majority of virus-encoded apoptosis-regulatory Bcl-2 proteins act by utilizing the canonical ligand-binding groove to sequester proapoptotic Bcl-2 family members, it has become apparent that this is not the sole mechanism utilized. In addition to binding proapoptotic proteins, viruses may target host prosurvival Bcl-2 proteins through the BH3-binding groove in a manner similar to, but not identical to, a BH3 motif; Hepatitis B virus X protein was shown to engage the groove of Bcl-xL, allowing viral replication to proceed [113].

Bcl-2 Homologs Encoded by Herpesviridae

Numerous Herpesviridae encode Bcl-2-like proteins, such as BHRF1 from EBV, one of the earliest identified viral Bcl-2 homologs. BHRF1 adopts the classical Bcl-2 fold and utilizes the canonical ligand-binding groove to engage proapoptotic BH3 motif ligands [45,114].
BHRF1 was shown to prolong the survival of cells [44,115], which is linked to its ability to engage the proapoptotic Bcl-2 members Bim [116] and Bak [114]. An unusual herpesviral Bcl-2 homolog, M11, is found in murine γ-herpesvirus 68 (γHV68) [117]. M11 is a potent inhibitor of TNFα- and Fas-induced apoptosis and was shown to bind multiple proapoptotic Bcl-2 proteins including Bim, Bak, and Bax [101]. However, M11 also binds the autophagy regulator Beclin-1, which bears a BH3-like motif, with nanomolar affinity (KD = 40 nM) via the canonical ligand-binding groove [101]. Indeed, functional studies suggest that autophagy may be the primary cell death pathway targeted by M11 [49]. Although the majority of herpesvirus-encoded Bcl-2 proteins target intrinsic apoptosis, γHV68-encoded M11 clearly shows that other cell death pathways such as autophagy can also be viable targets. Indeed, M11 is not an exception: adenoviral E1B19K was shown to be an autophagy inhibitor via engagement of Beclin-1 [118], as was the asfarvirus African swine fever virus (ASFV) A179L (see below).

Poxvirus Bcl-2 Homologs

The Poxviridae are a large family of viruses amongst the NCLDVs, comprising numerous genera, that are characterized by their relatively large genomes (130-360 kb) that frequently encode functional and structural homologs of Bcl-2. Most notable for human disease among the poxviruses are Variola virus, the causative agent of smallpox, and Vaccinia virus, which provides the basis for the smallpox vaccine. Vaccinia virus (VACV) is the prototypical orthopoxvirus and encodes prosurvival F1L. VACV F1L is a potent inhibitor of intrinsic apoptosis but displays no detectable sequence identity with mammalian Bcl-2 [119,120]. Nevertheless, structural studies revealed that VACV F1L adopts a Bcl-2 fold, albeit with a previously unobserved domain-swapped topology that renders VACV F1L a constitutive dimer [53,121].
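To put the reported nanomolar affinity in perspective, a simple 1:1 binding model shows what a KD of 40 nM implies for occupancy. This is a back-of-the-envelope sketch assuming the ligand is in large excess, not an analysis from [101]:

```python
def fraction_bound(ligand_nm, kd_nm):
    """Equilibrium fraction of receptor bound for 1:1 binding with the
    ligand in large excess: fb = [L] / ([L] + Kd)."""
    return ligand_nm / (ligand_nm + kd_nm)

KD_M11_BECLIN1 = 40.0  # nM, as reported for the M11:Beclin-1 interaction
for conc in (4.0, 40.0, 400.0):
    print(f"[ligand] = {conc:g} nM -> {fraction_bound(conc, KD_M11_BECLIN1):.0%} bound")
```

At the KD the receptor is half-occupied, and a tenfold excess over KD already gives roughly 90% occupancy, consistent with tight sequestration of Beclin-1.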
This unusual topology is paired with a remarkably restricted ligand-binding profile, with VACV F1L engaging only Bim, Bak, and Bax. Interestingly, similar domain swapping was subsequently also observed during Bax and Bak oligomerization [122], suggesting that the structural plasticity observed amongst the virus-encoded Bcl-2 proteins is also pivotal for the function of metazoan family members. In the context of a live viral infection, F1L inhibits Bim to prevent premature host cell apoptosis [121] and replaces Mcl-1 activity [123]. An extended unstructured N-terminal section preceding the Bcl-2 fold [124] may be involved in apoptosis regulation, but results are conflicting [124][125][126]. Despite being closely related to VACV F1L, the Variola virus (VAR) F1L homolog does not bind Bim; instead it binds Bid, Bak, and Bax [127], and only counters Bax-mediated apoptosis. The domain-swapped Bcl-2 topology is not restricted to the orthopoxviruses, with deerpox virus-encoded DPV022 [55] also adopting this unusual fold [54]. Amongst the leporipoxviruses, myxoma virus encodes the antiapoptotic M11L (Figure 3d), a potent inhibitor of intrinsic apoptosis [128]. Despite lacking detectable sequence identity to cellular Bcl-2 or Bcl-xL, M11L adopts a Bcl-2 fold [57,129] (Figure 2a,b and Figure 3g,h) and sequesters Bax and Bak to prevent apoptosis [57], unlike VACV F1L, which operates via Bim neutralization [121]. Other poxvirus-encoded vBcl-2 members include fowlpox virus FPV039 [58,59], canarypox virus CNPV058 [61], sheeppox virus [62], and orf virus ORFV125 [63]. Outside the Herpesviridae and Poxviridae, ASFV encodes A179L [130], a Bcl-2 homolog that uses the canonical ligand-binding groove to engage all major host proapoptotic Bcl-2 members [51] as well as Beclin-1 [131], thus acting as a dual apoptosis/autophagy inhibitor [50,132].
Amongst the Iridoviridae, grouper iridovirus (GIV) harbors prosurvival GIV66 [65], which only binds Bim [64] and forms a novel noncovalent dimeric Bcl-2 architecture that leads to an occluded ligand-binding groove [64], with dimer dissociation occurring upon Bim binding. While numerous Poxviridae encode Bcl-2 homologs that inhibit apoptosis, it has become apparent that another subset of poxviral Bcl-2 proteins exists that also modulates other functions. This group includes VACV N1 which, like M11L, shares little sequence identity with mammalian Bcl-2 proteins (Figure 2b) but is a structural homolog (Figure 3g). N1 is a dual inhibitor of intrinsic apoptosis and NF-κB signaling that adopts a dimeric Bcl-2 fold (Figure 3h) [67,133], in which an additional interaction site enables modulation of NF-κB signaling independently of the canonical Bcl-2 groove [89]. Other VACV-encoded NF-κB-modulatory Bcl-2 proteins include A46 [68,134], A49 [70], A52 and B14 [72], and K7 [75]. Despite all targeting NF-κB signaling, substantial structural and mechanistic differences are evident across this group of Bcl-2 proteins. Although A52 and B14 utilize helices α1 and α6 to form a similar interface [72] as N1, the angle of orientation between the constituent monomers varies between the three proteins. Intriguingly, while apoptosis-inhibitory Bcl-2 members are found in the Herpes-, Pox-, Asfar-, and Iridoviridae, more specialist functions are not widely found. Although several herpesviruses as well as ASFV harbor Bcl-2 homologs with autophagy-inhibitory activity, no poxvirus has been shown to inhibit autophagy via a Bcl-2 homolog. Conversely, the NF-κB-inhibitory activity found in poxvirus-encoded Bcl-2 homologs is not found outside the Poxviridae. Whether or not these differences are attributable to the unique and fundamentally different life cycles and primary sites of infection remains to be established.
These findings differentiate the viral Bcl-2 proteins from those in metazoans but indicate the diversity of interactions possible with the Bcl-2 fold. The Nonmammalian Bcl-2 Family Of the four nonbilaterian basal clades of metazoans, Porifera, Placozoa, Cnidaria, and Ctenophora, multiple orthologous and paralogous Bcl-2 family members have been discovered in the genomes of organisms from Porifera, Placozoa, and Cnidaria, but have not yet been identified in ctenophores. In contrast to higher organisms and viruses, experimental evidence for the function of the Bcl-2 family in basal metazoans is relatively sparse. Recent sequence, structural, and biochemical evidence gained from poriferan [6], placozoan [78], and cnidarian [110] Bcl-2 family members is elucidating the mechanisms of apoptosis in basal metazoans. Furthermore, these results strongly suggest that the molecular basis of intrinsic apoptosis, determined by the structures, interactions, and intracellular localization of Bcl-2 proteins, was foundational in metazoan evolution. Sponges are currently considered the sister group to all other metazoans [135,136], and multiple Bcl-2 family proteins have been discovered in members of this phylum; for example, the genome of Amphimedon queenslandica contains seven potential Bcl-2 proteins [137], though little is known of their function. The demosponge Lubomirskia baicalensis harbors the putative prosurvival and proapoptotic Bcl-2 proteins LB-Bcl-2 and LB-Bak-2 [76], and two Bcl-2 proteins, BHP1 and BHP2, have been identified in the sponge Geodia cydonium [77]. Structural and biochemical studies on BHP2 showed that a peptide derived from the BH3 region of L. baicalensis Bak-2 bound in the groove of BHP2, and that many of the molecular features elucidated in mammalian Bcl-2 interactions were maintained [6].
Though the topology of BHP2 closely resembles those of other Bcl-2 proteins (Figures 2 and 3), a structure-phylogenetic analysis revealed relatively subtle differences suggesting that BHP2 has unique binding features when compared to mammalian and viral Bcl-2 proteins [6]. These findings indicate not only the conservation of structure but also the evolutionary conservation of the intermolecular BH3-motif-in-groove interaction between prosurvival and proapoptotic Bcl-2 proteins from sponges to man. The placozoan Trichoplax adhaerens has four putative Bcl-2 fold proteins in its genome [138], including two putative proapoptotic proteins, Bax (trBcl-2L3 or trBax) and Bak (trBcl-2L4 or trBak), and two prosurvival proteins, trBcl-2L1 and trBcl-2L2 [78]. TrBax is inhibited by human Bcl-2, suggesting that the BH3-in-groove interaction is conserved. The putative role of trBak in T. adhaerens is somewhat different from that of Bak in humans: it antagonizes the prosurvival activity of trBcl-2L1 and trBcl-2L2 rather than inducing cytochrome c loss from mitochondria, and it has thus been hypothesized that trBak effectively adopts the role played by BH3-only proteins in mammals; however, further investigation is required to establish this proposal, as no detailed interaction studies were undertaken. As in the case of G. cydonium, the underlying conservation of the Bcl-2:Bax interaction was demonstrated by the inhibition of trBax by human prosurvival proteins. The BH3-only proteins play a key role in mammalian apoptosis, where they antagonize the action of prosurvival proteins (Figure 1), but their presence has not been detected in the genomes of Porifera or Placozoa. However, candidates for BH3-only proteins have been detected in the cnidarian Hydra vulgaris [139,140], in addition to prosurvival and proapoptotic proteins sharing the Bcl-2 fold. Potential BH3-only proteins have been identified in H. vulgaris using a yeast two-hybrid screen [110].
The relatively short sequence of the BH3 motif, with essentially only a highly conserved Leu and an absolutely conserved Asp four residues downstream, has made it difficult to identify bona fide BH3-only sequences from sequence alone, making biochemical verification necessary [9,141]. In addition to the four proposed BH3-only proteins in H. vulgaris, there are nine putative Bcl-2 family members, including two Bak-like and seven Bcl-2-like sequences [110]. Although further studies are required to establish the exact functional relationships for these proteins, the findings of Lasi et al. [110] point to a complex signaling network for the Bcl-2 proteins even in the earliest of metazoans. The genetic [96] and molecular and structural [79,142] foundations of Bcl-2-regulated apoptosis were established in the ecdysozoan Caenorhabditis elegans (Figures 1, 2a and 3f). Since these discoveries, the basis of prosurvival, proapoptotic, and BH3-only protein interaction has been verified in other organisms. The genomes of the lophotrochozoans Schmidtea mediterranea and the schistosomes Schistosoma japonicum and S. mansoni bear multiple Bcl-2-like proteins, including BH3-only components [143,144]. Investigation of apoptosis in platyhelminths (S. mediterranea and Dugesia dorotocephala) identified Bak and Bcl-2 orthologs, and experimental data showed that mitochondrial cytochrome c release is associated with MOMP and caspase activation [144]. Binding between the Bcl-2 proteins and BH3-only proteins in S. japonicum was established using immunoprecipitation experiments [143]. Mutational, structural, and biochemical studies defined the binding mode as similar to other Bcl-2:BH3 interactions and demonstrated cytochrome c release on treatment with a BH3-motif peptide [143]. Combined, these studies on lophotrochozoans indicate that, in contrast to ecdysozoans, a tripartite mechanism exists, with prosurvival, proapoptotic, and BH3-only proteins triggering MOMP and cytochrome c release to initiate intrinsic apoptosis in these organisms.
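The identification problem described above, a motif defined by little more than a conserved Leu with an Asp four residues downstream, can be illustrated with a naive scan. This is a hypothetical sketch, not a real BH3 predictor: the toy sequence is invented, and the L…D spacing is taken literally from the text. It mainly shows how permissive such a pattern is, and hence why biochemical verification is needed:

```python
def naive_bh3_candidates(seq: str):
    """Return indices where a Leu is followed by an Asp four residues downstream."""
    return [i for i in range(len(seq) - 4)
            if seq[i] == "L" and seq[i + 4] == "D"]

# Toy sequence for illustration only (not a real protein)
toy = "GGLAARDGGLDDDDGG"
hits = naive_bh3_candidates(toy)
print(hits)  # -> [2, 9]
```

Even this tiny string yields two hits; across a whole proteome such a pattern matches far too often to be diagnostic on its own.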
The conclusion from these studies is that intrinsic apoptosis signaling in the protostomes has been modified by gene loss in some organisms, but the underlying tripartite mechanism leading to MOMP is preserved in others, and the MOM interaction remains central. Experiments on cytosolic extracts from the echinoderms Strongylocentrotus purpuratus (purple sea urchin) and Dendraster excentricus (sand dollar) indicate that caspase activation could be induced with cytochrome c, suggesting that mitochondrially regulated apoptosis occurs in the deuterostomes in a similar way to that in the protostomes [144]. However, others have suggested that apoptosis activation in echinoderms may not involve cytochrome c release from mitochondria, as cytochrome c is apparently not necessary for Apaf-1-activated apoptosis in the starfish Asterina pectinifera [145]. In the nonmammalian vertebrates, the molecular basis of apoptosis is probably best defined in the zebrafish, Danio rerio, where all three groups of the tripartite Bcl-2 family have been identified [146]. D. rerio has an extensive network of Bcl-2 proteins, but as yet there are relatively few details on their mechanism of action even in this well-studied model organism [147]. Genome duplication events in teleost fish [148] have given rise to many Bcl-2 paralogs in D. rerio [97], but the molecular basis of apoptosis is likely similar to that in mammals [25]. Structural studies on D. rerio NRZ show that the structure and mode of BH3 interaction are near identical to those in other organisms [25]. These studies establish that Bcl-2 signaling in the deuterostomes shares many aspects with that in the protostomes and establish the basis for intrinsic apoptosis in the bilaterians. The Role of Mitochondrial Membrane Interactions The defining event of intrinsic apoptosis in mammals is the release of cytochrome c from the mitochondrial intermembrane space through supramolecular pores formed by Bax, Bak, or Bok oligomerization on the MOM [149] (Figure 1).
Crucial to this action is the presence of a TM anchor, and most members of the Bcl-2 family, including BH3-only proteins, bear tail anchors [150] necessary both for their localization at the MOM [5] and for their apoptotic activity [151]. TM-anchor deletion mutants of Bax and Bak lose their apoptotic abilities [152], suggesting that the MOM acts as an activator of Bax/Bak [153]. Similarly, deletion of the TM region of prosurvival Bcl-xL decreases its prosurvival activity [151]. The most striking feature of the TM regions is their poor sequence conservation [154]; Figure 2c illustrates the relatively weak conservation of the TM region compared to the BH motifs in Bax sequences. In mammalian apoptosis, cofactors such as the β-barrel Voltage-Dependent Anion Channel 2 (VDAC2) may be important in Bax/Bak membrane recruitment [155], but this has not yet been demonstrated in basal metazoan apoptosis. While most investigations have focused on apoptosis in the mouse or humans, MOM association has also been observed for Bcl-2 proteins from placozoans, hydra, and viruses, indicating the fundamental nature of this activity to Bcl-2 action. The structures of the proapoptotic proteins Bax [29], Bak [27], and Bok [156,157] have an essentially identical core that suggests a common mode of action; however, their subcellular localization and dynamics differ significantly [91]. The crystal structure of mouse Bax shows that the TM region is helical and packed in its own binding groove, the site equivalent to that occupied by EGL-1 in EGL-1-bound CED-9 (Figure 3d,f). Prior to apoptotic stimuli, Bax is largely cytosolic [158], with a fraction being shuttled to the mitochondrial surface [159]; subsequent to apoptotic stimuli, Bax accumulates at the MOM [160,161] via a process that is dependent on its TM residues [162]. In contrast to Bax, Bak is constitutively membrane-bound, and Bok is only fractionally colocalized with mitochondria [163].
The prosurvival protein Bcl-xL translocates Bax from the MOM back to the cytosol, and this process is dependent on the BH3-binding groove [159] and the TM region of Bcl-xL [164]. Thus, the proapoptotic proteins have complex membrane interactions and dynamics, and emerging data support a similar view in basal metazoans. Experimental details on the localization and dynamics of Bcl-2 proteins are now emerging for the placozoan T. adhaerens, and a picture of complex dynamic behavior similar to that of the mammalian Bcl-2 family proteins is emerging [78]. T. adhaerens Bcl-2 proteins are differentially partitioned between mitochondria, ER, and the cytosol, and the TM region is necessary for their membrane localization [78]. TrBcl-2L1 is tightly associated with the MOM, while trBcl-2L2 (trMcl-1) and trBcl-2L4 (trBak) are cytosolic and only loosely associated with intracellular membranes [78], in a manner that mirrors the mammalian Bcl-2 proteins. Mammalian prosurvival proteins, like the proapoptotic proteins, are differentially partitioned between the cytosol and membranes; for example, Bcl-2 is membrane-integrated, while Bcl-xL is partitioned between cytosol and membranes. Like its mammalian counterpart, Trichoplax Bax (trBcl-2L3) translocates to the mitochondria and induces cytochrome c release [78]. However, one caveat of these discoveries is that they were made in heterologous systems using expression of Trichoplax proteins in mammalian cells and have yet to be confirmed in homologous systems. The studies on T. adhaerens Bcl-2 proteins and their membrane interactions point to the fundamental role of membrane interactions in their mechanism of action. Although not all viral Bcl-2 proteins bear an obvious TM region, many do [165], and membrane interactions are necessary for their prosurvival activity. This trend closely mirrors what has been observed for mammalian proteins like Bcl-xL.
Similarly, for the viral Bcl-2 protein F1L, association with mitochondria is necessary for its prosurvival behavior [52], and M11L localizes at the mitochondria [128] and colocalizes with Bak at the MOM [166]. ASFV A179L localizes to mitochondria and the ER [132]. Although C-terminal anchoring of Bcl-2 proteins is often maintained, there are exceptions; for example, vMIA targets the MOM through its N-terminal region [167]. It appears likely that the interaction of Bcl-2 proteins with membranes is fundamental to Bcl-2 function, has been conserved from the earliest metazoans, and has been maintained in viral Bcl-2 proteins. While MOM localization of Bcl-2 proteins may be conserved, MOMP is not necessarily maintained as a mechanism of activating apoptosis. Ecdysozoans have undergone extensive gene loss [168] and have fewer Bcl-2 genes and mechanistic differences from mammalian apoptosis (Figure 1) [169]. For example, the sole Bcl-2 protein in the nematode C. elegans, CED-9, is TM-anchored to mitochondria like its mammalian counterpart Bcl-2, and although the two have closely related structures (Figures 2 and 3) and bind BH3-only proteins in their respective binding grooves, their roles in the apoptosis mechanisms are not identical [7]. CED-9 binds and antagonizes the caspase activator CED-4; once CED-4 is released from this inhibition by the upregulated BH3-only protein EGL-1 binding to CED-9, it activates the caspase CED-3 [7,96]. In comparison, Bcl-2 is also localized to the MOM amongst other intracellular membranes [91] and binds BH3-only proteins, but it does not bind Apaf-1, the mammalian caspase-activating protein corresponding to CED-4. Distinct from the nematode, the role of Bcl-2 proteins in the fly D. melanogaster is less well understood [170]. BH3-only proteins have not been found in the genome of D.
melanogaster, but two Bcl-2 proteins, Debcl and Buffy, have been recognized; they have been shown to interact [42], and both localize to the MOM, with Buffy additionally found at the ER [171]. Thus, although the sequences, structures, and membrane binding may all be conserved elements in the Bcl-2 family, there are key differences in how they are manifested in the activation of caspases. Conclusions It is not clear how the Bcl-2 family arose; one hypothesis is that it arrived through horizontal gene transfer from a symbiont [172], but multiple Bcl-2 genes were clearly present early in metazoan evolution. Even in the sponges, a phylum considered to be the sister group of all other metazoans [135], multiple Bcl-2 fold proteins have been identified in genomes, such as that of A. queenslandica [137], where seven such proteins were recognized. In contrast to the early appearance of Bcl-2 fold proteins, BH3-only proteins have not yet been identified in Porifera or Placozoa, and ctenophores appear to have lost the genes required for Bcl-2-regulated apoptosis altogether (Figure 4a). Emerging results from biophysical and biochemical measurements performed on the nonmammalian Bcl-2 family, including those from sponges [6], placozoans [78], and cnidarians [110], indicate that the basic architecture of intrinsic apoptosis is maintained in these basal metazoans. Structural studies have shown that the molecular details of the interactions have been conserved from sponges to man [6] and that viruses have assimilated Bcl-2 proteins [111]. A key difference between sponges and placozoans on the one hand and cnidarians on the other is the apparent absence of BH3-only proteins in sponges and placozoans. The essential role of the BH3-only proteins, at least in mammals, is to neutralize the prosurvival proteins to allow the MOM to activate the proapoptotic Bcl-2 proteins [153].
Based on these results, a simple model for intrinsic apoptosis in the absence of BH3-only proteins could be envisaged, in which the prosurvival proteins keep the proapoptotic proteins in check. Alternatively, as recently proposed, the Bak-like protein may partially fulfill the role of the BH3-only proteins [78] (Figure 4b). The investigation of the more evolutionarily distant members of the Bcl-2 family has exposed the substantial complexity of Bcl-2-mediated signaling at the foundation of metazoan evolution and underscores the pivotal role these proteins play in biology. Functional and mechanistic studies to date have only just begun to unravel the role Bcl-2 has played during the early stages of metazoan life, and future studies are likely to discover new twists to Bcl-2 signaling. (Biomolecules 2020, 10.) Conflicts of Interest: The authors declare no conflict of interest.
Suppression Effects of Hydroxy Acid Modified Montmorillonite Powders on Methane Explosions : In this paper, montmorillonite inhibitors modified with polyhydroxy functional groups by gluconic acid (GA) were successfully prepared. The particle size distribution, composition, surface functional groups, and pyrolysis characteristics of the pure montmorillonite powders (Mt) and the gluconic acid modified powders (G-Mt) were analyzed using a laser particle analyzer, X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, and thermogravimetry-differential scanning calorimetry (TG-DSC), respectively. The suppression effects of Mt and G-Mt on a 9.5% methane-air premixed gas were tested in a 20 L spherical explosion device and a 5 L pipeline experimental system. The results show that G-Mt displays a much better suppression property than Mt. The optimal explosion suppression concentration of the Mt or G-Mt powders is about 0.25 g·L−1. At this concentration, for G-Mt, the maximum explosion pressure declined by 26.7%, the maximum rate of pressure rise declined by 74.63%, and the time for the flame front to reach the top of the pipe was delayed by 242.5%. On the basis of the experimental data, the better suppression effect of G-Mt compared with Mt might be attributed to the presence of more hydroxyl groups on its surface. Introduction As is well known, a methane-air mixture is a type of explosive gas and is the main component of natural gas, biogas, and coalbed methane. It is a high-quality clean fuel as well as an important raw material for the manufacture of syngas and many chemical products, and it is usually transported through the pipeline network [1]. However, it can lead to extremely serious consequences because of its explosiveness.
Thus, in order to prevent or reduce the damage caused by methane explosions, many inhibitors have been developed, such as inert gases, water mist, aerosols, and powder inhibitors [2-6]. Chemical powders, especially inorganic powders, have attracted many studies owing to the advantages of easy storage, low cost, high dispersion, and being environmentally benign. As reported by Luo, Liu, and Zhang [7-9], diatomite, quartz, rock dust, and palygorskite powders present certain suppression effects on methane explosions. Sun, Ni, and Wang obtained novel composite powder inhibitors by combining NaHCO3 powders with porous kaolinite, zeolite, and red mud, respectively [10-12]. Hu modified Mg(OH)2 powders with 9,10-dihydro-9-oxa-10-phosphaphenanthrene-10-oxide, and the results showed that the pyrolysis fragments could directly react with flame free radicals to achieve an excellent explosion suppression effect [13]. This research also indicated that the development of modified inorganic mineral powders which can react with the active free radicals (·H, ·HO2, ·CH, ·CH3, and ·HCO) [14-18] of a methane explosion is an effective methane explosion suppression approach. Montmorillonite is a clay with a 2:1 layered structure [19]. Because of this special structure, it is often used for functional modification [20]. Ca2+-montmorillonite is usually treated with acids to replace the divalent calcium cations with monovalent hydrogen ions, with the aim of altering the smectite layers and increasing the specific surface area and porosity [21]. Gluconic acid is a low-carbon polyhydroxy acid, and the hydroxyl groups on its saturated carbons are easily removed after protonation. Therefore, using gluconic acid to modify montmorillonite may increase the number of surface hydroxyl groups, and thus the modified montmorillonite may exhibit greater gas explosion suppression performance.
In this study, montmorillonite was modified by gluconic acid, and montmorillonite powders with polyhydroxyl functional groups were obtained. Then, the explosion inhibition effects of these powders on methane explosions were investigated using a 20 L stainless steel spherical vessel and a 5 L pipeline experimental system. Based on the experimental results, a possible suppression mechanism is discussed. Materials and Preparation Procedures The montmorillonite (purity > 95%) was obtained from Zhejiang Sanding Technology Co., Ltd. Gluconic acid (C6H12O7, 99%) was purchased from Nine-Dinn Chemistry (Shanghai) Co., Ltd. All of the raw chemical reagents used in the experiments were of analytical-grade purity and were used directly without further purification. The procedure used to prepare the modified montmorillonite inhibitors was as follows. Firstly, a 0.5 mol/L gluconic acid solution was prepared, and 10 g of montmorillonite powder was mixed with 40 mL of the acid solution, giving a liquid-solid ratio of 4:1. The mixture was then stirred for two hours at a constant temperature of 298 K in a heat-collecting thermostatic magnetic agitator. Afterwards, it was rinsed 4-5 times with deionized water and filtered 3-4 times with a filter pump to collect the precipitate. The product was dried for one week in a vacuum drying oven at 313 K. Finally, the powders for testing were collected through a 200-mesh sieve. Powder Characterisation Methods X-ray powder diffraction (XRD) analysis was performed on a Bruker AXS D8 Advance diffractometer (AXS D8, Bruker, Madison, WI, USA) with Cu Kα radiation, at 40 kV and 25 mA, over a scanning range of 5-80° (2θ). A Malvern Mastersizer 2000 (Mastersizer 2000, Malvern Instruments Ltd, Worcestershire, UK) was used to measure the powder particle size.
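As a sanity check on the recipe above, the mass of gluconic acid needed for 40 mL of 0.5 mol/L solution follows from its molar mass (C6H12O7, about 196.16 g/mol). The sketch below assumes the solution is made up from the pure compound, which the text does not specify:

```python
# Reagent arithmetic for the modification step (illustrative only)
molar_mass_GA = 6 * 12.011 + 12 * 1.008 + 7 * 15.999  # g/mol for C6H12O7
conc = 0.5       # mol/L
volume = 0.040   # L (40 mL of acid solution)
mt_mass = 10.0   # g of montmorillonite

acid_mass = conc * volume * molar_mass_GA    # grams of gluconic acid needed
liquid_solid = (volume * 1000) / mt_mass     # mL of solution per g of Mt
print(f"{acid_mass:.2f} g of GA, liquid-solid ratio {liquid_solid:.0f}:1")
```

This reproduces the stated 4:1 liquid-solid ratio and implies roughly 3.9 g of gluconic acid per batch.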
The Fourier transform infrared (FTIR) spectrum of the KBr wafer was recorded using a Nicolet 6700 Fourier transform spectrometer (TENSOR-37, Bruker Optics, Ettlingen, Germany). Thermogravimetry-differential scanning calorimetry (TG-DSC) analysis was conducted using a simultaneous thermal analyser (NETZSCH, Selb, Germany) in a flow of air (20 mL·min−1) at a heating rate of 10 °C·min−1. In this process, the starting temperature was 25 °C and the termination temperature was 800 °C. Powder Suppression Explosion Experiments The explosion pressure parameters of the methane explosion, suppressed by different concentrations of Mt and G-Mt, were tested using the 20 L spherical explosion instrument, and the flame propagation behavior was measured in the pipeline test system, which had a cross-sectional area of 100 × 100 mm² and a length of 500 mm. It has been shown that the dust dispersion inside the 20 L sphere used for explosion experiments is not uniform [22], and also that dust particles may undergo fragmentation when passing through the nozzle of the 20 L spherical vessel [23]. The same is true in the pipeline test system. In order to ensure the accuracy and reproducibility of the results, each experiment was repeated at least three times; the experimental data in this study proved repeatable. The physical properties of the montmorillonite powders and the palygorskite powders are similar; thus, the test parameters of each system were the same as those used in previous research [9]. Characterisation of Mt and G-Mt The XRD patterns of Mt and G-Mt are shown in Figure 1. It can be seen that the diffraction peaks of the two samples match the Mt standard pattern (JCPDS: 13-0135). The three principal peaks at 2θ = 5.887°, 17.688°, and 19.712° correspond to the (001), (003), and (100) planes of Mt, respectively [24].
Moreover, the d001 value of the (001) plane is 1.535 nm, indicating that the sample is a typical calcium montmorillonite [25]. Compared with the d001 value of the (001) peak of Mt, the d001 value of G-Mt decreases slightly, because the structure of Mt is partially destroyed by GA [26]. An obvious peak is also observed at 2θ = 26.639°, which agrees with SiO2; thus, it can be inferred that a small amount of quartz is included in the Mt. The powder inhibitors used in this study were all sieved through a standard 200-mesh screen. Then, their particle size distributions were determined using a Malvern Mastersizer 2000 laser particle analyzer (Mastersizer 2000, Malvern Instruments Ltd, UK). The results for the two different powders are shown in Figure 2. The -OH bending and stretching vibrations are found at 3400-3700 cm−1 [28,29].
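The reported d001 spacing can be cross-checked against the quoted (001) peak position using Bragg's law, d = λ / (2 sin θ). A minimal sketch assuming Cu Kα radiation (λ ≈ 1.5406 Å, consistent with the diffractometer described above); the result lands close to, though not exactly at, the reported 1.535 nm, since the published value depends on the exact wavelength and refined peak position used:

```python
import math

wavelength = 1.5406    # Cu K-alpha wavelength in angstroms (assumed)
two_theta_deg = 5.887  # reported (001) peak position, degrees

theta = math.radians(two_theta_deg / 2.0)
d_nm = wavelength / (2.0 * math.sin(theta)) / 10.0  # angstrom -> nm
print(f"d001 ~ {d_nm:.2f} nm")  # close to the reported 1.535 nm
```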
It can be seen from Figure 3 that the -OH absorption peaks of G-Mt are significantly stronger than those of Mt. Comparing the -OH adsorption peaks of the two samples, the possible intercalation and grafting of -OH within the clay could be predicted from the increase in the relative intensities of the aforementioned peaks [30]. Therefore, the 1643 and 3000-4000 cm−1 bands were processed by peak-splitting fitting [31], and the corresponding results are shown in Figure 4. As can be seen from Table 1, the 1643 cm−1 νOH can be attributed to interlayer water with a variety of orientations and interactions [32-34]. The peak area of peak 1 is similar for Mt and G-Mt, which might exclude the interference of water. However, the νOH and δOH belonging to the structural hydroxyls of G-Mt at 3000-4000 cm−1 increase to about twice the values for Mt.
Figure 5 presents the TG-DSC curves of Mt and G-Mt. It can be seen from Figure 5 that the decomposition process of Mt begins at 70 °C and ends at 250 °C. The mass loss is 12.17%, which is attributed to the loss of intercalated moisture [19]. From 250 to 800 °C, the TG curve of Mt becomes steady and shows little change with further increases in temperature. In addition, a small endothermic peak of the Mt DSC curve from 250 to 800 °C, with a mass loss of 4.15%, can be observed, which could be attributed to the removal of -OH from the crystal structure of Mt [35]. For the TG curve of G-Mt, a large gradual mass loss of 5.63% occurs after 250 °C, and the total weight loss (Table 2) of G-Mt is larger than that of Mt, but the total endothermic quantity of G-Mt is smaller. The reason for the smaller endothermic quantity of G-Mt can be attributed to the closely distributed hydroxyl groups on the surface of G-Mt, which can form H-bonded hydroxyl groups that are gradually removed at high temperatures [8,35].
Suppression Effect of Mt and G-Mt The explosion pressure parameters were tested in the 20 L spherical vessel. The effects of Mt and G-Mt on the 9.5% methane explosion are presented in Figure 6. As can be seen from Figure 6a,b, both the maximum explosion pressure (Pmax) and the maximum rate of explosion pressure rise ([dP/dt]max) decrease after the addition of Mt or G-Mt. Meanwhile, the time for the pressure to reach its maximum value (Tmax) obviously increases, as shown in Figure 6c. On increasing the concentration of Mt or G-Mt, both Pmax and [dP/dt]max decrease and Tmax increases. However, on further increasing the concentration of Mt or G-Mt, Pmax and [dP/dt]max no longer decrease and Tmax no longer increases, indicating that an optimal inhibitory concentration of Mt or G-Mt exists. According to Figure 6a-c, it can be found that the optimal concentration is about 0.25 g·L−1. Furthermore, comparing the results for Pmax, [dP/dt]max, and Tmax, the inhibitory effect of G-Mt is more significant than that of Mt at the same mass concentration.
In order to compare the suppression effect of Mt or G-Mt more clearly, the explosion pressure curves of the 9.5% methane mixed with Mt and G-Mt, respectively, at a concentration of 0.25 g·L−1, are shown in Figure 6d. It can be seen that the methane explosion could be divided into two main stages. The first is the pressure increase stage, in which the pressure increases rapidly with time; the heat released from combustion exceeds the heat lost to the surroundings. The second stage is the pressure decay process [10]. For 9.5% methane, the Pmax, [dP/dt]max, and Tmax are 0.654 MPa, 87.648 MPa/s, and 0.138 s, respectively, as shown in Table 3. When G-Mt at a concentration of 0.25 g·L−1 is added, Pmax and [dP/dt]max decrease by 26.7% and 74.63%, respectively, and Tmax increases by 55.8%. For Mt, these percentage values are 11.6%, 50.77%, and 50.74%, respectively, which are much lower than those of G-Mt.
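The reported percentage changes can be turned into absolute suppressed values with a line of arithmetic. The sketch below does this; the baseline values are from Table 3 as quoted in the text, while the derived absolute values are our own arithmetic, not figures reported in the paper.

```python
# Sketch: absolute suppressed explosion parameters implied by the percentage
# changes reported for 9.5% methane (baselines from Table 3; the derived
# absolute values are our own arithmetic, not figures from the paper).

def apply_change(baseline, percent, increase=False):
    """Return `baseline` changed by `percent` percent (a decrease by default)."""
    factor = 1 + percent / 100 if increase else 1 - percent / 100
    return baseline * factor

P_MAX, DPDT_MAX, T_MAX = 0.654, 87.648, 0.138  # MPa, MPa/s, s (no powder)

# G-Mt at 0.25 g/L: Pmax -26.7%, [dP/dt]max -74.63%, Tmax +55.8%
p_gmt = apply_change(P_MAX, 26.7)                 # ~0.479 MPa
dpdt_gmt = apply_change(DPDT_MAX, 74.63)          # ~22.24 MPa/s
t_gmt = apply_change(T_MAX, 55.8, increase=True)  # ~0.215 s

# Mt at 0.25 g/L: Pmax -11.6%
p_mt = apply_change(P_MAX, 11.6)                  # ~0.578 MPa
```

Comparing the derived peak pressures (about 0.48 MPa with G-Mt versus about 0.58 MPa with Mt) makes the stronger suppression by G-Mt immediately visible.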
Thus, the inhibition effect of G-Mt is much better than that of Mt.

The flame propagation characteristics of a methane explosion with inhibitors are also important evaluation parameters. Hence, experiments were performed in the 5 L pipeline experimental system. The flame propagation images of the methane explosions with no powder, Mt, and G-Mt are shown in Figure 7. It can be seen that the flame front of the methane explosion with no powder reaches the upper end 20 ms after ignition. However, when Mt or G-Mt is added, the explosion flame darkens and slows down. Simultaneously, an uneven flame structure appears, which can be attributed to the nonuniformity of the dust dispersion or dust particle fragmentation. Figure 8a,b presents the variation in the flame front position (FFP) for the 9.5% methane explosion with different concentrations of Mt and G-Mt, respectively. The flame propagation velocity (FPV) of the methane explosion with no powder (Figure 10b) shows a trend of initial acceleration followed by steady propagation, which may be due to the reflected explosion wave [36,37]. The value of the FPV significantly decreases with the addition of 0.25 g·L−1 Mt or 0.25 g·L−1 G-Mt. Furthermore, the FPV curve of G-Mt at the same concentration lies below that of Mt, indicating that the suppression property of G-Mt is better than that of Mt. These test results are consistent with those obtained in the 20 L spherical explosive device.
Suppression Mechanism of Mt and G-Mt

From the experimental results, the methane explosion suppression mechanism can be attributed to the combined effects of physical and chemical inhibition.

Physical Inhibition Effect

On the one hand, the evaporation of the surface and interlayer water of Mt or G-Mt can absorb heat released from the explosion reaction, cooling the system down and inhibiting the propagation of the explosion wave [2]. On the other hand, the inhibitor powders, which are widely dispersed in the reaction zone, can reduce the thermal diffusion coefficient of the unburnt gas as well as the heat transfer from the reaction zone to the unburnt zone by blocking and absorbing the heat radiation.

Chemical Inhibition Effect

Many studies have shown that there are 325 elementary reactions, involving 53 species, in the process of a methane explosion [38,39]. The chain reactions are dominated by free radicals (especially ·O, H·, ·HO2, ·CH3, ·HCO, and ·OH) during the chemical reaction. Modification of montmorillonite by gluconic acid alters the smectite layers and increases the specific surface area and porosity, which improves the probability of collisions between the surface hydroxyls of Mt or G-Mt and the active radicals. Therefore, radicals of the methane explosion such as ·O, H·, and ·OH may be eliminated by the surface hydroxyls of Mt or G-Mt [40,41]. Furthermore, metal ions such as Ca2+, Mg2+, and Al3+, which are generated during the thermal dehydration of Mt or G-Mt, can react with the above free radicals [42,43].
The related reactions are as follows. Furthermore, some of the key reactions, such as Equations (4)-(7), play an important role in the methane explosion [14,17]. Nie confirmed that the formation of free radicals (·O, H·, and ·OH) was mostly sensitive to the forward reaction of Equation (6) and the reverse reaction of Equation (7) [14]. Therefore, when Mt or G-Mt is added into the methane explosion, a large number of hydroxyl radicals may be produced at the instantaneous high temperature, and these hydroxyl radicals can reverse the direction of the key reactions of Equations (6) and (7). According to the TG-DSC curves, the total endothermic quantities of Mt and G-Mt are 1143 J/g and 570.3 J/g, respectively. The average particle size of G-Mt is obviously larger than that of Mt, yet G-Mt presents a better explosion suppression efficiency than Mt. These results prove that the more numerous hydroxyl groups of G-Mt exert a better chemical inhibition effect in the methane explosion.

Conclusions

In this work, clean, nontoxic, and low-cost inhibitors, with polyhydroxy functional groups introduced by gluconic acid modification, were obtained through a simple stirring method. Montmorillonite powders modified by gluconic acid had a better explosion suppression capability than the pure montmorillonite powders. When G-Mt was used as a methane explosion suppression powder material, the maximum explosion pressure was reduced by 26.7%, the maximum rate of pressure rise declined by 74.63%, and the time for the flame front to reach the top of the pipe was delayed significantly. Based on the characterization analysis of the powders and the results of methane explosion suppression, it was found that the hydroxyl functional groups on the surface of Mt and G-Mt present a positive inhibitory effect. However, the exact mechanism of their action is still unclear.
Hence, more work is needed to fundamentally understand the hydroxyl functional groups' response to the methane explosion and to complement the experimental results with molecular reaction simulations. In general, this work has some guiding potential for the development and preparation of new explosion inhibitors.
Comment on "Fast-slow mode coupling instability for coasting beams in the presence of detuning impedance"

In this comment we show the untenability of key points of the recent article of N. Biancacci, E. Metral and M. Migliorati [Phys. Rev. Accel. Beams 23, 124402 (2020)], hereafter the Article and the Authors. Specifically, the main Eqs. (23), suggested to describe mode coupling, are shown to be unacceptable even as an approximation. The Article claims the solution of this pair of equations to be in "excellent agreement" with the pyHEADTAIL simulations for the CERN PS, which is purportedly demonstrated by Fig. 6. Were it really so, it would be a signal of a mistake in the code. However, the key part of the simulation results is not actually shown, and the demonstrated agreement has all the features of an illusion.

Due to the driving impedance properties, only the slow modes can be unstable. An illustrative sketch of the spectrum is presented in Fig. 1, assuming the smooth approximation and a focusing detuning impedance, when the modes can cross but not couple. For the Article's PS example with lattice tune Q_β = ω_β/ω_0 = 6.4, a slow mode with frequency ω_β − 7ω_0 = −0.6ω_0 has as its nearest mode −ω_β + 6ω_0 = −0.4ω_0, the backward one. The Article, including the title, calls the latter mode "fast," which is a terminological mistake: the value of the phase velocity of that allegedly "fast" mode is actually the smallest among all the modes. In linear systems with time-independent coefficients, modes can couple only when their frequencies coincide. Clearly, positive-based modes can couple only with the negative-based ones, and vice versa; one of the modes must be stable (fast, zero, or backward), and the other unstable (slow). For the PS example, the positive-based slow mode with n = n_1 = −7 might couple with the negative-based backward mode with n = n_2 = +6.
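The mode tunes of the PS example above can be checked with a few lines of arithmetic. The sketch below is our own illustrative check of the quoted numbers, not code from the Article or from pyHEADTAIL.

```python
import math

# Collective-mode tunes of a coasting beam, in units of the revolution
# frequency omega_0: Omega/omega_0 = ±Q_beta + n. Values follow the PS
# example in the text (Q_beta = 6.4).
Q_BETA = 6.4

def mode_tune(n, positive_based=True):
    """Tune of the harmonic-n collective mode."""
    return (Q_BETA if positive_based else -Q_BETA) + n

slow = mode_tune(-7, positive_based=True)       # positive-based slow mode: -0.6
backward = mode_tune(+6, positive_based=False)  # negative-based backward mode: -0.4

# The 0.2 tune gap between them must be closed for coupling, e.g. by a
# detuning impedance shifting the betatron tune up by 0.1.
gap = abs(slow - backward)

# For any pair of coupled modes, the harmonic-number difference is just
# above twice the betatron tune: n2 - n1 = ceil(2 * Q_beta) = 13.
n_diff = (+6) - (-7)
assert n_diff == math.ceil(2 * Q_BETA)
```

Note that a uniform betatron-tune shift of +0.1 moves both tunes to −0.5, i.e. onto the half-integer line discussed below in connection with Fig. 6.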
The coupling can happen if the lattice tune difference between the two modes, 0.6 − 0.4 = 0.2, is compensated by the detuning impedance, presumably able to shift the betatron tune up by 0.1. Note that the difference between the harmonic numbers of the coupled modes, n_2 − n_1 = 13 = ceil(2Q_β), is just above the doubled betatron tune; the same is true for any pair of coupled modes. We beg pardon for this pedantic textbook explanation, but we feel obliged to make it, since in the Article the terms are confused and the harmonic numbers are given without signs, creating a false impression that the modes of the neighboring harmonics, 6 and 7, can sometimes be coupled.

Let us now come back to Eqs. (23). They are derived from Eq. (22) by an ansatz that the collective oscillation y(s, t) is a linear combination of two harmonics, n_1 and n_2. In the case where Z_driv(Ω, s)β(s) = const, these cross terms are equal to zero. Thus, instead of Eqs. (23) of the Article, the mode coupling problem should be described by the following equations. Here ΔΩ_tot ∝ i ∮ ds [Z_det(0, s) + Z_driv(Ω, s)]β(s) is the conventional uncoupled coherent tune shift at the sought-for frequency Ω ≈ ±ω_β + nω_0, and the cross-coefficients can be expressed through the driving tune shifts ΔΩ_driv.

As to the bottom plot of Fig. 6, we see there that the mode tunes are locked at the half-integer resonance. For the simulations, something like that has to be expected simply on the grounds of the sufficiently strong detuning quadrupole for a lattice with a nonzero harmonic 13. For a perfectly smooth lattice, however, the half-integer tune 6.5 would be as good as any other tune; thus, the result of the simulations must be sensitive to the lattice smoothness. Since Eqs. (23) are fully insensitive to the phase advance per cell or other smoothness parameters, the agreement between the pyHEADTAIL simulations and theory in the bottom plot of Fig. 6 can only be accidental.
We would also like to note that, although the half-integer resonance is not present in the smooth approximation, it plays a significant role in any real machine. Approaching this resonance results in a large variation of the beta-functions and, consequently, a fast increase of the effective impedance Z_driv(Ω, s)β(s) and of its coupling-related harmonic, mentioned above. We hope that our disagreement with the key issues of the Article is clearly expressed, and we would appreciate a response from the Authors.
Evaluation of a Generative Adversarial Network to Improve Image Quality and Reduce Radiation-Dose during Digital Breast Tomosynthesis

In this study, we evaluated the improvement of image quality in digital breast tomosynthesis under low-radiation-dose conditions using pre-reconstruction processing based on a conditional generative adversarial network [cGAN (pix2pix)]. Pix2pix pre-reconstruction processing with filtered back projection (FBP) was compared with and without multiscale bilateral filtering (MSBF) during pre-reconstruction processing. Noise reduction and contrast preservation were compared using the full width at half-maximum (FWHM), contrast-to-noise ratio (CNR), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) in the in-focus plane using a BR3D phantom at various radiation doses [reference dose (automatic exposure control reference dose: AECrd) and 50% and 75% reductions of AECrd] and phantom thicknesses (40 mm, 50 mm, and 60 mm). The overall performance of pix2pix pre-reconstruction processing was effective in terms of FWHM, PSNR, and SSIM. At ~50% radiation-dose reduction, the FWHM yielded good results independently of the microcalcification size used in the BR3D phantom, with good noise reduction and preserved contrast. The PSNR results showed that pix2pix pre-reconstruction processing gave the minimum error relative to the reference FBP images at an approximately 50% reduction in radiation dose. SSIM analysis indicated that pix2pix pre-reconstruction processing yielded superior similarity compared with and without MSBF pre-reconstruction processing at ~50% radiation-dose reduction, with features most similar to the reference FBP images. Thus, pix2pix pre-reconstruction processing is promising for reducing noise while preserving contrast and reducing the radiation dose in clinical practice.
Introduction

Digital tomosynthesis provides limited three-dimensional (3D) structural information about body structures by combining the advantages of digital imaging [1,2] and computed tomography. More specifically, digital breast tomosynthesis (DBT) reconstructs an entire image volume from a sequence of projection-view mammograms acquired within a small number of projection angles over a limited angular range to yield limited 3D structural information. Effects from the superposition of tissues are reduced with DBT, but in many situations, such as in dense breasts, such effects can persist. DBT decreases the camouflaging effects of the overlapping fibroglandular breast tissues, improves the conspicuity of subtle lesions, and could thus be used to improve the early detection of breast cancer [1,3,4]. To date, several digital mammography-based DBT systems have been developed [5-7], and there are ongoing studies aiming to define their utility and improvements [1,8]. Wu et al. evaluated image quality using a conventional reconstruction algorithm (filtered back projection; FBP [9]), statistical iterative reconstruction (IR) algorithms (maximum likelihood expectation maximization; MLEM [3]), and simultaneous IR algorithms (the simultaneous IR technique; SIRT [10]) and concluded that the MLEM algorithm provided a good balance between low- and high-frequency features [3]; the exploration of various other DBT reconstruction methods has also been reported [11-14]. Other researchers used a total variation-minimization algorithm (adaptive steepest descent projection onto convex sets) [15] with a gradient-based penalty term to enhance microcalcifications (MCs) on DBT images [16]. On the other hand, another study quantitatively compared DBT algorithms in terms of image quality and radiation doses [17].
In that report, IR was found to effectively decrease quantum noise and radiation exposure; however, the evaluation in that study was limited and merely compared existing methods (FBP vs. IR: SIRT and MLEM). With the aim of overcoming the drawbacks of previous algorithms, a recent report described the development of an improved processing method for iterative DBT reconstruction (multiscale bilateral filtering; MSBF) [18] with the simultaneous algebraic reconstruction technique algorithm [10]. Specifically, this method aimed to improve the contrast of MCs without compromising the image quality of masses and soft-tissue background structures. That previous study, however, evaluated only MCs and not masses. Existing DBT techniques used in clinical diagnostic studies have enabled the visualization of fine tissue structures with a shorter scan time. Nevertheless, all DBT systems are limited by the issue of patient radiation exposure, which highlights the need to preserve contrast in order to improve the sharpness of the image and the detectability of the object. Furthermore, radiographic images can be degraded by quantum mottle, a consequence of spatial fluctuation of the incident photons. Quantum mottle is inversely associated with exposure, and therefore, any decrease in patient dose would be restricted by the degree of quantum mottle, even with a perfect detector. Moreover, even with a perfect detector, other degrading factors remain, including X-ray scatter, the X-ray spectrum, and the number of views; consequently, image reconstruction and image processing, both of which are very important, must be designed to minimize their impact. Therefore, further decreases in patient doses and improvements in detection rely on innovations such as new detector types, alternative X-ray sources, and algorithms that improve image quality by incorporating suitable approaches. Moreover, DBT involves the reconstruction of images limited by a low signal-to-noise ratio due to the superposition of several low-exposure projection images.
Furthermore, this characteristic causes a concurrent loss of plane-relevant details, which reduces the contrast of the reconstructed images. Several methods have been proposed to suppress this irrelevant plane information and enhance the image quality of DBT [18,19]. In reconstructed DBT images, noise further affects the visibility and detectability of subtle MCs. To overcome this limitation, several noise-suppression techniques have been proposed to enhance MCs [16,20,21]. However, most of the existing regularization methods for DBT reconstruction were designed for general image applications and are driven by local gradients [22,23]. In the accelerating evolution of deep learning, the transition from convolutional neural networks [24] to generative adversarial networks (GANs) [25,26] has contributed to digital tomosynthesis imaging [27-36]. Prior studies reported that GANs are particularly useful for reducing metal artifacts [29,30] and noise [27] and are expected to contribute to image quality improvement processes that reduce the exposure dose. Some studies have recently reported the usefulness of deep learning for improving image quality and reducing noise in tomosynthesis [27,29]. Noise and radiation-dose reductions using deep learning are possible for digital tomosynthesis of the breast, as is metal artifact reduction [27,30]. Thus, deep learning can be applied to further improve image quality and reduce the radiation dose. In the DBT imaging field, recent reports have detailed the detection of masses and image quality improvement processes that introduce deep learning [28,31-35]. With regard to image quality improvement processing using a conditional GAN (cGAN) [25], "pix2pix", which maps an object image toward a reference image through an adversarial network of a "generator" and a "discriminator", has been shown to be useful for noise reduction.
cGAN provides a powerful image translation framework that works well in many areas. In addition to cGAN, CycleGAN can be considered, but it requires at least two discriminators and two generators, which complicates the structure. Therefore, cGAN can be used as a general-purpose solution to the image-to-image translation problem. Using the conventional approaches to image processing and image reconstruction, it is difficult to improve the detection of masses and preserve the normal structure with accuracy [11-16,18,37]. In particular, as the noise associated with low dose imaging increases, there is a tradeoff between improved mass detection and the preservation of normal structure (e.g., structural distortion, oversmoothing or loss of sharpness, the occurrence of artifacts, etc.). However, with the use of pix2pix, it might be possible to overcome these problems of the conventional methods in DBT imaging. Studies conducted to date have quantitatively compared various DBT algorithms in terms of image quality and radiation doses [38,39]. Although those reports demonstrated that IR could effectively decrease quantum noise and radiation exposure, the evaluations were limited and merely compared existing methods. In a related recent report, Gao et al. reported that a denoising deep convolutional neural network with adversarial training was useful for improving the MC contrast in DBT; it was trained on in silico data and applied to physical phantom images as a learning set [27]. Among the studies using deep learning, there are no reports on the quantitative evaluation of image-quality improvement or dose reduction for MCs and the detection of masses under various conditions, with automatic exposure control (AEC) as the reference dose, as breast thickness changes.
In particular, considering that pix2pix has the potential to reduce the dose and improve image quality, it can be expected that this logic (an image-to-image translation process in which a low dose image is mapped to the reference dose image) can be applied to DBT to counteract the image quality deterioration that occurs at low doses. In this paper, we report our experience with the application of pix2pix pre-reconstruction processing (FBP reconstruction after pix2pix preprocessing) to improve image quality under dose reduction. Because the usefulness of preprocessing for improving image quality with deep learning has been reported for tomosynthesis [27], in this study we performed deep learning processing (pix2pix) on projection-based data. In addition, because the purpose of this study was not to compare reconstruction algorithms, we used the exact solution (FBP) for evaluation. Our proposed pix2pix preprocessing exploits both the improved detection of MCs and the preservation of normal structures to improve both spatial resolution and contrast preservation. Our proposed pix2pix pre-reconstruction processing may overcome a previously unresolved problem of conventional algorithms, namely, achieving improved detection of MCs and preserved contrast of masses, by correcting the reconstruction processing under dose reduction. In addition, we evaluate the usefulness of pix2pix pre-reconstruction processing for improving image quality under dose reduction. Specifically, pix2pix pre-reconstruction processing is applied to the projection data (reference dose [automatic exposure control reference dose: AECrd] and low dose [approximately 50% and 75% reductions of AECrd]) at varying phantom thicknesses, and the reconstructed images (FBP) are assessed by physical evaluation (spatial resolution, contrast, error, and similarity).
DBT

This study used a DBT system (Selenia Dimensions; Hologic Inc., Bedford, MA, USA) that consists of an X-ray tube with a 0.3 mm focal spot (tube target: W; filtration: 0.7 mm aluminum equivalent) and a digital flat-panel amorphous selenium detector. A total acquisition time of 3.7 s and an acquisition angle of 15° were set for all DBT procedures. The projection images were sampled during a single tomographic pass (15 projections, 1280 × 2048 matrix). To produce reconstructed tomograms of the required height, we used a 512 × 1024 matrix with 32 bits (single-precision floating point) per image.

Phantom Specifications

The BR3D phantom (model 020; Computerized Imaging Reference Systems, Inc., Norfolk, VA, USA) comprises multiple heterogeneous slabs and is intended to mimic the composition of glandular and adipose tissues and parenchymal patterns in the human breast. The slabs are composed of epoxy resins with X-ray attenuation properties corresponding to 50% glandular or 50% adipose breast tissue. In the phantom, the target slab was surrounded by nontarget slabs (top: 30, 40, 50 mm; bottom: 10 mm).

Pix2pix

Pix2pix is a GAN that trains a generator and a discriminator conditioned on additional information. Because of the constraints imposed by the additional information, the generator produces certain types of output, and the discriminator accepts only additional information that matches the real sample. The training objectives of the discriminator and generator can be expressed mathematically as follows, where P_ld is the low dose projection domain, P_ref is the reference dose projection domain, D is the discriminator, G is the generator, and z is the random noise vector (Gaussian noise). The training data set included 180 projection images; the corresponding input image pairs (P_ld (90), P_ref (90)) were randomly selected as the training set.
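The adversarial objective referred to above is not reproduced in this excerpt. Assuming the standard pix2pix formulation of Isola et al. [25] (an adversarial term plus an α-weighted L1 term, with α = 50 as stated in the text), the two losses can be sketched in NumPy as follows. The function name and the toy discriminator scores are our own illustration, not the authors' implementation.

```python
import numpy as np

# Sketch of the pix2pix training losses (standard formulation from the
# pix2pix paper; in this study the input is a low-dose projection from
# P_ld and the target is the paired reference-dose projection from P_ref).

def cgan_losses(d_real, d_fake, generated, target, alpha=50.0):
    """Discriminator and generator losses for a conditional GAN.

    d_real: D(x, y)       -- discriminator score on a (low-dose, reference) pair
    d_fake: D(x, G(x, z)) -- discriminator score on a (low-dose, generated) pair
    """
    eps = 1e-12  # numerical safety for the logarithms
    # Discriminator maximizes log D(x, y) + log(1 - D(x, G(x, z))),
    # i.e. minimizes the negative of that sum.
    loss_d = -(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # Generator minimizes the adversarial term plus an alpha-weighted L1 term.
    loss_l1 = np.mean(np.abs(target - generated))
    loss_g = -np.log(d_fake + eps) + alpha * loss_l1
    return loss_d, loss_g

# Toy example: a "generated" projection uniformly 0.01 away from its target.
rng = np.random.default_rng(0)
target = rng.random((8, 8))
generated = target + 0.01
loss_d, loss_g = cgan_losses(d_real=0.9, d_fake=0.2,
                             generated=generated, target=target)
```

With these toy scores, the L1 term is exactly 0.01, so the α = 50 weighting contributes 0.5 to the generator loss, illustrating how heavily pix2pix weights pixel fidelity against the adversarial term.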
Pix2pix was developed to solve the following problem, where α controls the relative importance of the two objectives, L_GANc and L_GANl1. In this study, α was set to 50, the initial learning rate was set to 0.0002, and the momentum parameters were set to β1 = 0.5 and β2 = 0.999 [25]. In pix2pix, we used the Adam optimization algorithm [41] with a batch size of 1. Appendix A (Table A1) shows the architecture of the building components.

Optimization Parameters for Epochs

The optimization epochs in the pix2pix network were evaluated based on the mean square error (MSE) [42] and structural similarity (SSIM) [43] for the projection image (straight-on to the detector; 0 degrees). The MSE of the identified projection image can be obtained as follows, where I_DBT_ref(i, j) is the (i, j)th entry of the reference dose projection image, and I_DBT_low(i, j) is the (i, j)th entry of the low dose projection image in each epoch. The SSIM index between pixel values i and j was calculated as follows, where L_umi is the luminance, C_ont is the contrast, and S_tru is the structure (ε = φ = η = 1.0). The mean SSIM (MSSIM) was then used to evaluate the overall image quality as follows, where i_r and j_r are the image contents at the rth pixel and Q is the number of pixels in the image. Optimization was evaluated based on the MSE and MSSIM for a 40 mm phantom thickness. The epoch with the lowest MSE and highest MSSIM was selected as the optimum parameter.

Evaluation of Image Quality

The DBT system-derived real projection data were used for FBP reconstruction. We used MATLAB (MathWorks, Natick, MA, USA) to reconstruct and process all images (custom scripts for the MATLAB environment). For each phantom image, we calculated the full width at half-maximum (FWHM), contrast-to-noise ratio (CNR), peak signal-to-noise ratio (PSNR), and SSIM in the in-focus plane to evaluate the effects of each processing method.
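The MSE and SSIM expressions referenced above are not reproduced in this excerpt. A minimal NumPy sketch, assuming the usual definitions (and a single global window for the luminance, contrast, and structure terms of SSIM, rather than the windowed form; the stabilizing constants are our own addition), is:

```python
import numpy as np

def mse(ref, low):
    """Mean squared error between reference- and low-dose projections."""
    ref, low = np.asarray(ref, dtype=float), np.asarray(low, dtype=float)
    return np.mean((ref - low) ** 2)

def global_ssim(x, y, c1=1e-8, c2=1e-8):
    """Simplified single-window SSIM: product of luminance, contrast, and
    structure terms with exponents epsilon = phi = eta = 1, as in the text.
    The small constants c1, c2 are our own numerical stabilizers."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    luminance = (2 * mx * my + c1) / (mx**2 + my**2 + c1)
    contrast = (2 * np.sqrt(vx * vy) + c2) / (vx + vy + c2)
    structure = (cov + c2 / 2) / (np.sqrt(vx * vy) + c2 / 2)
    return luminance * contrast * structure

# Identical images give zero error and (near-)perfect similarity.
img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
```

Averaging `global_ssim` over local windows of the image would recover the MSSIM described in the text.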
The target images for FWHM, CNR, PSNR, and SSIM were evaluated by selecting different in-focus planes in the longitudinal direction. For the FWHM in the in-focus plane (0.196, 0.23, and 0.29 mm; CaCO3), the derived spatial resolution was evaluated as a quantitative measure of the reconstructed image quality. Subsequently, the FWHMs were measured for selected intensity profiles intersecting the five MCs on reconstructed DBT slices. Next, four neighboring profiles (i.e., parallel and perpendicular to the X-ray sweep direction) were summed to obtain an intensity profile. In addition, the contrast was derived from the CNR in the in-focus plane (3.9 and 4.7 mm ϕ; spheroidal masses [epoxy resin]) and used to quantitatively measure the reconstructed image quality. Tomosynthesis applications frequently use the CNR to estimate low-contrast detectability. In this study, we defined the CNR as CNR = (µ_Feature − µ_BG)/σ_BG, where µ_Feature represents the mean object pixel value, µ_BG represents the mean pixel value of the background area, and σ_BG represents the standard deviation of the background pixel values (set up at four locations around the masses: up, down, left, and right). Of these parameters, the latter includes both photon-statistics and electronic noise, as well as structural noise that might obscure the object of interest. For all regions of interest (ROIs) used to measure the CNR, the sizes were adjusted to the internal signal (circular ROI, 3.9 mm [diameter: 28 pixels], 4.7 mm [diameter: 40 pixels]). To assess the improvement of image quality on each in-focus plane image, the conventional algorithms (FBP reconstruction from the MSBF-processed projections) were compared. PSNR represents the ratio between the maximum power that a signal can take and the noise that causes degradation, which affects the reproducibility of image quality on each in-focus plane image.
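The CNR definition described above (mean feature signal minus mean background signal, divided by the background standard deviation) can be sketched directly in NumPy. Pooling the four background ROIs into one sample is our own simplification of the described setup.

```python
import numpy as np

def cnr(feature_roi, background_rois):
    """CNR = (mu_Feature - mu_BG) / sigma_BG, with the background mean and
    standard deviation pooled over the surrounding background ROIs (the
    text places four of them around each mass: up, down, left, right)."""
    feature = np.asarray(feature_roi, dtype=float)
    background = np.concatenate(
        [np.ravel(r) for r in background_rois]).astype(float)
    return (feature.mean() - background.mean()) / background.std()

# Toy example: a uniform bright 5x5 "mass" against four noisy background
# patches (values and noise level are illustrative only).
rng = np.random.default_rng(1)
mass = np.full((5, 5), 0.8)
bgs = [0.5 + 0.05 * rng.standard_normal((5, 5)) for _ in range(4)]
value = cnr(mass, bgs)  # roughly (0.8 - 0.5) / 0.05
```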
The PSNR was defined as PSNR = 10·log10(PV²/MSE). We used a PV value of 1.0 because we assumed that the image data (single-precision floating point) were in the range [0, 1.0]. The MSE was calculated between the reference dose and low dose FBP images. This study compared the performance of pix2pix pre-reconstruction processing with that of MSBF pre-reconstruction processing. Here, the parameter setting (σ_d) was a determinant of the image quality. Except for σ_d, all other set values were as previously reported [18]. In this study, the parameter σ_d was chosen to be 1.0 from the perspective of contrast preservation, in accordance with a previous study (α: 0.375, w_α: 5 × 5 Laplacian filter, σ_r: 0.01) [18]. The impulse shape of each reconstructed image was restored using two-dimensional image filtering, which multiplied the Fourier transform by a Ramachandran-Lakshminarayanan kernel, which generally produces precise 3D reconstruction images [11]. In this study, we compared the FWHM values with and without MSBF pre-reconstruction processing at different radiation doses between the four groups (reference dose, low dose without MSBF pre-reconstruction processing, low dose with pix2pix pre-reconstruction processing, and low dose with MSBF pre-reconstruction processing). The numbers of samples in the groups were reference dose (0.

Optimization Parameters

After measuring the MSE and SSIM of each training network at different phantom thicknesses (40, 50, and 60 mm) and radiation doses (approximately 50% and 75% reduction of AECrd), the optimal epoch was selected at the lowest MSE and highest SSIM. Using the results of the optimization verification, each training network image was generated by setting 300 epochs for pix2pix and then evaluated and compared with the images obtained using the conventional approach with and without MSBF pre-reconstruction processing (Figure 1). The training was performed on a TITAN RTX (24 GB of memory) GPU.
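Assuming the standard peak signal-to-noise ratio definition implied in the text (with PV = 1.0 for data normalized to [0, 1]), a minimal NumPy sketch is:

```python
import numpy as np

def psnr(ref, low, peak=1.0):
    """PSNR = 10 * log10(PV^2 / MSE); PV = 1.0 for [0, 1]-normalized data."""
    ref, low = np.asarray(ref, dtype=float), np.asarray(low, dtype=float)
    mse = np.mean((ref - low) ** 2)
    if mse == 0.0:
        return np.inf  # identical images
    return 10.0 * np.log10(peak**2 / mse)

# Example: a uniform 0.1 error over a [0, 1]-normalized image gives
# MSE = 0.01, i.e. PSNR = 10 * log10(1 / 0.01) = 20 dB.
ref = np.linspace(0.2, 0.8, 25).reshape(5, 5)
low = ref + 0.1
```

Higher PSNR thus directly tracks a smaller MSE against the reference FBP image, which is how the dose-reduced reconstructions are ranked in the results below.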
The total calculation time required to process pix2pix was 13.63 h (300 epochs). Figures 2-4 show the reconstructed images of the BR3D phantom acquired with pix2pix pre-reconstruction processing and each of the established methods for reconstruction with and without MSBF pre-reconstruction processing at reduced radiation doses (approximately 50% and 75% reduction of AECrd) and the reference radiation dose. Remarkably, the DBT images produced using pix2pix pre-reconstruction processing showed reduced noise and preserved contrast in the radiographic vertical and horizontal directions, specifically in the peripheral regions of the MCs and masses. On the other hand, images produced with the help of MSBF pre-reconstruction processing demonstrated noise. Comparing pix2pix pre-reconstruction processing with the conventional approaches, the difference in noise reduction was smallest with MSBF pre-reconstruction processing. MSBF pre-reconstruction processing showed a certain reduction in noise, but the loss of contrast around the MCs was remarkable. Without MSBF pre-reconstruction processing, noise levels increased as the radiation dose was reduced, resulting in a deterioration in image quality. For MCs of 0.23 mm or greater, the reference dose and pix2pix pre-reconstruction processing (approximately 50% and 75% reduction of AECrd) showed equal average and median characteristics, but at 0.19 mm with an approximately 75% reduction of AECrd, the result deteriorated in the horizontal direction for 50 mm or greater. With pix2pix pre-reconstruction processing at up to an approximately 50% reduction of the AECrd, the structure of the MCs was preserved regardless of the BR3D phantom thickness or MC size, as compared with the reference dose.
A comparison between the reference dose and processing without MSBF (approximately 50% and 75% reduction of AECrd) showed comparable mean and median characteristics for MCs greater than 0.23 mm; at 0.19 mm, deterioration was observed in the vertical direction at all BR3D phantom thicknesses. Comparisons between the reference dose and MSBF pre-reconstruction processing (approximately 50% and 75% reduction of AECrd) showed deterioration at all BR3D phantom thicknesses and MC sizes; in particular, the result was affected at sizes of 0.23 mm or greater. For MCs of all sizes, the differences in FWHM between pix2pix pre-reconstruction processing and both the reference dose and processing without MSBF (approximately 50% and 75% reduction of AECrd) were not statistically significant (Tables 1-3), whereas the differences between pix2pix pre-reconstruction processing and MSBF pre-reconstruction processing (approximately 50% and 75% reduction of AECrd) were statistically significant (p < 0.05; Tables 1-3). These FWHM results indicate that pix2pix pre-reconstruction processing preserved the MC-containing areas of the BR3D phantom. Figures 8 and 9 show the whole image area of the BR3D phantom and plots of the SSIM and PSNR results. With regard to similarity to and error from the reference dose, pix2pix pre-reconstruction processing showed high similarity and low error under all conditions, regardless of the low dose level (approximately 50% and 75% reduction of AECrd) or BR3D phantom thickness. Regarding PSNR, with pix2pix pre-reconstruction processing, the PSNR decreased and errors increased in parts of the in-focus planes (Figure 9a) at approximately 75% reduction of AECrd. MSBF pre-reconstruction processing showed higher similarity than processing without pre-reconstruction processing, but its error was large.
Figure 10 depicts the placement of the ROI over an image of the BR3D phantom and a plot of the CNR results. With regard to the contrast of masses, the CNR was highest with MSBF pre-reconstruction processing, followed by pix2pix pre-reconstruction processing, while processing without MSBF showed the lowest contrast characteristics for the 4.7 mm mass. For pix2pix, the CNR under dose reduction (approximately 50% and 75% reduction of AECrd) was equivalent to that of the reference.

Image Quality

There was no deterioration in image quality with pix2pix pre-reconstruction processing under the low dose condition (approximately 50% reduction of AECrd) associated with changes in BR3D phantom thickness in terms of FWHM, SSIM, and MSE, except for the contrast of masses. This result indicates that pix2pix may be useful as a radiation-dose-reduction technology without a subsequent deterioration in image quality.

Figure 10. Plots of the contrast-to-noise ratio (CNR) vs. pre-reconstruction processing, with and without multiscale bilateral filtering (MSBF) pre-reconstruction processing from the in-focus plane. Comparisons of the CNR of the in-focus plane images obtained via the reference and low dose [approximately 50% and 25% of automatic exposure control reference dose (AECrd)] with and without pre-reconstruction processing at varying phantom thicknesses. (a) 4.7 mm mass; (b) 3.9 mm mass. The in-focus plane image shows the mass and background areas used for the CNR. h: approximately 50% reduction of AECrd; q: approximately 75% reduction of AECrd. (Tukey-Kramer test; p < 0.05 indicates a significant difference, *: significant, NS: not significant).

Discussion

Our experimental results clearly demonstrated the ability of pix2pix pre-reconstruction processing to improve the quality of DBT images under the low dose condition.
In this study, compared with existing MSBF pre-reconstruction processing, the in-focus plane intensities of pix2pix pre-reconstruction processing showed improved spatial resolution and similarity and reduced image error, without deterioration of the MC images across the whole image. Furthermore, pix2pix pre-reconstruction processing has the potential to reduce the radiation dose by approximately 50% relative to the AECrd. Thus, pix2pix pre-reconstruction processing is a promising new option for image denoising, as it generated noise-reduced images at reduced radiation doses that were far superior to images processed using conventional algorithms. The flexibility of pix2pix pre-reconstruction processing in the choice of imaging parameters, based on the desired final images and the low dose DBT imaging conditions, promises increased usability. The projection-space combination of adversarial training approaches described here can be used to generate images that formulate denoising as a deep learning projection completion problem, with the aim of improving the generalization and robustness of the framework. Because the direct regression of accurate projection data is difficult [17,18], we propose incorporating a prior projection image generation procedure and adopting a combination of adversarial networks and a projection completion strategy. This method can improve image quality by reducing noise while preserving masses and normal structures, addressing common drawbacks of projection completion-based adversarial training methods. Therefore, we believe that our adversarial training approach could effectively reduce noise in actual practice. The ability of pix2pix pre-reconstruction processing to obtain denoised, contrast-preserving images and to reduce the radiation dose by approximately 50% relative to the AECrd (Figures 5-9) may be due to the benefits of the first process, pix2pix.
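For context, the adversarial objective that pix2pix trains can be written out. This is the standard formulation from the original pix2pix work (a conditional GAN loss plus an L1 term), given here as general background rather than as the exact configuration used in this study:

```latex
\mathcal{L}_{cGAN}(G,D) = \mathbb{E}_{x,y}\big[\log D(x,y)\big]
  + \mathbb{E}_{x}\big[\log\big(1 - D(x, G(x))\big)\big],
\qquad
G^{*} = \arg\min_{G}\max_{D}\; \mathcal{L}_{cGAN}(G,D)
  + \lambda\,\mathbb{E}_{x,y}\big[\lVert y - G(x)\rVert_{1}\big],
```

where x is the low dose projection image, y is the paired reference dose projection image, G is the generator and D is the discriminator. The L1 term pulls the translated projection toward the reference dose target, which is consistent with the structure preservation observed above.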
The image-to-image translation framework of pix2pix requires fully paired images. pix2pix differs from conventional noise reduction by reconstruction/processing because it can preserve structures while reducing noise, thereby solving this problem. pix2pix has a generator that attempts to minimize the conditional adversarial objective against a discriminator that tries to maximize it. The generators use a U-Net [44] structure, and the two discriminators have a PatchGAN-based structure [45] for learning. By applying the style of the reference dose during the translation process, a low dose projection image can be translated toward a reference dose projection image. In MSBF pre-reconstruction processing, Laplacian pyramid decomposition (LPD) is used to achieve multiscale structure decomposition. However, this function is not unique to LPD, and other multiscale analysis methods may be sufficient. In this regard, however, the MCs detected via MSBF pre-reconstruction processing may not have strong directional geometric features; therefore, a directional multiscale analysis method, such as the wavelet transform, may not be superior to the LPD method [18]. Image artifacts from MCs lead to the appearance of noticeable objects comprising artifact-free voxels that contrast with the background. These artifacts from MCs are a drawback of the FBP algorithm and are conspicuous when images generated using this method are compared with artifact-free images. Therefore, based on the results of this study, future research should consider conducting evaluations using the IR algorithm. There were some limitations to this phantom study. First, the materials constituting the BR3D phantom were only simulations of the mammary gland, because actual mammary gland tissues were not tested. Nevertheless, we suppose that the consistency of the BR3D phantom makes it an adequate representation of actual mammary gland tissue.
Second, we did not perform a clinical study using human subjects; the utility of pix2pix pre-reconstruction processing was confirmed only by basic evaluation. In a future observational study, we plan to investigate the correlation between spatial resolution and contrast. We believe that pix2pix pre-reconstruction processing will allow optimization of dose use in future DBT imaging and radiation-dose-reduction technology and improve the accuracy of medical images. Third, the experiment was evaluated on a single-vendor system; we think a study using a multi-vendor system is necessary. Fourth, the optimization of the projection data analysis of MSE and SSIM was for the central projection only; the performance at other angles will depend on the object's shape, so evaluations checking the sensitivity of the optimization to the projection view angle with respect to the detector will be relevant. Fifth, only the in-focus slice of the reconstruction volume was evaluated; consideration of out-of-plane features/artifacts is necessary. Sixth, only the FBP algorithm was used, without optimization (kernel: Ramachandran and Lakshminarayanan), which leaves opportunities for future work (i.e., evaluation of IR algorithms). Seventh, we believe that the CNR evaluation of masses is limited and requires assessment using a wider variety of sizes, shapes and margin types (e.g., smooth, spiculated) to improve accuracy. In addition, we considered using more advanced methods such as a detectability index [27], in which the influence of anatomical noise can be included.

Conclusions

This phantom study revealed that an approximately 50% reduction in radiation dose is feasible using our proposed pix2pix pre-reconstruction processing.
The pix2pix pre-reconstruction processing was particularly useful in reducing noise and yielded results equivalent to those of the reference dose, with no significant differences in the statistical results (0.196 mm: p = 0.996; 0.23 mm: p = 0.886; 0.29 mm: p = 0.321) in terms of preserving the structure of MCs across varying phantom thicknesses. Thus, pix2pix shows promise for integration into the clinical application workflow to reduce image noise while maintaining image quality in breast tomosynthesis. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: All relevant data are within the manuscript and its non-published material files. Conflicts of Interest: The authors declare no conflict of interest.
Determinants of global coffee trade: Do RTAs matter? Gravity model analysis

Abstract

This study investigates the patterns of global coffee trade flows and identifies the major determinants of global coffee trade by incorporating RTAs as an important variable. Gravity modeling with OLS and PPML estimators was employed for the analysis, using panel data on bilateral coffee trade flows of 18 major coffee exporters and 201 trading partners for the period 2001-2015. Both exporter GDP (and population) and importer GDP were found to be important determinants enhancing coffee trade. Of the bilateral distance variables, physical distance was found to impede coffee trade, while a common border was found to enhance it. Cultural (distance) variables like colonial links, a common colonizer and a common language were also found to enhance coffee trade. Other variables found to significantly enhance coffee trade include depreciation in the exporting country's exchange rate, the amount of arable land in the exporting country, infrastructure and the global financial crisis. On the other hand, the importing country's tariff was found to significantly reduce coffee trade, as expected. Surprisingly, the RTA variable had no significant impact on bilateral coffee trade.

PUBLIC INTEREST STATEMENT

This study investigates the patterns of global coffee trade flows and identifies the major determinants of global coffee trade by incorporating RTAs as an important variable. The results highlight that both exporter GDP (and population) and importer GDP were found to be important determinants enhancing coffee trade. Physical distance was found to hamper coffee trade, while a common border was found to enhance it. Cultural (distance) variables like colonial links, a common colonizer and language were also found to enhance coffee trade.
Other variables found to significantly enhance coffee trade include depreciation in the exporting country's exchange rate, the amount of arable land in the exporting country and infrastructure. On the other hand, the importing country's tariff was found to significantly reduce coffee trade, as expected. The RTA variable had no significant impact on bilateral coffee trade.

Introduction

Coffee is produced in more than 50 developing countries, providing income for approximately 25 million farmers (Petit, 2007). This makes coffee production one of the main cash crop sectors and a significant source of livelihood, employment and foreign exchange in most developing countries. There are countries known for coffee production on a global scale. In this respect, countries from Latin America, Asia, and Sub-Saharan Africa constitute the largest producers of green coffee in terms of value and quantity (FAO, 2016). Accordingly, Brazil was the main producer of green coffee beans in 2013, followed by Vietnam, Indonesia, Colombia, and India. Importantly, the coffee sector accounts for a significant proportion of foreign earnings in some of the countries listed. For instance, in countries such as Ethiopia, Uganda, and Burundi, the share of coffee exports relative to total export earnings exceeded 10% in 2013, although the importance of coffee for other countries is lower (FAO, 2016; World Bank, 2015). Moreover, coffee is the second most internationally traded commodity in terms of volume, next to crude oil, and one of the most valuable globally traded agricultural commodities in human history. Spreading out from Ethiopia in the Horn of Africa, it is now produced in over seventy countries worldwide. Over 2.25 billion cups of coffee are consumed in the world every day (Ponte, 2002). It is the world's most widely traded tropical agricultural commodity, with an estimated export value of US$ 18.4 billion in 2014/15 (Fair Trade Foundation, 2016).
Over 90% of coffee production takes place in developing countries, while consumption happens mainly in the industrialized economies. Three countries alone have produced around 55% of the world's coffee in recent years: Brazil (32%-34%), Vietnam (12%-13%) and Colombia (8%-9%). In 2010, total coffee sector employment was estimated to be about 26 million people in 52 producing countries (International Trade Centre, 2011; Maurice & Davis, 2011). In total, around 125 million people worldwide depend on coffee for their livelihoods (Fair Trade Foundation, 2012). Coffee exports are not only an important contributor to foreign exchange earnings but also account for a sizable proportion of the tax revenues and gross domestic product of many countries. For instance, during the period 2005-2010, the average share of coffee in total export earnings for eight coffee-producing countries exceeded 10% (International Trade Centre, 2011). Given the importance of coffee in global trade and its contribution to the national economies of many countries, it is worthwhile to evaluate the evolution of the sector in global trade by analyzing the determinants of its bilateral trade flows. The analysis of the determinants of bilateral trade has been the subject of much research in the past. Studies that analyze the determinants of trade flows commonly employ the gravity model. Since the seminal works of Tinbergen (1962) and Pöyhönen (1963), the gravity model has been a widely used approach to explain bilateral trade. The model postulates that trade is boosted by the economic size of the trading partners and negatively influenced by the geographical distance between them. Other observable characteristics of each pair of countries that affect bilateral trade flows, such as having a common language or sharing a common border, can be added to augment the gravity model (Agostino et al., 2007; Disdier & Head, 2008; Leamer & Levinsohn, 1995).
More importantly, gravity equations have been extensively employed in the analysis of the impact of trade policies, including evaluation of the impact of FTAs on trade (Abrams, 1980; Aitken, 1973; Baier & Bergstrand, 2007; Bergstrand, 1985; Brada & Mendez, 1985; Frankel, 1997; Frankel et al., 1995). While much of the available literature has focused on total or aggregate trade flows, another strand of research has attempted to explore trade at a disaggregated level. The latter mainly focuses on analyzing trade flows at a more disaggregated level, such as manufacturing trade, trade in parts and components, and agricultural trade, by applying a gravity model. In particular, there are quite a number of studies that have assessed agricultural trade. However, most of these studies also focused on total agricultural trade (exports or imports). Only a few studies have attempted to evaluate trade at the product level (Agostino et al., 2007; Aiello & Cardamone, 2010; Nguyen & Arcand, 2009). Often, research that uses the gravity model relies on aggregate trade data between countries. However, Anderson and van Wincoop (2004) and Nguyen (2020) state that analyses with disaggregated trade data are also plausible and necessary, since there are great sectoral variations in terms of trade flows and costs. The use of aggregate trade data masks several key issues. In particular, the responsiveness of aggregate exports to changes in explanatory variables may not explain the behavior of exports at a more disaggregated level, for instance, coffee trade flows. Sectoral or more disaggregated trade in goods may matter for many reasons. First, it may matter for growth if some sectors are growth drivers. Second, factors constraining growth may be more easily identified at the sectoral level. Third, many policies are framed for specific products that are not identified among the relatively aggregated sectors. Finally, most trade policy negotiations are conducted at highly disaggregated "tariff lines".
The existing literature mainly focuses on export trade patterns at the aggregate level, which can yield misleading conclusions for countries specializing in a commodity-specific export item, for instance, the coffee sector. Such a research gap in the current literature should be filled in order to allow effective trade policy design and implementation in the coffee sector. Our study is based on a set of panel data, with evidence from 18 coffee exporting and 201 importing countries at different levels of development, and we believe our findings will have broad applicability. The purpose of this paper is, therefore, to examine the patterns of bilateral trade at the product level for coffee. Accordingly, the paper assesses the determinants of global coffee trade flows among major coffee exporting countries and their coffee importing partners. Knowledge of the world coffee trade pattern allows both importing and exporting countries to decide on trade and market strategies that may increase the gains from trade. In particular, the paper aims at exploring the important factors that affect coffee exports. The paper presents new evidence regarding the evolution of global coffee trade between 2001 and 2015 for 18 major coffee exporting countries. Moreover, the current paper seeks to contribute to the literature by providing new evidence on the trends and determinants of global coffee trade for the major coffee exporting countries in the world. As such, the paper builds on the previous studies mentioned above so as to provide comprehensive evidence regarding the recent global coffee trade pattern. In particular, the paper extends the work of De Almeida et al. (2012) by considering a more representative sample with a broader regional coverage, which has been absent so far.
In doing so, the study makes use of bilateral coffee trade data for the period 2001-2015 for 18 major coffee producing and exporting nations that represent over 90% of world coffee exports in recent years. Moreover, the paper also tries to address the problems and biases arising from all sources of endogeneity (selection, simultaneity and measurement error), as well as those arising from the aggregation of trade flows. The key contribution of this study is therefore twofold. First, the study provides valuable empirical contributions to the existing literature at the disaggregated product level by analyzing which factors improve or impede the coffee trade pattern in the world. Second, results from the study are expected to provide empirical support for policy design and implementation in the coffee sector that aims to increase trade performance through the creation of different forms of trade integration. The article proceeds as follows. The next section provides a brief review of the existing literature. This is followed by an outline of the empirical specification of the gravity model and the estimation strategy; issues pertaining to data are also discussed in this section. Next, the estimation results are discussed. The final section concludes the paper.

Literature review

There are a number of empirical studies that evaluate bilateral trade flows between countries. These studies indicate that empirical analysis of bilateral trade flows can be appropriately carried out using the gravity model. As Bayoumi and Eichengreen (1997) noted, the gravity equation has long been a workhorse for empirical studies of the pattern of trade. In particular, the gravity equation has been widely employed in international trade to study the ex post effects of FTAs and customs unions on bilateral merchandise trade flows.
The gravity equation is typically used to explain cross-sectional variation in country pairs' trade flows in terms of the countries' incomes, bilateral distance, and dummy variables for common languages, common borders, and the presence of an FTA (Baier & Bergstrand, 2007). Tinbergen (1962) was the first to use the gravity equation to evaluate the effect of an FTA on bilateral trade flows. His results suggested economically insignificant "average treatment effects" of FTAs on trade flows. Since then, various studies have produced quite mixed results. Aitken (1973), Abrams (1980), and Brada and Mendez (1985) found the European Community (EC) to have a significant effect on trade flows among member countries, whereas Bergstrand (1985) and Frankel et al. (1995) found insignificant effects. Frankel (1997) found significant positive effects from MERCOSUR, insignificant effects from the Andean Pact, and significant negative effects from membership in the EC. Frankel (1997) and Oguledo and MacPhee (1994) also provide an extensive summary of the coefficient estimates of FTA effects from various other studies. A major concern that seems not to be adequately addressed by the majority of the earlier studies relates to the issue of endogeneity, which mainly stems from the assumption that the FTA is exogenous. According to Baier and Bergstrand (2007), the FTA dummies are not exogenous random variables. In fact, countries are likely to self-select into FTAs for unobservable reasons that are possibly correlated with the level of trade. In addition to the endogeneity problem, the issue of what are termed "multilateral resistance terms" also warrants attention. According to Anderson and van Wincoop (2004), bilateral trade is determined by relative trade costs, not simply by absolute trade costs. They rigorously show that controlling for relative trade costs is crucial for a well-specified gravity model.
To explain this, consider the impact on trade between countries i and j of a change in trade costs between countries i and k, say, as in the case where countries i and k enter into a preferential trade agreement that lowers tariffs on their respective goods. Such a change may well impact the trade of country j, even though it is not a party to the agreement. Such heterogeneity is often unobservable and hence results in biased estimates for the RTA coefficient in the gravity equation. Many papers have tried to address such biases in different ways. Some used instrumental variable (IV) techniques (Trefler, 1993; Ghosh & Yamarik, 2004; Lee & Swagel, 1997). However, Baier and Bergstrand (2007) argue that most of the available instrumental variables used in the empirical analysis of trade agreements are not convincing. Using a wide array of feasible instrumental variables, they conclude that IV estimation is not a reliable method for addressing the endogeneity bias of the RTA binary variable in the gravity equation. They argue that the likely problem is that the vast number of variables that are correlated cross-sectionally with the probability of having an RTA are also correlated with trade flows, which prevents the elimination of endogeneity bias. Another widely used procedure to address such issues consists of using country fixed effects for importers and exporters (Baldwin & Taglioni, 2006; Rose & Van Wincoop, 2001). Other studies have assessed the impact of non-reciprocal preferential trade policies (NRPTPs) on the export performance of developing countries (Oguledo & MacPhee, 1994; Goldstein et al., 2003; Nouve & Staatz, 2003; Lederman & Özden, 2004; Subramanian & Wei, 2005; Verdeja, 2006; Agostino et al., 2007). Agostino et al. (2007) note that the gravitational approach offers a framework for assessing whether NRPTPs affect bilateral trade flows between beneficiaries and donors.
The gravity model posits that the normal level of trade is positively affected by the economic mass of the trading countries (richer and larger nations both export and import more) and negatively influenced by the geographical distance between them. These authors, however, raise two major problems associated with the previous studies. The first relates to the focus on total trade (exports) from the beneficiary to the preference-giving countries. Second, the mentioned studies model NRPTPs by augmenting the gravity model with a preference dummy. Agostino et al. (2007) argue that such methodological choices are misleading. The reason for this stems from the belief that the objective of NRPTPs is not to affect the total trade of the beneficiaries, but rather to alter the incentives for developing countries to export more in the specific sectors in which preferences are granted. Accordingly, they hypothesize that when overall exports are considered, the impact of NRPTPs might be underestimated, especially if the export shares of the sectors that account for a higher margin of preference are small. Generally, empirical evidence pertaining to trade at the sectoral level has been relatively thin compared with that at the aggregate level, though there has been growing interest in the area in recent years. There are quite a few studies that have assessed trade in agricultural products. Some of these focused on total/aggregate agricultural trade flows (Hatab et al., 2010; Kim et al., 2004), while others dealt with a more disaggregated (commodity/product-level) analysis (Gebrelibanos, 2005; Getachew, 2009; Jayasinghe & Sarker, 2008; Jordaan & Eita, 2007; Nguyen & Arcand, 2009). However, coffee, as an important agricultural commodity, has not been extensively analyzed in the framework of gravity modeling. To the knowledge of the authors, only three studies (Agostino et al., 2007; Aiello & Cardamone, 2010; De Almeida et al., 2012) have attempted to analyze bilateral coffee trade flows using a gravity equation. In particular, De Almeida et al. (2012) attempted to assess the impact of non-tariff measures (Technical Barriers to Trade (TBT) and Sanitary and Phytosanitary (SPS) notifications) on international coffee trade using data for the period 1996-2010. That study is particularly relevant, as it is the only one that has attempted to directly address the research issue which the current study seeks to address. However, it has a couple of limitations. First, with only five countries considered, the country coverage does not appear sufficient to provide a complete picture of global coffee trade. Moreover, the fact that three of the five sample countries were from Latin America may fall short of capturing the regional variability in coffee exports and hence render the sample non-representative. Furthermore, for none of these countries was coffee a major export earner during the period under study. The remaining two studies (Agostino et al., 2007; Aiello & Cardamone, 2010) analyzed agricultural trade at different levels of aggregation with the aim of assessing the impact of preferential trading agreements on LDC exports. These papers employed gravity model estimates for different product categories and different levels of aggregation, including coffee. However, the main goal of both studies was not to analyze global coffee trade patterns. In particular, Agostino et al. (2007) were mainly concerned with assessing the effect of non-reciprocal trade preferences, with emphasis on finding out whether or not aggregation matters. On the other hand, Aiello and Cardamone (2010) mainly focused on assessing the effectiveness of the Everything but Arms (EBA) initiative launched by the EU in 2001.
The study conducted by Nsabimana and TafesseTirkaso (2019) examines the impact and implications of the East African Community and the Common Market for Eastern and Southern Africa preferential trade agreements on the coffee export performance of eight East and Southern African countries by employing a static and dynamic gravity modeling framework for the period 1998-2013. They found that regional trade agreements play a vital role in increasing coffee trade in East and Southern African countries, and that factors including geographical distance, income, and population size in the importing and exporting countries are also statistically significant determinants of coffee exports. The study also found that the exporting countries are currently under-performing with respect to their maximum potential in the global market, indicating room for improvement. At the more aggregate level, Ekanayake et al. (2010) analyzed the trade creation and trade diversion effects of regional trade agreements (RTAs) in Asia and their effects on intra-regional trade flows using annual trade data for the period 1980-2009, and found that the coefficients of real GDP, population, and distance significantly affected export trade flows in Asian countries.
Empirical framework: the gravity model

As pointed out by Nsabimana and TafesseTirkaso (2019), the traditional gravity modeling approach relates trade flows to different economic indicators and distances, such as the national income of the trading countries and the physical distances between them. Meanwhile, recent studies have rigorously expanded the model by incorporating other economically meaningful variables such as potential markets (a proxy for population size), trading barriers, country integration, community membership and other sets of dummies representing shared cultural and historical characteristics (see Melitz, 2003; Martínez-Zarzoso, 2013; Dal Bianco et al., 2015; Saggi et al., 2018). This section provides the econometric specification of the gravity equation. A gravity model states that the trade flows between two countries (exports or imports) can be explained by three kinds of variables. The first group of variables describes the potential demand of the importing country (importer GDP, importer population), the second considers the supply conditions in the exporting country (exporter GDP, exporter population) and the third consists of all the factors that may hinder or favor the bilateral trade flow (i.e., distance, common border, language, past colonial ties, religion, and other relevant variables). Accordingly, the model can be specified in its general form as follows (Baier & Bergstrand, 2007; Agostino et al., 2010; Hatab et al., 2010; Dal Bianco et al., 2015):

X_ijt = β0 · Y_it^β1 · Y_jt^β2 · P_it^β3 · P_jt^β4 · D_ij^β5 · exp(β6·L_ij + β7·B_ij + β8·C_ij + γ′Z_ijt) · u_ijt (1)

where the subscript i refers to the exporters, j to the importers and t to time. X is the export flow (coffee), Y is the gross domestic product, P is the population instrumented by potential markets and D is the distance between the capital cities. u_ijt is the error term. To control for observable country-pair specific factors affecting bilateral trade, the model includes some dummy variables.
In particular, L and B are two binary variables set to unity if the trade partners share a common language or border, respectively. C is a binary variable which is unity if country i was a colony of country j. Finally, Z is a vector of variables that possibly affect bilateral coffee trade flows (such as the exchange rate and supply-side constraints like infrastructure, governance and other related variables). For the purpose of estimation, the model in Equation (1) is expressed in log form as:

ln X_ijt = β0 + β1 ln Y_it + β2 ln Y_jt + β3 ln P_it + β4 ln P_jt + β5 ln D_ij + β6 L_ij + β7 B_ij + β8 C_ij + β9 Z_ijt + u_ijt   (2)

Most studies that focus on bilateral trade also seek to assess the impact of Regional Trade Agreements (Baier & Bergstrand, 2007; Kepaptsoglou et al., 2010; Nsabimana & TafesseTirkaso, 2019). This study does the same by including an RTA variable in the gravity specification as a dummy set to unity if i and j belong to the same RTA (such as EFTA, NAFTA, COMESA or a bilateral agreement between the trading countries), and zero otherwise. Following the line of argument advanced by Baier and Bergstrand (2007), we examine the impact of RTAs on coffee trade while taking care of the endogeneity problem discussed above. After including the RTA dummy variable, the resulting augmented gravity model becomes:

ln X_ijt = β0 + β1 ln Y_it + β2 ln Y_jt + β3 ln P_it + β4 ln P_jt + β5 ln D_ij + β6 L_ij + β7 B_ij + β8 C_ij + β9 Z_ijt + β10 RTA_ijt + u_ijt   (3)

However, estimating Equation (3) in its current form would not account for the endogeneity of the RTA variable. As discussed in the literature review section, one way to account for endogeneity bias is to use fixed effects (bilateral fixed effects and country-and-time effects; Baier & Bergstrand, 2007). The fixed effects (FE) estimator is appropriate if the study centers on a particular set of N countries; here, FE is adopted to estimate the panel regression model while accounting for the non-randomness of the cross-section of 18 coffee exporting countries. This procedure also takes care of importers' and exporters' multilateral resistance terms (Anderson & van Wincoop, 2004).
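As an illustration of the fixed-effects approach described above, the following is a minimal sketch of a log-linear gravity regression with time and bilateral-pair fixed effects, assuming Python with statsmodels. The data frame and every column name here are hypothetical stand-ins for the COMTRADE/CEPII panel, generated synthetically for demonstration only.

```python
# Sketch: log-linear gravity OLS with time and bilateral-pair fixed effects.
# All data below are synthetic; in practice the panel would come from
# COMTRADE (exports) merged with CEPII gravity variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "exports": rng.lognormal(10, 1, n),      # bilateral coffee exports X_ijt
    "gdp_i":   rng.lognormal(24, 1, n),      # exporter GDP Y_it
    "gdp_j":   rng.lognormal(25, 1, n),      # importer GDP Y_jt
    "pop_i":   rng.lognormal(16, 1, n),      # exporter population P_it
    "pop_j":   rng.lognormal(17, 1, n),      # importer population P_jt
    "dist":    rng.uniform(500, 15000, n),   # bilateral distance D_ij (km)
    "lang":    rng.integers(0, 2, n),        # common language dummy L_ij
    "border":  rng.integers(0, 2, n),        # common border dummy B_ij
    "colony":  rng.integers(0, 2, n),        # colonial-tie dummy C_ij
    "rta":     rng.integers(0, 2, n),        # RTA membership dummy
    "year":    rng.integers(2001, 2016, n),  # period t
    "pair":    rng.integers(0, 60, n),       # bilateral pair id (i, j)
})

# C(...) terms add fixed-effect dummies; continuous regressors enter in logs
model = smf.ols(
    "np.log(exports) ~ np.log(gdp_i) + np.log(gdp_j) + np.log(pop_i)"
    " + np.log(pop_j) + np.log(dist) + lang + border + colony + rta"
    " + C(year) + C(pair)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(model.params["rta"])  # estimated RTA coefficient
```

Note that with the synthetic data the coefficients are meaningless; the sketch only shows how the fixed-effects specification of Equation (3) maps onto a regression call.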
Estimation strategy

As a starting point, we first estimate Equation (3) using OLS with various fixed effects. In these sets of estimations, we include the standard gravity variables; the discrepancy between estimates with no fixed effects and those with different fixed effects measures the relevance of unobservable country heterogeneity. Next, we run another set of OLS regressions that include additional explanatory variables (such as supply-side variables, exchange rates, tariff and non-tariff measures, etc.). As trade flows with zero values are dropped in OLS estimation, we also apply the Poisson Pseudo Maximum Likelihood (PPML) technique to estimate similar regressions, so as to avoid potential bias due to zero trade. As argued by Nguyen (2020), using FE with the basic gravity model faces some problems. First, FE loses the information on time-invariant variables such as geographical distance, common border, common language or common colonizer. Besides, the model omits zero trade flows, leading to selection bias when FE is used for export trade flows. Meanwhile, PPML can overcome the selection bias created by zero trade and the presence of heteroskedasticity. This method also recovers the impact of time-invariant variables that cannot be obtained with FE. Besides, the goodness of fit of PPML (through R2) is higher than that of FE, leading to better-fitting models. Therefore, PPML is more appropriate than FE. In this case, the parameters are obtained from the PPML panel data model, with fixed effects for time, exporters and importers.

Data and variables

This study covers the period of 2001-2015. The coffee trade statistics (exports) at the HS 4-digit level 5 (i.e., 0901 for coffee) were extracted from the United Nations COMTRADE database. To collect data from the UN Comtrade database, we primarily use the four-digit Harmonized System (HS) code.
Specifically, the HS code for "roasted and not roasted" coffee is HS 0901. It should be emphasized that the types of coffee, particularly high-quality Arabica and low-quality commodity robusta, produced and marketed by the countries in the model differ. However, due to the unavailability of specific data on international trade in each of these types, this paper is restricted to the analysis of coffee as a whole. The set of exporting countries comprises 18 major coffee exporting countries from three regions of the world: six from Africa, five from Asia and seven from the Americas. 6 These countries together contributed nearly 92% of total world coffee production on average for the period of 2007-2015 (Table 1) and roughly the same percentage of total world coffee exports for the period of 2008/09-2014/15 (Table 2). On the other hand, the number of importing countries included is 201, that is, all the countries for which trade statistics are available (the complete list is provided in Appendix A). The period under study covers the years 2001-2015. 7 Data on GDPs are obtained from the United Nations Statistics Division. Complete information on distance and other time-invariant bilateral gravity variables such as contiguity, adjacency, and so on was obtained from CEPII's 8 database. The RTA dummy is constructed based on a list of FTAs provided on the WTO website. This dummy includes RTAs notified under GATT Article XXIV as well as the Enabling Clause. The list of RTAs considered is provided in Appendix B. Other variables used in the study include real exchange rates, tariffs, Non-Tariff Measures (NTMs), and infrastructural indices for both exporting and importing countries. Appendix C provides the complete list of all the explanatory variables used in this study along with their descriptions and data sources.
As COMTRADE only reports the positive trade flows declared by each country, the issue of non-reported data needs to be handled with care. The non-reported data could either be missing (as in the case where countries do not report trade) or zero-trade values (as in the case where countries do not trade). A commonly used procedure is to consider these values as missing, and we follow the same approach.

Results and discussion

This section reports and discusses the major empirical results emerging from the analysis. Table 3 shows the results from the OLS estimation of Equation (3). Column (1) reports results without time or country fixed effects. Column (2) reports results with time fixed effects, i.e., time dummies, which are included to account for the changing nature of the relationship over time. Columns (3) and (4) show results for time and bilateral-pair (BP) fixed effects, and for bilateral-pair and country-by-time fixed effects, respectively (i.e., time-invariant importer and exporter fixed effects, and time-varying exporter and importer fixed effects, respectively). Both exporter and importer GDPs have the expected positive sign and are significant in all specifications except column (4), where importer GDP loses significance. In line with the logic behind the gravity model for aggregate trade flows, the higher the level of income (GDP) of exporting countries, the greater the capacity to produce/supply and hence the higher the trade level or exports. This suggests that an average increase in income in trading partner countries could positively affect the volume of coffee exports. However, this may not necessarily hold true for disaggregated trade like the coffee trade. De Almeida et al. (2012) found a negative and insignificant estimate for exporter GDP and stated that such findings are possible for product-level (disaggregated) analysis. In contrast, our findings somewhat conform to the widely accepted logic of gravity modeling.
In this connection, the existing literature stresses the need to consider sectoral GDPs instead of overall national GDPs for trade analysis at the disaggregated level. In fact, given that coffee is a commodity predominantly exported from low-income countries to high-income countries, it is even possible that the coefficient of exporter GDP is negative, as was actually found in one of the regressions (column 2). One possible explanation is that as countries develop, they are more likely to expand into non-agricultural sectors and hence tend to import agricultural products like coffee rather than produce and export them. This is a particularly plausible scenario since coffee exports are predominantly in the form of non-processed coffee (green coffee). According to the International Trade Centre (2011), on average, soluble coffee exports constituted about 0.2% and roasted coffee about 7.0% of total coffee exports during 2007/08-2010/11. It would therefore be quite interesting to use sectoral GDP data instead of national GDP data to identify a more appropriate effect of these variables on coffee trade flows. With regard to population, the coefficient for exporter population was positive as expected but insignificant in all equations except equation (2), while that for the importer was negative and significant in all equations. This could be due to the fact that coffee importing countries are typically high-income, low-population developed countries, possibly leading to a negative relationship between coffee exports and importing country population. It could similarly be argued that the opposite holds with regard to exporting country population. Distance is consistently negative in all equations as expected but only significant in equations (3) and (4).
That is, the greater the distance between trading country pairs, the lower the coffee trade between them, implying that the cost of transportation is an important factor in global coffee trade. This variable is a proxy for transportation costs and time, access to market information, access to markets, and other factors that make it difficult for nations to engage in bilateral coffee trade. Similarly, the variables common border, common colonizer and common language are consistently positive in all equations, in line with expectations, and significant in most equations, meaning that these variables play an important role in enhancing bilateral coffee trade. The exporter's exchange rate is positive and significant (columns 1 and 2) as expected. This implies that depreciation of the exporter country's exchange rate vis-à-vis the US$ enhances the competitiveness of its exports and hence increases the exporter country's exports. The remaining coefficient estimates were found to be statistically insignificant or to defy expectations; in particular, the negative RTA coefficients obtained in all equations contradict the findings of many past studies. However, the fact that they are consistently insignificant in all equations is quite interesting in terms of solving this puzzle. A possible explanation could be found in the fact that the bulk of coffee trade flows mainly from LDCs (which are mainly the coffee exporters) to MDCs (where coffee consumption mainly happens), while most of the RTAs considered were of an intra-regional nature. Consequently, this might lead to a possible negative relationship, provided that the intra-regional trade creation impact of the relatively large number of intra-regional RTAs considered in the sample far exceeds the amount of coffee trade created by the relatively small number of inter-regional RTAs included in our sample.
Previous studies have come up with diverse evidence on the effect of RTAs on trade expansion. Baier et al. (2015) and Carrere (2006) find that ex post assessments of RTAs show a significant increase in trade flows, and that the increase is greater for deeper RTA agreements, while Vicard (2009) argues that RTAs are by nature a predetermined occurrence and the signature of an agreement has no effect on the existing trend behavior. However, in the case of disaggregated, product-level trade such as coffee, further studies should be conducted to confirm this result. Finally, since the results for the common language variables are not fully consistent with theory, we tried to combine the two into one dummy variable (official language and ethnic language). However, the results did not show much difference after running regressions using the new dummy variable. It is worth noting that the results confirm that the initial impact on coffee exports of the global financial crisis, which originated in the US in 2008, was statistically significant and negative. It has been argued that, due to the crisis, price fluctuations were aggravated and demand showed a low income elasticity for coffee exporting countries. Arguably, the crisis might therefore have worsened the position of coffee producers in the global value chain, further reducing their coffee exports. Table 5 reports estimation results where various supply-side variables are included. The variable "percentage of arable land" was found to be positive and significant as expected. On the other hand, the variable "percentage of paved roads", which was included as a proxy for infrastructure in the exporting country, turned out to be significantly negative, contrary to expectations. This finding is perhaps suggestive of the need to consider a more appropriate measure of the quality of infrastructure.
The coefficient of the applied tariff on primary products imposed by importing countries, which was included to capture tariff measures on coffee trade flows, turned out to be negative and significant, in line with prior expectations. However, as a follow-up to this analysis, there is a need to use actual tariff data at the most detailed tariff line for coffee and also to include data on non-tariff measures (NTMs) in order to establish the actual effects of these trade barriers imposed by importing countries on global coffee trade. The results with regard to the remaining variables are quantitatively similar to those discussed earlier. Finally, Table 6 reports estimation results using PPML. While the results are qualitatively broadly similar to those discussed in Tables 4 and 5 above, they are quantitatively quite different. In particular, the PPML coefficient estimates for exporter and importer GDPs are far smaller, whereas the PPML coefficient estimates for the distance variable are somewhat larger than the OLS estimates. We therefore omit the results of Table 5 from the discussion, because the discussion of the PPML estimates of the determinants of bilateral coffee exports is similar to that of the OLS estimations. Moreover, as in the previous discussion, the estimated coefficient of the RTA dummy variable has a negative sign. This variable is expected to capture the degree of trade-diverting effects between members and non-members, compared to "normal" bilateral coffee trade flows. This result is consistent with the study of Ekanayake et al. (2010). Another pessimistic argument concerning the RTA effect on developing countries might be that the similarity of resource endowments among partner members, and the frequent failure of these countries to fully implement the terms of their regional integration agreements, makes it hard for them to increase intra-regional trade.
In some cases there has been deliberate undermining of the integration agreements. Due to the unavailability of specific data on international trade in the different types of coffee, the determinants of world coffee trade were analyzed based on coffee as a whole, without considering trade in Arabica and robusta separately, even though there might be significant differences in the patterns of trade between the two. This is a shortcoming of the analysis, and it calls for further research on issues related to coffee differentiation across the world.

Conclusion

This study was conducted with the aim of investigating the patterns of global coffee trade flows and identifying the major determinants of coffee exports. Gravity modeling was employed for the analysis using panel data on bilateral coffee trade flows of 18 major coffee exporters and their trading partners (201 countries) for the period of 2001-2015. Both OLS and PPML estimation techniques were employed. Overall, the gravity model explained coffee trade flows very well, with R-squared values as high as 0.87 in some cases. The major findings are generally supportive of those found in most previous studies and can be summarized as follows. The basic gravity model variables like exporter GDP (and population) and importer GDP were found to be important determinants enhancing coffee trade. Of the bilateral distance variables, physical distance was found to impede coffee trade, while a common border was found to enhance it. Cultural (distance) variables like colonial links, common colonizer and common language were also found to enhance coffee trade. Other variables that were found to significantly enhance coffee trade include depreciation in the exporting country's exchange rate and the amount of arable land of the exporting country.
On the other hand, importing country tariffs and the global financial crisis were found to significantly reduce coffee trade, as expected. The RTA variable was found to have no significant impact on coffee trade flows. Therefore, policy makers should consider supply- and demand-side factors for the coffee sector as an essential path towards coffee sector development. Interestingly, the trade potentials from the study revealed that there are substantial opportunities for the coffee exporting countries to exploit the available markets, as the countries are still under-exporting compared to coffee market needs. While our study examined additional variables beyond the basic gravity variables, it is worthwhile to raise a number of issues that could help improve our understanding of the central issue we attempted to address. These include: the use of data on sectoral GDP rather than overall national GDP; and revision of the RTA variable, considering only selected RTAs based on agreements with inter-regional provisions of preferential treatment, or considering Preferential Trade Agreements (PTAs) from developed nations to developing countries, such as Everything But Arms (EBA), the African Growth and Opportunity Act (AGOA), and so on. Moreover, the use of actual tariff data, and of the tariff equivalent of NTMs imposed by importing countries specifically for coffee instead of the applied tariff on primary agricultural products, might have helped to better understand the issue at hand. The analysis could also have benefitted from the use of coffee trade data at the HS six-digit level, covering the various forms of coffee exported (roasted, non-roasted, decaffeinated, etc.), so as to be able to analyze the margins of coffee trade by type of coffee.
Seismic site response of unstable steep slope using noise measurements: the case study of Xemxija Bay area, Malta

Landslide phenomena affect the northern coast of Malta, in particular the urban area of Xemxija. Limestones overlying a clayey formation represent the shallower lithotypes that characterize the surficial geology of this area, where lateral spreading phenomena and rockfalls take place. Ambient noise records, processed through spectral ratio techniques, were analysed in order to characterize the dynamic behavior of the rock masses affected by the presence of fractures linked to the landslide body existing in the area. Experimental spectral ratios were also calculated after rotating the horizontal components of the seismic signal, and a direct estimate of the polarization angle was also performed in order to investigate the existence of directional effects in the ground motion. The results of the morphologic survey confirmed the existence of large cliff-parallel fractures that cause cliff-edge and unstable boulder collapses. Such phenomena appear connected to the presence, inside the clay formation, of a sliding surface that was identified through the interpretation of the noise measurement data. The boundaries of the landslide area appear quite well defined by the pronounced polarization effects, trending in the northeastern direction, observed in the fractured zone and in the landslide body in particular.

Introduction

The Maltese Archipelago is situated in the Mediterranean Sea, about 290 km northeast of Tunisia and 90 km south of Sicily. It consists of three major islands: Malta and Gozo, the southerly and northerly islands, respectively, and Comino, which lies in the Comino Straits separating the two largest islands (Fig. 1). The Maltese economy is mainly based on the tourism industry and has grown since the 1970s with a high degree of coastal urban settlement.
In order to better preserve the historical heritage, landscapes, and coastal areas and to promote tourism activities, it has been proposed that the archipelago might be considered as an open air laboratory. In this context multidisciplinary studies integrating geology, engineering, geomorphology as well as history and archeology may be undertaken in order to develop and test methodologies for the assessment of the relationship between the physical environment and cultural heritage (Soldati et al., 2008). Xemxija is a seaside village and marina on the northeastern part of Malta (Fig. 1), and it is a very important site for touristic attractions, as well as cultural and historical heritage. In fact, up Xemxija Hill lies what is known as The Xemxija Heritage Trail, which comprises a variety of sites spanning a period of about 6000 yr (e.g. seven prehistoric tombs, a Neolithic temple, a Roman road, several caves, an ancient and well-preserved Roman apiary, a Punic tomb, Roman baths, a 1000 yr old carob tree, etc.). The Xemxija study area spans a couple of square kilometres. More than half of it is intensely built, while the remaining area consists of meadows and agricultural land. The area is characterized by a geology and topography that varies over small spatial scales. Its geomorphological features are the result of the combined effect of the lithology, tectonics and coastal nature that shaped the region, and such features contribute towards the degree of geological instability of the whole area and particularly of the cliff sections.
To date, no studies combining geomorphology and site response have been carried out in the study area of Xemxija. In this paper we carried out a preliminary study of the area, paying particular attention to the risk of landsliding and rockfalls. In particular, the combination of vulnerable geomorphological features, intensive land use and cultural/touristic importance implies that the study area is exposed to a considerable natural risk. Buildings in Xemxija village are located in a diversity of topographical settings, such as slopes, ridges and valleys, while a variety of building types and ages can be identified. The small urban settlement is densely populated in summer, when it is used as a summer resort with intensive recreational and commercial coastal land use. The area also provides a suitable setting for the subsequent evaluation of a number of other factors that contribute to the damage potential, and hence the holistic assessment of geo-risks. We have conducted a qualitative geomorphological survey of the area, and we used the ambient noise horizontal-to-vertical spectral ratio (HVSR) technique in order to characterize the behavior of rock masses in the presence of fractures linked to the landslide body in the area. This type of measurement can be done quickly and with a high spatial density, providing a fast tool for characterizing the dynamic behavior of the rock outcropping in Xemxija. Recordings of ambient noise and the use of the HVSR technique have recently had widespread use in studying landslides (e.g. Del Gaudio et al., 2008; Burjánek et al., 2010; Del Gaudio and Wasowski, 2011; Burjánek et al., 2012).

Geological setting

The geology of the Maltese islands is relatively young, with the oldest rock dating back only to the Tertiary period. The islands are mostly composed of marine sedimentary rocks (Fig.
1). Although the sedimentary platform on which the Maltese islands are situated was formed during the Triassic, there are no surface outcrops of this age. All exposed rocks were deposited during the Oligocene and Miocene, when the Maltese Islands were part of the Malta-Ragusa platform with Sicily and, as such, attached to the African margin (Pedley et al., 1978; Mourik et al., 2011). The most recent deposits are the Quaternary deposits, which are found in minor quantities and are of terrestrial origin. The geologic sequence of the Maltese Islands is classically divided into five units (Pedley et al., 1978, 2002). The lowermost unit is the Lower Coralline Limestone Formation (LCL), which consists of massive biogenic limestone beds of shallow gulf marine origin. This shallow carbonate ramp phase is Oligocene in age. Younger beds show evidence of deposition in more open marine conditions. Deeper water slope carbonates of the Globigerina Limestone Formation (GL) began depositing in the Chattian (Late Oligocene) and span from the early Miocene to the late middle Miocene. They consist mainly of loosely aggregated planktonic foraminifers, whereas larger skeletal fragments, such as echinoids or mollusks, are rare. A marly unit of alternating light to dark layers, called the Blue Clay Formation (BC), spanning the Serravallian (middle Miocene) (Kienel et al., 1995; Jacobs et al., 1996), abruptly follows the GL. Planktonic and benthic foraminifers form the bulk of the skeletal components within this unit. The water depth at which the BC was formed is estimated, on the basis of benthic foraminifers, to be 150-200 m (Jacobs et al., 1996), or even 500 m (Foresi et al., 2002). The BC formation is unconformably overlain by the Greensand Formation (GS) and the Upper Coralline Limestone Formation (UCL), both late Miocene in age. The GS formation consists of a glauconitic sand bed ranging from 0 to 10 m in thickness, while the UCL consists of white porous calcareous sandstone, always rich in
organic remains. Though some layers are completely crystalline and have lost traces of the organisms from which they originated, other portions are highly fossiliferous, containing casts of shells and other organisms. It resembles the LCL on both chemical and paleontological grounds, indicating deposition in shallow-water carbonate ramp conditions. On Malta and Gozo, the bedding is generally subhorizontal, with a maximum dip of about 5°. The geostructural pattern is dominated by two intersecting fault systems which alternate in tectonic activity. An older ENE-WSW trending fault, the Victoria Lines Fault (or Great Fault), traverses the islands and is crossed by a younger NW-SE trending fault, the Maghlaq Fault (Fig. 1), parallel to the Malta Trough, which is the easternmost graben of the Pantelleria Rift System. The faults belonging to the older set, all vertical or sub-vertical, are part of a horst and graben system of relatively small vertical displacement. The structural geology of the Maltese Islands is usually divided into three main regions: north of the Victoria Lines Fault, south of the Victoria Lines Fault, and Gozo. The Xemxija Bay study area lies north of the Victoria Lines Fault, where the structure is dominated by the development of horst and graben blocks bounded by ENE trending normal faults; it is characterized by three different lithotypes: clay from the BC formation and two different members of the UCL formation, namely the Tal-Pitkal Member, which is hard and pitted, and the Mtarfa Member, which is softer and more erodible. The coast north of the Victoria Lines Fault in Malta is characterized by lateral spreading phenomena which take place within the brittle, heavily jointed and faulted UCL formation overlying the BC formation, which consists of softer and unconsolidated material. Most of the time these processes occur in places that are not intensively built; however, some areas are highly frequented by both locals and tourists for recreational activities.
Geomorphologic analysis

The study of slope instability conditions in the coastal region of Malta is an important task, since several areas are subject to a high landslide hazard. Coastal erosion generally takes place in conditions of poor sediment availability and when the sea engulfs the land due to wind, wave and tidal pressures (Doody et al., 2004; Young et al., 2011). This erosion process can include three simultaneous processes, namely the long-term retreat of coasts, the medium-term degradation of beaches and the short-term erosion of cliffs (Hart, 1986). Geomorphologic studies were conducted by Magri et al. (2008) on the west coast of Malta and in Gozo, which share similar geological conditions. Along the west coast of Malta, in particular, a temporary GPS network was set up in 2005 to monitor the state of the lateral spreading activity, and preliminary results indicate that the coastal landslides are quite active. A number of preliminary field surveys of the NW and NE coasts of Malta were carried out by Farrugia (2008) and Coratza et al. (2011) to better understand Maltese coastal development and to highlight areas having similar geological features. Landslides on the northern Malta coast seem to be caused by lateral spreading and rockfall (Fig. 2a), which occur within the brittle UCL formation overlying the BC. The UCL formation is characterized by a prominent plateau scarp face, whereas the BC produces slopes extending from the base of the UCL scarp face to sea level. It is well known that lateral spreading usually takes place in the lateral extension of cohesive rock masses lying over a deforming mass of softer material, where the controlling basal shear surface is often not well defined (Pasuto and Soldati, 1996).
A detailed field survey was performed in the Xemxija Bay area to map the main coastal instability features, using a map at scale 1:2500 (MEPA, 2004). The features mapped are the result of different mechanisms, such as:

- cliff-parallel fracturing due to natural cliff erosion and retreat;
- formation and detachment of blocks along the cliff edge, leading to rock collapse;
- landsliding on sloping faces; and
- instability resulting from weakening of the UCL strata lying on top of the soft BC layer that is slowly eroding and sliding.

In the study area, attention was particularly focused on fractures and unstable blocks existing along the coastline and the hill (see areas 1, 2, 3 in Fig. 2b). Along the Xemxija cliff and headland, in the UCL formation, a main fracture, striking in the NE-SW direction, is quite evident (area 1 of Fig. 2b). It develops along the southwestern end of the cliff coast at a variable distance (1-8 m) from the cliff edge, and it is sometimes filled with rock debris and soil. The fracture width ranges from less than 10 cm to about 80 cm, whereas its depth ranges between 1 m and 1.60 m. A vertical offset ranging from 3 to about 20 cm was measured at several points. Many secondary fractures run along the cliff in a parallel direction, which is also compatible with the structural system present in the area. All the UCL appears intensively fractured (Fig. 3a and b), and some of the fractures show signs of humidity and fluid percolation. It is fair to say that some zones in the study area could not be mapped because they were covered by thick vegetation. A number of collapsed rocks and detached blocks lie at the base of the cliff escarpment and form much of the shore area, as seen in Fig. 3c. The hill behind the cliff (area 2 in Fig. 2b) presents the same stratigraphy and type of structures visible at the cliff edge, with numerous detachments and secondary fractures along both sides of the hill trending in an E-W direction (Fig.
3d). Moreover, several cavities permeate the UCL outcrops (see arrow in Fig. 3d), while discontinuities generate blocks of different sizes, mostly covered by vegetation and also showing the presence of fluid percolation. In this area, huge collapsed blocks, with sizes of up to 7-8 m, lie along the base of the escarpment (circles in Fig. 3d). On the other side of the hill (area 3 in Fig. 2b) the slope is characterized by small-to-medium size blocks that have detached from the top UCL formation overlying the BC. No protection measures (e.g. nets, walls) have been taken to protect the residential area that lies below. Considering that the hill trends in a NE-SW direction, the same as the cliff, and since the processes that involve the hill are substantially the same as those affecting the cliff, rockfall risk might be a problem for the buildings placed at the foot of the "seaside" slope. It is important to note that the main fracture in area 1 (Fig. 2b) identified in the present study runs close to a residential complex and the Franciscans' convent and San Guzepp Haddiem church, as shown in Fig. 2b. In Fig. 3a, the arrows track this fracture, which is responsible for several cracks in the walls of the building. Although the religious complex was newly built about 12 yr ago to replace the previous structure, which was affected by similar problems, the complex already presents several major cracks presumably related to the main fracture previously shown in Fig. 2b. In Fig. 4, the unstable situation prevailing today is shown. It can indeed be observed that the southern part of the building is extensively cracked and perceptibly tilted, and large cracks are also visible on the front balcony (Fig. 4a). Along the south block of the church a wide crack runs in an E-W direction (see arrows in Fig. 4b). Besides, in one of the rooms overlooking the south coast of the cliff, evidence of cracks can be observed (Fig.
4c). Examining the direction of cracks along the cliff and those on the church, it is plausible to assume that all of the cracks present in this complex are associated with movements taking place in the same group of geological fractures. The field survey has also pointed out that different hazards exist in different portions of the study area, according to the type of land use. On the beach below the cliff on the headland, a clear hazard from collapsing rocks is present for people making recreational use of the shore. On the northern side of the ridge, where Roman baths and tombs are carved into the Upper Coralline Limestone, most of the slope consists of cultivated fields and meadows. As can be seen in Fig. 3c, the main hazard here is due to potential rockfalls onto the road linking Xemxija to Mellieha that runs along the base of the escarpment. Finally, the slope of the hill on the side of Xemxija Bay, which is increasingly being built up, might be involved in collapsing and sliding. Experimental setup and data processing The HVSR method is a common tool used for site effect investigations. It is based on the ratio of the horizontal to vertical components of ground motion. Generally, this spectral ratio exhibits a peak corresponding to the fundamental frequency of the site. The ambient noise wavefield is the result of the combination of unknown fractions of both body and surface waves (Bonnefoy-Claudet et al., 2006). If the former prevail, the ratio is mainly induced by SH resonance in the superficial layers, whereas, if Rayleigh surface waves predominate, the theoretical ellipticity dictates the observed curves (Nogoshi and Igarashi, 1970; Fäh et al., 2001; Scherbaum et al., 2003). This is especially true when a large shear wave velocity contrast exists between the shallow layer and the bedrock, as theoretically confirmed by Malischewsky and Scherbaum (2004). Although experimental data peaks usually fit quite well the resonance frequency of the theoretical curves, they
are less reliable as regards their amplitude. Nevertheless, the HVSR curve contains valuable information about the underlying structure, especially as concerns the relationship between the VS of the sediments and their thickness (Ibs-von Seht and Wohlenberg, 1999; Scherbaum et al., 2003). We recorded ambient noise at 27 sites (Fig. 2c) using a 3-component seismometer (Tromino, www.tromino.eu). Time series of ambient noise, with a length of 20 min, were recorded at a sampling rate of 128 Hz and, following the guidelines suggested by the SESAME project (2004), they were divided into non-overlapping time windows of 20 s each. A 5 % cosine taper was applied to each window and the Fourier spectra were calculated. The spectra of each window were smoothed using a Konno-Ohmachi window (Konno and Ohmachi, 1998), fixing the parameter b to 40. Finally, the resulting HVSR, in the frequency range 0.5-40.0 Hz, was computed by estimating the logarithmic average of the spectral ratio obtained for each time window, selecting only the most stationary windows and excluding transients associated with very close sources. Experimental spectral ratios were also calculated after rotating the N-S and E-W components of motion in steps of 10 degrees from 0° (north) to 180° (south). This approach, first applied to earthquake recordings in studying the directional effects due to topographic irregularities at Tarzana, California (Spudich et al., 1996), has been used for ambient noise signals by several authors (e.g. Lermo and Chávez-Garcia, 1993; Panzera et al., 2011a) and also to identify site response directivity in the presence of an unstable slope (Del Gaudio et al., 2008; Burjánek et al., 2010; Del Gaudio and Wasowski, 2011).
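The single-station processing chain described above (20 s windows, 5 % cosine taper, Konno-Ohmachi smoothing with b = 40, logarithmic averaging of the spectral ratios) can be sketched in Python. This is a minimal illustration under our own naming, not the authors' code: the selection of stationary windows and the rejection of transients are omitted.

```python
import numpy as np

def konno_ohmachi(freqs, spectrum, b=40.0):
    """Smooth an amplitude spectrum with a Konno-Ohmachi window (b = 40)."""
    smoothed = np.array(spectrum, dtype=float)
    for i, fc in enumerate(freqs):
        if fc == 0.0:
            continue  # leave the DC bin untouched
        x = b * np.log10(np.maximum(freqs, freqs[1]) / fc)
        w = np.ones_like(x)
        nz = x != 0.0
        w[nz] = (np.sin(x[nz]) / x[nz]) ** 4
        w[freqs == 0.0] = 0.0  # exclude DC from the weighted average
        smoothed[i] = np.sum(w * spectrum) / np.sum(w)
    return smoothed

def hvsr(ns, ew, v, fs=128.0, win_s=20.0):
    """Logarithmic average of H/V over non-overlapping windows of win_s seconds."""
    n = int(win_s * fs)
    m = int(0.05 * n)  # 5% cosine taper on each end of the window
    taper = np.ones(n)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(m) / m))
    taper[:m], taper[-m:] = ramp, ramp[::-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    log_ratios = []
    for k in range(len(v) // n):
        s = slice(k * n, (k + 1) * n)
        spec = {name: konno_ohmachi(freqs, np.abs(np.fft.rfft(x[s] * taper)))
                for name, x in (("ns", ns), ("ew", ew), ("v", v))}
        h = np.sqrt(spec["ns"] * spec["ew"])  # geometric-mean horizontal spectrum
        log_ratios.append(np.log(h / np.maximum(spec["v"], 1e-12)))
    return freqs, np.exp(np.mean(log_ratios, axis=0))
```

In a real workflow each 20 s window would first be screened for stationarity before entering the average, and the result would be restricted to the 0.5-40.0 Hz band used in the paper.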
A direct estimate of the polarization angle was also achieved by using the covariance matrix method (Jurkevics, 1988) to overcome the bias linked to the denominator behavior that could occur in the HVSR technique. This technique is based on the evaluation of the eigenvectors and eigenvalues of the covariance matrix obtained from three-component seismograms. Signals at each site were band-pass filtered over the entire recordings and analyzed using a moving window of 1 s with 20 % overlap, thus obtaining the strike of maximum polarization for each moving time window. HVSR patterns and interpretation A dense microtremor measurement survey was carried out in the Xemxija Bay area (Fig. 2c). We focused on the NE part of the bay, in which there is major evidence of slope instability and in which a high level of cliff fracturing is evident. We chose the recording sites in order to sample the area as uniformly as possible. Moreover, several recording sites were chosen along a linear deployment to investigate the role of the fractures in the HVSR behaviour. Figure 2c shows the geological setting of the investigated area. The HVSR curves obtained identify three different zones. In particular, we can identify a region (Fig. 5, sites #5, #6, #7, #25, #27) where the HVSR peaks around a stable frequency of about 1.5 Hz. These fundamental peaks may be generally associated with the interface separating the BC layer from the GL. The presence of the BC layer gives rise to a velocity inversion since it has a lower shear wave velocity with respect to the overlying UCL formation. This causes the HVSR values to drop below 1 over a wide frequency range (e.g. Castellaro and Mulargia, 2009; Panzera et al., 2011b). The origin of the resonance peak was confirmed by carrying out 1-D modelling, computing synthetic HVSR curves (Fig.
5). To compute the synthetic spectral ratios, we considered that ambient vibration wavefields can be represented by the superimposition of random multi-modal plane waves moving in all directions at the surface of a flat 1-D layered viscoelastic solid, as in the Herrmann (2002) formulation. These waves are assumed to correspond to Rayleigh waves in the fundamental mode (Fäh et al., 2001), also including the presence of Love waves (Bonnefoy-Claudet et al., 2008), generated by a distribution of random independent point sources located at the surface of the Earth (Lunedei and Albarello, 2009). Although the contribution of higher modes is relatively small, we extended the modal summation up to the fifth mode. When modelling the HVSR, we applied initial constraints on the thicknesses and elastic parameters of the layers using borehole log data to obtain approximate values of layer thicknesses and rock densities. Seismic velocity values were also taken from a separate preliminary study undertaken in the same area using the ReMi, MASW and refraction methods (Panzera et al., 2011c). The sites close to the cliff edge and all around the identified fractures (Fig. 6, #1, #2, #3, #4, #8, #9, #10, #11, #12, #13, #26) present a behaviour similar to the one described above, but with a slightly different feature at high frequency. In general, in all the records (Fig. 6) we observe a clear and predominant peak at around 1.5 Hz which is associated with the interface between the BC and GL. Several peaks, showing a slight increase of the amplitude of the HVSR, are also evident at higher frequency (> 9.0 Hz). The fact that these peaks are not visible in the unfractured region leads us to postulate that they may be associated with the presence of fractures and of blocks almost detached from the cliff and therefore free to oscillate. The sites in the rockfall area show a different HVSR behaviour (Fig.
7) with respect to the measurements taken on the plateau. In this area it is possible to identify HVSRs showing bimodal dominant peaks at low frequency, in the range of 1.0-3.0 Hz, as well as pronounced peaks at about 3.0 Hz and at frequencies higher than 9.0 Hz (Fig. 7). Trying to interpret these observations within a geological framework following the section in Fig. 5, the bimodal peaks at low frequency (1.0-3.0 Hz) can, in our opinion, be associated with the contact between the rockfall and detritus unit and the BC formation, as well as with the interface between the BC and the underlying GL formation. It is interesting to observe that such bimodal peaks are not observed at the measurement sites located in the southern part of the landslide zone (#14, #23, #24). It therefore seems that the thickness of the rockfall and detritus deposits plays an important role: bimodal peaks appear at low frequency when the thickness is of the order of tens of meters, whereas only the fundamental frequency associated with the BC/GL contact appears when the detritus thickness decreases. Peaks observed at frequencies greater than 9.0 Hz, as previously described, can be associated with detached blocks almost free to oscillate. In Fig. 8a we summarize the results described above into a tentative draft profile, located as shown in Fig. 2b, which illustrates the main geological features and hypothesizes the shape of the landslide body. The bottom panel (Fig. 8b) shows a 2-D diagram obtained by combining all the ambient noise measurements along the profile, namely the HVSR spectra shown in Panel c of Fig. 8.
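The link drawn above between detritus thickness and the low-frequency peaks can be checked to first order with the standard quarter-wavelength rule f0 = Vs/(4H). The shear wave velocity used below is an illustrative guess for soft deposits, not a value from this study:

```python
def f0(vs_m_s, thickness_m):
    """Quarter-wavelength estimate of the fundamental resonance frequency (Hz)."""
    return vs_m_s / (4.0 * thickness_m)

# With an assumed Vs of ~200 m/s, a detritus layer a few tens of meters thick
# resonates in the 1.0-3.0 Hz band discussed above ...
print(f0(200.0, 20.0))  # 20 m of cover -> 2.5 Hz
print(f0(200.0, 50.0))  # 50 m of cover -> 1.0 Hz
# ... while thinning the layer pushes f0 well above that band:
print(f0(200.0, 5.0))   # 5 m of cover -> 10.0 Hz
```

This is consistent with the observation that the low-frequency bimodal peaks vanish where the detritus thins, leaving only the deeper BC/GL resonance.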
Below sites #5 and #25, the main peaks can be associated with a sequence of layers which have different shear wave velocities and show the presence of an evident velocity inversion. Moving along the profile from measurement point #4 to #1, it is interesting to observe the increasing amplitude of the HVSR at frequencies greater than 6.0-7.0 Hz. It can also be noticed that, especially between 50 and 60 m along the profile (sites #1, #2 and #3), the influence of the fracture zone is evident (see dashed area in Fig. 8b). This is associated with the presence of the mapped main fracture (marked in red in the top panel) and the vibrational mode of the almost detached blocks. Along the cross section, at distances ranging from 60 to 100 m, it is possible to note both the presence of the bimodal peak associated with the two interfaces, detritus/BC and BC/GL, as well as the high frequency peaks most probably associated with the vibration of large blocks that have detached from the cliff face and are now partially or totally included in the BC. Directional effects from HVSR and polarization analysis The existence of directional effects in the site response was investigated by rotating the horizontal components of the spectral ratios obtained at each measurement site in steps of 10° from 0° N to 180° N (Fig. 9). We observe that the rotated spectral ratios obtained in the cliff fracture zones show clear directional effects at an angle of about 40°-60° N over the whole considered frequency range (first row in Fig. 9), although some variability in azimuth is observed at high frequency (> 9 Hz) at site #13. On moving away from the cliff edge, the rotated HVSRs show a slight change of the directional resonance angle (second row in Fig.
9) and an amplitude decrease of the rotated spectral ratios at high frequency. Such a behavior could be linked to the increase of rock stiffness and the reduction of the number of blocks free to oscillate. Finally, it is evident that the directionality pattern observed in the rotated HVSRs obtained on the landslide body is quite complex (third row in Fig. 9). The general trend has a prevailing direction of about 40°-60° N at low frequency (1.0-9.0 Hz), similar to what is observed in the fractured zone, whereas different resonant frequencies and directions, which could be ascribed to the vibration of smaller blocks, can be observed at higher frequencies (9.0-40.0 Hz). Furthermore, we obtained a direct estimate of the polarization angle through the full use of the three-component vector of the noise wavefield. Signals at each site were band-pass filtered in three frequency bands: 1.0-40.0 Hz, 1.0-10.0 Hz and 10.0-40.0 Hz. The last two frequency bands, in particular, were selected in order to distinguish between properties of the low and high frequency components of the signal. It seems evident that the prevailing angle of the directional site effects observed in the HVSR, especially in the fractured and landslide zones, remains constant in the NE direction in the range 1.0-10.0 Hz, whereas directional effects become more randomly distributed at frequencies greater than 10 Hz (see Fig. 9). Some examples of the results of the noise polarization analysis from recording sites on both the cliff and the landslide are shown in Fig. 10. Considering the three selected frequency bands, it is clear that the maxima of the horizontal polarization occur in the northeast direction, although in some cases (see for instance #22) the high frequency directionality is more complex. As observed by Burjánek et al. (2010), high-frequency ground motion can indeed be controlled by the vibration of smaller blocks, implying both different resonant frequencies and directions. In Fig.
11, some examples of Fourier spectra, showing the presence of spectral peaks in the vertical component at the same frequency as the maxima of the horizontal components of motion, are reported to support the existence of rocking mode vibration at high frequency. The polarization observed for the sites located away from the unstable areas (see middle panel in Fig. 10) shows a trend with more dispersed and variable directions. The boundaries of the landslide area therefore appear well defined by the polarization pattern and, as postulated by Kolesnikov et al. (2003), the landslide activity is characterized by strong horizontal polarization in a broad frequency band. In our study, a tendency seems evident for the entire landslide body to vibrate with a northeast azimuth and, accordingly, during a strong earthquake the ground motion would be amplified in this direction. Studies by Burjánek et al. (2010, 2012) point out that the ambient noise polarization is at about a 90-degree angle to the observed fractures, which are perpendicular to the sliding direction. In the present study the polarization angle is parallel to the opening cracks, which appears to contrast with the above-mentioned results. A possible explanation of our findings is that there exists a prevailing northeasterly sliding direction of the landslide body which strongly affects the polarization direction, especially in the 1-10 Hz frequency range.
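The covariance-matrix polarization analysis used above (Jurkevics, 1988; 1 s moving windows with 20 % overlap) reduces, per window, to an eigen-decomposition of the 3×3 covariance of the three components: the azimuth of the horizontal projection of the principal eigenvector gives the strike of maximum polarization. A minimal sketch under our own naming, with band-pass filtering omitted:

```python
import numpy as np

def polarization_azimuths(z, ns, ew, fs=128.0, win_s=1.0, overlap=0.2):
    """Strike of maximum horizontal polarization (degrees clockwise from north)
    for each moving window, via the covariance-matrix method."""
    n = int(win_s * fs)
    step = max(1, int(n * (1.0 - overlap)))
    azimuths = []
    for start in range(0, len(z) - n + 1, step):
        w = np.vstack([c[start:start + n] for c in (z, ns, ew)])
        w = w - w.mean(axis=1, keepdims=True)  # remove the window mean
        cov = (w @ w.T) / n                    # 3x3 covariance matrix
        _, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
        p = vecs[:, -1]                        # principal polarization vector
        # azimuth of the horizontal projection; the eigenvector sign is
        # arbitrary, so strikes are folded into [0, 180)
        azimuths.append(np.degrees(np.arctan2(p[2], p[1])) % 180.0)
    return np.array(azimuths)
```

On a test signal whose horizontal motion is polarized along 45° N, the returned strikes cluster near 45°; on the field data this is the kind of clustering that yields the NE (40°-60° N) trend reported above.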
Concluding remarks This paper presents a preliminary field study in the Xemxija Bay area aiming to highlight the importance of evaluating the local seismic response in the presence of slope instabilities related to landslide hazard. In particular, large cliff-parallel fractures that could cause cliff-edge collapse, as well as the main unstable boulders, were identified, mapped and photographed in order to document the present-day situation. The most common collapse mechanisms are rockfall, toppling and retrogressive sliding of small to large rocks. These processes are most likely induced by the different stiffness of the clay and the overlying limestone. Recent studies by M. Soldati (personal communication, 2011) have pointed out that there is no evidence of measured movements after rainy or dry periods. We therefore think that the fracturing is not induced by weathering and erosion at the cliff edge. In our opinion, following the outcomes of the noise measurements, the presence of the clay formation develops a sliding surface and produces tension stresses at the top of the UCL. Thus, cracks expand once the ultimate tensile strength of the formation is exceeded, defining blocks on the top of both cliff and hill which are affected by collapse mechanisms (Lollino and Pagliarulo, 2008).
It is known that seismically induced ground acceleration can in some cases lead to landsliding and block detachment, which therefore represent a considerable problem for engineering geology (Fell et al., 2008). In areas prone to severe ground shaking, the effect of seismically induced landslides on human lives and facilities can add further damage to that directly connected with the shaking (e.g. Jibson et al., 2000), as experienced in several earthquakes of moderate and large magnitude such as the recent Mw = 6.3 earthquake in Christchurch (New Zealand, 22 February 2011), the Mw = 6.2 L'Aquila earthquake (Italy, 6 April 2009) and the Mw = 7.9 Wenchuan earthquake (China, 12 May 2008). The seismic history of the Maltese Islands is adequately documented since around 1500 (Galea, 2007). During this period, the islands have suffered earthquake damage exceeding EMS-98 intensity V on seven occasions (1542, 1693, 1743, 1856, 1861, 1911, 1923), and the occurrence of landslides has been reported on several occasions (e.g. the 1693 earthquake; Ellul, 1993). The Xemxija unstable area is characterized by the presence of numerous blocks and boulders along the slopes and cliff base, supporting the idea that the area is prone to a severe landslide risk. The instability processes that could potentially be triggered are linked to both slow mass movements, which might normally occur over tens or hundreds of years, and sudden rockfall in the case of ground shaking due to moderate-to-strong earthquake activity. To better understand the situation, the most important discontinuities, both in the coastal cliff area and on the hill, were identified and mapped, focusing in particular on the main unstable rock masses which might be displaced even in the case of moderate earthquakes.
The available literature data and recent instrumental observations indicate that the dynamic response of potentially unstable slopes to seismic shaking can be very complex. In particular, there is evidence that seismic ground motion on landslide slopes can be considerably amplified, and such amplification has a directional character (Moore et al., 2011). Such directional effects were seen to be related to topographic, lithologic and structural factors as well as to normal-mode rock slope vibration (e.g. Del Gaudio et al., 2008; Burjánek et al., 2012). The NE part of Xemxija Bay, in which there is major evidence of slope instability, was studied in detail through ambient noise measurements. The results of horizontal-to-vertical spectral ratio measurements indicate that this method could be useful for the recognition of site response directivity phenomena. The use of noise measurements pointed out the existence of three different zones: a stable zone, in which the HVSRs show only a dominant peak at about 1.5 Hz, linked to the presence of the BC in the shallow lithologic sequence; a second zone, close to the cliff area, characterized by the presence of spectral ratio peaks linked both to the presence of shallow lithotypes such as the BC and the UCL, and to the existence of fractures in the rock; and a third zone on the landslide body, which reveals the presence of an active slip surface inside the soft clayey material that allows the slow sliding of the upper portion of the BC formation. Moreover, the experimental data highlight the existence of directivity phenomena, affecting in particular the slope areas and centered in the northeastern direction, that seem to be influenced by the simultaneous action of geological factors as well as fractures and block vibrations linked to the landslide activity. Fig. 1. Sketched geological map of the Maltese Islands (modified from Various Authors, 1993). The black square indicates the study area. Fig. 2.
(a) Schematic sketch of lateral spreading effects along the Xemxija coast; (b) view of the Xemxija Bay area showing the sites investigated in the geomorphological survey; and (c) geo-lithologic map of the northeastern part of Xemxija Bay (modified from Various Authors, 1993). Fig. 3. (a) and (b) Overview, along the cliff area, of the main fracture, which is approximately striking in the NE-SW direction; (c) example of the shore area below the cliff (area 1); and (d) overview of the cliff escarpment along the north side of the ridge (area 2). Fig. 4. Cracks and damage in the San Guzepp Haddiem church. (a) The balcony on the southern side of the building; (b) detailed view of the main fracture along the south side of the church; and (c) view of a room inside the building. Fig. 5. HVSR results at recording sites located in the unfractured zone, stratigraphic sequence, shear wave velocity profile and comparison of experimental spectral ratios with 1-D theoretical modelling. Fig. 6. HVSR results at recording sites located along the fractured zone in the cliff area. Fig. 7. HVSR results at recording sites located in the landslide zone. Fig. 8. (a) Cross section along the A-B profile in Fig. 2c; (b) 2-D diagram obtained by combining all the ambient noise measurements along the profile as a function of distance (x-axis) and frequency (y-axis); and (c) HVSR results at the recording sites located across the profile. Fig. 9. Examples of the contours of the geometric mean of the spectral ratios as a function of frequency (x-axis) and direction of motion (y-axis) obtained at selected ambient noise recording sites. Fig. 10. Examples of rose diagrams at representative ambient noise recording sites. Fig. 11. Examples of Fourier spectra at representative ambient noise recording sites along the profile shown in Fig. 8.
Blue, red and green lines refer to the vertical, N-S and E-W components of motion. The black arrow indicates the frequency range showing the rocking mode vibration.
Multiple Myeloma Macrophages: Pivotal Players in the Tumor Microenvironment The tumor microenvironment is essential for multiple myeloma (MM) growth, progression, and drug resistance through the provision of survival signals and the secretion of growth and proangiogenic factors. This paper examines the importance of macrophages within the MM bone marrow (BM) microenvironment, referred to as MM-associated macrophages, as a potential niche component that supports tumor plasma cells. These macrophages are derived from peripheral blood monocytes recruited into the tumor. Upon activation by MM plasma cells and mesenchymal stromal cells, macrophages can release growth factors, proteolytic enzymes, cytokines, and inflammatory mediators that promote plasma cell growth and survival. Macrophages promote tumor progression through several mechanisms including angiogenesis, growth, and drug resistance. Indeed, these macrophages are essential for the induction of an angiogenic response through vasculogenic mimicry, and this ability proceeds in step with the progression of the plasma cell tumors. Data suggest that macrophages play an important role in the biology and survival of patients with MM, and they may be a target for MM antivascular management. Tumor-Associated Macrophages In the past decades, the major focus of cancer research has been the malignant cell itself. In haematological malignancies, including multiple myeloma (MM), this has led to the identification of molecular alterations affecting growth control and apoptotic pathways [1]. Recent studies add yet another facet to the complex multistep model of tumorigenesis by demonstrating that tumor cells carrying genomic and epigenomic abnormalities also trigger changes in their microenvironment [2]. Indeed, accumulating evidence supports the hypothesis that the tumor microenvironment or "niche" ultimately determines the clinical behavior of the disease and has a direct impact on overall prognosis [3].
MM is characterized by the accumulation of monoclonal plasma cells in the bone marrow (BM) where they grow and expand. This suggests the importance of the BM microenvironment in supporting MM cell growth and survival [4]. The roles of BM stromal cells in supporting MM plasma cells have been extensively studied. The interaction between plasma cells and stromal cells confers plasma cell homing, growth, survival, and resistance to chemotherapy [5]. Among stromal cells, the inflammatory cells play an indispensable role in disease progression [6]. Within the tumor stroma, the macrophage is the pivotal member of the inflammatory cells. Tumor-associated macrophages (TAMs), which constitute a significant part of the tumor-infiltrating immune cells, have been linked to the growth, angiogenesis, and metastasis of a variety of cancers [7]. In MM, macrophages are an abundant and important component of the stromal cells, contributing to tumor angiogenesis [8] in line with several reports describing an association between macrophage infiltration, vascularity, and prognosis [9]. Macrophage Activation and Polarization Macrophages constitute an extremely heterogeneous population originating from blood monocytes that are capable of displaying different functional activities, some of which are antagonistic; for instance, they can be immunostimulatory or immunosuppressive and either promote or restrain inflammation [10]. This functional plasticity is regulated by the local cues to which the macrophages respond. Macrophage heterogeneity has been simplified in the cell polarization concept that discriminates macrophages into distinct types, schematically identified as M1 (or "classically activated") and M2 (or "alternatively activated").
In general, M1 macrophages are stimulated by bacterial products and cytokines secreted by Th1 cells; they act as soldiers defending the host from viral and microbial infections, fighting against tumors, producing high amounts of inflammatory cytokines and activating the immune response [11]. On the other hand, distinct types of M2 cells differentiate when monocytes are stimulated with interleukin-4 (IL-4) and IL-13 or with IL-10 and glucocorticoids [12]. M2 macrophages are characterized by poor antigen-presenting capability and wound-healing promotion [13]. Further, these macrophages express specific changes in some metabolic pathways; arginine metabolism is oriented toward the production of ornithine and polyamine instead of citrulline and nitric oxide. M2 cells are the workers of the host; they promote scavenging of debris, angiogenesis, remodeling, and repair of wounded/damaged tissues. Of note, M2 cells control the inflammatory response by downregulating M1 cell-mediated functions [14]. TAMs (including MM-associated macrophages) resemble the M2-like macrophage population, with little cytotoxicity for tumor cells because of their limited production of nitric oxide and proinflammatory cytokines [15]. TAMs also possess poor antigen-presenting capability and effectively suppress T cell activation. In the majority of cancers, TAMs show mostly protumoral functions, promoting tumor cell survival, proliferation, and dissemination by secreting a wide range of growth and proangiogenic factors as well as metalloproteinases, and by their involvement in signalling circuits that regulate the function of fibroblasts in the tumor stroma [7].
Current Concepts of MM-Associated Macrophages In patients with active (symptomatic) MM, fluorescence-activated cell sorting (FACS) analysis of freshly isolated BM mononuclear cells revealed higher percentages of CD68+ macrophages (a glycoprotein expressed only by human macrophages) than in patients with nonactive disease (i.e., in partial/complete remission, or in plateau phase) or those with monoclonal gammopathy of undetermined significance (MGUS). MGUS is a premalignant, asymptomatic disorder characterized by monoclonal plasma cell proliferation in the BM with an absence of end-organ damage that represents a benign plasma cell disorder. Histologically, in patients with active MM, CD68+ macrophages heavily infiltrated the BM. Indeed, in these patients, macrophages are recruited from the BM pool and/or the circulation into the vascular endothelial growth factor (VEGF) plus fibroblast growth factor-2 (FGF-2) rich microenvironment [16], both factors being chemotactic for macrophages. Scavelli et al. demonstrated that BM macrophages in patients with active MM are functionally, phenotypically, and morphologically different from those of patients with nonactive disease and MGUS [8]. Indeed, macrophages of these patients are similar to paired endothelial cells (MMECs) and contribute to angiogenesis through vasculogenic mimicry, in parallel to the progression of plasma cell tumours [17]. It may well be that in active MM, plasma cells secrete VEGF and FGF-2 that induce inflammatory cells to secrete their own VEGF, FGF-2, and hepatocyte growth factor (HGF); all these cytokines continuously recruit and activate MM-associated macrophages to adapt functionally, phenotypically, and morphologically to become vicarious MMECs, mimicking these cells and collaborating with them in vessel formation [18].
This is likely minimal in nonactive MM and cannot take place in MGUS or benign anemia patients, due to the absence or small number of plasma cells and, hence, very low levels of secreted VEGF and FGF-2, as previously demonstrated [19]. Moreover, BM macrophages protect MM cells from spontaneous and melphalan-induced apoptosis [20]. However, the exposure of macrophages in MM during treatment with zoledronic acid and bortezomib, alone and/or in combination, impacts their angiogenic and vasculogenic properties, suggesting that these cells may be considered a target of both drugs in MM patients. These findings indicate that macrophages (as TAMs) may be an abundant and important component of the BM stromal cells and play a critical role in MM tumor progression. The Role of MM-Associated Macrophages in Tumor Progression (Figure 1) Growth Promoting Properties of MM-Associated Macrophages. Macrophage infiltration positively correlates with MM cell survival and proliferation. Indeed, MM macrophages are characterized by higher expression of factors that stimulate plasma cell proliferation and survival, including IL-6 and IL-10, and lower expression of IL-12 and tumor necrosis factor-alpha (TNF-α) [21]. It has been shown that IL-10 stimulates the proliferation of MM cells freshly isolated from patients in IL-6-deprived cultures [22]. Additionally, both IL-12 and TNF-α are considered to retain antitumor effects [23]; hence, a lower expression of these cytokines by macrophages could provide a favourable milieu for the growth of malignant cells. Interestingly, MM macrophages have increased levels of VEGFA and VEGFC mRNA expression [21]. It is well known that VEGFs play a critical role in MM pathology through their effect on vascular endothelial cells, one of the well-known components of the MM plasma cell niche [24].
Traditionally, it has been assumed that mesenchymal stromal cells (MSCs) are the major source of VEGFs [25], but current results suggest the interesting finding that macrophages might be another major contributor of VEGFs, especially when they have been educated by MSCs. Based on an in vivo model of MSC transplantation into a rat hind limb ischemia model, the source of increased VEGF in the tissues was found to be not the transplanted (human) MSCs but the recipient (rat) macrophages [26].
Angiogenesis Promoting Properties of MM-Associated Macrophages. BM neovascularization is a constant hallmark of MM, but not of MGUS. This phenomenon forms partly through angiogenesis [18] and is endowed with the overangiogenic phenotype of MMECs [27]. Mature macrophages have been found to form capillary-like lumina and branching patterns in vitro, participating in the de novo formation of microvessels [28]. Scavelli et al. demonstrated that BM macrophages in patients with active MM contribute to building neovessels through vasculogenic mimicry, in parallel to the progression of plasma cell tumors [8]. Macrophages from MM patients exposed to VEGF and FGF-2, which are major angiogenic cytokines secreted by plasma cells and present in the BM microenvironment, transformed into cells functionally and phenotypically similar to paired MMECs, generating capillary-like networks mimicking those of MMECs. Macrophages from nonactive MM, MGUS, and benign anemia patients displayed similar, albeit weaker, features [8]. EC-like macrophages and apparently typical macrophages contributed sizably to forming the neovessel wall in patients with active MM, whereas their vascular supply was minimal in nonactive MM and absent in MGUS patients. These data suggest that in active MM, macrophages contribute to neovascularization through a vasculogenic pathway, and that in nonactive MM and MGUS, they are prone to behave accordingly, marching in step with progression and, hence, with the vascular switch [29].
MM-associated macrophages present morphological differences from those of nonactive MM, MGUS, and benign anemia patients; they displayed an oblong and spindle shape with thin cytoplasmic extroversions, some of which were either arranged to form a microvessel-like lumen or anastomosed with each other and with those of nearby macrophages to form tube-like structures. In contrast, macrophages from the other patient groups were rounded in shape and gave no extroversions, or only rare, short ones. These differences could be due to higher levels of VEGF and FGF-2 in the BM milieu of active MM [16] and, hence, to an intense, continuous paracrine stimulation of cells, as occurs in paired MMECs [27]. Under VEGF plus FGF-2 stimulation, MM macrophages undergo a phenotypic and functional adaptation [30], starting to behave like MMECs, expressing typical markers of paired MMECs, namely FVIII-RA, VEGFR-2, and VE-cadherin, while retaining their own CD14 and CD68 markers. Macrophages of nonactive MM, MGUS, and benign anemia patients exposed to VEGF plus FGF-2 underwent morphological, phenotypic, and functional changes indicative of vascular mimicry, becoming prone to form neovessels [8]. The vasculogenic switch by macrophages may be induced by the numerous VEGF- and FGF-2-secreting plasma cells in active MM and emerges with progression from MGUS to MM. VEGF and FGF-2 may act via their respective binding to VEGFR-1, the only VEGF receptor present on macrophages [31], and the FGF-2 receptors FGFR-1/-2/-3. VEGFR-1 mediates macrophage chemotaxis [31] and the organization of the embryo vasculature by vasculogenesis [32], but not the definitive vessel assembly, which is closely dependent on VEGFR-2, a specific EC differentiation marker [33]. Exposure of active MM macrophages to VEGF plus FGF-2 leads to an increase in the expression of Tie2/Tek and VEGFR-2, and a slight decrease in FGFR-2, all at levels overlapping those of paired MMECs.
The intense expression of VEGFR-2 and Tie2/Tek, together with the decreased expression of VE-cadherin, a specific inter-EC adhesion molecule, is indicative of ongoing neovascularization [8]. In patients with active MM, FACS analysis on freshly isolated BM mononuclear cells revealed higher percentages of CD14/CD68 double-positive cells than in patients with nonactive disease and with MGUS. Since BM macrophages from patients with active MM keep their CD14 and CD68 lineage markers, they can be regarded as cells that do not transdifferentiate into ECs, but adapt functionally, phenotypically, and morphologically to be like MMECs. The EC-like macrophages are morphologically and histochemically similar to sinusoid-lining cells of human lymphoid tissue, a special subset of macrophages that express FVIII-RA [34]. The behaviour of these macrophage types in active MM can thus be regarded as a "vasculogenic mimicry," like that of melanoma and other tumor cells, which form vascular channels to cater for their rapid proliferation and high need of vessels [35]. Moreover, MM macrophages synthesize and release inducible nitric oxide synthase, which increases blood flow and promotes angiogenesis [17].
Immunosuppressive Properties of MM-Associated Macrophages. TAMs promote tumor growth not only by supporting angiogenesis but also by inducing immunosuppression [36]. In MM, recent evidence attributes a major role in immunosuppression to myeloid-derived suppressor cells (MDSCs) [37]. MDSCs represent a heterogeneous population of immature myeloid cells that lack the expression of cell surface markers specifically expressed by monocytes, macrophages, or dendritic cells and that exert a potent suppressive effect on T cells. MDSCs are phenotypically characterized as CD14− CD11b+ or CD33+ (a common marker for myeloid cells) and lack markers of mature myeloid and lymphoid cells such as HLA-DR [38].
MDSCs are significantly increased in patients with MM compared to patients with MGUS and healthy controls, as a consequence of factors associated with inflammation, such as increased secretion of VEGF, IL-1β, IL-6, and prostaglandin E2 [37]. MDSCs exert their immunosuppressive activity through various mechanisms encompassing arginase, inducible nitric oxide synthase, and reactive oxygen species [38]. Arginase-1 and nitric oxide synthase-2, released by MDSCs, are key enzymes in L-arginine catabolism, which work synergistically in inhibiting T cell proliferation and MHC II expression and in promoting apoptosis. Moreover, arginase-1 activation mediates H2O2 production by MDSCs, which inhibits the release of IFN-γ, essential for the stimulation of naïve T cell differentiation and, hence, for the promotion of immune evasion [38]. Serafini et al. demonstrated the ability to use clinically available phosphodiesterase-5 (PDE5) inhibitors to overcome the MDSC-mediated immunosuppressive pathway in MM. PDE5 blockade in MDSCs from MM patients downregulates IL-4R expression, which is correlated with L-arginine expression. These data suggest the use of PDE5 inhibitors as therapeutically effective drugs to overcome tumor-induced immunosuppression [39].
Role of MM-Associated Macrophages in Chemotherapy Resistance. Although chemotherapy is now the most effective treatment for MM, plasma cells often fail to respond to the drugs. Studies have shown that the response of MM plasma cells to cytotoxic chemotherapeutics can be attenuated by the presence of BM stromal cells [40]. Coculture of MM plasma cells with macrophages protected plasma cells from melphalan-induced apoptosis by inhibiting the activation and cleavage of caspase-3 and poly(ADP-ribose) polymerase (PARP) and maintaining the levels of Bcl-XL. These results suggest that macrophages protect MM cells from apoptosis via inhibiting Bcl-XL-dependent caspase activation [20].
Role of MM-Associated Macrophages as a Therapeutic Target.
Bortezomib (BZ) and zoledronic acid (ZOL) synergistically impact MM macrophage proliferation, adhesion, and migration, as well as VEGF, FGF-2, HGF, and PDGF secretion [21]. These drugs synergistically inhibit macrophage vasculogenesis on Matrigel and the expression of FVIII-RA, Tie2/Tek, VEGFR-2, and VE-cadherin, indicative of cell transdifferentiation into EC-like cells. Both drugs reduce the phosphoactivation of VEGFR-2 and ERK1/2 and NF-κB activity. These data provide evidence that the exposure of BM macrophages to BZ and ZOL during treatment impacts their angiogenic and vasculogenic properties, suggesting that these cells may be considered a target of both drugs in MM patients.
Conclusions
The BM microenvironment plays a crucial role in the pathophysiology of MM. Substantial evidence suggests that MM-associated macrophages promote plasma cell growth and confer the ability to develop a vasculature which favours disease progression. In summary, macrophages are key regulators of the angiogenic switch in MM, suggesting why the density of these cells is correlated with microvascular density and poor prognosis. Based on these findings, the development of antimacrophage therapeutics that target specific pathways associated with angiogenesis might contribute to the armamentarium of agents for treating MM or preventing the conversion of MGUS to active MM.
In Search of Ummah Welfare Model: The Revitalisation of Sharia Economic Law in Indonesia
Sharia economic law in Indonesia has been revitalised through legal unification and codification to improve national economic development. In this context, the Sharia economy has become a guideline for every transaction. Therefore, people must understand the Islamic economic concept to create maslahah (goodness) in every aspect of life. Sharia economic law is not a new system, as it has been implemented since the era of the Prophet. However, its implementation needs to be adjusted from time to time to enable it to respond to current developments. This study employs qualitative inquiry, using library research to analyse Sharia economic law's history and legal development. Legal documents used include state laws and regulations, the regulations of the Bank of Indonesia, the fatwa of DSN-MUI, and others. This paper argues that the revitalisation of Sharia economic law in Indonesia is in line with the efforts made by the predominantly Muslim population to conserve and develop the system. This includes non-legalised and legalised implementation of the Sharia economic system, such as Sharia banking. Furthermore, the system does not contradict the values of Pancasila and the 1945 Constitution's pillars of the Unitary State of the Republic of Indonesia. Sharia economic law, prioritising moral and religious principles, has proven to create maslahah and to offer a solution to economic crises. This was shown by the survival of Sharia banks during the 1998 economic crisis, maintaining the Sharia-standardised contract to create justice in society.
INTRODUCTION
Indonesia is a state with a Muslim-majority population.
Islam, as a religion, does not merely regulate divinely-related matters (uluhiyah) but also worship (ubudiyah) and behaviour (akhlak). These relate to vertical relations with God and horizontal relations with humans. The latter includes economic activities involving human interactions, such as buying and selling, almsgiving, renting, and borrowing money. In the early period of Islam, economic activities became the means of spreading the religion. Khadijah, the wife of the Prophet, and his companions, such as Uthman ibn Affan, eagerly spent their wealth on Islam. These two figures had economic advantages and assisted the Prophet financially. In Indonesia, the economy is the main object to be regulated regarding its practice and compliance with Sharia. This provides guidelines and principles for the state in regulating Sharia economic practices, which later became Sharia economic law. As a result, it is crucial for Sharia economic law to be consistently updated in line with current developments.
The Sharia economy is divided into four paradigmatic conceptual eras in its development. The first is the Sharia era, during the period of the Prophet and his companions. At that time, the territory of Islam was regionally limited to Makkah and Madinah. The companions directly interacted with and obtained legal doctrines from the Prophet. This was followed by the companions' interpretation of the Quran and Sunna. The Prophet's companions exemplified Sharia economic activities during the early period of Islam. For example, Zubair ibn Awwam chose a borrowed property over a deposit. In this case, the borrowed property could be used but had to be returned. Ibn Awwam also transferred money to his brother, Mis'ab ibn Zubair, from Makkah to Iraq.2 This implies that the Sharia economy was initiated in the Prophet's era, with the Quran and Sunna as the legal references, followed by ijtihad (Islamic legal reasoning) to reinterpret those primary resources.
The second is the era of fiqh (Islamic jurisprudence), during the era of tabi'in (the followers), when Islam had spread to a more expansive Middle Eastern territory. Mujtahids (authoritative interpreters of Islam) emerged during this period. They interpreted the Quran and Sunna, leading to the birth of madhhabs or Islamic legal schools, with figures such as Hanafi, al-Shafi'i, Malik, and Ahmad ibn Hanbal. These two periods became the primary references in Sharia economic law development. The subsequent periods refer to times when the legal issues were less complex and legal interpretation could have been more useful.
The third is the era of the qanun ((established) law), which is the construction of Sharia economic laws and norms. This started from the initiation of Majalat al-Ahkam al-Adliyah and lasted to the 21st century, marked by the stipulation of Sharia economic law in state laws and regulations. The fourth is the era of qada, from the beginning of the 21st century until today. During this era, Sharia economic law has been influenced by political situations, with the demand to solve Sharia financial disputes quickly and accurately using specific laws and regulations to deal with Sharia economic disputes. At the same time, judges are expected to be innovative and creative in producing Sharia economic law by considering existing laws, regulations, and the justice values in society.3 These four eras become the references for developing Sharia economic law in Indonesia.
Sharia economic law has attracted Indonesian Muslims to implement it. The state and religious leaders have actively educated the people through laws, regulations, and the campaigns of various Muslim organisations. The revitalisation of Sharia economic law in society has been done by instilling an understanding of the concept, teaching students and people in general the classical fiqh literature on muamalah so that they use it as a means of conflict resolution in society.
An example of the implementation of Sharia economic law is when religious leaders use the law in Sharia economic dispute settlements. In other words, Sharia economic law has become a foundation for dealing with conflict. Consequently, the government should implement and formalise Sharia through rules and regulations. With this, there will be legal certainty and decisiveness.
The Sharia economy in Indonesia has been developed through Sharia businesses and the establishment of educational institutions, focusing on Sharia economic teaching and legislation within the national legal system (ius constitutum). The current development shows that the existence of Sharia banks has yet to meet its targeted objectives in terms of institutional and legal aspects.6 This means that the Indonesian people generally have a limited understanding of the concept. Therefore, there is a need for the revitalisation of Sharia economic law. Furthermore, the study of Sharia economics should offer recommendations for improving the laws, as the conventional economic system tends to ignore moral and spiritual consequences.
Regardless of such challenges, maslahah (public interest) is one of the principles of the Sharia economy. The maslahah is the ultimate objective in Islamic legal formulation: to obtain happiness in the world and hereafter by taking benefits and avoiding harm.7 In Sharia implementation, maslahah becomes the primary consideration.8 The principle of creating maslahah and rejecting harm is the legal spirit stipulated in the Quran and hadith. Based on this principle, every muamalah (transaction) should be free from riba (usury), najash (impurity), ihtikar (profiteering), and gharar (uncertainty). Islam offers principles and guidelines for economic transactions, called Sharia economic law, the primary rules for humans in muamalah.
Al-Shatibi mentioned that the main objective of the revelation of Islamic law is to benefit humans in the world and hereafter.9 Apart from that, the objectives of Sharia are considered fulfilled with the creation of human prosperity and interests.10 In the context of muamalah, human needs are divided into daruriyat (primary), hajiyat (secondary), and tahsiniyat (tertiary). These needs are considered fulfilled if they have good values for humans. This means the primary considerations in determining the fulfilment of human needs are the benefits and harm that may result, as well as the urgency level from primary to tertiary.
In this context, Muslims need to be aware of Sharia economic law in every transaction and that the law exists to maintain public interests and benefits.11 The people's need to understand Sharia economic law should be accommodated by the government, such as by providing education at formal levels, such as universities and pesantrens (Islamic boarding schools), and at nonformal levels, such as workshops and other activities held by Sharia financial institutions.
Previous studies have discussed Sharia economic law, but they have yet to explicitly discuss the revitalisation of the law and the value of maslahah in Indonesia's existing Sharia economic laws.12 Therefore, this research is crucial to understanding the revitalisation of Sharia economic law theoretically and practically for public interests and benefits.
RESEARCH METHOD
This is normative legal research (applied legal research), a study employing normative legal cases such as the products of legal behaviour.13 The normative legal approach involves the written state laws implemented in concreto in society. Therefore, this research combines a normative study of Sharia economic law and the in concreto implementation of those laws in real actions and legal documents. This research considers historical, statute, and case approaches.
The historical approach reveals the legal history of Sharia economics since the era of the Prophet Muhammad up until today, contributing to the current Sharia economic law in Indonesia. The primary data consist of laws and regulations on Sharia economics, supported by secondary data from previous studies and notes relevant to this study.
ANALYSIS AND DISCUSSION
The History of Sharia Economic Law in Indonesia
Revitalisation is an effort to increase the economic value of certain areas by redeveloping certain establishments to improve their functions. Revitalisation aims to restore the vitality of a potential establishment or region to be further developed.15 In Indonesia, the revitalisation of economic law is crucial to provide a comprehensive understanding of the law to the government, economic actors, and society in general. This is to promote Islamic or Sharia economic law in the community. Both terms, Sharia and Islamic economy, do not differ in forms, implementations, and objects. Sociologically, these terms are used interchangeably by practitioners, academics, and commoners. However, different groups have been established using these two terms. Groups using "Islamic economy" are Ikatan Ahli Ekonomi Islam (IAEI/ Islamic Economists Association), Forum Silaturahim Studi Ekonomi Islam (ForSEI/ Islamic Economics Study Forum), and Asosiasi Pengajar dan Peneliti Ekonomi Islam (APPEI/ Islamic Economics Teachers and Researchers Association). Those using "Sharia economy" are Masyarakat Ekonomi Syariah (MES/ Sharia Economic Society), Asosiasi Program Studi Hukum Ekonomi Syariah Indonesia (APHESI/ the Indonesian Sharia Economic Law Department Association), and Asosiasi Dosen Ekonomi Syariah (ADESY/ Sharia Economics Lecturers Association).16
Source: Mohammad Nur Yasin, Politik Hukum Ekonomi Syari'ah di Indonesia, 2018, p. 121.
Sriwijaya Law Review ◼ Vol. 7 Issue 2, July (2023)
Table 1 shows that the revitalisation of Sharia economic law in Indonesia relies not only on the state but is also supported by society through the establishment of Sharia economic organisations. The basic principles of economic laws in Indonesia are taken from several resources, such as Western, customary, and Islamic laws. These legal resources become a foundation for creating public benefits, as previously studied by Sharia economic experts before becoming state laws and regulations. Substantially, ulama (authoritative Islamic scholars) categorise Islamic law into two: ibadah (worship) and muamalah (social relationship). Sketchily, ibadah is related to the relationship between humans and God, while muamalah is related to humans' worldly activities, such as economy, politics, socio-culture, etc.17 Therefore, this study is part of the muamalah study as it focuses on economic transactions. These legal resources become the basic principles for creating maslahah (public benefit) for society. Studies by experts contribute to the establishment of Sharia economic law in Indonesia. Besides, Islam allows its adherents to reinterpret Islamic legal resources to answer current needs without deviating from Sharia provisions.
In Islam, human dignity is a right given by God to humans as God's successors on earth. They share the responsibility of 'imrāh al-arḍ (creating a civilisation on earth).18 Therefore, human resources have an essential role in achieving the objectives of the government and public companies, including Sharia banks.19 Indonesia has several qualified and potential human resources to bring about betterment.20 The earliest Sharia economic development can be seen in the establishment of the first Sharia bank, showing the commitment of Indonesia to implementing Sharia economic law in financial sectors.
The earliest unification and codification of Islamic economic law in Indonesia can be seen in the establishment of Bank Muamalah Indonesia (BMI). Before its launching and inauguration, several other names were proposed, such as Bank Islam Indonesia (BASINDO), Bank Syariah Islam Indonesia, Bank Karya Islam, and Bank Amal Indonesia. These names, however, were unacceptable due to their close connotation with the controversial radical right wings, considered against the principle of Pancasila and a subversive, destructive, and dangerous movement. The establishment of BMI offered a positive realisation of Sharia economic law in Indonesia. However, this has not been understood by all Indonesian Muslims; therefore, there is a need for the introduction of Sharia economic law. This is proven by the many Muslims who are reluctant to use Sharia banks. Therefore, Sharia economic law should be well introduced, with the Quran and Sunna as the basic principles and resources. Besides, there is also a need to consider the principles of Islamic jurisprudence or usul fiqh.21 Furthermore, considering the moral principles of Islam, there is a need to enhance self-consciousness to understand and apply every aspect regulated by Islam.22 In this case, the Sharia economy is crucial to be promoted and developed in Indonesia as an effort of Islamic proselytisation and of teaching contemporary Islamic jurisprudence (fiqh) on muamalah. This is to harmonise Islamic legal education within the heterogeneity of Indonesian society.
Moreover, Sharia economic law is standardised in Sharia-compliant economic transactions. This is because Islam does not allow placing economic interests above religious ones. Therefore, Sharia economic actors need to consider Sharia rules and ethics. Sharia economic law encourages new thinkers from among practitioners to revisit the colonial system. This system disadvantaged the Indonesian economy by positioning the native Indonesians as the lowest class in society. Moreover, the colonial government divided natural resources management depending on race, which led to severe problems. This was exacerbated by the establishment of gambling and prostitution sites by the non-Muslim Chinese people.24 Apart from that, a Western author in the 17th century believed that almost all trading activities were dominated by the Westerners, who happened to be Jews. This situation occurred in Cambodia, Patani, Jambi, and Southeast Asia, including Indonesia. The Chinese and Western people ran retail, manufacturing, service, liquor, and gambling businesses.25
In mid-1997, Indonesia was hit by an economic crisis, destroying the fundamental components of the national economy.26 The revitalisation of the economic system became crucial to revive customers' trust, which had drastically weakened. In that difficult situation, the government started to view the Sharia economic system as an alternative, which was concretely implemented through the development of Sharia banking. The concept of Sharia economics is able to balance the real and monetary sectors.27 Studies on the Sharia economy have emphasised that the fiscal issue is a major economic issue and more important than the monetary one.
The dynamic economic condition has brought about changes in economic behaviour.28 The monetary crisis of mid-1997 made 240 conventional banks in Indonesia experience a negative spread, leading to liquidation. The only Sharia bank, Bank Muamalah Indonesia, was safe from the crisis.
The differences could be seen in the absence of a burden on Sharia banks to pay customers' interest. The Sharia banks used a profit-sharing system, with the amount depending on the obtained profits. Given such an important role, more actions are needed to introduce the concept of the Islamic economy to society. This introduction can be in the form of economic proselytisation to strengthen society's character without contradicting the national economic laws.
Law No. 14 of 1967 stipulated the Basic Principles of Banking and Foreign Banking in the national economy. However, this law did not allow the establishment of Islamic economic institutions. This law was gradually revised to enable the formulation of Islamic economic laws. The revision occurred in the banking sector deregulation of 1 June 1982. This was followed by the October 1988 Package, dated 27 October 1988, and the oral explanation by the government during the meeting with Commission VII of the People's Representative Council on 5 July 1990.
The use of the profit-sharing concept in Sharia becomes evidence of the insertion of Islamic values into national law. This is mentioned in Article 2 Paragraph (1) of Government Regulation No. 72 of 1992, stating that: "the profit-sharing principle as mentioned in Article 1 Paragraph (1) is the sharia-based profit-sharing principle used by the banks adhering to the profit-sharing principle."30 Terminologically, Sharia economic law comprises legal norms related to Sharia economics. Authorised officials establish Sharia economic law and norms to regulate Sharia economic activities and punish violators.
Questions have been posed about the authenticity of the Sharia economy: how Islamic is the system, and how can it contribute to development?31 On the other hand, legal recognition of the Sharia economy is becoming more apparent with the issuance of Law No. 3 of 2006 on the Amendment of Law No.
7 of 1989 on the Religious Courts. The affirmation of Sharia economic law can also be found in the Religious Courts' authorities. These include the powers to inspect, decide, and resolve Sharia private law cases between Muslim litigants in marriage, waqf (endowment), zakah (almsgiving), hibah (gift), sadaqah (charity), and the Sharia economy. This is mentioned in Article 49 of Law No. 3 of 2006.32 The explanation of Article 49 of this law states that the Sharia economy comprises business activities conducted based on Sharia principles.
Even though the current development has proven to increase economic growth, the social and environmental aspects have yet to succeed.33 The Sharia economy can be considered an evolution of the neo-classical economy. The Sharia economy emerged when the modern economy slowed down in offering concrete or alternative solutions to contemporary economic challenges.34 Laws and regulations mentioning the Sharia economy include Law No. 19 of 2008; Law No. 21 of 2008; Law No. 7 of 1992; Law No. 10 of 1998; and Law No. 39 of 2005, together with government regulations establishing Sharia economic law as a reference for Muslims in Sharia economic-based transactions. The following table briefly shows the development of the Sharia economic system. Table 2 becomes evidence of the government's contribution to supporting and revitalising Sharia economic law in Indonesia to provide legal protection for its people. Sharia economic activities focus on achieving financial objectives and ensuring the compliance of economic activities with Sharia norms and ethics.36 Among the concrete implementations of the Sharia economy in Indonesia are the establishment of Bank Muamalah Indonesia and the issuance of laws and regulations on Sharia banking. These include Law No. 7 of 1992 on Banking, Law No. 10 of 1998 on the Amendment of Law No. 7 of 1992, and Law No.
21 of 2008 on Sharia Banking. Apart from those pieces of legislation, the regulation of Sharia banking also takes the form of the Compilation of Sharia Economic Law (Kompilasi Hukum Ekonomi Syariah (KHES)).
According to Yusuf Wibisono, as quoted by Ali Syukron, Law No. 21 of 2008 on Sharia Banking, with its 70 Articles, aims at achieving particular objectives. Firstly, the law ensures legal certainty and the trust of stakeholders and society in using Sharia banking products and services. This legal certainty is available in various provisions about the implementation of Sharia, business type and feasibility, fund transfer, bank confidentiality, prohibitions in Sharia banks, and dispute resolution. Secondly, the law is to ensure the Sharia compliance of the businesses. This can be seen in the obligations to avoid prohibited activities, follow the Sharia fatwa authority (the Indonesian Ulama Council or Majelis Ulama Indonesia/ MUI), establish a Sharia Supervisory Board in every Sharia bank, and obey all related laws and regulations and the Sharia Supervisory Board at Bank Indonesia. Thirdly, the law is to ensure the system's stability. This is shown by the adoption of the 25 Basel Core Principles for Effective Banking Supervision, consisting of provisions about bank establishment and ownership, controlling shareholders, governance, prudential principles, risk management obligations, development, and supervision. The spirit of this "system's stability" is more visible with the administrative and criminal sanctions.
Sriwijaya Law Review ◼ Vol. 7 Issue 2, July (2023)

Other Islamic banks have followed the path of Bank Muamalat Indonesia (BMI) by opening Sharia branches. On 28 June 1999, IFI Bank opened its Sharia branches, followed by the conversion of Bank Susila Bakti to Bank Syariah Mandiri as the subsidiary of Bank Mandiri and the opening of Sharia branches by Bank Negara Indonesia. In February 2000, five new Sharia branches were opened. Several banks proposed opening Sharia branches and have been registered by Bank Indonesia. These banks are Bank Bukopin, Bank BTN, Bank BRI, Bank Niaga, Bank Mega, BPD Jabar, and BPD Aceh. The most recent development is that the government supported merging three Sharia banks: BNI Syariah, BRI Syariah, and Bank Syariah Mandiri. This merger resulted in the establishment of Bank Syariah Indonesia (BSI).38 However, the problem is that these banks remain under the auspices of Bank Indonesia, a central bank with a conventional system. Ideally, they should fall under the supervision of a Bank Indonesia Syariah, a special financial institution established by the government with equal status to Bank Indonesia.39 Islamic law or Sharia offers significant excellence for the prosperity of society, as stated in the primary resources of Sharia. With the more comprehensive industrial and economic development, modern legal experts have widened the interpretation of Sharia financial institutions' moral and religious obligations.40 These institutions are the primary components in the operation of Sharia-based financial institutions. Their existence has been acknowledged in the global economic arena.
A conventional economic theory maintains that the fulfilment of human needs is a fundamental issue. Satisfying all human needs is treated as the objective of survival. In this case, all economic instruments must be maximised to fulfil such satisfaction, regardless of religious principles. Consequently, such efforts pay little attention to the harmony of human life and the balance of nature. This is different from Sharia economic principles, which prioritise maslahah and economic justice.

The rational economy is one of the concepts in the conventional economic system. This is reflected in individuals' actions. Economic activity is considered rational if it is centred on self-interest as the primary aspect of human activities. The conventional economic concept regards utility maximisation as rational and appropriate behaviour. Furthermore, Adam Smith argued that the pursuit of self-interest will positively impact society. This is because market mechanisms and competition are maintained by an invisible hand. The philosophical foundation of the conventional economy includes secularism, which separates religion from worldly matters. Religion is considered merely a means to regulate humans' interactions with the Divine. Meanwhile, interhuman interaction is not considered a religious zone. This notion leads to the understanding that worldly life is within humans' authority. Humans have full rights to determine their own lives. Therefore, rationality becomes the primary guideline in every human action. This indicates that human existence becomes rational if it is based on self-interest as the primary objective of every activity. The objectivity of the economy is considered the manifestation of capitalism that is free from morality and universally applicable (positivism). Equilibrium becomes an article of faith in capitalism, mirroring the balance of nature in Newtonian physics. Jean-Baptiste Say maintained that supply creates its own demand, which then
impacts the balance of the economy.42 On the other hand, production activity generates its own demand. Consequently, excess production and rising unemployment will not occur. In this system, the government does not interfere with economic activities, as intervention is considered a disturbance to the natural balance. These assumptions are the strength of conventional economic paradigms and the greatness of the market in offering solutions to economic issues.

The Actualisation of the Sharia Economy in Indonesia as a Legal Concept

The spirit of legal reform has influenced the improvement of Islamic material law as a sub-national legal system. Islamic law has several characteristics, such as being responsive, adaptive, and dynamic, as results of the reform process.43 In this case, Muslims should possess a specific perspective to face current development and advancement. One aspect discussed considerably is economic circulation, in accordance with current demands. The problems widen and vary as humans conduct various transactions to meet their needs. Issues have emerged in the operation of companies, credit, cooperatives, insurance, and others, and they often become more complex, especially when they intersect with Islamic law.

Islamic law is a value system and set of rules, functioning as a solution to worldly problems.44 In the Islamic legal system, decision-making is conducted by considering the notion of maslahah mursalah for worldly matters, including protecting human life and religious values.
Sharia economic law cannot be separated from Islamic legal resources. Human satisfaction is related not only to worldly matters but also to the hereafter. As a consequence, there is a need to balance those two aspects. Regarding wealth and property, Islam protects them by legalising various transactions, such as trading, renting, and pawning, and by prohibiting usury, fraud, and thievery. Islam also prescribes hand amputation for robbery to achieve the ultimate goal of public goodness. Maslahah in the economy becomes a crucial consideration influencing the balance of human social life. Muslim scholars initiated the implementation of Sharia economic law in Indonesia, and society has responded to it well. However, society's awareness of fully engaging with Sharia economic law remains limited.

From the perspective of Islamic law, maslahah is a must in the formation of law.46 With this, the objective of Islamic law is to achieve maslahah in this world and the hereafter. This is to improve the position of humans as the noblest creatures. This means that the value of maslahah should consist of Sharia compliance (halal), benefits, goodness (tayyib), and the avoidance of harm. Maslahah in the Sharia economy is oriented toward actors who always want maslahah. With maslahah, all economic potential will provide justice for all. Besides, Islamic law is intended as ibtila' (ordeal) and ikhtibar (lesson), testing Muslims' loyalty to their religious laws.47 Humans, with their strengths and weaknesses, are required to live collectively, share, and create togetherness in social life to achieve public interests.
Islam pays significant attention to property ownership rights and makes them one of the five fundamental objectives of Sharia. These five objectives are protecting lives, honours, minds, wealth, and religion. Copyrights, in the form of writing, artworks, patents, and trademarks, are among the legal rights of their owners, in both Sharia and conventional perspectives. The practices are detailed in Sharia economic law to provide certainty in transactions and economic development.

In its actualisation, the commitment to God's provisions will influence someone's commitment to the value of humanity in business by prioritising the importance of kindness.49 Sharia economic law embodies a godly dimension to achieve success in this world and the hereafter (falah).50 According to Umar Chapra, as quoted by Apridar, the basic principles of the Sharia economy include, first, tauhid (the oneness of God), a belief and awareness that economic behaviour should consider God's power in various aspects, as a basis for maintaining justice and maslahah. Second, the principle of khilafah is an understanding that human existence on earth is not without purpose and is full of responsibilities. This is why humans are equipped with the capabilities to actualise their interests while submitting themselves to the Creator. Third, the principle of justice is the fulfilment of human needs and the stability to thrive equally with good (tayyib) and halal incomes.51 Philosophically and substantively, the value of justice in Islam differs from normative substantive law. Normative law regarding civil procedural law is based on individualism and secularism. The nature of legal disputes becomes the fundamental distinction in Islamic justice.
Indonesia is a nation of laws, and its national objectives are stated in the fourth paragraph of the Preamble to the 1945 Constitution. Among the goals are to protect all the people of Indonesia and all the independence that has been struggled for, and to improve public welfare.53 With this, economic development should consistently be rooted in and based on Pancasila. Economic development should reflect the actualisation of Pancasila as a fundamental value and the spirit of a people's economy based on the principles of togetherness, justice, and independence.54 In terms of legal material, Bustanul Arifin, as quoted by Djamil, states that Indonesian laws need to pay serious attention to Sharia-based trade and contracts. This is because the primary references of Indonesian rules mostly come from colonial laws, as continuations of French law and the Penal Code.55 The legalisation and formulation of Sharia economic laws in Indonesia still meet various challenges. Among them are political constellations. With gradual development from time to time, Sharia economic law should be able to compete with conventional economic laws, which are more advanced and significantly developed. Apart from those ideological and macro reasons, Sharia economic laws have been strongly encouraged by the establishment of the National Sharia Board of the Indonesian Ulama Council (Dewan Syariah Nasional-Majelis Ulama Indonesia/DSN-MUI). Practical and macro interests also influenced this establishment. At first, MUI initiated the establishment of Sharia banks. Other Sharia-based financial institutions and businesses followed. For sustainability reasons, they need to be supported by the MUI. In this case, an institution focusing on the Sharia economy becomes crucial. The consideration of the Central Council of the Indonesian Ulama Council Decree mentions the need for banking facilities that are free from interest, which is haram (prohibited) in Islam.
Economic activity is considered maslahah if it meets two criteria: Sharia compliance (halal) and benefit for all humans.56 In other words, Sharia economic activities applied by Muslims are generally a form of, or an effort to achieve, prosperity. In those efforts, all parties should use a contract based on honesty and a willingness to submit to a Sharia-based contract.57 In this context, the DSN-MUI was established to respond to Muslim people's aspirations in economic fields and to encourage the implementation of Islamic teaching in a Sharia-based economy and finance.58 Another factor leading to the establishment of the DSN-MUI is the need for efficient coordination among ulama in responding to current issues in economy and finance. Various cases need fatwas (legal opinions) discussed by the ulama to achieve common perspectives on specific issues for the Sharia Supervisory Board (Dewan Pengawas Syariah/DPS) in every Sharia financial institution.59 DSN-MUI is primarily responsible for providing fatwas on the Sharia economy. It is the authoritative institution for determining the Sharia compliance of Sharia banking products, institutions, and businesses.

Broadly, Sharia, especially Islamic law on muamalah, has a definite meaning and is relatively far from the branch of fiqh.60 Therefore, the revitalisation of Sharia economic law to achieve public interests can be carried out through several tasks. First, there is a need for legislative members who call for a Sharia financial system. Second, there is a need to promote the Sharia economic system through seminars, training, education, etc.
Third, the state needs to codify Sharia economic laws comprehensively. Fourth, improving human resource competence in Sharia economics among academics and practitioners is crucial. Fifth, there is a need for research and studies on the Sharia economy in Indonesia. Sixth, there is a critical need to prepare professional information and technology. Seventh, an institution offering assurance in financing and advocacy services when a dispute occurs is also essential. Eighth, there is a need to supervise Sharia compliance in products and marketing.

CONCLUSION

Sharia economic law based on the Quran and Sunna has been applied since the period of the Prophet, his companions, and their followers until today. Indonesia has a long history of the Sharia economy. It started when the system used only the Quran and Sunna as the primary references, followed by the unification and codification of Sharia economic law within the national law. Sharia economic law is crucial in the national and religious lives of the Indonesian people. Therefore, the law and its implementation within the state are needed. In Indonesia, Sharia economic law has been developed by involving various state elements, including the establishment of Sharia financial institutions initiated by Bank Muamalat Indonesia, the stipulation of Sharia economic law, the expansion of the Religious Courts' jurisdiction to deal with Sharia economic disputes, and the issuance of the National Sharia Board fatwas on the Sharia economy. These are followed by the establishment of Sharia economic organisations consisting of academics and practitioners. Also, higher education institutions play their role by opening Sharia economics departments, while pesantren education offers innovation in Sharia economic studies. This is marked by the issuance of Law No. 3 of 2006 on the Amendment of Law No. 7 of 1989 on the Religious Court, Law No. 19 of 2008 on State Sharia Bonds, Law No. 21 of 2008 on Sharia Banking, Law No.
48 of 2009 on Judicial Powers, and Law No. 50 of 2009 on the Second Amendment of Law No. 7 of 1989 on the Religious Courts. The last of these stipulates that the Religious Courts have absolute authority to settle Sharia economic disputes. However, those developments should be matched by the people's level of understanding of Sharia economic law.
Drosophila poly suggests a novel role for the Elongator complex in insulin receptor–target of rapamycin signalling

Multi-cellular organisms need to successfully link cell growth and metabolism to environmental cues during development. Insulin receptor–target of rapamycin (InR–TOR) signalling is a highly conserved pathway that mediates this link. Herein, we describe poly, an essential gene in Drosophila that mediates InR–TOR signalling. Loss of poly results in lethality at the third instar larval stage, but only after a stage of extreme larval longevity. Analysis in Drosophila demonstrates that Poly and InR interact and that poly mutants show an overall decrease in InR–TOR signalling, as evidenced by decreased phosphorylation of Akt, S6K and 4E-BP. Metabolism is altered in poly mutants, as revealed by microarray expression analysis and a decreased triglyceride : protein ratio in mutant animals. Intriguingly, the cellular distribution of Poly is dependent on insulin stimulation in both Drosophila and human cells, moving to the nucleus with insulin treatment, consistent with a role in InR–TOR signalling. Together, these data reveal that Poly is a novel, conserved (from flies to humans) mediator of InR signalling that promotes an increase in cell growth and metabolism. Furthermore, homology to small subunits of Elongator demonstrates a novel, unexpected role for this complex in insulin signalling.
Insulin binding to InR activates PI3K, which generates phosphatidylinositol (3,4,5)-trisphosphate (PIP3) at the cell membrane. PIP3 in turn recruits PDK1 and Akt to the membrane, where PDK1 phosphorylates and activates Akt. Phosphorylated Akt signals inhibit the tuberous sclerosis complex (TSC, Tsc1-Tsc2) [7-9]. When TSC is inhibited, the small GTPase Rheb becomes active [10]. This then activates TOR, integrating TOR into the insulin signalling process. TOR is a component of two different complexes: TORC1 and TORC2. Activation of TORC1 has various downstream effects contributing to an increase in cell growth and proliferation. For example, TORC1 directly phosphorylates S6K and 4E-BP, resulting in an increase in ribosome biogenesis and m7G cap-dependent translation [1]. In addition, TORC1 activation inhibits autophagy [11]. A negative feedback loop signals back to the IRS through S6K, ensuring attenuation of TOR signalling above a certain level [12]. The TORC2 complex phosphorylates and activates Akt kinase [13], resulting in the phosphorylation of the forkhead-like transcription factor FoxO.
Phosphorylated FoxO is excluded from the nucleus, precluding the transcription of FoxO target genes [14-16]. A critical consequence of the activation of InR–TOR signalling is the inhibition of autophagy: a cellular response to starvation in which components of the cytoplasm are engulfed in small double-membrane-enclosed vesicles. The contents of these vesicles are degraded by the autophagic machinery, and breakdown products then serve as a nutrient source for the cell until more nutrients become available in the environment. Alterations to autophagy have been found in cancer and neurodegenerative diseases [17,18]. Herein, we report the identification of Poly as a novel mediator of the InR–TOR signalling pathway. poly is an essential gene in Drosophila that was mutated in a P-element transposon mutagenesis screen [19]. Crucially, the gene product is conserved in higher eukaryotes, including humans, showing homology to the ELP6 subunit of the Elongator complex. Loss of poly function results in lethality at the late larval stage, but only after extreme larval longevity. Many intriguing phenotypic features are observed in larvae lacking poly, including abnormal nuclear morphology in neuroblasts and the development of large melanotic masses in third instar larvae. We have combined genetic, biochemical and bioinformatic approaches to functionally characterize poly. Our data reveal that poly is a novel mediator of InR–TOR signalling and that loss of poly results in downregulation of a number of components of the InR–TOR pathway. We therefore propose that the wild-type Poly protein is a positive regulator of cell growth and metabolism.

Characterization of the poly mutant phenotype

The poly mutation was isolated in a P-element mutagenesis screen that aimed to generate a large collection of single P-element-induced mutations in Drosophila [19].
The P-element insertion that led to the lethal poly05137 allele was mapped to the single intron of the CG9829 gene, localizing to 87E7-8 on the third chromosome (figure 1a). The poly05137 insertion led to an absence of poly mRNA, as assessed by Northern blotting (not shown) and reverse transcriptase-polymerase chain reaction (RT-PCR; figure 1b), and of Poly protein, as revealed by immunoblotting (figure 1c). RT-PCR verified that expression of the overlapping CG8790 gene was not affected by the P-element insertion in the poly05137 allele (figure 1b). Two independent experiments additionally corroborate lesion of the CG9829 gene as being responsible for the mutant phenotype: (i) excision of the P-element following genetic exposure to transposase completely reverted the mutant phenotype, and (ii) successful rescue of poly05137 lethality was achieved by using a hs-Gal4 driver to direct expression of a UAS-poly transgene during larval development. Mutation of poly results in pleiotropic effects manifesting as a particularly striking phenotype. poly mutants appear to progress normally through embryogenesis, but larval development proceeds much more slowly. While the normal generation time is 10 days at 25°C, poly mutant larvae exhibit extreme third instar longevity (up to 21 days) before dying without pupation (figure 1d). When homozygous mutant larvae were examined, the morphology of many tissues appeared abnormal: the brain, ring gland, salivary glands and imaginal discs were reduced in size compared with heterozygous siblings and wild-type animals, suggesting cell growth and/or proliferation defects (figure 1e). Mutant larval neuroblasts were characterized by abnormally shaped nuclei, though mitotic figures, evident even in 20 day old mutant larvae, were normal in appearance. These lobulated nuclei resembled the nuclei of mammalian polymorphonuclear leucocytes, thus suggesting the name poly (figure 1f).
During the lengthened third instar larval phase, melanotic masses appeared in the haemolymph of poly mutants, increasing in size and number with time (figure 1g). A second hypomorphic allele of poly has recently been reported to result in abnormal nurse-cell chromosome dispersal and oocyte polarity in the Drosophila germline, possibly owing to a proposed interaction with an mRNP complex involved in similar processes [20]. Expression analysis of poly revealed high levels of mRNA and protein in the first 4 h of embryogenesis, suggestive of maternal loading of the mRNA, and possibly protein, into oocytes (figure 1h,i). While the level of Poly decreased during the remainder of embryogenesis, it was still higher than in later stages of development. Overall, poly was expressed throughout development (figure 1h,i).

Poly is conserved among higher eukaryotes and resembles the Elp6 subunit of the Elongator complex

The Drosophila poly gene encodes a 251-amino-acid-long protein lacking any motifs suggestive of specific function. Sequence similarity searches revealed that Poly is conserved among higher eukaryotes and, importantly, has a human homologue (figure 2a). The apparent orthologue of Poly in Saccharomyces cerevisiae is the Elp6 protein, part of the six-subunit Elongator complex that, in association with the RNA polymerase II holoenzyme, is responsible for transcriptional elongation [21,22]. The Elongator holocomplex is conserved in composition from yeast to humans [23,24], with acetylation activity contributed by the Elp3 catalytic subunit [25]. Intriguingly, acetylation is directed to different substrates depending on where in the cell the complex is located (e.g. tubulin in the cytoplasm, and histones in the nucleus) [26,27]. Elongator has additionally been linked to translational control through tRNA modification in the cytoplasm.
To date, only the Elp3 subunit has been studied in Drosophila, where the mutant phenotype has recently been reported to be remarkably similar to that described herein [28]. Elp6 in budding yeast is part of an Elongator subcomplex that also contains the Elp4 and Elp5 proteins [21,22], which share sufficient homology to be aligned with one another (and can also be identified in flies and humans). Consistent with the previous identification of the Elp4 and Elp6 proteins as paralogues [29], our resulting phylogeny implies that each of the three different Elongator subunits probably arose from common gene duplication events during eukaryotic evolution (electronic supplementary material, figure S1). Thus, Poly is an evolutionarily conserved protein that exhibits homology to the yeast Elp6 Elongator subcomplex component.

Identification of protein interactors of Poly

As the poly gene appeared to be a 'pioneer', we adopted an unbiased approach to determine the pathways and processes in which Poly might be involved. In order to identify proteins interacting physically with Poly, immunoprecipitation with an antibody generated against recombinant Poly was performed using 0-5 h Drosophila embryo extracts (figure 3a,b). Poly can only be immunoprecipitated with immune serum, and only from wild-type extracts. Samples were analysed using tandem mass spectrometry [30,31]. Immunoprecipitates with pre-immune serum served as the control. Among the five Poly-binding proteins identified with significant scores, InR scored highest and was represented by numerous peptides.

Poly is decreased in insulin receptor mutants

Considering the physical interaction of Poly with InR, we assessed the level of Poly in various InR signalling mutants. Strikingly, the level of Poly was dramatically decreased in InR05545 mutant larvae, whereas the level of Poly in Akt1 mutant larvae was unaffected (Akt1 is situated downstream of InR; figure 3c).
The level of poly mRNA was unchanged in the InR mutant, suggesting that the difference in protein level might be due to instability or decreased protein synthesis of Poly in the absence of InR (figure 3d). Further evidence for a connection between Poly and the InR was provided by phenotypic analysis of InR05545 mutant larvae (figure 3e). While this allele was reported previously to be embryonic lethal [32], we noticed that a small number of homozygous InR05545 animals reached the third instar larval stage. Strikingly, the phenotype of these InR mutant larvae resembled that observed in poly larvae, with increased larval lifespan and development of melanotic masses (figure 3e). These observations demonstrate that loss of InR leads to a decrease in Poly and that InR05545 mutants exhibit phenotypic features similar to those of poly mutant larvae.

Poly mutation results in decreased signalling activity downstream of the insulin receptor

Based on the results presented so far, we hypothesized that there should be genetic interactions between poly and components of the InR signalling pathway. Consequences of such interactions could be revealed by the state of downstream effectors of the InR pathway. We therefore over-expressed poly in adult fly eyes, using a GMR-Gal4 construct to drive the expression of a UAS-poly transgene [33]. At 27°C, this led to a rough eye (disorganized ommatidia) phenotype (figure 4a). Disruption of InR–TOR signalling through mutation of either dAkt1 or dS6K led to a striking suppression of the poly-induced rough eye phenotype (figure 4b,c), suggesting that an intact InR–TOR signalling cascade was required for Poly to exert its effects. These experiments also critically demonstrate the cell-autonomous importance of Poly to insulin signalling. We hypothesized that if dAkt and dS6K mutants can act as suppressors of the poly over-expression phenotype, the activity of these kinases might be altered in poly mutant animals.
We therefore probed early third instar larval extracts with antibodies that recognize specifically the phosphorylated (active) forms of dAkt and dS6K. Indeed, phosphorylation of both dAkt and dS6K was decreased upon mutation of poly (figure 4d,e). These data reveal that the activity of both dAkt and dS6K kinases was decreased in poly mutants, consistent with Poly exerting positive effects on cell growth [34]. Disruption of InR–TOR signalling causes inhibition of cap-dependent translation, as a decrease in 4E-BP phosphorylation results in its binding to eIF-4E [14]. Disruption of signalling also causes increased transcription of 4E-BP owing to increased FoxO activity.

Figure 4 caption (excerpt): Immunoblotting of third instar larval extracts using phospho-specific antibodies to dAkt and dS6K reveals decreased phosphorylation of both kinases in poly larval extracts. (f) d4E-BP transcript levels were assessed by real-time qPCR on RNA isolated from wild-type and poly mutant larvae. d4E-BP levels were normalized to Actin5C levels. The error bars derive from reactions performed on biological triplicate samples. The double asterisk represents a significant difference (p < 0.01) in 4E-BP expression levels between wild-type and poly larvae. (g) Immunoblotting of third instar larval extracts showing decreased phosphorylation of d4E-BP in poly compared with wild-type larvae.

Table 1. Mass spectrometry identifies the insulin receptor as a physical interactor of Poly. Analysis of pre-immune and immunoprecipitation samples by tandem mass spectrometry identified interacting proteins specifically found in the immunoprecipitate sample. The insulin receptor was identified in bands 2 and 3, with the highest scores. 'exp score' quantifies on a log scale (base 10) the expectation that the hit was achieved by chance, calculated using the program X!TANDEM. Columns: band, exp score, unique/total peptides, protein.
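"Normalized to Actin5C levels" refers to relative qPCR quantification against a reference gene, conventionally computed with the 2^-ΔΔCt method. A minimal sketch of that arithmetic; the Ct values below are hypothetical illustrations, not the paper's data, and the paper does not state which quantification method its tool used:

```python
def fold_change(ct_target_mut, ct_ref_mut, ct_target_wt, ct_ref_wt):
    """Mutant/wild-type fold change of a target transcript, normalized to a
    reference gene (e.g. Actin5C), via the standard 2^-ΔΔCt calculation."""
    delta_mut = ct_target_mut - ct_ref_mut   # ΔCt in the mutant
    delta_wt = ct_target_wt - ct_ref_wt      # ΔCt in the wild type
    ddct = delta_mut - delta_wt              # ΔΔCt
    return 2 ** (-ddct)                      # each cycle earlier = 2x more template

# Hypothetical Ct values: relative to Actin5C, d4E-BP amplifies ~1.58 cycles
# earlier in the mutant, i.e. roughly a threefold increase in transcript.
print(fold_change(22.0, 18.0, 23.58, 18.0))
```

Equal ΔCt in both genotypes gives a fold change of exactly 1, which is a quick sanity check when adapting the sketch to real data.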
Consistent with a decrease in InR–TOR signalling, the level of d4E-BP transcript was almost threefold greater in poly mutant larvae compared with control animals (figure 4f). In addition, immunoblotting revealed that d4E-BP was hypo-phosphorylated in poly mutant extracts compared with wild-type extracts (figure 4g). Taken together, these results are consistent with a decrease in cap-dependent translation upon loss of Poly function.

Autophagy is constitutively active in the fat body of poly larvae

A critical consequence of the activation of InR–TOR signalling is the inhibition of autophagy. Drosophila undergoes developmentally programmed autophagy at defined times to facilitate tissue remodelling during metamorphosis [35-38]. On the other hand, starvation-induced autophagy (in response to nutrient deprivation) takes place specifically in the larval fat body [11]. As the larval fat body serves a similar function to the vertebrate liver by acting as a nutrient storage organ, it has been commonly used to examine autophagy during both starvation-induced and developmentally regulated autophagy [11,39,40]. While the fat body from fed wild-type larvae does not normally exhibit autophagy, autophagy is evident within a short period of amino acid starvation. TOR directly inhibits starvation-induced autophagy, while components of InR signalling (such as InR and Akt, acting upstream of TOR) also behave as negative regulators of autophagy. We hypothesized that if poly acts in InR signalling, the state of autophagy, as visualized with Lysotracker staining, might be altered in the poly mutant fat body compared with wild-type fat body. As anticipated, no Lysotracker puncta were observed in the fed wild-type early third instar fat body (figure 5a, fed). However, Lysotracker puncta became apparent following a 4 h amino acid starvation, owing to the activation of autophagy (figure 5a, starved).
Strikingly, Lysotracker puncta were abundant even in fed poly fat bodies, demonstrating that autophagy was constitutively active in the fat body of poly mutants (figure 5b, compare fed with starved).

Poly loss of function leads to an increase in apoptotic cell death

Autophagy and apoptosis are closely related, and increased levels of autophagy can lead to apoptosis [39,41]. For example, increased autophagy resulting from the clonal over-expression of Atg1 in the wing disc resulted in elevated apoptosis, as evidenced by the appearance of cleaved caspase-3 in these clones [39]. Because loss of poly led to constitutive activation of autophagy in the fat body, we investigated whether the loss of poly also resulted in increased apoptosis. Indeed, cell death increased dramatically in third instar larval neuroblasts as poly larvae aged. This was readily seen when larval neuroblasts were stained for pS10-histone H3 and TUNEL to detect mitosis and apoptosis, respectively (figure 5c). Mitotic figures, though rare at the latest stages, were still normal in appearance (data not shown). As the generation of mosaic imaginal discs allows the side-by-side comparison of wild-type versus mutant cells, we generated poly loss-of-function clones in imaginal discs. Staining of mosaic discs revealed increased levels of cleaved caspase-3 in poly mutant clones, which were discernible by the lack of β-galactosidase (figure 5d). Cleaved caspase-3 was not detectable in the adjacent wild-type cells. Thus, several independent lines of evidence demonstrate that loss of poly leads to an activation of autophagy coupled with an increase in apoptotic cell death.

Metabolism is affected in poly mutant larvae

Drosophila is frequently used to study metabolic regulation, as fruitflies share the majority of metabolic functions with vertebrates [42]. The larval fat body is the main organ for the regulation of energy homeostasis, as excess energy is stored in the form of glycogen and triglycerides (TAGs; lipids).
Activation of signalling through the InR pathway promotes both anabolic metabolism and the storage of nutrients such as TAGs [43]. This is important for development, as the breakdown of the larval fat body and the consequent release of energy facilitate Drosophila metamorphosis. In order to identify genes that were differentially expressed in poly mutant larvae, we carried out a microarray analysis. Among the list of 106 genes downregulated in poly mutants, functional enrichment group analysis identified a strong enrichment in gene ontology (GO) terms belonging to metabolic processes, suggesting that metabolism might be affected in poly mutants (table 2). We therefore investigated whether the decrease in InR-TOR activity was also manifest at the metabolic level in the storage of TAGs in poly mutants. TAG levels were normalized to total protein to give an accurate quantification of lipids per unit mass in wild-type and poly mutant larval extracts. Consistent with a decrease in InR-TOR signalling, we found the TAG : protein ratio to be half that detected in wild-type larvae (figure 6a). Lipid storage droplet-2 protein (Lsd-2), a protector against lipolysis and the Drosophila perilipin homologue, is localized on the outer membrane of lipid droplets, the main storage organelle for TAGs [44,45]. We found that the level of Lsd-2 in poly larval extracts was decreased relative to wild-type fat body, consistent with the decreased TAG : protein ratio observed in poly mutants (figure 6b). Together, the reduction in TAG storage and Lsd-2 protein in poly mutant larvae suggests a decrease in anabolic metabolism, consistent with, and an expected consequence of, diminished InR-TOR signalling.
The Poly protein relocalizes following insulin stimulation in Drosophila haemocytes and human cultured cells

Given the physical interaction of Poly with the InR and functional links between Poly and InR-TOR signalling, we were interested to determine whether the level and/or distribution of Poly in the cell were dependent on InR activity. Strikingly, staining for Poly was noticeably increased following insulin stimulation of larval haemocytes (cells of the innate immune system). Prior to isolation of haemocytes, larvae were starved for 3 h in 20 per cent sucrose. Haemocytes were then stimulated in culture with 200 nM insulin (electronic supplementary material, figure S3). This increased staining of Poly appears to be a rapid response, as it is detectable following only 15 min of insulin stimulation, and levels of Poly remain elevated after 75 min. A change in the appearance of Poly following insulin stimulation was conserved in human cells. Overnight serum-starved HeLa cells were stimulated with insulin for up to 90 min. As in Drosophila haemocytes, staining for HsPoly was significantly stronger following insulin stimulation, appearing maximal at 90 min. It additionally appeared that this increase in HsPoly was concentrated in or near the nucleus, with relocalization already evident by 30 min of insulin stimulation (figure 7a). Nearly 60 per cent of cells showed a relocalization of Poly by 60 min of insulin stimulation, with this level increasing to 100 per cent by 90 min. This rapid change in HsPoly behaviour was dependent on TOR signalling. Incubation of HeLa cells with rapamycin during the overnight starvation period prevented the increase in HsPoly staining following insulin stimulation (figure 7b). In rapamycin-treated cells, HsPoly remained evenly distributed throughout the cell even following 90 min of insulin treatment.
Taken together, these observations demonstrate that the increase and nuclear relocalization of HsPoly following insulin treatment occur in a TOR-dependent manner. Overall, our results identify Poly as a novel component of InR-TOR signalling that is conserved from flies to humans.

Discussion

Poly is a conserved protein that plays a novel essential role in InR signalling and, crucially, promotes the effects of the TOR kinase on cell growth and metabolism.

The activity of InR signalling is decreased in poly larvae

Our data reveal an essential involvement for Poly in InR signalling, an important pathway linking nutritional status to metabolism and cell growth [3]. The two kinases, Akt and S6K, are key effectors of InR-TOR signalling. The Akt kinase is located upstream of the TSC1-TSC2 complex in this pathway and has a critical role in promoting cell growth. Phosphorylation of Akt substrates contributes to a range of cellular processes including cell growth, proliferation and survival [46]. Dysregulation of Akt is involved in various diseases, including type-2 diabetes and cancer [47,48]. S6K kinase is one of the most downstream effectors of InR-TOR signalling, and is subject to phosphorylation and activation by TOR. Activation of S6K leads to an increase in translation through its phosphorylation and activation of ribosomal protein S6 [1]. Poly acts upstream of both of these kinases in the InR signalling pathway, as levels of phosphorylated (active) dAkt and dS6K kinases were decreased in poly larvae. That Poly acts in the activation of both dAkt and dS6K is also supported by genetic data that revealed suppression of a poly-induced rough eye phenotype by mutations in dAkt and dS6K. These two kinases should only act as genetic suppressors if they are situated downstream of poly in the signalling pathway, resulting in an overall decrease in InR-TOR signalling in the absence of Poly.

Figure 6. poly mutant larvae are characterized by reduced levels of TAGs and Lsd-2. (a) TAG levels were assessed in wild-type and poly mutant third instar larvae. The total TAG level was normalized to the total protein level. Error bars derive from the standard deviation of three independent experiments. The unpaired two-tailed t-test was used to compare the data from wild-type and poly larvae. The double asterisk represents a significant difference (p < 0.01) in TAG : protein ratio between control and poly mutant larvae. (b) Immunoblotting was carried out on wild-type and poly mutant larval extracts with an antibody specifically recognizing Lsd-2 protein.

Table 2. Microarray analysis revealed that several metabolic functions were among the genes downregulated in poly mutants. DAVID (the Database for Annotation, Visualization and Integrated Discovery) functional enrichment group analysis of the list of differentially expressed genes identified enrichment in gene ontology (GO) terms belonging to metabolic processes as among those genes downregulated in poly mutants. The data were analysed by Limma (see §5.10) and were for four biological replicates. Genes were selected for functional/pathway analysis if the adjusted (corrected for multiple testing) p-value was less than 0.05. Log (base 2) fold changes are given (log2 FC). 'rep' indicates reported gene function, while 'pred' indicates predicted gene function according to the gene-specific FlyBase report.

The biochemical data presented herein revealed a physical interaction of Poly with InR. Intriguingly, the level of Poly was reduced in InR mutant larval extracts independently of transcription, suggesting that a fraction of Poly may be unstable and subject to degradation in the absence of InR. Indeed, there are numerous examples of such instability if one partner in a protein complex is absent [49-51]. Whether Poly exists in a discrete complex with InR, and how such complex(es) may be regulated in response to insulin stimulation, are currently under investigation.
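The gene-selection rule in the Table 2 legend (keep genes whose multiple-testing-adjusted p-value is below 0.05, reporting log2 fold changes) can be sketched as follows. Benjamini-Hochberg is assumed as the correction method (Limma's default), and all gene names and values below are hypothetical, for illustration only:

```python
# Sketch of the differential-expression filter in the Table 2 legend: keep
# genes with adjusted p < 0.05 and negative log2 fold change (downregulated).
# Benjamini-Hochberg correction assumed; gene names and numbers hypothetical.

def bh_adjust(pvalues):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end          # 1-based rank of this p-value
        running_min = min(running_min, pvalues[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

# Hypothetical per-gene results: (gene, log2 FC poly vs. wild type, raw p).
results = [
    ("CG0001", -1.8, 0.0004),
    ("CG0002", -0.9, 0.0100),
    ("CG0003",  0.2, 0.4000),
    ("CG0004", -2.3, 0.0001),
]
adj = bh_adjust([p for _, _, p in results])
downregulated = [(gene, fc) for (gene, fc, _), q in zip(results, adj)
                 if q < 0.05 and fc < 0]
print(downregulated)
```

The same criterion would then feed the DAVID enrichment analysis described in the legend.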
Such an interaction may well be transient and highly dynamic, as Poly appears to accumulate in the nucleus following insulin stimulation. Considering the insulin-induced, TOR-dependent nuclear enrichment of Poly in HeLa cells, it is likely that insulin-mediated signalling via Poly is conserved from flies to humans. Together, our data predict a decrease in InR-TOR signalling in the absence of Poly. As one read-out of TOR signalling, we examined autophagy, a multi-step, catabolic process used for nutrient recycling during development and starvation. TOR activity is responsible for inhibiting autophagy under cell growth-promoting conditions. Our finding that autophagy is constitutively active in the fat body of poly mutant larvae is in accord with the observation of reduced InR-TOR signalling, and consistent with an inhibition of cell growth and/or proliferation in poly mutant animals. Furthermore, the increase in apoptotic cell death in poly mutant animals and clones generated in imaginal discs might result from increased levels of autophagy upon mutation of poly.

Metabolism is disrupted in poly mutants

A major function of the InR-TOR pathway is the modulation of metabolism. For example, 4E-BP mutant animals show increased sensitivity to starvation [16]. Interestingly, it was previously shown that loss of the tumour suppressor PTEN (responsible for the dephosphorylation of PIP3) in Drosophila nurse cells results in the accumulation of activated Akt in the cytoplasm. This activated Akt drives the formation of enlarged lipid droplets along with an increase in the expression of Drosophila Lsd-2 [52]. Lsd-2 mutants are characterized by decreased TAG levels [45]. Interestingly, autophagy and lipid metabolism were shown to be two interlinked processes, as suggested by a decrease in autophagy resulting in an increase of lipid storage in the cell [53].
Consistent with decreased InR-TOR signalling in the absence of poly, both Lsd-2 and the TAG : protein ratio were reduced in poly mutant animals. Therefore, we propose that Poly affects metabolism via its interaction with the InR-TOR pathway, acting as a positive regulator of anabolic metabolism.

Comparison of poly to other mutants and Elongator

Numerous aspects of the poly phenotype have been observed in other mutants, including those of InR and TOR. An extreme larval longevity, one of the most remarkable aspects of the poly mutant phenotype, has been observed in both Tor [54] and InR mutants (this study). The appearance of melanotic masses is frequently seen in mutations with aberrant immune responses and/or haematopoietic defects [55-57]. In poly third instar larvae, melanotic masses appearing at multiple different locations along the larval body are likely to result from either an alteration in the immune response and/or a hyper-proliferation of blood cells, or haemocytes. The observation that the few InR mutant larvae that reach the third instar stage also develop melanotic masses highlights the phenotypic similarities of poly and InR loss-of-function mutations, and further corroborates a functional relationship between poly and InR. Given the apparent similarity of Poly to yeast Elp6, it is therefore intriguing that the recently described mutant phenotype for the Elongator Elp3 catalytic subunit in Drosophila resembles that of poly [28]. These observations are suggestive of a role for the Elongator holocomplex in insulin signalling, through histone and/or tubulin acetylation, or even in translational control through tRNA modification. In addition, as human mutations in Elp3 have been linked to familial dysautonomia [58], it is likely that the future analysis of Elongator function in model organisms will be of significant preclinical value.
Model for Poly action

We have shown that Poly binds to InR (a transmembrane receptor) and modulates the activity of various downstream proteins. While the lack of clearly discernible functional motifs hampered a prediction of Poly's molecular function, extensive database searches and phylogenetic analyses identified Poly as a member of the Elp6 subfamily of Elongator proteins, with more distant homology to various members of the RecA/Rad51/DCM1 superfamily (such as KaiC and RadB); these observations are described in a distinct study from our laboratory. Both KaiC and Elp6 have been shown to significantly affect gene expression, pointing to a degree of functional conservation. Thus, we suggest an involvement for Poly during transcription (perhaps once relocated to the nucleus following insulin stimulation). If Poly has a role during transcription, as the phylogenetic data suggest, how do we explain its binding to a transmembrane receptor? One explanation is the dynamic changes in the cellular localization of Poly, occurring in a TOR-dependent manner. Interestingly, InR phosphorylation of IRS-1 and IRS-2 (two of the human IRS homologues) not only leads to activation of PI3K signalling, but is also associated with IRS-1 translocation to the nucleus, where it activates transcription of various genes [59-61]. Given that Poly also interacts with InR, and moves to the nucleus following insulin stimulation, it is possible that changes in the localization of Poly are also ultimately a response to InR signalling. Immunofluorescence on both Drosophila haemocytes and HeLa cells demonstrated that the level and/or distribution of Poly is significantly affected following insulin stimulation. Importantly, in HeLa cells, Poly relocalization to the nucleus occurred in a rapamycin-sensitive manner. Thus, we speculate that the stimulatory effects of Poly on cell growth and metabolism may be exerted via effects on transcription.
However, further analysis is required to assess whether Poly participates in an Elongator complex and, if so, in which of the myriad functions currently ascribed to Elongator. Future research will address the detailed nature of the interaction of Poly with the InR. Does Poly interact with the InR in the absence or presence of insulin? How dynamic is this interaction? And is the level or post-translational state of Poly modified upon insulin treatment? In the light of the data presented herein, we propose that Poly is a novel mediator of InR-TOR signalling in the regulation of cell metabolism and growth in Drosophila (figure 8). We suggest that the physical interaction between Poly and the InR is followed by the translocation of Poly into the nucleus (upon insulin stimulation), wherein the expression of key metabolic genes is affected, thus contributing to the promotion of cell growth and metabolism. While the detailed nature and regulation of Poly's interaction with the InR remain to be elucidated, it is highly likely that, given the conservation of Poly, this crucial interaction and function will also hold true in human cells.

Drosophila culture

All fly stocks were maintained at 25°C on standard medium unless otherwise stated. Fly stocks used in this study were: Canton S (wild-type) and poly05137/TM6B. The following fly lines were obtained from the Bloomington Stock Center: InR05545/TM3, Akt104226/TM3, S6K07084/TM3 and Cg-Gal4.

Phylogenetic analysis

Sequences homologous to Drosophila Poly were identified by multiple iterative searches using the PSI-BLAST program [62] and the HHpred interactive server [63]. Alignments between the corresponding sequences were generated using M-COFFEE [64] and manually adjusted based on predicted secondary structure according to ALI2D [65]. Identical and similar amino acids in a representative subset of aligned sequences were shaded using the BOXSHADE server (http://www.ch.embnet.org/software/BOX_form.html).
Phylogenetic trees were calculated using TREE-PUZZLE [66], MRBAYES [67] and the PROML program of the PHYLIP package [68] after clustering of related sequences into smaller groups using SPLITSTREE4 [69]. Branch lengths were calculated by application of the WAG substitution matrix [70] using TREE-PUZZLE, modelling rate heterogeneity according to a gamma distribution with 16 rate categories, and bootstrap confidence intervals (provided in electronic supplementary material, figure S1) were estimated using the SEQBOOT program of the PHYLIP package [68].

Immunoprecipitation

Fifty microlitres of 0-5 h wild-type embryos (raised at 18°C) were homogenized in 1 ml of cold lysis buffer (100 mM NaCl, 10 mM EDTA, 50 mM Tris-HCl pH 7.6, 0.1% Triton X-100, 10 µg ml⁻¹ each of chymostatin, leupeptin, antipain and pepstatin, and 50 µg ml⁻¹ PMSF) and briefly sonicated. Eight microlitres of rabbit pre-immune or anti-poly serum coupled to 40 µl of Dynal beads (Invitrogen) were added to precleared embryo lysates overnight. Ten per cent of each sample was subjected to immunoblot analysis to verify successful immunoprecipitation of Poly, while the remaining 90 per cent of the sample was resolved on precast 4-12 per cent Bis-Tris polyacrylamide gels (Novex) and stained with Colloidal Coomassie Blue (Invitrogen). Comparable molecular weight regions of interest were excised from each lane (pre-immune and immune precipitations) and mass spectrometry analysis performed (Dr Gerard Cagney, Dublin).

Mass spectrometry

The proteins in slices from sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) gels were digested in-gel with trypsin by the method of Shevchenko et al. [71]. The resulting peptide mixtures were resuspended in 1 per cent formic acid and analysed by nano-electrospray liquid chromatography mass spectrometry (nano-LC MS/MS).
A high-performance liquid chromatography (HPLC) instrument (Dionex, UK) was interfaced with an LTQ ion trap mass spectrometer (ThermoFinnigan, CA). Chromatography buffer solutions (buffer A: 5% acetonitrile and 0.1% formic acid; buffer B: 80% acetonitrile and 0.1% formic acid) were used to deliver a 60 min gradient (35 min to 45% buffer B, 10 min to 90%, hold 10 min, 3 min to 5% and hold for 15 min). A flow rate of 2 µl min⁻¹ was used at the electrospray source. One full scan was followed by 10 MS/MS events, and the duty cycle was programmed to enforce dynamic exclusion for 2 min. In-house proteomics pipeline software (PROLINE) was used to process the data. Spectra were searched using the Sequest algorithm [72] against the SwissProt.2007.04.19 database restricted to Drosophila melanogaster entries. Proteins with (i) a peptide prophet probability score greater than 0.99 [73] and (ii) identification by a minimum of two different peptide spectra were automatically accepted, while spectra for the minority of proteins identified by single spectra were manually checked for quality.

Larval manipulations

Prior to dissections or protein extractions, 10-15 second instar larvae were transferred to a fresh vial of food supplemented with fresh yeast paste. Manipulations were carried out when larvae reached the early third instar stage. For starvation experiments, early third instar larvae were starved in 20 per cent sucrose solution for 3-4 h prior to dissection.

RNA extraction and quantitative reverse transcriptase-polymerase chain reaction

Larvae were transferred to RNase-free Eppendorf tubes (Ambion). RNA was extracted with the Qiagen RNeasy Plus Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. cDNA synthesis was carried out using SuperScript III (Invitrogen) following the manufacturer's instructions. Real-time quantitative RT-PCR analysis was performed on the LightCycler system (Roche) using the universal probe library.
The reactions were set up following the manufacturer's recommendation with the LightCycler master mix kit. The relative cDNA ratio was calculated with LightCycler software 480. Actin5C was used as a control to normalize equal loading of template cDNA.

Preparation of larval protein extracts for immunoblotting

Fifteen to twenty wandering third instar larvae of the appropriate genotype were placed in 1.

Lysotracker staining

Lysotracker staining was performed as described [74]. Fat body from fed or starved larvae was dissected in phosphate-buffered saline (PBS), and incubated for 2 min in 100 nM LysoTracker Red DND-99 (Molecular Probes) with 1 mM Hoechst 33342 (Molecular Probes) in PBS. Fat bodies were mounted in PBS on a glass slide and visualized using an Olympus AX-70 Provis epifluorescence microscope and a Hamamatsu Orca II charge-coupled device (CCD) camera. Images were captured using SMARTCAPTURE 3 and processed using Adobe PHOTOSHOP.

Microarray processing and analysis

Ten to fifteen second instar wild-type Canton S and poly larvae were transferred into vials supplemented with fresh yeast paste. Early third instar larvae were transferred into TRIzol 24 h later for RNA extraction. All microarray processing was by the Flychip team at the University of Cambridge (http://www.flychip.org.uk/). RNA from wild-type Canton S and poly early third instar larvae was extracted (medium scale) using TRIzol (http://www.flychip.org.uk/protocols/gene_expression/standard_extraction.php). RNA was reverse transcribed and labelled using Klenow labelling: 5 µg of total RNA were reverse transcribed to cDNA (anchored oligo(dT)23 (Sigma), Superscript III (Invitrogen)) and second strand synthesis was then performed (second strand buffer (Invitrogen), DNA polymerase I (Invitrogen), RNaseH (New England Biolabs), Escherichia coli DNA ligase (GE Healthcare)) to obtain double-stranded DNA.
Random primers were annealed to 500 ng of this denatured DNA template and extended by Klenow fragment using the Bioprime DNA Labeling System (Invitrogen), while the fluorescent dyes Cy3-dCTP or Cy5-dCTP (GE Healthcare) were incorporated (http://www.flychip.org.uk/protocols/gene_expression/klenow_v2.php). Hybridization to FL002 amino-modified long oligonucleotide microarrays was performed using a Genomic Solutions hybridization station with the Biosolutions hybridization buffer (http://www.flychip.org.uk/protocols/gene_expression/hyb_oligoMWG.php). Scanning was with the Genepix 4000B dual laser scanner at 5 µm pixel resolution (http://www.flychip.org.uk/protocols/gene_expression/scanning2.php). Spot finding and quantitation were via the DAPPLE package. Raw data were normalized by the vsn method in BIOCONDUCTOR (http://www.bioconductor.org/packages/2.8/bioc/html/vsn.html) to generate log (base 2) fold changes and average signals. Differential expression was tested using Limma, also in BIOCONDUCTOR (http://www.bioconductor.org/packages/2.8/bioc/html/limma.html). GO and KEGG biological pathway enrichment in the differentially expressed gene set were assessed using the DAVID functional annotation bioinformatics microarray analysis tool (http://david.abcc.ncifcrf.gov/). Microarray data were deposited in the Gene Expression Omnibus (GEO) under the accession number GSE32637 (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE32637).

Triglyceride assay on larvae

The TAG assay on larvae was carried out as described by Gronke et al. [75]. Briefly, six whole larvae of each genotype were collected in 500 µl of 0.05 per cent Tween 20 and homogenized using a Polytron apparatus, followed by treatment at 70°C for 5 min. Samples were centrifuged for 1 min at 3500 r.p.m., and 350 µl of the supernatant were transferred to a new vial and centrifuged for 3 min at 2500 r.p.m.
Six hundred microlitres of Thermo Infinity Trig solution (Thermo Electron, 981786) were added to 75 µl of isolated supernatant, and the absorbance at 540 nm was measured following incubation for 30 min. Similarly, protein content was determined by adding 10 µl of isolated supernatant to Bradford reagent (Sigma, B6916) and reading the absorbance at 595 nm. TAG levels were normalized to the corresponding protein levels.

Antibody staining of haemocytes

Third instar larvae were bled on multispot microscope slides (Hendley-Essex) using a pair of forceps and a 25-gauge needle in 20 µl of PBS. Cells were left to settle on the slide for 1 h at room temperature in a humidified chamber to allow adherence to the slide. Cells were fixed with 20 µl of 3.7 per cent paraformaldehyde in PBS for 5 min. Cells were washed with PBS for 5 min, followed by a 5 min permeabilization in PBS + 0.1% Triton X-100. Following an additional 5 min wash in PBS, blocking was performed by incubating cells in PBS + 3% BSA for 1 h. Cells were then incubated with primary antibody diluted in PBS + 3% BSA overnight at 4°C. The following day, cells were washed in PBS three times and incubated at room temperature with secondary antibody diluted in PBS + 3% BSA. Following the secondary antibody incubation, cells were washed twice in PBS for 5 min. Cells were incubated with DAPI diluted in PBS (1 : 5000 dilution) for 5 min, followed by a final 5 min wash in PBS. Coverslips were mounted with Mowiol on top of the slide.

HeLa cell culture

HeLa cells were maintained in Dulbecco's modified Eagle's medium (DMEM; Sigma) supplemented with 10 per cent foetal bovine serum (FBS; Gibco). Cells were grown to 80-90% confluence before overnight serum withdrawal. For insulin stimulation, serum-starved cells were treated with 100 nM insulin (Sigma). Cells were treated with 20 nM rapamycin (Cell Signaling Technology) during the overnight serum starvation.
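The final step of the triglyceride assay described above is a simple ratio: TAG concentration (from the 540 nm reading) divided by protein concentration (from the 595 nm Bradford reading). A minimal sketch, assuming linear standard curves; every slope and absorbance below is hypothetical:

```python
# Sketch of the TAG : protein normalization in the triglyceride assay above.
# Absorbances are converted to concentrations via linear standard curves;
# all slopes and readings here are hypothetical, for illustration only.

def concentration(absorbance, slope, intercept=0.0):
    """Linear standard curve: concentration = (A - intercept) / slope."""
    return (absorbance - intercept) / slope

# Hypothetical readings (TAG read at 540 nm, protein at 595 nm, see text).
tag_mg_per_ml = concentration(absorbance=0.42, slope=0.7)       # ~0.6 mg/ml
protein_mg_per_ml = concentration(absorbance=0.90, slope=0.75)  # ~1.2 mg/ml

ratio = tag_mg_per_ml / protein_mg_per_ml
print(f"TAG : protein ratio = {ratio:.2f}")
```

Normalizing to protein in this way is what allows the halved TAG : protein ratio in poly mutants (figure 6a) to be compared across larvae of different sizes.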
Immunofluorescence on HeLa cells

HeLa cells were cultured on coverslips in six-well plates overnight. Cells were rinsed in PBS for 3 min, followed by a 3 min fixation in PBS containing 4 per cent paraformaldehyde (diluted from 16% ampoules). The fixative was removed by a 2 min wash in PBS. Cells were permeabilized by a 5 min incubation in PBS + 0.5% Triton X-100, followed by a 1 h block in PBS containing 3 per cent BSA. Subsequently, cells were washed in PBS + 0.1% Triton X-100 for 5 min. Primary antibody was diluted in PBS + 0.3% BSA + 0.1% Triton X-100 and incubated on the cells for 1 h at room temperature. Cells were washed twice for 5 min, followed by a 1 h incubation with secondary antibody diluted in PBS + 0.3% BSA + 0.1% Triton X-100. Subsequently, 4 × 5 min washes were performed in PBS + 0.1% Triton X-100. DAPI was included in the penultimate wash at 0.1 µg ml⁻¹.
Solution-Grown Dendritic Pt-Based Ternary Nanostructures for Enhanced Oxygen Reduction Reaction Functionality

Nanoalloys with anisotropic morphologies of branched and porous internal structures show great promise in many applications as high-performance materials. Reported synthetic approaches for branched alloy nanostructures are, however, limited by synthesis via a seed-growth process. Here, we demonstrate a conveniently fast, one-pot, solution-phase thermal reduction strategy yielding nanoalloys of Pt with various solute feed ratios, exhibiting hyperbranched morphologies and good dispersity. When Pt was alloyed with transition metals (Ni, Co, Fe), we observed well-defined dendritic nanostructures in PtNi, PtCo and Pt(NiCo), but not in PtFe, Pt(FeNi) or Pt(FeCo), due to the steric hindrance of the trivalent Fe(acac)3 precursor used during synthesis. In the case of Pt-based nanoalloys containing Ni and Co, the dendritic morphological evolution observed was insensitive to large variations in solute concentration. The functionality of these nanoalloys towards the oxygen reduction reaction (ORR), however, was observed to depend on the composition, increasing with increasing solute content. Pt3(NiCo)2 exhibited superior catalytic activity, affording about a five- and ten-fold enhancement in area-specific and mass-specific catalytic activities, respectively, compared to the standard Pt/C nanocatalyst. This solution-based synthetic route offers a new approach for constructing dendritic Pt-based nanostructures with excellent product yield, monodispersity and high crystallinity.
Introduction

Improving the efficiency of catalysts for hydrogen proton exchange membrane fuel cells (PEMFCs) is an important challenge at present. Platinum is currently the catalyst of choice for PEMFCs [1-4], but the use of pure Pt is limited by its high cost. Pt also exhibits sluggish catalysis kinetics in the oxygen reduction reaction (ORR) [2-5]. These factors constrain the commercial viability of PEMFCs and hence remain a barrier to their widespread use [2,6]. Considerable research is currently directed at improving cost efficiency by synthesizing nanocatalysts with reduced Pt loading (achievable by alloying Pt with less costly constituents) while also striving for accelerated ORR kinetics [1-4]. Nanoparticles with branched and porous structures exhibit improved catalytic activity owing to their exceptionally large surface area [5,7]. The design of more open structures in Pt-based nanoalloys has accordingly emerged as a promising platform for attaining improved catalytic performance [8]. There have been significant achievements in the design and preparation of Pt alloys that are more cost-effective than pure Pt and exhibit excellent catalytic activity in the ORR: for instance, nanoalloys of Pt with Fe, Co or Ni [9-12]. Typically, however, these have been synthesized using seed-activated/-mediated growth followed by annealing, a synthesis protocol that poses scale-up challenges and requires prolonged annealing time [5,13-16].
Although nanoparticles with large-surface-area morphologies are expected to exhibit improved catalytic activity [5,7], the influence of variations in alloying composition on morphological evolution is less clear and requires systematic evaluation. Moreover, facile and direct one-pot synthetic approaches for the rapid, large-scale production of alloy nanostructures with the well-defined morphologies and controlled surface compositions required for industrial catalytic processes remain scarce. The correlation between the structure, composition and functionality of alloys is the driving force behind the design and development of new solution-phase synthetic approaches that allow manipulation of the size, composition, shape and structure of nanostructures.

Herein, we report a conveniently fast, new, one-pot, high-temperature (300 °C) decomposition approach for the solution-phase synthesis of high-quality dendritic nanostructures of Pt with varied stoichiometric solute feed ratios. We used this synthetic protocol to investigate the influence of composition on the morphology of a series of Pt nanoalloys. Assessment of morphology shows that this synthetic strategy (which does not require annealing) produces an open and branched morphology in Pt-based nanoalloys containing Ni and/or Co, but not Fe. Systematic variations in Ni and Co concentrations were not observed to result in morphological changes to these nanoalloys, which have a large surface area, a porous internal structure and many low-coordination sites at edges and corners [5,17,18]. The catalytic activity is, however, sensitive to composition, increasing with decreasing Pt content (i.e., with increasing Ni or Co concentration) and hence decreasing cost.
Synthesis of Binary PtNi, PtCo and PtFe Nanostructures

In a typical thermolysis synthesis, the metal precursor salts Pt(acac)2 (0.2 g, 0.5085 mmol) and Ni(Ac)2 (0.12 g, 0.4822 mmol), Co(Ac)2 or Fe(acac)3 (0.17 g, 0.4822 mmol) were dissolved by stirring in oleylamine (OAm, 20 mL), trioctylamine (TOA, 15 mL) and oleic acid (OLEA, 5 mL) at 200 °C for 10-15 min, in a high-boiling-point solvent, 1-octadecene (1-ODE, 25 mL). Thereafter, the resultant homogeneous solution was rapidly transferred into a one-neck flask and heated to 300 °C for 15-20 min at a heating rate of 15 °C/min. Upon completion, the reaction mixture was allowed to cool to room temperature, and the colloidal particles were flocculated by the addition of excess ethanol and acetone and washed three times to ensure the elimination of any unwanted solvent and excess surfactants. The black product was dried and finally re-suspended in chloroform (20 mL) by mild sonication, yielding a dark-brown homogeneous colloidal suspension.
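As a sanity check, the quoted precursor masses can be converted back into the millimole quantities stated above. The molar masses below are approximate, and the tetrahydrate form is assumed for nickel acetate, since 0.12 g of Ni(Ac)2·4H2O (≈248.84 g/mol) reproduces the stated 0.4822 mmol:

```python
# Illustrative check of the nominal Pt : Ni feed ratio from the precursor
# masses quoted in the synthesis. Molar masses (g/mol) are approximate;
# the tetrahydrate is an assumption for the nickel acetate precursor.
MOLAR_MASS = {
    "Pt(acac)2": 393.30,      # Pt + 2 x acetylacetonate
    "Ni(Ac)2.4H2O": 248.84,   # nickel acetate tetrahydrate (assumed form)
    "Fe(acac)3": 353.18,      # Fe + 3 x acetylacetonate
}

def mmol(mass_g, compound):
    """Millimoles of a precursor from its mass in grams."""
    return 1000.0 * mass_g / MOLAR_MASS[compound]

n_pt = mmol(0.20, "Pt(acac)2")     # ~0.5085 mmol, as stated in the text
n_ni = mmol(0.12, "Ni(Ac)2.4H2O")  # ~0.4822 mmol, as stated in the text
print(f"Pt : Ni feed ratio = {n_pt / n_ni:.2f} : 1")  # close to 1 : 1
```

The same conversion applied to 0.17 g of Fe(acac)3 gives roughly the same 0.48 mmol, confirming that the binary syntheses all used a near-equimolar Pt : M feed.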
Nanostructure Characterization Techniques

The black powders of the as-synthesized, unsupported nanostructures were deposited onto a silicon (Si) wafer support and characterized by powder X-ray diffraction (XRD) on an X'Pert Pro multipurpose diffractometer (MPD), using Cu Kα radiation (λ = 1.54056 Å). The XRD patterns were recorded at a scan rate of 0.106°/s and with a step size of 0.0334°. Scanning transmission electron microscopy (STEM) specimens were prepared by placing one drop of the colloidal solution (nanoparticles re-dispersed in chloroform) on 3 mm copper grids coated with a carbon support. The grids were dried under ambient conditions. Thereafter, the materials were analysed using transmission electron microscopy (TEM), high-resolution TEM (HR-TEM) and STEM on a JEOL ARM200F (JEOL, Tokyo, Japan) probe-corrected instrument operating at 200 keV. The chemical compositions of individual nanostructures were determined using energy dispersive X-ray spectroscopy (EDS) in STEM mode. Spectrum imaging was used, in which an EDS spectrum was obtained at each pixel in the STEM image to produce a three-dimensional (3D) dataset. Rapid acquisition was used (5 s per frame), integrated over at least 100 frames. Image drift correction was applied after each frame. The 3D datasets were analysed with the Noran System Seven software. STEM imaging was carried out in high-angle annular dark field (HAADF), bright field (BF) and secondary electron (SE) modes. Fourier transform infrared (FT-IR) spectra of the as-synthesized alloy nanostructures, physically mixed with KBr pellets, were acquired using a Nicolet 5700 FT-IR spectrometer (Thermo Fisher Scientific, Madison, WI, USA).

Electrochemical Measurements

Cyclic voltammetry (CV) scans were conducted in an argon (Ar)-purged electrolyte by potential cycling of the working electrode between 0.05 V and 1.00 V vs.
the standard hydrogen electrode (SHE) at a scan rate of 100 mV/s in a 0.1 M HClO4 solution. The electrocatalysts were observed to stabilize after 100 voltage cycles, which were used to electrochemically clean the catalyst surface, referred to as "catalyst activation" [19]. The sweep rate was then reduced to 50 mV/s, and the third cycle at that scan rate was used for analysis. The electrochemically active surface area (ECSA) was calculated by integrating the area under the curve for the hydrogen underpotential deposition region (Hupd), assuming a monolayer hydrogen charge of 210 µC/cm2 Pt [20,21]. Carbon monoxide (CO)-stripping voltammetry curves were obtained by bubbling CO gas into the electrolyte solution while holding the potential of the working electrode (WE) at 0.1 V vs. SHE. The electrolyte was then purged with Ar to remove the dissolved CO gas while still holding the potential of the WE at 0.1 V vs. SHE. The potential of the WE was then cycled to 1.00 V vs. SHE at 20 mV/s, followed by a CV cycle as described above at 20 mV/s. The peak area could then be determined using the baseline CV, and a normalization factor of 420 µC/cm2 Pt [22] was used to calculate the ECSA_CO. For the linear sweep voltammetry (LSV) curves, the potential of the WE was swept from 1.10 V to 0.20 V vs.
SHE and back at 10 mV/s. ORR polarization curves were recorded at a rotation speed of 1600 rpm. The ORR curves obtained in oxygen (O2)-saturated electrolyte were corrected for the capacitive current associated with the PtxMy/C catalysts by subtracting a CV measured in an Ar-saturated electrolyte. The current densities were also normalized with reference to the calculated ECSA to evaluate the specific activities. For polarization curves, the measured currents were corrected for mass transport to acquire the true kinetic currents. The mass activities and specific activities were determined at +0.9 V by normalizing the kinetic currents (Ik) to the Pt mass and the ECSA, respectively, of the catalysts immobilized on the electrode. The kinetic current (Ik) can be calculated by using the Koutecky-Levich equation [23]. Influence of Different Metal Precursors on the Nucleation and Growth Mechanism of Pt-3d Transition Metal (Ni, Co, Fe) Nanoalloys The size and morphological evolution of solution-grown metallic nanostructures are governed by nucleation and growth kinetics, which can be controlled by experimental parameters including the nature of the precursor salts, reduction temperature, reducing agents and surfactants [17,18,24,25]. We accordingly explored a high-temperature (300 °C) co-thermal decomposition approach to balance complete decomposition of the metal precursors into zero-valent states, in the presence of distinct hydrophobic surfactants with different functional groups: oleylamine (OAm), trioctylamine (TOA) and oleic acid (OLEA). Figure 1 (left to right, respectively) shows scanning transmission electron microscopy (STEM) images corresponding to secondary electron (SE), bright-field (BF) and high-resolution BF images of three ternary nanoalloys synthesized with different solutes. Pt(NiCo) nanoalloys (Figure 1a) are observed to form well-defined and monodisperse dendritic morphologies, whereas Pt(FeNi) (Figure 1b) and Pt(FeCo) (Figure 1c) show a mixture of
segregated spherical and interconnected nanoalloys. The presence of Fe in Pt(FeNi) or Pt(FeCo) nanoalloys is accordingly associated with irregular morphologies, unlike the highly dendritic structure of Pt(NiCo) nanoalloys. Binary nanoalloys of Pt with Co, Ni or Fe were synthesized using the same thermolysis protocol. Images show monodisperse nanostructures of PtCo and PtNi with dendritic morphology; PtFe exhibited a mixture of polydispersed single-crystalline and polycrystalline nanostructures ranging from spherical, tripod and tetrapod to irregular morphologies (Supplementary Materials Figure S1). The simultaneous coexistence of both single-crystalline and polycrystalline nanostructures has previously been observed [25,26]. These results, together with the results presented in Figure 1, show that for this high-temperature decomposition synthetic protocol, the Co(Ac)2 and Ni(Ac)2 metal salts are associated with the development of a uniform dendritic morphology. We interpret the observed morphological evolution differences in terms of the different oxidation states of the divalent Ni2+ and Co2+ cations, compared to the trivalent Fe3+ cation. Ni and Co precursor salts possess a single short ligand, which cleaves off efficiently during reduction; as a consequence, the growing crystal sites permit more incorporation of incoming zero-valent Ni0 and Co0. In the case of Fe(acac)3 with three large
ligands, we hypothesise that the cleaving of the ligand bonds occurs in a stepwise manner when Fe binds to the growing nanoparticles. This in turn generates a cluster-Fe(acac)2 group as an intermediate. These large, dangling ligands sterically block the incorporation of similar Fe-containing molecules, as well as those containing Pt, Ni or Co. It thus appears that trivalent Fe(acac)3 modifies the growth behaviour of the Fe-containing nanoalloys through steric hindrance. This, in turn, suggests that the reduction kinetics of the Fe(acac)3 precursor affects the nucleation and subsequent crystal overgrowth of the nanostructures, resulting in irregular morphologies for the Fe-containing nanoalloys, in which symmetrical branch formation and the development of highly interconnected patterns of crystal facets are inhibited. The Influence of Alloying Feed Ratios on the Morphology of Pt(NiCo) Alloys Figure 2 shows (left to right) STEM SE, BF and HR-BF images and the corresponding fast Fourier transform (FFT) diffractograms of ternary nanostructures solution-grown by the synthetic protocol described, while systematically varying the feed ratio between the Pt:(NiCo) precursor salts as follows: 2:(1:1), 2.6:(1:1) and 3.5:(1:1). All nanostructures are observed to have a dendritic morphology that radiates out from the core and a high surface area exhibiting multiple crystal facets. The STEM images show that varying the feed ratio of solute precursors had a negligible effect on the final morphology of the Pt(NiCo) nanoalloys. The average particle diameters of these three ternary nanostructures (calculated from the measurement of approximately 300 individual nanoparticles) were 63-73 nm, with broader, skewed particle size distributions (Supplementary Materials Figure S2). HR-BF images reveal polycrystalline nanoalloys with well-resolved lattice fringes (the measured d-spacings are shown on the figures) and randomly oriented crystal facets. The multiplicity of facets arises from the deposition of
single crystals, from the core outward, which results in the evolution of multiply-exposed crystal facets. This is further confirmed by the FFT diffractograms, which show arcs of spots characteristic of polycrystalline structures, in contrast with the discrete spot patterns identified when imaging separate single or twinned crystals. The arcs show that the nanocrystals exhibit a narrow range of orientations, suggesting that templating has occurred: the orientation of new crystals is guided by the growth of pre-existing surfaces. Thus, our high-temperature co-reduction approach suggests that the crystallographic origin of these selectively monodisperse polycrystalline dendritic nanostructures, regardless of composition variations, is the rapid growth of preformed individual alloy nanocrystals, coalescing at favourably oriented attachment sites rather than being guided or accompanied by epitaxial growth. It is, however, possible that the deployment of ternary surfactants can alter the growth kinetics and dictate the nanostructure growth orientation because of preferential binding or non-binding on the growing crystals, thus favouring coalescence- and aggregation-directed growth of preformed single crystals in a diffusion-controlled manner [7].
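The size statistics quoted for these samples (means of ~63-73 nm with broad, skewed distributions over ~300 particles) can be reproduced from a diameter list in a few lines. A hedged sketch, with synthetic log-normal diameters standing in for the real measurements:

```python
import numpy as np

# Synthetic stand-in for ~300 measured particle diameters (nm).
# A log-normal gives the broad, positively skewed distribution
# described in the text; the parameters here are illustrative.
rng = np.random.default_rng(42)
diameters = rng.lognormal(mean=np.log(68.0), sigma=0.12, size=300)

mean = diameters.mean()
std = diameters.std(ddof=1)  # sample standard deviation
# Fisher-Pearson sample skewness: positive => right-skewed
skew = np.mean(((diameters - mean) / diameters.std()) ** 3)

print(f"d = {mean:.0f} +/- {std:.1f} nm, skewness = {skew:.2f}")
```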
We next evaluated the distribution of elements within the Pt(NiCo) nanoalloys, as shown in Figure 3. A homogeneous atomic distribution of Pt and the alloying elements is observed for all nanoalloys. The average atomic compositions of the ternary nanoalloys were determined by energy dispersive spectroscopy (EDS) to be Pt60(NiCo)40, Pt79(NiCo)21 and Pt82(NiCo)18 (Supplementary Materials Figure S3). The intensity profiles acquired through elemental maps of the composite images in Figure 3a-c reveal an even distribution of the three elements across single nanoparticles. The crystal structure of the ternary Pt(NiCo) nanoalloys was evaluated by X-ray diffraction (XRD) (Supplementary Materials Figure S4). Five 2θ diffraction peaks were indexed to the (111), (200), (220), (311) and (222) planes, characteristic of a face-centred cubic (fcc) solid solution. No additional XRD peaks of pure Pt or of Ni/Co phases were detected, indicating that the fcc phase is a single-phase disordered solid solution. A slight shift of the peak positions toward higher angles as the solute content increases, relative to pure Pt, suggests a decreased lattice parameter, consistent with the replacement of some Pt by smaller atoms of Ni and Co in the crystal lattice [27,28]. In the case of the dendritic binary nanostructures (Supplementary Materials Figure S5), XRD measurements (Supplementary Materials Figure S6) show PtCo to have a larger lattice parameter than PtNi, as expected from the relative atom sizes of Co and Ni. Additionally, these binary nanoalloys displayed typical diffraction peaks that could be indexed to an fcc solid solution. The average particle diameters (calculated from the measurement of approximately 300 individual nanoparticles) of binary PtNi and PtCo were 73 ± 8.8 nm and 63 ± 5.4 nm, respectively, exhibiting broader and skewed particle size distributions (Supplementary Materials Figure S7). Further EDS compositional and HAADF-STEM-EDS elemental mapping assessments of the as-synthesized
binary Pt-based nanostructures gave average atomic compositions of Pt 52 Ni 48 and Pt 54 Co 46 (Supplementary Materials Figures S8 and S9, respectively), consistent with the 1:1 feed ratio (of precursor salts) used during synthesis.Based on the results obtained, the systematic compositional variations and analysis indicate that hyperbranched nanostructures with more accessible surface and porous internal structures can be solution-grown using the described co-thermal synthetic approach. The Effect of Synthesis Chemistry on Nanoalloys' Morphological Evolution The thermolysis method in this work aimed to provide a rapid synthesis medium, which resulted in dendritic Pt nanoalloys within 15-20 min, hence a shortened reaction period.The growth mechanism of pure Pt into dendritic structures is reported to occur anisotropically along the <111> orientation, leading to the formation of interconnected branches at low temperatures (≤150 °C) and spherical Pt structures at the elevated temperature (250 °C), using OAm as both surfactant and reductant [26].This may suggest that the OAm molecules are tightly bound to the crystal surfaces at lower temperatures and dictate the final morphology of the particles.However, this synthesis required prolonged annealing time for the complete formation of such dendritic nanostructures [26,29].Our experiments suggest that the evolution and growth of dendritic nanostructures is rapid at 300 °C, stimulating fast coalescence/stacking of single crystals in a highly controlled interconnected manner.In addition, these nanostructures were observed to sediment during synthesis, yielding a dense black product.We correlate this phase transformation mechanism of crystallization/precipitation to induced weakened binding of the surfactants and supersaturation of single crystals in a synthesis solution phase at the elevated temperature.Generally, single crystals tend to grow into larger particles, to minimize the interfacial energy and consequently 
yielding dense colloids with diminished surface energy, or may trigger subsequent particle coarsening (Ostwald ripening) [24] with prolonged reaction time. In the case of polycrystalline NPs, total atomic diffusion or anisotropic particle-particle coalescence can be hindered by crystalline boundary/lattice diffusion, thereby favouring the construction of interconnected NPs with internal pores and high surface free energy [18]. By decreasing the stoichiometric feed ratios of the alloying elements, we observed that the dendritic morphological evolution remains unaltered. This implies that Pt anisotropic growth could be the key determinant in the creation of such branched and interconnected nanostructures. Our results are, however, contradictory to the creation of spherical nanocrystals at the elevated temperature [26].
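The XRD peak-shift reasoning presented earlier (alloy reflections at higher 2θ angles than pure Pt imply a contracted fcc lattice) follows directly from Bragg's law. A minimal sketch, using the Cu Kα wavelength quoted in the Methods; the alloy peak position here is an illustrative assumption, not a measured value:

```python
import math

CU_K_ALPHA = 1.54056  # angstrom, Cu K-alpha, as quoted in the Methods

def fcc_lattice_parameter(two_theta_deg: float, hkl=(1, 1, 1)) -> float:
    """Lattice parameter a from a single fcc reflection via Bragg's law:
    lambda = 2 d sin(theta), with d = a / sqrt(h^2 + k^2 + l^2)."""
    h, k, l = hkl
    theta = math.radians(two_theta_deg / 2.0)
    d = CU_K_ALPHA / (2.0 * math.sin(theta))  # interplanar spacing
    return d * math.sqrt(h * h + k * k + l * l)

# Pure fcc Pt has a ~ 3.923 A, putting its (111) peak near 39.8 deg.
a_pt = fcc_lattice_parameter(39.77)
# An alloy (111) peak shifted to a higher angle (illustrative position)
# yields a smaller lattice parameter, consistent with Ni/Co substitution.
a_alloy = fcc_lattice_parameter(40.30)
print(f"a(Pt) = {a_pt:.3f} A, a(alloy) = {a_alloy:.3f} A")
```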
In this work, dendritic nanostructures were observed for Pt with Ni and/or Co, but not Fe. Accordingly, we consistently used the same Pt, Co and Ni precursor salts and decomposition temperature (300 °C) together with OAm, TOA and OLEA to achieve simultaneous reduction of the metal salts and hence nanoalloy nucleation (note that the three surfactants also act as reductants here). Subsequent growth of the nanoalloys was controlled by the selection of surfactants (OAm, TOA and OLEA) to promote the growth of a dendritic nanostructure. Surfactants passivate and coat the developing nanostructures during growth: the adsorption of stabilizers on surfaces differs in strength depending on the orientation of the surface facets. This directs the rate at which surfactant monomers attach to different surface facets [30]. The binding affinity (adsorption) of two or more surfactants on growing crystal surfaces within the same wet synthetic system differs. Surfactants tightly bound on crystal surfaces provide more steric hindrance, arresting the rate of crystal growth and thereby providing an intimate organic coating shell in the overall final particle growth. The weakly-bound stabilizers serve for on/off surface attachment/detachment, accelerating growth [31]. These weakly-bound organic molecules are washed off facilely during the flocculation/purification (cleaning) process of the as-synthesized nanostructures.
In order to elucidate this metal-surface functionalization/competition of the ternary organic surfactants (OAm, OLEA and TOA), Fourier transform infrared (FT-IR) measurements (Figure 4a) of the three distinct surfactants and the as-synthesized ternary nanostructures were conducted. The recorded spectra of OAm, OLEA and TOA are in good agreement with other reports [28,32]. The recorded spectra of the ternary nanostructures are similar to that of OAm, but not to those of OLEA and TOA. The pure OLEA absorption bands (not observed in the spectra of TOA, OAm and the ternary alloys) around ~1712 cm−1 and ~2680 cm−1 are characteristic of the carboxylic group (C=O) stretching mode and hydroxyl group (-OH) stretching mode, respectively, of the dimerized acid [33,34]. Pure OAm exhibits typical bands at ~1562 cm−1 and ~1650 cm−1 (not appearing in either the TOA or OLEA spectra), attributed to the NH2 deformation/scissoring modes of primary amines, whereas the peak at ~3328 cm−1 is assigned to the N-H stretching mode [33]. The OAm absorption bands appear in the spectra of all ternary nanostructures. These FT-IR investigations suggest that the organic coordinating agent attached to the as-synthesized ternary nanostructures after the repeated destabilization/purification steps is exclusively OAm, although both OLEA and TOA were used, in conjunction with OAm, during the solution growth of the alloys.
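The band-assignment logic above can be expressed as a small lookup: a surfactant is deemed present on the particle surface only if all of its marker bands appear in the spectrum. The band positions are those quoted in the text; the peak list and matching tolerance are illustrative assumptions:

```python
# Characteristic marker bands (cm^-1) quoted in the text for each surfactant.
BANDS = {
    "OLEA": [1712, 2680],        # C=O stretch, dimer O-H stretch
    "OAm":  [1562, 1650, 3328],  # NH2 scissoring modes, N-H stretch
}

def capping_agent(observed_bands, tol=15):
    """Return surfactants whose marker bands all appear (within tol cm^-1)."""
    found = []
    for name, markers in BANDS.items():
        if all(any(abs(b - m) <= tol for b in observed_bands) for m in markers):
            found.append(name)
    return found

# Illustrative 'nanoalloy spectrum' peak list: OAm bands present, OLEA absent,
# mirroring the FT-IR result reported for the ternary nanostructures.
peaks = [1560, 1655, 3330, 2850, 2920]
print(capping_agent(peaks))  # -> ['OAm']
```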
We deduce that the organic surfactants determining the dendritic morphological evolution/transformation of these ternary nanoalloys are OAm, as schematically illustrated in Figure 4b. Multiple surfactants used in the same synthetic system can thus trigger preferential adsorption in a range of surface orientations, leading to distinct crystallographic growth directions and hence to a dendritic structure. Our synthetic approach resulted in the successful formation of high surface area, multiply-branched Pt(NiCo) nanoalloys with varied stoichiometric compositions. These highly-exposed different
crystal facets and porous internal structures may foster active reaction sites, thus enhancing the functionality of the binary and ternary alloys depending on the surface compositions. Catalyst Activity Measurements towards ORR Prior to electrochemical evaluations, dendritic Pt(NiCo) nanoalloys of varying composition were first deposited on carbon black (Vulcan XC-72R, Fuel Cell Store, Austin, TX, USA) via a colloidal-deposition approach. TEM images of these particles showed no apparent change in particle size or morphology following dispersion onto the support (Supplementary Materials Figure S10). The surface electrochemical properties of these ternary nanoalloys were then measured and compared to commercial Pt/C (HiSPEC 60% on carbon). Figure 5a shows cyclic voltammetry (CV) curves for the four nanocatalysts after the 100 voltage cycles of catalyst surface cleaning. These curves exhibit both the hydrogen desorption/adsorption peaks (at 0.05-0.35 V) and the oxide formation/reduction peaks (at 0.7-1.0 V) [35,36]. The magnitude of both of these peaks (in mA/cm2) scaled in the following sequence: Pt … . This is consistent with the ECSA_Hupd for the ternary nanoalloys, but not for Pt/C. The ECSA_CO/ECSA_Hupd ratio for all the ternary nanoalloys was 1.02-1.11, indicating little difference between Hads and COads surface coverage. In contrast, the
ECSA_CO/ECSA_Hupd ratio for the commercial Pt/C electrocatalyst was ~2.2, which suggests that CO is better adsorbed on Pt surfaces than on the ternary nanoalloy surfaces. This can be explained by the smaller average size of the Pt nanocatalysts (7.3 ± 4.2 nm) and the higher Pt coverage (60 wt %) on the carbon surface. Figure 5b also shows that the presence of Ni and Co on the Pt nanoalloy surfaces is associated with a shift of the CO stripping peaks to lower potentials than for the commercial Pt/C nanocatalyst. This suggests that the presence of Ni and Co improves CO tolerance [37]. Double CO oxidation peaks were also observed for the ternary alloys. A number of factors can give rise to this phenomenon in these alloys, including the existence of defects, segregated particles [38], preferential/selective binding onto distinct facets [39], and the nature of the surface sites or the particle size distribution [38,40]. The influence of composition on the ORR functionality was probed in an O2-purged 0.1 M HClO4 electrolyte solution at room temperature. There are two observable potential regimes in the ORR polarization curves in Figure 5c: a mixed kinetic-diffusion-controlled region (the true measure of the catalyst functionality) between 0.8 and 1.0 V, and a diffusion-limited current density regime between ~0.1 and 0.8 V. In the latter region, all the catalysts reached the theoretical limiting current density of −6.02 mA/cm2 [23,41,42]. The Tafel plots shown in Figure 5d, obtained from the potentials of 0.85-0.95 V, exhibit functionalities that scaled as follows: Pt3(NiCo)2/C > Pt4(NiCo)/C > Pt5(NiCo)/C > Pt/C. These plots thus show that Pt3(NiCo)2/C displays the highest kinetic currents of all the electrocatalysts at any given potential, suggesting exceptional catalytic performance of ternary nanoalloys of this composition. Figure 5e shows the calculated Pt mass-specific activities (A/mgPt), which scaled as follows: Pt3(NiCo)2/C (0.29) > Pt4(NiCo)/C (0.19) > Pt5(NiCo)/C (0.13) > Pt/C
(0.03). The Pt nanoalloys thus showed mass-specific activities up to 10-times higher than Pt/C. Figure 5f shows the corresponding area-specific activities (in mA/cm2 Pt).
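The numbers in this section can be tied together with the normalizations given in the Methods (210 and 420 µC/cm2 Pt for the Hupd and CO-stripping charges, and the Koutecky-Levich mass-transport correction). A hedged sketch: the integrated charges and measured current are hypothetical, while the mass activities are the values reported above:

```python
# ECSA from an integrated charge (C), using the monolayer normalizations
# given in the Methods: 210 uC/cm^2 (H-upd) and 420 uC/cm^2 (CO stripping).
def ecsa_cm2(charge_C: float, monolayer_uC_per_cm2: float) -> float:
    return charge_C / (monolayer_uC_per_cm2 * 1e-6)

# Koutecky-Levich mass-transport correction: 1/i = 1/i_k + 1/i_lim
def kinetic_current(i: float, i_lim: float) -> float:
    return i * i_lim / (i_lim - i)

# Illustrative charges chosen to land near the reported 1.02-1.11 range.
q_hupd, q_co = 1.70e-4, 3.47e-4  # integrated charges, C (hypothetical)
ratio = ecsa_cm2(q_co, 420) / ecsa_cm2(q_hupd, 210)
print(f"ECSA_CO/ECSA_Hupd = {ratio:.2f}")

# Mass-activity enhancement from the values reported at +0.9 V (A/mg Pt).
mass_activity = {"Pt3(NiCo)2/C": 0.29, "Pt4(NiCo)/C": 0.19,
                 "Pt5(NiCo)/C": 0.13, "Pt/C": 0.03}
best = mass_activity["Pt3(NiCo)2/C"] / mass_activity["Pt/C"]
print(f"best alloy vs Pt/C: {best:.1f}x")  # ~9.7x, i.e. 'up to 10-times'
```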
Conclusions We demonstrate a rapid thermolysis protocol (requiring less annealing time) to synthesise high-quality Pt-based nanoalloys with hyperbranched morphologies. We have determined that, using this fast and low-cost protocol, Pt nanoalloys with Ni and/or Co solute exhibit a dendritic morphology, not observed in Pt nanoalloys containing Fe.
Furthermore, in these Pt nanoalloys with Ni and Co, the morphology remains dendritic when the host:solute ratio is varied. Their open-framework morphology confers a high surface area, which allows significant molecular accessibility to surface atoms. All Pt(NiCo) nanoalloys display outstanding catalytic functionality for the sluggish ORR: our electrochemical measurements show that these nanoalloys exhibit enhancement in both mass-specific and area-specific activities compared to the state-of-the-art commercial Pt/C catalyst. The catalytic activity of these nanoalloys is observed to increase with increasing solute concentration, offering both a cost advantage and a catalytic advantage relative to standard Pt catalysts. Although our preliminary electrochemical measurements suggest that these high surface area ternary Pt(NiCo) nanoalloys display enhanced ORR functionality, they were observed to degrade under corrosive environments even after the 100 potential cycles of catalyst surface cleaning. This observation is in line with the research findings by Cui et al., where the structural transformation of active nanostructures resulted in diminished activity with prolonged potential cycling [19]. Accelerated durability tests (ADTs) of these dendritic ternary nanostructures thus require further investigation. The synthesis effort reported here provides a new opportunity for further development of cost-effective Pt-substituted nanostructures as high-performance electrocatalysts. Figure 3. HAADF-STEM-EDS elemental mapping results for ternary alloys synthesized as (a) Pt3(NiCo)2; (b) Pt4(NiCo) and (c) Pt5(NiCo), exhibiting a homogeneously mixed distribution of Co, Ni and Pt within the nanoalloys. The blue, green and red colours in the HAADF-STEM images represent Co, Ni and Pt, respectively. (d-f) show line scan profiles through regions in (a-c), respectively, along the lines (white) shown in the insets.
Figure 4. (a) FT-IR spectra of the surfactants oleylamine (OAm), oleic acid (OLEA) and trioctylamine (TOA) and of the highly-branched nanostructures of Pt3(NiCo)2, Pt4(NiCo) and Pt5(NiCo); (b) schematic illustration of the proposed crystal growth mechanism of both the binary and ternary nanostructures of Pt with Ni and/or Co, synthesized in the presence of a homogeneous mixture of three surfactants.
Interlayer Sliding Phonon Drives Phase Transition in the Ph-BTBT-10 Organic Semiconductor In the field of organic electronics, the semiconductor 7-decyl-2-phenyl[1]benzothieno[3,2-b][1]benzothiophene (Ph-BTBT-10) has become a benchmark due to its high charge mobility and chemical stability in thin film devices. Its phase diagram is characterized by a crystal phase with a bilayer structure that at high temperature transforms into a Smectic E liquid crystal with monolayer structure. As the charge transport properties appear to depend on the phase present in the thin film, the transition has been the subject of structural and computational studies. Here such a process has been investigated by polarized low frequency Raman spectroscopy, selectively probing the intermolecular dynamics of the two phases. The spectroscopic observations demonstrate the key role played by a displacive component of the transition, with the interpenetration of the crystal bilayers driven by lattice phonon mode softening followed by the intralayer rearrangement of the molecule rigid cores into the herringbone motif of the liquid crystal. The mechanism can be related to the effectiveness of thermal annealing to restore the crystal phase in films. ■ INTRODUCTION Displacive phase transitions are structural transformations, common for inorganic periodic systems, that only require small collective displacements of individual constituents of the crystal. For instance, the phase transition at 105 K in SrTiO3 has become one of the archetype examples of the class alongside those of systems such as quartz and ferroelectric perovskites. In SrTiO3 the tetragonal to cubic phase transformation is accompanied by the softening of a vibrational mode measured by both Raman 1 and neutron scatterings. 2 In fact, following the soft-mode theory of displacive phase transitions, the transition occurs as the result of some phonon frequency going to zero at a critical temperature.
3,4 Notably, displacive transitions are not commonly encountered in organic molecular crystals, with the exception of the Peierls type neutral−ionic transitions typical of charge transfer complexes at low temperature or high pressure, which lead to the dimerization of the donor−acceptor molecules. 5−7 However, concerted molecular displacements associated with a specific normal mode which result in a new phase have also been invoked for the molecular crystal DL-norleucine, which undergoes entire bilayer shifts during a displacive transformation. 8 In the field of organic electronics, the compound 7-decyl-2phenyl [1]benzothieno [3,2-b] [1]benzothiophene (Ph-BTBT-10) has become a benchmark material because of its high charge mobility and chemical stability even in thin films, 9,10 unlike pentacene and rubrene, the most studied systems in the past. 11 −13 In Ph-BTBT-10 the rigid BTBT core is functionalized with a phenyl group and a flexible decyl chain (Figure 1a), in a structure designed to achieve both good solubility and ordered liquid crystal phases, which are precursors to the formation of uniform crystalline thin films with increased 2-D mobility. 9,10 Due to the asymmetric substitution, Ph-BTBT-10 crystallizes in a bilayer structure, with a monoclinic unit cell where the ab plane is parallel to the layers and the long molecular axis is nearly parallel to the c axis (Figures 1b, S1, and S2). The strong interactions of the BTBT cores result in their herringbone arrangement and segregation from the decyl chains. 14 Ph-BTBT-10 is reported to undergo three first order phase transitions on heating: (i) Crystal to SmE at 150°C; (ii) SmE to SmA at 215°C; and finally (iii) SmA to isotropic liquid at 225°C. 15 In the first one the structure changes from bilayer head-to-head to monolayer head-to-tail, with two adjacent antiparallel molecules in the unit cell. 
16,17 Since charge transport properties in thin films appear to be regulated by transformations between crystalline and SmE phases, 15,17−20 a deeper understanding of the underlying processes is desirable. In this work, we report on a low frequency Raman study aimed at clarifying the mechanism of the crystal to smectic E phase transition of Ph-BTBT-10. In fact, low frequency Raman spectroscopy is highly sensitive to the 3D arrangement of the crystal state by probing the intermolecular dynamics and therefore detecting the patterns of interaction. Polarized Raman measurements on oriented single crystals were used to probe the crystal planes ab, parallel to the molecular layers, and bc, perpendicular to them, allowing for the qualitative description of the lattice modes in terms of the crystal interactions patterns. Measurements as a function of the temperature revealed the existence of mode softening, providing direct information about the displacive nature of the transition. However, the spectral features also show evidence of a discontinuity, demonstrating that the overall transformation process involves a two step mechanism. ■ RESULTS AND DISCUSSION Room Temperature Raman Spectra. The Ph-BTBT-10 crystals display an elongated platelet morphology, with the two in-plane symmetry axes coinciding with the extinction directions under crossed polarized light. The observed morphology completely agrees with the prediction of the BFDH model (Bravais, Friedel, Donnay, and Harker) 21 for the monoclinic P2 1 /a structure ( Figure S2), allowing for the assignment of the longer and shorter in-plane axes to the a and b crystallographic directions, respectively. Both of these directions are parallel to the molecular layers and nearly perpendicular to the long Ph-BTBT-10 molecular axis. The observed morphology originates from a faster growth along a and b, driven by the strong in-plane interactions between the aromatic cores. 
14 In the vibrational analysis of a molecular crystal, it is customary to distinguish between inter- and intramolecular modes on the basis of their different force constants, as the former depend on the weak vdW interactions and the latter on those of the chemical bonds. In the case of the flexible Ph-BTBT-10 molecule, such a distinction is made difficult by the occurrence of torsional degrees of freedom of low energy. However, we can assume that in the wavenumber range below 120 cm −1 most modes have predominant intermolecular character, and thus correspond to librations and translations of a (nearly) rigid molecule. In the P2 1 /a space group, Ph-BTBT-10 Raman active modes are either of A g or B g symmetry: the former can be detected in the aa, bb, cc, or ac configurations of polarization, while the latter are observed in ab and bc cross-polarization. The two letters currently adopted to label the polarized spectra indicate the polarization directions of the exciting and scattered light, respectively. 22 In Figure 2, we report the polarized Raman spectra collected from the ab and bc crystal faces. [Figure 2 caption: the unit cell viewed perpendicular to the corresponding planes is shown on the right side of each panel; the two letters inside parentheses label the polarization directions of the exciting and scattered light, respectively, while the two letters outside parentheses indicate the corresponding light propagation and scattering directions, which are perpendicular to the investigated crystal planes; the 4−25 cm −1 spectral range is shown in the inset with a high intensity scale for clarity; in some spectra, a narrow plasma line from the laser at 7 cm −1 has been removed.] All the in-plane polarized spectra of the crystal (i.e., aa, ab, and bb) show medium intensity bands around 90 cm −1 whereas in the out-of-plane polarizations, i.e., cc and bc, very strong bands appear below 20 cm −1 ( Table 1).
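The selection rules summarized above (A_g modes in the parallel and ac polarizations, B_g modes in the ab and bc cross-polarizations) can be captured in a small lookup; a convenience sketch, with labels following the two-letter convention used in the text:

```python
# Raman selection rules for the P2_1/a structure as stated in the text:
# A_g modes appear in the aa, bb, cc, and ac polarization configurations,
# while B_g modes appear in the ab and bc cross-polarizations.
SYMMETRY_BY_POLARIZATION = {
    "aa": "Ag", "bb": "Ag", "cc": "Ag", "ac": "Ag",
    "ab": "Bg", "bc": "Bg",
}

def active_symmetry(polarization):
    """Return the Raman-active symmetry species for a two-letter
    polarization label (exciting/scattered light directions)."""
    return SYMMETRY_BY_POLARIZATION[polarization.lower()]
```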
As can be seen from the figure, the aa and bb spectra share the same A g bands, with only small differences in the relative intensities, while modes with B g symmetry can be identified in the ab spectrum. Interestingly, the bc spectrum, which probes the scattering from the corresponding crystal plane, is characterized by a huge intensity increase of the very low frequency B g band by nearly an order of magnitude with respect to the ab plane. Due to the strong anisotropy of the Ph-BTBT-10 crystalline arrangement, the modes polarized in the ab plane must correspond to in-plane translations or rotations about the long axes of the molecules. The out-of-plane polarized modes must instead involve translations along the long molecular axis. Such assumptions are supported by the results of the DFT simulation of the Raman spectra for the similar system C 8 O-BTBT-OC 8 . 23 The assignment is further confirmed by the comparison between the polarized spectra of Ph-BTBT-10 and unsubstituted BTBT, which also displays a layered packing ( Figure S3). Thus, in-plane polarized spectra mainly reflect intralayer molecular packing interactions, whereas the out-of-plane polarized spectra probe interlayer interactions. Accordingly, the lower frequencies of the interlayer polarized lattice phonons result from the weaker interactions between molecules belonging to adjacent layers, in agreement with the thin platelet morphology displayed by the crystal. Toward the Transition: The Soft Phonons. The soft behavior of two lattice phonons on approaching the Crystal to SmE phase transition becomes evident in the temperature evolution of the bc polarized spectra, as shown in Figure 3. As clearly seen in the figure, the B g band centered at 23 cm −1 at 83 K shifts to lower energy and broadens significantly on increasing temperature. Around 300 K it closes in on the band at 12 cm −1 , the two bands overlap, and the spectral weight is redistributed between them.
Near a phase transition, the potential energy surface is expected to become strongly anharmonic, leading to the mixing of normal modes having the same symmetry and similar frequencies. Here such an effect becomes visible above 300 K, when the corresponding B g modes with a large projection along the c-axis get strongly mixed and the soft behavior is transferred to the lower energy band, which moves toward zero frequency, while that at higher energy gradually loses intensity and turns into a broad shoulder (see inset of Figure 3). In the same spectrum, the strong narrow band at 5 cm −1 is no longer detectable above 320 K, as it falls below the wavenumber detection limit, and its behavior with temperature cannot be investigated any further. For this reason, we cannot exclude a priori that the lowest frequency mode also plays a role in the transition. However, its temperature evolution in the available range, i.e., from 83 to 320 K, is characterized by the absence of sizable broadening and by minimal red-shift, suggesting very little interaction or mixing with the other B g soft phonon modes. Since the band is visible in both bc and cc polarizations, it could be assigned to an intramolecular chain mode, as such low frequency phonons have been predicted in alkylated BTBT derivatives. 24 Unlike the low frequency B g phonons, the A g phonons display the expected typical trend with temperature, as can be seen by comparing the cc and the bc polarized spectra (see Figure 4). In particular, the lowest frequency A g bands, which at 83 K overlap the B g bands with soft behavior, never shift to zero frequency on increasing temperature, as shown by the plot of the mode frequencies vs temperature ( Figure S5). In addition, they are narrower than their B g counterparts at all temperatures. These characteristics are shared by the high frequency phonons detected in the in-plane polarized spectra, which do not display any effect that anticipates the transition ( Figure S4). The SmE Phase.
The final occurrence of the LC SmE phase is signaled by the sudden replacement of the bb polarized bands of the crystal Raman spectrum with a single broad one around 70 cm −1 ( Figure 5, left panel). The aa (not shown here) and bb spectra become fully superimposable in the new phase, while the ab polarized one behaves similarly. [Table 1 notes: the letters s (strong), m (medium), and w (weak) refer to the relative intensities of the bands; the two letters used to label the band polarization indicate the polarization directions of the exciting and scattered light, respectively.] The features of the SmE low frequency spectra convey information about its organization and symmetry. Overall, for instance, the relative intensity patterns of the SmE in-plane and out-of-plane polarized scatterings are the same as in the crystal phase, demonstrating that the orientation of the layer structure is maintained in the transition. In addition, the observation that the bb (aa) and ab polarized spectra are distinguishable is a clear indication of the presence of ab in-plane order. Indeed, in-plane orientational and positional orders are features characterizing the SmE phases. The Transition Mechanism. The bulk of the spectroscopic information collected at the onset of the bilayer to monolayer phase transition must now be linked to its preparatory stage, where the experiments have detected the softening of strongly out-of-plane polarized crystal modes of B g symmetry. More properly, the mode softening would be better described as an effective vibration resulting from the combination of lattice phonons, as suggested by the strong anharmonicity characterizing the system dynamics on approaching the transition. In attempting a qualitative description of the responsible vibration, it must be remembered that its Raman activity implies a motion at k = 0, i.e., one in which all unit cells move in phase.
An intuitive representation depicts this motion as the counter-translation along the crystal c axis of pairs of adjacent molecules belonging to the same layer, and such a condition is satisfied by a lattice phonon of B g symmetry in which two opposite layers in the unit cell move out of phase, following the scheme of Figure 6a. In fact, the interpenetration of the opposite layers by displacement of the molecules along the c axis has been proposed as the most likely transition mechanism. 27,28 The association of such a displacement with the identified B g effective lattice vibration is thus straightforward, with a motion that appears to overcome the restoring force in the process that mixes the adjacent layers while maintaining the molecular density. Following this, the monolayer structure typical of the SmE phase can be thought of as resulting from condensation of the soft mode eigenvectors (Figure 6b). Notice that instead the softening of the total-symmetric A g counterpart of the vibrational motion would produce the collapse of two layers. In the mode softening stage, the lattice phonon spectra display a seamless evolution in temperature. However, it is the discontinuity detected in the in-plane bb and ab spectra at the onset of the transition, i.e., above 418 K, which identifies the actual lattice transformation (see Figure 5, left panel). In fact, the sudden band broadening indicates an in-plane rearrangement that follows the layer mixing. This is consistent with the BTBT cores assuming a new herringbone structure in the SmE phase, 29 due to the CH−π interactions, which are established by rotation around the long molecular axes, ultimately resulting in a monolayer rather than a bilayer arrangement.
■ CONCLUSIONS By carrying out the study on single crystals, rather than on polycrystalline samples or thin films, the spectral features of the Crystal to SmE transformation of Ph-BTBT-10 could be related to the lattice dynamics along specific crystal directions and therefore to the anisotropic properties of the system. The two step mechanism revealed by the spectroscopic approach involves first the interpenetration of the molecular layers of the crystal driven by an effective soft mode, followed by the discontinuous intralayer rearrangement of the molecule rigid cores into the herringbone motif of the final phase. The former process in fact anticipates the transition, and the softening entails lattice vibrations with a translational component along the layer shifting direction. The latter displays instead the fingerprint of discontinuity in the abrupt spectral changes at the transition, with features typical of crystal to liquid crystal transitions. 25,30−32 The findings are consistent with the results found in previous works on Ph-BTBT-10. XRD measurements on oriented thin films demonstrated that the layers maintain the same orientation through the Crystal to SmE transition, while a herringbone packing still characterizes the ordered SmE phase. 16 An interlayer translation of the molecules as a first step of the transformation was also proposed by Molecular Dynamics simulations. 27,28 These observations show the predominant displacive character of the transition with the key role played by cooperative lattice vibrations in which the restoring force appears to decay, driving the transformation from bilayer to monolayer. Such a mechanism also explains the effectiveness of thermal treatment of the films in recovering the crystal phase in the reverse transformation. In fact, at the thermal annealing temperature, the crystal is thermodynamically stable, while the vibration involved is sufficiently soft to facilitate the sliding process of the layers. 
■ EXPERIMENTAL SECTION Ph-BTBT-10 was synthesized following the previously reported procedure. 15 Single crystals were obtained by recrystallization of the synthesized powder in a 1,2-dichlorobenzene solution; after slow solvent evaporation, white platelets were obtained. The Raman spectra were recorded with a Horiba LabRAM HR Evolution Raman microspectrometer equipped with a 633 nm HeNe laser and a set of Bragg filters to reject the Rayleigh radiation. The microspectrometer was equipped with a diffraction grating with 1800 grooves per mm and an 800 mm focal length, allowing for a maximum spectral resolution of 0.3 cm −1 and a lowest accessible frequency of 4 cm −1 . The crystals have a sheetlike morphology (typical size 100 × 200 × 5 μm 3 ) and tend to overlap. Thus, single crystal domains were selected by microscopic observation using Polarized Optical Microscopy (POM). The measurements were performed in backscattering geometry on both the bc and ab planes. The experimental setup is described in Figure S1. Since the extended face is parallel to the ab plane, the measurements on the bc plane required a homemade sample holder composed of thin glass slides; a crystal was oriented and fixed between them. The temperature was controlled in the range 83−430 K using a Linkam HFS 91 stage fitted under the microscope. When comparing spectra recorded at different temperatures, the raw data were converted into the imaginary part of the dynamic susceptibility (χ″), as described in refs 25 and 33. This corrects the intensity enhancement at small wavenumbers due to the thermal excitation of vibrational modes, according to the relationship χ″(ν) ∝ I(ν)/[n(ν,T) + 1], with the thermal occupation n(ν,T) = [exp(hcν/k B T) − 1] −1 . The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.chemmater.3c00209.
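The reduction of the raw intensities to the dynamic susceptibility divides out the thermal (Bose) population of each mode. A minimal sketch of the standard Stokes-side correction (prefactor conventions in refs 25 and 33 may differ):

```python
import math

K_B_CM = 0.6950348  # Boltzmann constant expressed in cm^-1 per kelvin

def bose_occupation(nu_cm, T):
    """Thermal occupation n(nu, T) of a mode at wavenumber nu (cm^-1)
    and temperature T (kelvin)."""
    return 1.0 / math.expm1(nu_cm / (K_B_CM * T))

def chi_imag(intensity, nu_cm, T):
    """Reduce a Stokes Raman intensity to chi''(nu) by dividing out the
    [n(nu, T) + 1] factor that enhances low-wavenumber bands."""
    return intensity / (bose_occupation(nu_cm, T) + 1.0)
```

The correction matters most exactly in the lattice-phonon region probed here: a 20 cm −1 band is strongly thermally populated at 430 K but barely enhanced at high wavenumbers or low temperatures.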
Experimental configuration and crystal orientation of the polarized Raman measurements; BFDH morphology of the Ph-BTBT-10 crystal; polarized low frequency Raman spectra of an unsubstituted BTBT single crystal; and temperature dependent spectra (PDF)
4,219.6
2023-07-20T00:00:00.000
[ "Materials Science" ]
A Real-Time Energy Response Correction Method for Cs3Cu2I5:Tl Scintillating Dosimeter The uneven energy response of radiation detectors severely limits the accuracy of the dose rate meters used for radiation protection. With the physical shielding-compensation method currently widely used in dose rate meters, the energy response correction error of the detector at different energies is mostly between 15 and 25%. This work designs a real-time energy response correction method based on a novel Cs3Cu2I5:Tl scintillation detector to improve the accuracy of dose rate meters used for radiation protection. The technique uses pulse amplitude weighting (PAW) to segment the pulse amplitude histogram. With this correction, the detector achieves an almost constant energy response. The experimental results show that, compared to 137Cs γ rays, the maximum error of the response is 8.26% for photon energies ranging from 33 keV to 1.25 MeV, much better than the ±30% recommended by IEC 61526:2010, verifying the feasibility of PAW.
Introduction In radiation protection, scintillation detectors, semiconductor detectors, and gas ionization detectors are generally used as sensing probes for dose equivalent rate meters, which work by converting the sensor's count rate or current response into the corresponding dose rate. The inconsistent response to photon energy, regardless of the detector type, is caused by the detector's different photon detection efficiency for photons of different energies. In particular, the over-response problem is more severe in the low-energy region, where the response can differ by several times from that to 137 Cs, 60 Co, and other higher energy photons [1]. This introduces a significant error into the measurement results of the dose rate instrument and seriously affects the accurate monitoring of the dose rate. Therefore, the energy response must be corrected to meet the measurement accuracy requirements of the dose rate instrument [2]. Several attempts have been made to solve this problem. The most common method is to use physical shielding materials for correction. Most of the energy response correction errors of detectors at different energies are then between 15% and 25%, which can meet the standard ±30% error requirement. However, shielding reduces the detection efficiency and increases the detector's weight, and it is not easy to obtain an accurate dose rate over a wide energy range. The detectors' energy response can also be corrected by solving the energy spectrum-dose conversion function G(E) [3,4]. However, this method needs to solve the G(E) function, which is complex. It usually requires a combination of energy spectrum measurement and offline processing, which prevents real-time dose rate measurement. Moreover, the method puts higher requirements on the hardware of the dose rate measurement instrument, especially under a stronger radiation field [5,6].
In order to find a convenient and reliable calibration method, this work proposes a real-time calibration approach based on pulse amplitude weighting (PAW). The method divides the pulse amplitudes generated by the energy deposited in the detector by X/γ rays into multiple intervals according to their distribution. Each interval is assigned its own weighting coefficient, obtained by a linear constraint algorithm, and the correction coefficient of each interval is written into the real-time energy response correction system. Finally, the corrected response values of all intervals are counted and summed. This method differs from the energy-spectrum-dose conversion G(E) function method in that neither the acquisition of energy-spectrum data for conversion nor a high data processing capability of the instrument is required. Furthermore, the corresponding coefficients need only be written into the system once for a particular detector. The method requires that the sensor has an energy-resolving capability. In this investigation, we used a recently developed Cs 3 Cu 2 I 5 :Tl scintillation crystal, a very effective copper(I) halide material grown at the Shanghai Institute of Ceramics, Chinese Academy of Sciences. Copper(I) halide perovskite Cs 3 Cu 2 I 5 :Tl has attracted tremendous interest and has been considered an exceptionally promising scintillator due to its excellent optical properties, environmental stability, and low toxicity. Cs 3 Cu 2 I 5 :Tl can reach a light yield of 87,000 photons/MeV under 137 Cs γ-ray radiation and gives a remarkable energy resolution of 3.4% at 662 keV [7][8][9]. Meanwhile, a large Stokes shift of 140 nm between the PL and PLE means the crystal has good air stability and no self-absorption. These reports suggest great promise for Cs 3 Cu 2 I 5 :Tl. The density of 4.53 g/cm 3 and the effective atomic number of 51.9 are competitive with other scintillators and highly suitable for efficient X/γ ray detection. Therefore, it has great
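The partition-and-weight idea described above can be sketched directly. A minimal illustration (the threshold, partition edges, and weights below are placeholders, not the calibrated values of this work):

```python
import bisect

def paw_count_rates(amplitudes, edges, live_time, threshold=0.010):
    """Histogram pulse amplitudes (volts) into m = len(edges) + 1
    partitions after threshold screening, returning the per-partition
    counting rates n_i in counts per second."""
    counts = [0] * (len(edges) + 1)
    for v in amplitudes:
        if v < threshold:
            continue  # filter system noise below the trigger threshold
        counts[bisect.bisect_right(edges, v)] += 1
    return [c / live_time for c in counts]

def paw_response(rates, weights, kerma_rate):
    """Corrected response: weighted sum of partition counting rates per
    unit air-kerma rate, i.e. R = sum_i k_i * n_i / K_a."""
    return sum(k * n for k, n in zip(weights, rates)) / kerma_rate
```

With m = 1 and a unit weight this reduces to the uncorrected response, which is how the uncorrected and corrected cases are related in the text.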
application value in the field of radiation monitoring [8,9]. Table 1 shows the performance comparison between the Cs 3 Cu 2 I 5 :Tl single crystal and other commonly used scintillators [10][11][12]. Methods and Principles Scintillation detectors work by depositing energy in the scintillation body to produce a light pulse, and the pulse amplitude characterizes the energy of the incident particle [13]. The subsequent analog circuit further amplifies and shapes the electrical signal. When a photon of energy E enters the scintillator, the detector records a pulse signal with an amplitude of v after a series of interactions. Let the probability of this interaction be p(E, v). The pulse amplitude spectral function N·p(E, v) of the detector is obtained when N photons of energy E irradiate it. Without energy response correction, the area under the pulse amplitude curve is taken as the dose rate response [14]. The real-time energy response correction technique based on PAW works mainly by weighting the counting rate partitions at different pulse amplitudes. After each interval is corrected, the total counting rate is taken as the corrected dose rate response. In this way, the energy response at different energies can be corrected.

Under a photon radiation field with energy E (keV) and Air Kerma rate K̇ a (E), the detector's counting rate n(E) is divided into m clusters according to the pulse amplitude magnitude, and the system noise is filtered out by threshold screening. When a photon enters the detector, the detector generates a pulse signal after photoelectric conversion. The amplitude of the pulse signal is compared with a selectable threshold; the signal is judged valid when the pulse height exceeds this threshold, the clustering interval of the pulse amplitude is determined, and the counter of the corresponding partition is incremented. The counting rate n i (E) within partition i is then weighted by a correction factor k i , at which point the detector response is given by Equation (1):

R(E) = [Σ i=1..m k i · n i (E)] / K̇ a (E)  (1)

The detector response value before correction, i.e., with m = 1 and k i = 1 and no pulse classification weighting, is Equation (2):

R(E) = n(E) / K̇ a (E)  (2)

Equation (1) is the corrected detector response value when m > 1. The above equations yield the energy responses of the detector at the different calibration energies. The variance S 2 of the responses R(E j ) represents the magnitude of the fluctuation of the detector response across photon energies (Equation (3)):

S 2 = (1/N) Σ j=1..N [R(E j ) − R̄] 2  (3)

where R̄ is the mean response over the N calibration energies. The detector has the best energy response consistency when S 2 takes its minimum value. According to this constraint, the values of k 1 -k m , the energy response correction coefficients of the m clustering intervals, can be solved. The minimum of S 2 , which denotes the minimum fluctuation of the detector's response value across energies, is required in order to calculate the energy response correction factors. Therefore, it is necessary to first obtain the response curve of the detector for photons of different energies. Ten energy points are selected following IEC 61526:2010: 33 keV, 48 keV, 65 keV, 83 keV, 100 keV, 118 keV, 164 keV, 208 keV, 662 keV, and 1250 keV [15].
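One way to realize the linear constraint above, i.e., minimizing the spread of the corrected responses across the calibration energies, is an ordinary least-squares fit that pushes every corrected response toward a common target. This is a hedged sketch: the paper does not spell out its algorithm, so this is one plausible reading rather than the authors' exact procedure.

```python
import numpy as np

def solve_paw_weights(partition_rates, kerma_rates, target=1.0):
    """Estimate k_1..k_m by least squares so that, at every calibration
    energy E_j, sum_i k_i * n_i(E_j) / K_a(E_j) is as close as possible
    to a common target response (which minimizes the spread S^2 up to
    the overall normalization fixed by `target`).

    partition_rates: (n_energies, m) array of n_i(E_j) in counts/s
    kerma_rates:     (n_energies,) array of air-kerma rates K_a(E_j)
    """
    A = np.asarray(partition_rates, float) / np.asarray(kerma_rates, float)[:, None]
    b = np.full(A.shape[0], float(target))
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k
```

Once the weights are fixed, they can be written into the correction system and applied in real time without any spectrum post-processing.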
When partitioning the pulse amplitude of the detector at different photon energies, it is known from the correction principle that the correction effect will be better with more partitions. From a practical standpoint, however, the fewer the partitions, the better: the number of partitions should be kept as small as possible while still meeting the energy response requirements, making the subsequent development and testing of the dose rate equipment easier. The over-response problem of the detector appears in the low-energy region, so the interval division is concentrated chiefly in the low-energy region. The correction coefficients derived from the linear constraint of Equation (3) are written into the energy response correction software Corso v1.0, developed in LabVIEW, to achieve real-time correction of the detector energy response. To evaluate the response error at a specific energy point, the energy response at that point needs to be normalized; generally, the response value for 137 Cs is selected for normalization, i.e., the relative energy response R rel (E) at a specific energy E is Equation (4):

R rel (E) = R(E) / R(662 keV)  (4)

In summary, the workflow of the correction method of this study is shown in Figure 1. Acquisition of Energy Response Correction Coefficients The radiation signal detection unit consists of a detector, a signal amplification circuit, a 60 MB/s pipeline high-speed analog-to-digital converter (Fast-ADC, FADC), a Field-Programmable Gate Array (FPGA), and the host computer software Corso v1.0. The energy spectrum acquisition system consists of a detector, a multichannel analyzer, an oscilloscope, and a PC. The schematic diagram of the experimental setup is shown in Figure 2.
The scintillation detector consists of a Cs 3 Cu 2 I 5 :Tl column scintillator of Φ 7 mm × 3 mm coupled with a Sensor-J60035 silicon photomultiplier (SiPM) with a photosensitive area of 6.07 mm × 6.07 mm. The SiPM proportionally converts the light signal generated in the scintillator into electrons and multiplies them to form an electrical pulse signal. The pulse signal output from the detector is amplified, filtered, shaped, and converted into a voltage pulse signal by a pulse signal amplifier circuit and then transmitted to the FADC for analog-to-digital conversion. The 12-bit, 60 MHz AD9238 was selected as the high-speed analog-to-digital converter. The PC software written in LabVIEW displays the counting rate in real time after the energy response correction.
The Cs 3 Cu 2 I 5 :Tl detector completed the measurement of the gamma energy spectrum under 137 Cs, 60 Co, 241 Am, and 152 Eu gamma radiation sources and the measurement of the X-ray energy spectrum at 33 keV, 48 keV, 65 keV, 83 keV, 100 keV, 118 keV, 164 keV, and 208 keV at the Ionizing Radiation Standard Laboratory. Since the SiPM signal is a positive pulse, we used the rising-edge trigger method and set a 10 mV trigger threshold to minimize interference from noise [16]. The pulse amplitude spectra were acquired using the multichannel analyzer ORTEC EASY-MCA-8K (Easley, SC, USA). In accordance with IEC 61526:2010 requirements, the standard radiation field using radiation sources including X-rays and the radioisotopes 137 Cs and 60 Co can provide certified values of the Air Kerma rate from 33 keV to 1.25 MeV. The energy response experiments based on the Cs 3 Cu 2 I 5 :Tl scintillation detector in Figure 2 were done under the standard radiation field mentioned above, and the standard radiation field experimental setup is shown in Figure 3. Sensors 2023, 23, x FOR PEER REVIEW 5 of 17
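The rising-edge trigger with a 10 mV threshold can be sketched as a simple software discriminator over digitized samples (the sample values in the test are illustrative):

```python
def count_triggers(samples, threshold=0.010):
    """Count rising-edge crossings of the trigger threshold (10 mV by
    default, as in the text). A trigger fires only when the trace rises
    from below threshold to at/above it, rejecting baseline noise that
    never crosses the threshold."""
    triggers = 0
    armed = True  # re-armed once the trace falls back below threshold
    for v in samples:
        if armed and v >= threshold:
            triggers += 1
            armed = False
        elif v < threshold:
            armed = True
    return triggers
```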
With reference to the requirements of IEC 61526:2010, the standard radiation field with average energies covering the range of 33 keV to 208 keV is achieved by an X-ray unit producing the N40, N60, N80, N100, N120, N150, N200, and N250 narrow-spectrum radiation qualities (Figure 3a), with a console (Figure 3c) to adjust the filter, tube voltage, and tube current to set the energy and dose rate of the different X-rays. The standard radiation fields with average energies of 662 keV and 1.25 MeV were then realized by choosing the radionuclides 137 Cs and 60 Co. The dose rates of the different γ-rays were obtained by adjusting the source distance, and their true values were measured by a certified spherical ionization chamber. In the experiment, the detector is in charged particle equilibrium, so the effect of the particles' Bremsstrahlung can be ignored. Under this condition, the Air Kerma rate and the Air-absorbed dose rate are equal. The Air-absorbed dose rate is experimentally obtained from the spherical ionization chamber test, so in this work the Air Kerma rate K̇ a is considered to have the same value as the Air-absorbed dose rate. The parameters of the laboratory radioactive source conditions are shown in Table 2. It should be noted that the radioactive isotopes directly generate photons with energies of 662 keV and 1250 keV, so the two parameters of tube voltage and tube current do not apply to them.

The detector was placed on top of the 3D motion platform, and the center of the detector was fixed at the reference point of the standard value. We tested the environmental background and system noise before the experiment in order to assess their influence on the experimental results [17]. The detector signals were tested with the radioactive source shutter off and on and monitored in real time by an oscilloscope. The trigger threshold was set to 10 mV to eliminate the effect of system noise as much as possible. Figure 4a shows a trigger frequency of less than 10 Hz under system noise and environmental background alone; Figure 4b shows a trigger frequency above 15 kHz under the 33 keV X-rays. In summary, the effects of environmental background and system noise on the experimental results can be ignored.

The energy spectrum acquisition unit consists of the multichannel analyzer ORTEC EASY-MCA-8K and the accompanying software MAESTRO for Windows Version 7.01. To receive the sensor signal, perform AD conversion of the pulse amplitude signal, and classify the pulses to obtain the pulse amplitude spectrum, the acquisition threshold, number of channel addresses, acquisition time, etc., can be configured within the software. To avoid the influence of the system dead time on the experimental results, the dose rate point chosen for the experiment should be within the linear dose-rate response region of the detector [18]. The linear response between the dose rate and the counts per second of the Cs 3 Cu 2 I 5 :Tl detector at different energies was tested by placing the detector at the reference point position of the radiation field
and varying the dose rate by adjusting the magnitude of the tube current of the X-ray machine.As shown in Figure 5, the Cs 3 Cu 2 I 5 :Tl detector has reasonable linearity within about 50 µGy/h dose rate.Therefore, we chose the radiation field with a dose rate near 50 µGy/h to carry out the energy response experiment in this work. Sensors 2023, 23, x FOR PEER REVIEW 7 of 17 The acquisition energy spectrum unit consists of the multi-channel analyzer ORTEC EASY-MCA-8K and the accompanying software MAESTRO for Windows Version 7.01.To realize the signal of the received sensor, AD conversion of the pulse amplitude signal, and classification to obtain the pulse amplitude spectrum, the acquisition threshold, number of channel addresses, acquisition time, etc., can be configured within the software.To avoid the influence of the system dead time on the experimental results, the dose rate point chosen for the experiment should be within the dose rate linear response region of the detector [18].The linear response between the dose rate and counts per second of the Cs3Cu2I5:Tl detector at different energies was tested by placing the detector at the reference point position of the radiation field and varying the dose rate by adjusting the magnitude of the tube current of the X-ray machine.As shown in Figure 5, the Cs3Cu2I5:Tl detector has reasonable linearity within about 50 µGy/h dose rate.Therefore, we chose the radiation field with a dose rate near 50 µGy/h to carry out the energy response experiment in this work. 
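The linearity test can be sketched numerically: estimate the proportional slope from the lowest dose-rate points (where dead-time losses are negligible) and report the largest dose rate before the relative deviation exceeds a tolerance. The data below are illustrative, not the measured values:

```python
import numpy as np

def linear_response_limit(dose_rates, count_rates, tol=0.05):
    """Return the largest dose rate up to which the count rate stays
    within `tol` relative deviation from a proportional fit anchored
    on the lowest dose-rate points."""
    dose_rates = np.asarray(dose_rates, dtype=float)
    count_rates = np.asarray(count_rates, dtype=float)
    # Slope (cps per uGy/h) from the first three points, where
    # dead-time losses should be negligible.
    slope = np.sum(count_rates[:3] * dose_rates[:3]) / np.sum(dose_rates[:3] ** 2)
    rel_dev = np.abs(count_rates - slope * dose_rates) / (slope * dose_rates)
    ok = rel_dev <= tol
    first_bad = np.argmax(~ok) if not ok.all() else len(ok)
    return dose_rates[first_bad - 1]

# Hypothetical data: response rolls off above ~50 uGy/h.
d = [10, 20, 30, 40, 50, 80, 120]
c = [570, 1140, 1705, 2270, 2840, 4100, 5300]
print(linear_response_limit(d, c))  # 50.0
```

In this sketch, a 5% tolerance marks the edge of the linear region; the paper's criterion for "reasonable linearity" near 50 µGy/h is not stated explicitly, so the tolerance is an assumption.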
Real-Time Correction of Detector Energy Response

The signal acquisition part of the energy response real-time correction device uses a Xilinx Spartan-6 series high-performance FPGA to process the digital waveform data output from the AD9238 and send the data to the host computer via Gigabit Ethernet. The energy response real-time correction software was written in LabVIEW Version 2015. The program includes three modules: a digital waveform data acquisition module, a Gigabit Ethernet data splicing module, and an energy response real-time correction module. The input of the digital waveform data acquisition module is connected to the SiPM readout circuit; it converts the pulse voltage signal output from the detector into digital waveform data, extracts the waveform amplitude value, and transmits it to the PC. The Gigabit Ethernet data splicing module transfers data between the digital waveform data acquisition module and the energy response real-time correction program to achieve real-time energy response correction. The energy response real-time correction module processes the amplitude information extracted by the acquisition module, performs the energy response correction, and displays the counts and counting rate (both actual and corrected values) in real time. Together, these modules enable digital waveform data acquisition, amplitude extraction, energy response correction factor setting, and count display. The front-end interface of the host computer software is shown in Figure 6. The software allows real-time adjustment of the correction coefficients, setting of the trigger threshold, real-time monitoring of the pulse waveform signals, and real-time display of the actual counts, actual counting rate, corrected counts, and corrected counting rate.
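The amplitude-extraction step performed by the acquisition module can be illustrated with a small sketch: a software rising-edge trigger that records the peak of each pulse crossing the 10 mV threshold. The waveform samples below are hypothetical, and this is a simplified stand-in for the FPGA/LabVIEW implementation, not the authors' code:

```python
def extract_amplitudes(samples, threshold=10.0, baseline=0.0):
    """Rising-edge trigger: whenever the waveform crosses `threshold`
    upward, record the peak amplitude (above baseline) of that pulse."""
    amplitudes = []
    in_pulse = False
    peak = 0.0
    for v in samples:
        if not in_pulse:
            if v > threshold:          # rising-edge crossing
                in_pulse = True
                peak = v
        else:
            if v > peak:
                peak = v
            if v <= threshold:         # pulse has fallen back down
                amplitudes.append(peak - baseline)
                in_pulse = False
    return amplitudes

# Two hypothetical pulses (mV samples) on a ~0 mV baseline.
wave = [0, 2, 1, 15, 40, 62, 30, 8, 1, 0, 12, 25, 90, 45, 9, 2]
print(extract_amplitudes(wave))  # [62.0, 90.0]
```

The extracted amplitudes are what the correction module later bins into pulse-amplitude intervals.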
Calculation of Correction Factor

The pulse amplitude spectra acquired by the multichannel analyzer ORTEC EASY-MCA-8K are shown in Figures 7 and 8.

Figure 7 shows the X-ray pulse amplitude spectra, in which we can observe that when the incident X-ray energy reaches 164 keV, a Compton plateau appears in front of the full-energy peak of the X-ray energy spectrum. The interaction between X/γ rays and matter mainly comprises the photoelectric effect, Compton scattering, and the electron-pair effect, and for a given material the relative strength of the three effects depends on the energy of the incident photon. For low-energy incident photons, the photoelectric effect is dominant. As the incident photon energy increases, Compton scattering gradually increases, which is why the plateau appears in the front of the energy spectra at 164 keV and 208 keV. For photons with energy less than 164 keV, Compton scattering is so weak that the Compton plateau is not clearly visible. Figure 8 shows the γ pulse amplitude spectra measured under the γ radiation sources.

Figure 8 also shows the energy-versus-channel-address scale obtained from the peak positions of the full-energy peaks, which is used to calibrate the linear relationship between channel address and photon energy in the multichannel analyzer ORTEC EASY-MCA-8K. This scale allows the channel address on the horizontal axis to be converted to energy. The vertical axis of the pulse amplitude spectrum gives counts, which can be converted into a counting rate using the spectrum measurement time. The counting rate was divided by the dose rate at the standard position to obtain the detection efficiency of the detector at different energies, i.e., the detector response at different energies [19,20]. A total of 10 X/γ-ray measurements covering average energies from 33 keV to 1.25 MeV were completed, as shown in Figures 9 and 10.

The initial energy response of the detector without correction is the area of the detector response spectrum at each energy. By dividing the response spectrum into multiple intervals according to the energy or pulse amplitude values, correcting each interval with its corresponding coefficient, and summing, the energy response of the detector after correction is obtained. Based on the energy spectrum information, the energy was classified into ten intervals in this study: 20-40 keV, 40-60 keV, 60-80 keV, 80-100 keV, 100-120 keV, 120-150 keV, 150-200 keV, 200-500 keV, 500-800 keV, and 800-1500 keV, covering the photon energy range of 33 keV-1.25 MeV. Moreover, the detector energy and pulse amplitude information were scaled, and the linear relationship between incident photon energy and pulse amplitude was verified experimentally, as shown in Table 3.

The real-time correction of the energy response based on PAW mainly requires the convergence of the detector response values at different energies. Under this constraint, the minimum value of the variance of Ri(E) in Equation (2) represents the stability of the detector response. According to Equation (3), when S² takes its minimum value, the values of k1-km are obtained, which are the energy response correction coefficients for the m clustering intervals. Under the experimental conditions, these coefficients are used as base values and are fine-tuned according to the actual energy response correction during the experiment. In this experiment, the pulse amplitude values of the detector were divided into ten pulse amplitude intervals, and the coefficient corresponding to each interval was calculated, as shown in Table 4.
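Equations (2) and (3) choose the coefficients k1-km so that the corrected responses at the calibration energies converge. One simple way to realize this numerically is a least-squares fit that drives every corrected response toward a common target value; this is a sketch under the assumption that such a fit is an acceptable stand-in for the variance minimization described above, and the response matrix below is illustrative, not measured data:

```python
import numpy as np

def paw_coefficients(interval_response, target=1.0):
    """interval_response: (n_energies, m) matrix whose entry [j, i] is the
    per-dose-rate counting rate of pulse-amplitude interval i at energy j.
    Returns k_1..k_m pulling every corrected response toward `target`
    in the least-squares sense."""
    R = np.asarray(interval_response, dtype=float)
    rhs = np.full(R.shape[0], target)
    k, *_ = np.linalg.lstsq(R, rhs, rcond=None)
    return k

# Hypothetical 3-energy, 2-interval matrix: low energies pile into
# interval 0 (over-response), high energies into interval 1.
R = np.array([[8.0, 0.5],
              [4.0, 1.0],
              [1.0, 2.0]])
k = paw_coefficients(R)
corrected = R @ k
print(np.round(corrected, 3))  # responses pulled close to 1 at all energies
```

The spread (variance) of the corrected responses is much smaller than that of the uncorrected row sums, which is exactly the convergence criterion the S² minimization expresses.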
Equation (1) can be used to obtain the response value of the detector once the correction coefficient for each pulse amplitude interval is known. This value is used to confirm the accuracy and viability of the energy response correction result. The counting rate values at different energies can also be obtained from the correction factors via Equation (5), and the corrected counting rate can be converted into the actual dose rate in the radiation field. Here, ki is the correction factor of interval i, and ni(E) is the counting rate of interval i. The total count Ni(E) is obtained by correcting and weighting the counting rates within all intervals. The dose-rate to count-rate conversion function is obtained by fitting the corrected dose rate response of the detector to 662 keV photons. By positioning the detector at a reference point within the radiation field and varying the dose rate at the detector location through the distance between the detector and the 137Cs source, the linear response curve of dose rate versus counting rate of the Cs3Cu2I5:Tl detector at 662 keV was obtained. Under the 662 keV γ photon radiation field, when the Air-absorbed dose rate is 51.5 µGy/h, the counting rate of the detector is 2917 cps. The counting rate after correction by the PAW method, obtained from Equation (5), is 7236 cps; the dose rate response curve of the detector was then refitted to obtain the conversion function between counting rate and dose rate, as shown in Figure 11.
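The weighting of Equation (5) and the subsequent count-rate to dose-rate conversion can be sketched as follows. The per-interval rates and coefficients are hypothetical; the 51.5 µGy/h ↔ 7236 cps calibration point is taken from the text, with a proportional (through-origin) conversion function assumed for simplicity:

```python
def corrected_count_rate(interval_rates, k):
    """Equation (5): total corrected counting rate N = sum_i k_i * n_i."""
    return sum(ki * ni for ki, ni in zip(k, interval_rates))

def count_rate_to_dose_rate(n_corr, cps_ref=7236.0, dose_ref=51.5):
    """Convert a corrected counting rate (cps) to Air-absorbed dose rate
    (uGy/h) using the 662 keV calibration point from the text, assuming
    a through-origin conversion function."""
    return n_corr * dose_ref / cps_ref

# Hypothetical per-interval rates (cps) and correction coefficients:
n = [1200.0, 900.0, 300.0]
k = [1.5, 2.0, 4.0]
N = corrected_count_rate(n, k)            # 1800 + 1800 + 1200 = 4800 cps
print(round(count_rate_to_dose_rate(N), 2))  # 34.16
```

The actual conversion function in Figure 11 is a refitted curve; a through-origin line is the simplest assumption consistent with the single calibration point quoted here.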
Analysis of Energy Response Correction Results

The correction coefficients in the energy response real-time correction host software were set according to Table 4. When the host program captures valid nuclear pulse waveform data, it first extracts the amplitude, then uses the amplitude to determine the energy interval into which the pulse falls, and finally weights the pulse count by the correction coefficient set for that energy interval, thus realizing the energy response correction [21]; this is implemented by the LabVIEW program in the host computer and displayed in real time. A total of ten measurements were taken under each energy condition, and the counting rate (counts per second, cps) values in the table are the arithmetic averages of the ten measurements. The relative energy response was calculated from Equation (4).

The effect of the PAW energy response correction is shown in Figures 12 and 13. The uncorrected energy response curve in Figure 12 shows that the photon energy response of the Cs3Cu2I5:Tl detector before correction is inconsistent over the energy range from 33 keV to 1.25 MeV. The detector has a severe over-response in the low-energy region, peaking at an incident photon energy of 65 keV. Without correction, the response can differ by as much as a factor of eight from low to high energy, which would seriously affect the accurate measurement of the dose equivalent of a radiation field.

Figure 12 shows the detector response curves before and after energy response correction by the PAW method, and Figure 13 shows the corresponding response error curves at different energies. Figure 13 also marks the maximum allowable error range (±30%) of the standard IEC 61526:2010; after correction, the maximum error of the detector response is 8.26% in the positive direction and 4.36% in the negative direction. The maximum error of the relative photon response after calibration, 8.26%, is therefore much better than the ±30% requirement of IEC 61526:2010.

After the energy response correction, regardless of the energy of the incident photon, the count rate of the detector in radiation fields with different photon energies but the same dose rate is equal. That is, the count rate displayed by the detector is linearly related to the dose rate in the radiation field, independent of energy, which is the purpose of the PAW method. In this way, the conversion between counting rate and dose rate can be carried out easily, without the measurement error caused by the inconsistent detection efficiency of the detector for photons of different energies. Dose rate conversion before and after correction can be performed using the conversion function in Figure 11.

Figure 14 shows the measured values before and after correction against the standard values. The relative deviation of the measured dose rate from the radiation field standard values after correction by the PAW method is shown in Figure 15; the relative deviation of the dose rate is calculated to be in the range of −3.3% to 8.6%.
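The relative-response check against the IEC 61526:2010 band can be sketched as follows, with hypothetical corrected responses; the 662 keV (137Cs) response serves as the reference, in the spirit of Equation (4):

```python
def relative_response_errors(responses, ref_energy=662.0):
    """responses: dict mapping energy (keV) -> corrected response
    (counting rate per unit dose rate). Returns energy -> relative
    error (%) with respect to the 137Cs (662 keV) response."""
    ref = responses[ref_energy]
    return {e: 100.0 * (r / ref - 1.0) for e, r in responses.items()}

def within_iec_61526(errors, limit=30.0):
    """True if every relative error lies inside the +/-30% band."""
    return all(abs(err) <= limit for err in errors.values())

# Hypothetical corrected responses at four energies:
resp = {33.0: 152.0, 65.0: 149.0, 662.0: 140.0, 1250.0: 134.0}
errs = relative_response_errors(resp)
print(within_iec_61526(errs))  # True
```

With these illustrative numbers the largest deviation is about +8.6% at 33 keV, comfortably inside the ±30% band, mirroring the magnitude of errors reported for the corrected detector.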
Conclusions

This paper proposes a real-time energy response correction technique based on pulse amplitude weighting (PAW) for Cs3Cu2I5:Tl scintillation detectors. Correction of the photon energy response in the energy range of 33 keV-1.25 MeV is achieved by zonal correction of the pulse amplitude of the detector, and the principle and method of PAW-based photon energy response correction are introduced, along with the characteristics of the standard radiation field. The experimental results show that with the PAW method, the photon energy response of the Cs3Cu2I5:Tl scintillation detector is almost constant across the different photon energies, differing by at most 8.26% relative to 137Cs. The detector therefore satisfies the energy response specification of the IEC 61526:2010 standard [15], which requires the energy response error relative to 137Cs to be within ±30% at all energies. The error range of the dose rate readings is −3.3% to 8.6% after energy response correction and dose rate conversion. By digitally adjusting the energy response of the detector in the dose rate meter in real time, the accuracy of dose rate measurements at various photon energies is increased. The method has few energy response correction coefficients, and all coefficients can be obtained simultaneously instead of identifying the energy within the radiation field each time. Analog devices such as comparators can be used with a microcontroller to achieve the energy response correction, reducing the development cost of the dose rate meter; the method thus has value for application in radiation environment dose monitoring systems.

Figure 1. The math principle flow chart of energy response correction based on PAW.
Figure 2. Schematic diagram of the experimental setup.
Figure 5. The dose rate linear response of the Cs3Cu2I5:Tl detector.
Figure 6. Front-end interface of the LabVIEW host computer software.
Figure 10. The γ response graph of the Cs3Cu2I5:Tl detector.
Figure 11. The dose-rate to count-rate conversion function graph.
Figure 12. Comparison of the energy response before and after PAW correction.
Figure 13. Relative response diagram of the Cs3Cu2I5:Tl detector after PAW correction.
Figure 14. Comparison of standard and measured dose rates.
Figure 15. Plot of relative dose rate bias.
Table 1. Performance comparison between the Cs3Cu2I5:Tl single crystal and other commonly used scintillators.
Table 2. Experimental parameters of the radiation field.
Table 3. Correspondence between energy and pulse amplitude.
Table 4. Energy response correction intervals and correction factors.
Assortativity Analysis of Real-World Network Graphs Based on Centrality Metrics

The assortativity index (A. Index) of real-world network graphs has traditionally been computed based on the degree centrality metric, and networks have been classified as assortative, dissortative or neutral if the A. Index values are respectively greater than 0, less than 0 or close to 0. In this paper, we evaluate the A. Index of real-world network graphs based on some of the commonly used centrality metrics (betweenness, eigenvector and closeness) in addition to degree centrality and observe that the assortativity classification of real-world network graphs depends on the node-level centrality metric used. We also propose five different levels of assortativity (strongly assortative, weakly assortative, neutral, weakly dissortative and strongly dissortative) for real-world networks and the corresponding ranges of A. Index values for the classification. We analyze a collection of 50 real-world network graphs with respect to each of the above four centrality metrics and estimate the empirical probability of observing a real-world network graph exhibiting a particular level of assortativity. We claim that a real-world network graph is more likely to be neutral with respect to the betweenness and degree centrality metrics and more likely to be assortative with respect to the eigenvector and closeness centrality metrics.

Introduction

The assortativity index (A. Index) of a network is a measure of the similarity of the end vertices of the edges in the network with respect to a particular node-level metric (Newman, 2010). That is, the A. Index of a network is a measure of the extent to which a vertex with a higher value for a particular node-level metric is connected to another vertex that also has a higher value for that node-level metric. Since the A.
Index is nothing but a correlation coefficient (Pearson's product-moment correlation coefficient) (Newman, 1999) quantifying the extent of similarity of the end vertices of the edges, its value ranges from -1 to 1 (Strang, 2006). Traditionally, in the literature (Newman, 1999), networks with positive values of A. Index (closer to 1) are referred to as assortative networks; networks with negative values of A. Index (closer to -1) are referred to as dissortative networks; and networks with A. Index values closer to 0 are classified as neutral. The similarity has typically been evaluated with respect to the degree centrality metric of the vertices, and the classification of networks (as either assortative, dissortative or neutral) has so far been based only on the degree centrality metric (Newman & Girvan, 2003; Noldus & Van Mieghem, 2015). Our hypothesis in this paper is that the assortativity classification of a real-world network could depend on the centrality metric used to compute the A. Index value of the network. In other words, a network could be classified as assortative with respect to one centrality metric and end up being classified as dissortative or neutral with respect to another centrality metric. Also, just having three different levels (assortative, neutral and dissortative) would not be sufficient to accurately assess the extent of assortativity of real-world network graphs whose A. Index values are neither close to 0 nor close to 1 or -1. Until now, a formal range of A. Index values has not been defined to assess the level of assortativity of real-world networks. In this paper, we propose to divide the range of values (-1.0 to 1.0) for the A. Index fairly evenly into five levels and set up the following rule: strongly assortative (0.6 ≤ A. Index ≤ 1.0), weakly assortative (0.2 ≤ A. Index < 0.6), neutral (-0.2 < A. Index < 0.2), weakly dissortative (-0.6 < A. Index ≤ -0.2) and strongly dissortative (-1.0 ≤ A. Index ≤ -0.6).
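As a concrete illustration, the five-level rule can be written as a small lookup function. This is a minimal Python sketch; the dissortative thresholds are mirrored symmetrically from the assortative side, i.e., weakly dissortative is (-0.6, -0.2] and strongly dissortative is [-1.0, -0.6]:

```python
def assortativity_level(a_index):
    """Map an A. Index value in [-1, 1] to one of the five proposed levels.

    The dissortative ranges mirror the assortative ones: weakly dissortative
    covers (-0.6, -0.2] and strongly dissortative covers [-1.0, -0.6].
    """
    if a_index >= 0.6:
        return "strongly assortative"
    if a_index >= 0.2:
        return "weakly assortative"
    if a_index > -0.2:
        return "neutral"
    if a_index > -0.6:
        return "weakly dissortative"
    return "strongly dissortative"
```

For example, an A. Index of 0.81 maps to "strongly assortative" and -0.22 maps to "weakly dissortative" under this rule.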
We investigate the validity of our hypothesis by analyzing a broader collection of 50 real-world networks whose spectral radius ratio for node degree (a measure of the variation in node degree) ranges from 1.01 to 3.48 (Meghanathan, 2014). We compute the A. Index values for these 50 real-world networks with respect to each of the four commonly used centrality metrics: degree centrality (DegC), eigenvector centrality (EVC; Bonacich, 1987), betweenness centrality (BWC; Freeman, 1977) and closeness centrality (ClC; Freeman, 1979), and apply the above proposed range of values to assess the assortativity levels of the real-world networks with respect to each of these four centrality metrics. For 40 of the 50 real-world networks analyzed, we observe that the level of classification of the network (strongly or weakly assortative, neutral, strongly or weakly dissortative) depends on the centrality metric under consideration. Since we have analyzed a vast collection of networks with varying levels of complexity, we use the results of our assortativity analysis to empirically propose the likelihood of a real-world network being classified neutral or assortative (strongly assortative or weakly assortative) with respect to a particular centrality metric. Based on the results of our assortativity analysis on 50 real-world networks, we claim that any chosen real-world network is more likely (i.e., with a probability of 0.72) to be classified as neutral (neither assortative nor dissortative) with respect to the betweenness centrality metric, and more likely (i.e., with a probability of 0.66) to be classified as assortative (strongly or weakly) with respect to the ClC and EVC metrics. More specifically, we expect a chosen real-world network to be strongly assortative (with a probability of 0.38) with respect to the ClC metric and weakly assortative (also with a probability of 0.38) with respect to the EVC metric.
To the best of our knowledge, we have not come across a paper that has conducted a comprehensive assortativity analysis of complex real-world networks with respect to the four commonly used centrality metrics and empirically proposed the likelihood of observing a real-world network to be neutral, strongly assortative or weakly assortative with respect to a particular centrality metric. The rest of the paper is organized as follows: Section 2 reviews the four centrality metrics along with an example to illustrate their computation on a sample graph. Section 3 introduces the formulation for the Assortativity Index (A. Index) and the proposed range of A. Index values to classify the assortativity level of a real-world network, and presents a motivating example to illustrate that the A. Index of a network and its classification (as neutral, strongly/weakly assortative or dissortative) could depend on the centrality metric under consideration. Section 4 introduces the 50 complex real-world networks that are analyzed in this paper. Section 5 presents the results of the assortativity analysis conducted on the real-world networks with respect to the four centrality metrics. Section 6 discusses related work and highlights the novel contribution of the work done in this paper. Section 7 concludes the paper. Throughout the paper, we use the terms 'node' and 'vertex', 'link' and 'edge', and 'network' and 'graph' interchangeably; they mean the same. All the real-world networks analyzed in this paper are modeled as undirected graphs.
Centrality Metrics The four commonly used centrality metrics in complex network analysis are: degree centrality (DegC), eigenvector centrality (EVC; Bonacich, 1987), betweenness centrality (BWC; Freeman, 1977) and closeness centrality (ClC; Freeman, 1979). DegC and EVC are degree-based centrality metrics, whereas BWC and ClC are shortest path-based centrality metrics. Until now, the DegC metric has typically been used for assortativity analysis of real-world networks (Newman & Girvan, 2003; Noldus & Van Mieghem, 2015). In this paper, we are interested in conducting assortativity analysis of real-world networks with respect to all four of the above centrality metrics. In this section, we briefly review these four centrality metrics and the procedure to compute them, along with an example for each. Degree Centrality The degree centrality (DegC) of a vertex is the number of edges incident on the vertex. The DegC of the vertices is computed by multiplying the adjacency matrix of the graph with a unit vector of 1s (the number of 1s in the unit vector corresponds to the number of vertices in the graph). Figure 1 illustrates an example to compute the degree centrality of the vertices. As can be noticed from this example, the DegC metric is vulnerable to incurring several ties among the vertices (as the metric values are integers and not real numbers). Eigenvector Centrality The eigenvector centrality (EVC) of a vertex is a measure of the degree of the vertex as well as the degrees of its neighbors. The EVC values of the vertices in a graph correspond to the entries in the principal eigenvector of the adjacency matrix of the graph. We use the JAMA Java Matrix package (http://math.nist.gov/javanumerics/jama/) to compute the principal eigenvector of the adjacency matrix of a real-world network graph. The entries in the principal eigenvector can also be computed using the Power Iteration method (Strang, 2006) that is illustrated in Figure 2.
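The two degree-based metrics can be sketched in a few lines of Python. The 5-vertex graph below is a hypothetical toy example (not the sample graph of the paper's Figures 1-4), and the power-iteration loop implements the update X_(i+1) = A X_i / ||A X_i||:

```python
# Toy 5-vertex undirected graph as an adjacency matrix (a hypothetical
# example; NOT the sample graph of the paper's figures).
A = [[0, 1, 1, 0, 0],
     [1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [0, 0, 0, 1, 0]]
n = len(A)

# Degree centrality: the adjacency matrix multiplied by a unit vector of 1s,
# i.e., the row sums of the 0/1 adjacency matrix.
deg = [sum(A[i]) for i in range(n)]

def eigenvector_centrality(A, iterations=100):
    """Power iteration: X_{i+1} = A X_i / ||A X_i||, starting from all 1s."""
    n = len(A)
    x = [1.0] * n
    for _ in range(iterations):
        ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = sum(v * v for v in ax) ** 0.5   # Euclidean norm of A X_i
        x = [v / norm for v in ax]             # normalized tentative eigenvector
    return x

evc = eigenvector_centrality(A)
```

Note how the degree values are small integers (here two vertices tie at degree 2), whereas the EVC values are distinct real numbers, illustrating why DegC incurs more ties than EVC.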
The tentative eigenvector X_(i+1) of a network graph at the end of the (i+1)th iteration is given by: X_(i+1) = A X_i / ||A X_i||, where ||A X_i|| is the norm of the product of the adjacency matrix A and the tentative eigenvector X_i at the end of the ith iteration. We continue the iterations until the normalized value of the product vector converges; the tentative eigenvector at that juncture corresponds to the principal eigenvector of the adjacency matrix of the graph. As the EVC values of the vertices are likely to be real numbers and are dependent on the degree of a vertex as well as the degrees of its neighbors, the EVC values of the vertices are more likely to be unique and relatively fewer ties are incurred (compared to degree centrality). Betweenness Centrality The betweenness centrality (BWC) of a vertex is a measure of the fraction of shortest paths going through the vertex when considered across all pairs of vertices in the graph (Freeman, 1977). If sp_jk(i) is the number of shortest paths between vertices j and k that go through vertex i, and sp_jk is the total number of shortest paths between vertices j and k, then BWC(i) = Σ_(j≠k≠i) sp_jk(i) / sp_jk. The BWC of the vertices is computed using a Breadth First Search (BFS; Cormen et al., 2009)-based implementation of Brandes' algorithm (Brandes, 2001). We use the BFS algorithm to determine the shortest path trees rooted at each vertex and thereby deduce the level numbers of the vertices in the shortest path trees rooted at every vertex in the graph. The level number of a vertex i in the shortest path tree rooted at vertex j is the minimum number of hops from vertex j to i. The root vertex of a shortest path tree is said to be at level 0.
The number of shortest paths from a vertex j to itself is 1. The number of shortest paths from a vertex j to a vertex k (at level l in the shortest path tree rooted at vertex j) is the sum of the number of shortest paths from vertex j to each of the vertices that are neighbors of vertex k in the graph and located at level l-1 in the shortest path tree rooted at j. The number of shortest paths between vertices j and k that go through vertex i is the product of the number of shortest paths from vertex j to vertex i and the number of shortest paths from vertex i to vertex k. Figure 3 illustrates an example for the computation of the BWC of the vertices in the same sample graph used in Figures 1-2. We notice that vertices with a high degree and/or EVC need not have a high BWC and vice versa. For example, vertices 0 and 2, which had the largest value for the EVC metric, have relatively low BWC values; whereas vertex 4 (with a low degree and low EVC) has the largest value for the BWC. Closeness Centrality The closeness centrality (ClC) of a vertex (Freeman, 1979) is a measure of the relative closeness of the vertex to the rest of the vertices in the graph. The ClC of a vertex is measured by running the BFS algorithm from the vertex and determining the minimum number of hops to each vertex on the shortest path tree rooted at the vertex. The ClC of a vertex is the inverse of the sum of the shortest path lengths (hop counts) to the rest of the vertices in the graph. Figure 4 illustrates an example for the calculation of the ClC of the vertices on the same sample graph used in Figures 1-3.
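The BFS bookkeeping described above (level numbers, shortest-path counts accumulated from the level l-1 neighbors, and ClC as the inverse of the sum of hop counts) can be sketched as follows; the adjacency list is a hypothetical toy graph, not the paper's sample graph:

```python
from collections import deque

# Hypothetical toy graph as an adjacency list (vertex: list of neighbors).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}

def bfs_levels_and_counts(adj, root):
    """One BFS from `root`: hop counts (level numbers) and the number of
    shortest paths from `root` to every vertex, accumulated level by level."""
    level = {root: 0}
    paths = {root: 1}          # a vertex has exactly 1 shortest path to itself
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in level:             # first time reached: next level
                level[v] = level[u] + 1
                paths[v] = 0
                q.append(v)
            if level[v] == level[u] + 1:   # u is a level l-1 neighbor of v
                paths[v] += paths[u]       # sum over such predecessors
    return level, paths

def closeness(adj, v):
    """ClC(v): inverse of the sum of hop counts from v to all other vertices."""
    level, _ = bfs_levels_and_counts(adj, v)
    return 1.0 / sum(level.values())

clc = {v: closeness(adj, v) for v in adj}
```

The per-pair counts needed for BWC then follow from these per-root counts: the number of j-k shortest paths through an intermediate vertex i is the product of the j-to-i and i-to-k counts whenever i lies on a shortest j-k path.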
Network Model Let G = (V, E) be the graph of vertices and edges constituting a real-world network and let C(i) be the value of a centrality metric (C) for any node i in the network. We refer to the first vertex (vertex u) in an edge (u, v) as the upstream vertex and the second vertex (vertex v) as the downstream vertex. As the focus of this research is on undirected graphs, we conveniently adopt the following convention to represent the edges: the ID of the upstream vertex of an edge (u, v) is always less than the ID of the downstream vertex of the edge (i.e., u < v). Let U and D be respectively the sets of upstream and downstream vertices constituting the edges of a graph. Let U_C and D_C (calculated as in formulation (1) below) be respectively the average values of the centrality metric of interest among the vertices constituting the sets U and D: U_C = (1/|E|) Σ_((u,v)∈E) C(u), D_C = (1/|E|) Σ_((u,v)∈E) C(v). (1) Assortativity Index The Assortativity Index (A. Index) of a network (Newman, 1999) with respect to a particular node-level centrality metric is a quantitative measure of the extent of similarity of the end vertices of the edges with respect to the chosen centrality metric. The extent of similarity is calculated as the Pearson's product-moment correlation coefficient (Strang, 2006) between the set of upstream vertices (U) and the set of downstream vertices (D) constituting the end vertices of the edges in a real-world network graph. Accordingly, the A. Index of a network with respect to a centrality metric C could be formulated as below: A.Index_C = [Σ_((u,v)∈E) (C(u) - U_C)(C(v) - D_C)] / [sqrt(Σ_((u,v)∈E) (C(u) - U_C)^2) × sqrt(Σ_((u,v)∈E) (C(v) - D_C)^2)]. (2) Range of Values for Assortativity Classification As the Assortativity Index is a measure of the level of correlation between the sets of upstream and downstream vertices constituting the edges in a network graph, the values of A.Index_C with respect to any centrality metric (C) range from -1 to 1. Until now in the literature, a network is considered to be assortative (dissortative) with respect to the chosen node-level metric (C) if the A.
Index_C value is closer to 1 (-1). If the A.Index_C value is closer to 0, the network is considered to be neutral with respect to the metric C. However, we do not have a formally defined range of values that clearly indicates how the network should be classified if the A.Index_C values are neither close to 1 or -1 nor close to 0. We seek to address this concern as follows: since A.Index_C is evaluated as a measure of correlation, we adapt the range of correlation coefficient values (rounded to two decimals) proposed in the literature (Evans, 1995) for the level of correlation (shown in Table 1) and propose the range of assortativity index values (shown in Table 2) for classifying a network with respect to the level of assortativity. We propose only two levels of assortativity and two levels of dissortativity (rather than five levels for each) to give enough space in the range of A. Index values to classify a network at a particular level, including neutral (i.e., neither assortative nor dissortative), but still be able to differentiate a strongly assortative (dissortative) network from a weakly assortative (dissortative) network or a neutral network with respect to a node-level metric. The color codes to be used for the various levels of assortativity are also shown in Table 2.
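The A. Index computation of formulation (2), the Pearson correlation between the centrality values of the upstream and downstream end vertices, can be sketched as below; the 4-vertex path graph is a hypothetical toy example with degree centrality as the node-level metric:

```python
import math

def assortativity_index(edges, c):
    """Pearson's product-moment correlation between the centrality values
    of the upstream (u) and downstream (v) end vertices over all edges."""
    us = [c[u] for u, v in edges]
    ds = [c[v] for u, v in edges]
    m = float(len(edges))
    mu_u, mu_d = sum(us) / m, sum(ds) / m          # U_C and D_C averages
    cov = sum((a - mu_u) * (b - mu_d) for a, b in zip(us, ds))
    s_u = math.sqrt(sum((a - mu_u) ** 2 for a in us))
    s_d = math.sqrt(sum((b - mu_d) ** 2 for b in ds))
    return cov / (s_u * s_d)

# Hypothetical toy example: a 4-vertex path graph 0-1-2-3 with degree
# centrality; upstream vertex ID < downstream vertex ID on every edge.
edges = [(0, 1), (1, 2), (2, 3)]
degree = {0: 1, 1: 2, 2: 2, 3: 1}
a_idx = assortativity_index(edges, degree)   # evaluates to -0.5 here
```

The two degree-1 ends of the path attach to degree-2 vertices, so the index is negative (-0.5), which falls in the weakly dissortative range of Table 2.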
Motivating Example In this subsection, we illustrate the computation of the assortativity index of the sample graph used in Figures 1-4 with respect to the degree centrality (Figure 5) and eigenvector centrality (Figure 6) metrics. Adopting the proposed range of classification for the level of assortativity, we notice that the sample graph of Figures 1-4 could be classified as "weakly dissortative" (A. Index value of -0.22; see Figure 5) with respect to the degree centrality metric and "strongly assortative" (A. Index value of 0.81; see Figure 6) with respect to the eigenvector centrality metric. This motivating example vindicates our hypothesis that the assortativity level classification for a network could vary depending on the centrality metric used to compute the A. Index values. Real-World Network Graphs We now present a brief overview of the 50 real-world network graphs analyzed in this paper. We model each network as an undirected graph of nodes and edges. The networks are identified with a unique ID (1, ..., 50) and a three-character acronym. We use the spectral radius ratio for node degree (Meghanathan, 2014) to capture the extent of variation in the degree of the nodes: the spectral radius ratio for node degree is the ratio of the principal eigenvalue (Strang, 2006) of the adjacency matrix of the network graph to the average node degree. The values of the spectral radius ratio for node degree are 1 or above; the farther the value is from 1, the larger the variation in node degree. We analyze real-world networks ranging from random networks to scale-free networks, as the spectral radius ratio for node degree of the real-world networks analyzed in this paper ranges from 1.01 to 3.48. Table 3 lists the 50 networks along with the values for the number of nodes and edges, the spectral radius ratio for node degree (λ_sp) and the average degree (k_avg). A brief description of the networks is as follows: 1) Word Adjacency Network (ADJ; Newman, 2006): This
is a network of 112 words (adjectives and nouns, represented as vertices) in the novel David Copperfield by Charles Dickens; there exists an edge between two vertices if the corresponding words appeared adjacent to each other at least once in the novel. 2) Anna Karenina Network (AKN; Knuth, 1993): This is a network of 140 characters (vertices) in the novel Anna Karenina; there exists an edge between two vertices if the corresponding characters appeared together in at least one scene in the novel. 3) Jazz Band Network (JBN; Geiser & Danon, 2003): This is a network of 198 Jazz bands (vertices) that recorded between the years 1912 and 1940; there exists an edge between two bands if they shared at least one musician in any of their recordings during this period. 4) C. Elegans Neural Network (CEN; White et al., 1986): This is a network of 297 neurons (vertices) in the neural network of the hermaphrodite Caenorhabditis Elegans; there is an edge between two vertices if the corresponding neurons interact with each other (in the form of chemical synapses, gap junctions and neuromuscular junctions). 5) Centrality Literature Network (CLN; Hummon et al., 1990): This is a network of 118 papers (vertices) published on the topic of centrality in complex networks from 1948 to 1979. There is an edge between two vertices v_i and v_j if one of the corresponding papers has cited the other paper as a reference. 6) Citation Graph Drawing Network (CGD; Biedl & Franz, 2001): This is a network of 259 papers (vertices) that were published in the Proceedings of the Graph Drawing (GD) conferences from 1994 to 2000 and cited in the papers published in the GD'2001 conference. There is an edge between two vertices v_i and v_j if one of the corresponding papers has cited the other paper as a reference.
7) Copperfield Network (CFN; Knuth, 1993): This is a network of 89 characters in the novel David Copperfield by Charles Dickens; there exists an edge between two vertices if the corresponding characters appeared together in at least one scene in the novel. 8) Dolphin Network (DON; Lusseau et al., 2003): This is a network of 62 dolphins (vertices) that lived in the Doubtful Sound fiord of New Zealand; there is an edge between two vertices if the corresponding dolphins were seen moving with each other during the observation period. 9) Drug Network (DRN; Lee, 2004): This is a network of 212 drug agents (vertices) of different ethnicities. There is a link between two vertices if the corresponding agents know each other. 10) Dutch Literature 1976 Network (DLN; Nooy, 1999): This is a network of 37 Dutch literary authors and critics (vertices) in 1976; there exists an edge between two vertices v_i and v_j if the person corresponding to one of them is a critic who made a judgment (through a review or interview) on the literature work of the author corresponding to the other vertex. There is an edge between two nodes if the corresponding authors have co-authored at least one publication. 12) Faux Mesa High School Friendship Network (FMH; Resnick et al., 1997): This is a network of 147 students (vertices) at a high school community in the rural western part of the US; there exists an edge between two vertices if the corresponding students are friends of each other. 13) Friendship Ties in a Hi-Tech Firm (FHT; Krackhardt, 1999): This is a network of 33 employees (vertices) of a small hi-tech computer firm that sells, installs and maintains computer systems; there exists an edge between two vertices v_i and v_j if the employee corresponding to at least one of them considers the employee corresponding to the other vertex a personal friend.
14) Flying Teams Cadet Network (FTC; Moreno, 1960): This is a network of 48 cadet pilots (vertices) at a US Army Air Forces flying school in 1943, where the cadets were trained in a two-seated aircraft; there exists an edge between two vertices v_i and v_j if the pilot corresponding to at least one of them has indicated the pilot corresponding to the other vertex as a preferred partner with whom s/he likes to fly during the training schedules. 15) US Football Network (FON; Girvan & Newman, 2002): This is a network of 115 football teams (nodes) of US universities that played in the Fall 2000 season; there is an edge between two nodes if the corresponding teams played against each other in the league games. 16) College Dorm Fraternity Network (CDF; Bernard et al., 1980): This is a network of 58 residents (vertices) in a fraternity at a West Virginia college; there exists an edge between two vertices if the corresponding residents were seen in a conversation at least once during a five-day observation period. 17) GD'96 Network (GD96; Batagelj & Mrvar, 2006): This is a network of 180 AT&T and other WWW websites (vertices) that were cited in the proceedings of the Graph Drawing (GD) conference in 1996; there exists an edge between two vertices if the website corresponding to one of them has a link to the website corresponding to the other vertex. 18) Marvel Universe Network (MUN; Gleiser, 2007): This is a collaborative network of 167 characters (vertices) in the comic books published by the Marvel Universe publishing company; there exists an edge between two vertices if the corresponding characters appeared together in at least one publication.
19) Graph and Digraph Glossary Network (GLN; Batagelj & Mrvar, 2006): This is a network of 67 terms (vertices) that appeared in the glossary prepared by Bill Cherowitzo on Graph and Digraph; there is an edge between two vertices if the term corresponding to one of them is used to describe the meaning of the term corresponding to the other vertex. There exists an edge between two vertices if the corresponding conference visitors had face-to-face contact that was active for at least 20 seconds. 22) Huckleberry Coappearance Network (HCN; Knuth, 1993): This is a network of 76 characters (vertices) that appeared in the novel Huckleberry Finn by Mark Twain; there is an edge between two vertices if the corresponding characters had a common appearance in at least one scene. 23) Infectious Socio-patterns Network (ISP; Isella et al., 2011): This is a network of 309 visitors (vertices) who visited the Science Gallery in Dublin, Ireland during Spring 2009. There existed an edge between two vertices if the corresponding visitors had a continuous face-to-face contact for at least 20 seconds when they participated in the Infectious Socio-patterns event (an electronic simulation of the spreading of an epidemic through individuals who are in close proximity) as part of an art science exhibition. 24) Karate Club Network (KCN; Zachary, 1977): This is a network of 34 members (nodes) of a Karate Club at a US university in the 1970s; there is an edge between two nodes if the corresponding members were seen interacting with each other during the observation period. 25) Korea Family Planning Network (KFP; Rogers & Kincaid, 1980): This is a network of 37 women (vertices) at a Mothers' Club in Korea; there existed an edge between two vertices if the corresponding women were seen discussing family planning methods during an observation period.
26) Les Miserables Network (LMN; Knuth, 1993): This is a network of 77 characters (nodes) in the novel Les Miserables; there exists an edge between two nodes if the corresponding characters appeared together in at least one of the chapters in the novel. 27) Macaque Dominance Network (MDN; Takahata, 1991): This is a network of 62 adult female Japanese macaques (monkeys; vertices) in a colony, known as the "Arashiyama B Group", recorded during the non-mating season from April to early October, 1976. There existed an edge between two vertices if a macaque corresponding to one of them was recorded to have exhibited dominance over the macaque corresponding to the other vertex. (Batagelj & Mrvar, 2006): This is a network of 35 teams (vertices) that participated in the 1998 edition of the Soccer World Cup. A player for a national team could sometimes have a contract with one or more other countries. In this network, there is an edge between two vertices if the national team corresponding to at least one of them has contracted players from the country represented by the national team corresponding to the other vertex. 28) Madrid 42) Sawmill Strike Communication Network (SSM; Michael, 1997): This is a network of 24 employees (vertices) in a sawmill who planned a strike against the new compensation package proposed by their management. There exists an edge between any two vertices if the corresponding employees mutually admitted discussing the strike with a frequency of three or more (on a 5-point scale). 43) Taro Exchange Network (TEN; Schwimmer, 1973): This is a network of 22 families (vertices) in a Papuan village. There exists an edge between two vertices if the corresponding families were seen exchanging gifts during the observation period.
44) Teenage Female Friendship Network (TWF; Pearson & Michell, 2000): This is a network of 47 female teenage students (vertices) who studied as a cohort in a school in the West of Scotland from 1995 to 1997. There exists an edge between two vertices if the corresponding students reported (in a survey) that they were best friends of each other. 45) UK Faculty Friendship Network (UKF; Nepusz et al., 2008): This is a network of 83 faculty (vertices) at a UK university. There exists an edge between two vertices if the corresponding faculty are friends of each other. 46) US Airports 1997 Network (APN; Batagelj & Mrvar, 2006): This is a network of 332 airports (vertices) in the US in the year 1997. There is an edge between two nodes if there is a direct flight connection between the corresponding airports. 47) US States Network (USS): This is a network of the 48 contiguous states in the US and the District of Columbia (DC). Each of the 48 states and DC is a node, and there is an edge involving two nodes if the corresponding states (or DC) share a common border. 48) Residence Hall Friendship Network (RHF; Freeman et al., 1998): This is a network of 217 residents (vertices) living at a residence hall located on the Australian National University campus. There exists an edge between two vertices if the corresponding residents are friends of each other. 49) Windsurfers Beach Network (WSB; Freeman et al., 1989): This is a network of 43 windsurfers (vertices) on a beach in southern California during Fall 1986. There exists an edge between two vertices if the corresponding windsurfers were perceived to be close to each other (determined based on a survey).
50) World Trade Metal Network (WTN; Smith & White, 1992): This is a network of 80 countries (vertices) that were involved in trading miscellaneous metals during the period from 1965 to 1980. There exists an edge between two vertices if one of the two corresponding countries imported miscellaneous metals from the country corresponding to the other vertex. Results of Assortativity Analysis We now present the A. Index values obtained for each of the 50 real-world network graphs (listed in Section 4) with respect to each of the four centrality metrics (introduced in Section 2). Table 4 lists the A. Index values, color coded as per the ranges outlined in Table 2. One can easily see that for about 80% of the real-world networks analyzed (i.e., for 40 of the 50 real-world networks), the level of assortativity is not the same for all four centrality metrics. For a majority (i.e., 56%) of the real-world networks (i.e., for 28 of the 50), we observe two different levels of assortativity, and most of these are the neutral and weakly assortative levels. For very few real-world networks, the two different levels of assortativity represent levels whose ranges of assortativity index values are not contiguous (for example: neutral and strongly assortative). For about 24% of the real-world networks analyzed (i.e., 12 of the 50), we observe three levels of assortativity. For none of the real-world networks do we observe four different levels of assortativity (i.e., one assortativity level per centrality metric). Only 6-14% of the real-world networks are either weakly or strongly dissortative with respect to any centrality metric. We also plot (Figures 7-10) the distribution of the A.
Index values for each of the four centrality metrics. We estimate the probability of observing a network at a particular level of assortativity with respect to a centrality metric as the fraction of the total number of real-world networks exhibiting that level of assortativity with respect to the centrality metric. These empirically estimated probability values are also listed in Figures 7-10. As a high-level conclusion, we could say that there is at least a 50% chance for a real-world network to be neutral (neither assortative nor dissortative) with respect to the degree centrality and betweenness centrality metrics. On the other hand, we observe that there is at least a 50% chance for a real-world network to be assortative (either strongly assortative or weakly assortative) with respect to the closeness centrality and eigenvector centrality metrics. More specifically, we observe a real-world network to be neutral with respect to the BWC and DegC metrics with probabilities of 0.72 and 0.58 respectively. When considered with respect to the EVC metric, we observe a real-world network to be weakly assortative with a probability of 0.38 and strongly assortative with a probability of 0.28. When considered with respect to the ClC metric, we observe a real-world network to be strongly assortative with a probability of 0.38 and weakly assortative with a probability of 0.28. Note that though both BWC and ClC are shortest path-based centrality metrics, they are poles apart with respect to assortativity. While a real-world network is more likely to be neutral (neither assortative nor dissortative) with respect to the BWC metric, a real-world network is more likely to be strongly or weakly assortative with respect to the ClC metric. Table 5 summarizes these empirically estimated probability values for all five levels of assortativity and all four centrality metrics. Figure 11 presents a pictorial view of the empirically
estimated probability values for observing a real-world network at a particular level of assortativity with respect to a centrality metric. An interesting and significant observation from the color-coded Table 4 is that for real-world networks with two or three levels of assortativity across the centrality metrics, the level of assortativity typically exhibited a transition from dissortative to neutral, or from neutral to weakly assortative to strongly assortative, when the centrality metrics are considered in this order: BWC, DegC, EVC and ClC. We also notice from Figures 7-10 that the distribution of the A. Index values gradually drifts from a predominantly neutral-level distribution (corresponding to the BWC and DegC metrics) to a predominantly assortative-level distribution (corresponding to the EVC and ClC metrics). Such observations further vindicate our conclusions (in the previous paragraphs) regarding the probability of observing a real-world network to be neutral, weakly assortative or strongly assortative with respect to the centrality metrics. Related Work To the best of our knowledge, all the results reported in the literature (e.g., Newman, 1999; Newman & Girvan, 2003; Noldus & Van Mieghem, 2015) on the assortativity of real-world network graphs are based on the degree centrality metric. Ours is the first effort to study the assortativity of real-world network graphs based on the other commonly used centrality metrics: betweenness centrality, eigenvector centrality and closeness centrality. We analyze the assortativity of a large collection of real-world network graphs (with a broad range of variation in node degree) and empirically propose the likelihood of observing a real-world network graph to be neutral or assortative with respect to a centrality metric. In this section, we discuss results from the most related work on assortativity and centrality metrics in the literature.
Traditionally, based on the degree centrality metric, social networks have been found to be assortative (high degree nodes tend to attach to high degree nodes), whereas technological and biological networks have been observed to be dissortative (i.e., low degree nodes tend to attach to high degree nodes and vice versa; Newman, 2003). The networks generated from theoretical models such as the Erdos-Renyi random networks (Erdos & Renyi, 1959), the Barabasi-Albert scale-free networks (Barabasi & Albert, 1999) and the Watts-Strogatz small-world networks (Watts & Strogatz, 1998) have also been observed to be neutral (neither assortative nor dissortative) with respect to the degree centrality metric (Newman, 1999). In addition, networks that evolve with time without any constraints have been observed to reach a maximum entropy state (entropy is a quantitative measure of robustness; Demetrius & Manke, 2005) with a heterogeneous connectivity distribution, and in such a state, networks have usually been dissortative (Johnson et al., 2010) with respect to the degree centrality metric. On the other hand, networks that evolved with constraints (with respect to the number of links a node can maintain) tend to transition from being dissortative to assortative with time (Konig et al., 2010). Also, synthetic social network graphs generated using Monte Carlo Metropolis-Hastings type algorithms (Chib & Greenberg, 1995) were observed to quickly evolve to a giant component if the edge distribution (based on remaining degree: one less than the degree centrality; Newman, 1999) follows assortative matching rather than dissortative matching (Newman, 2003). Iyer et al.
(2013) analyzed the robustness of networks under targeted removal of vertices that are ranked higher with respect to centrality metrics. It has been observed that dissortative networks degrade more rapidly due to the removal of vertices with higher degree, whereas assortative networks degrade more rapidly due to the removal of vertices with higher betweenness (at least for the first 25% of the vertices), as the high degree vertices in assortative networks tend to form a concentrated interconnected core that would be difficult to break by removing a few vertices. For neutral networks (with assortativity index close to 0), targeted node removal based on degree has been observed to be the most effective method to degrade the network, and targeted removal based on eigenvector centrality has been observed to be the least effective (Iyer et al., 2013). The findings from this paper could be considered complementary to the above research results, as we observe real-world network graphs to be more likely assortative with respect to the EVC metric; hence, removal of vertices with higher EVC is more likely to have a relatively less degrading effect on the assortativity of networks. Zhang et al. (2012) argued that the assortativity level of the different communities with their neighborhood need not be the same as the assortativity level of the entire network. This could be attributed to the differences in the connectivity of the vertices in the various communities to their respective outside world. In this regard, Zhang et al.
(2012) proposed an alternate metric called the Universal Assortativity Coefficient (UAC), defined for a community (subgraph) of vertices as the sum of the local assortativity indices of the edges (Newman, 1999) emanating from the vertices that are part of the community. The local assortativity index of an edge is calculated as per the remaining-degree-based formulation proposed by Newman (1999): edges with a positive local assortativity index are referred to as assortative, and edges with a negative local assortativity index are referred to as dissortative. Zhang et al. (2012) claimed that a globally assortative network could still have the majority of its edges be locally dissortative, and vice versa. Similar to local edge assortativity, a measure called local node assortativity (Piraveenan et al., 2008), based on the remaining degree of a node, has also been proposed in the literature; the sum of the local node assortativity values is equal to the network assortativity. It has been shown by Piraveenan et al. (2009) that distribution profiles of the local assortativity of nodes vs. their degrees can be used to identify assortative hubs in social and biological networks and dissortative hubs in scale-free networks such as the Internet. All of the above analyses have been based on only the degree centrality metric and are heavily based on the concept of remaining degree. Joyce et al.
(2010) proposed the notion of leverage centrality to capture the assortative or dissortative neighborhood of a node. The range of values for leverage centrality is (-1, 1): positive values indicate an assortative neighborhood and negative values indicate a dissortative neighborhood. A node has a positive leverage centrality if it is connected to more nodes than its neighbors (assortative neighborhood); a node connected to fewer nodes than its neighbors has a negative leverage centrality (dissortative neighborhood). Nodes with higher leverage centrality are perceived to be important for facilitating information flow to and from their neighbors. The leverage centrality of a node is estimated simply from the degree of the node and that of its neighbors; the metric has been observed to be strongly correlated with the betweenness centrality metric and weakly correlated with the eigenvector centrality metric. Per the trends observed in this paper, we expect a real-world network to be more likely neutral (neither assortative nor dissortative) with respect to the leverage centrality metric (as is also observed for the betweenness centrality metric). Meghanathan (2015) observed the degree centrality and betweenness centrality metrics to be highly correlated for real-world network graphs; we opine that such a correlation is justified by the real-world network graphs exhibiting almost similar levels of assortativity with respect to both of these centrality metrics in this paper. Meghanathan (2016) showed that a maximal assortative matching of the vertices (with the objective of maximizing the assortativity index) in real-world network graphs (with respect to the degree centrality metric) cannot maximize the number of matched vertices, and vice versa. We attribute such a phenomenon to the relatively neutral levels of assortativity of the edges with respect to the degree centrality metric and its closely correlated betweenness centrality metric. As we observe the closeness centrality
metric to exhibit stronger levels of assortativity, we opine that a maximal assortative matching of vertices based on closeness centrality would relatively increase the number of vertices matched (compared to the degree centrality metric).

Conclusions and Future Work

We have shown that the assortativity classification of real-world network graphs is dependent on the node-level centrality metric used to compute the assortativity index values of the edges. As part of this analysis, we formally propose five levels of assortativity and their associated ranges in the space of assortativity index values from -1 to 1. We computed the assortativity index values for a suite of 50 real-world network graphs (with spectral radius ratio for node degree ranging from 1.01 to 3.48) with respect to each of the four commonly used centrality metrics: degree centrality (DegC), eigenvector centrality (EVC), betweenness centrality (BWC) and closeness centrality (ClC). We observe about 80% of the real-world network graphs to exhibit more than one assortativity level (depending on the centrality metric used to compute the assortativity index values): 56% exhibiting two assortativity levels and 24% exhibiting three assortativity levels. We notice that for a majority of these real-world network graphs, the level of assortativity exhibited a transition from dissortative to neutral (or) neutral to weakly assortative to strongly assortative when the centrality metrics are considered in this order: BWC, DegC, EVC and ClC. Using the results of the assortativity analysis, we also estimated the empirical probability for a real-world network graph to exhibit a particular level of assortativity: we claim that a real-world network graph is more likely (probability of 0.72) to be neutral (neither assortative nor dissortative) with respect to the BWC metric and is more likely to be assortative (strongly or weakly assortative: probability of 0.38 + 0.28 = 0.66) with respect to the EVC and ClC metrics.
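The assortativity index computations summarized above can be sketched in a few lines. Assuming (as illustrated in Figure 5 for degree centrality) that the index is the Pearson correlation coefficient between the centrality values of the end vertices of the edges, a pure-Python illustration on a toy graph (not one of the 50 studied networks) is:

```python
from math import sqrt

def assortativity_index(edges, centrality):
    """Pearson correlation of the centrality values at the two ends of each edge.
    Each undirected edge (u, v) contributes both (c_u, c_v) and (c_v, c_u),
    so the result does not depend on edge orientation."""
    xs, ys = [], []
    for u, v in edges:
        xs += [centrality[u], centrality[v]]
        ys += [centrality[v], centrality[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: a star plus a pendant path -- low-degree leaves attach to a
# high-degree hub, so the degree-based index should come out negative.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (4, 5)]
degree = {v: sum(v in e for e in edges) for v in range(6)}
a_index = assortativity_index(edges, degree)
```

Substituting EVC, BWC or ClC values for `degree` yields the corresponding centrality-based index in exactly the same way.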
We have thus unraveled significant information about the assortativity of real-world network graphs with respect to the other commonly used centrality metrics such as betweenness, closeness and eigenvector centrality. As part of future work, we plan to analyze the centrality-based assortativity of complex network graphs generated from theoretical models (such as the Erdos-Renyi random network model (Erdos & Renyi, 1959), the Barabasi-Albert scale-free network model (Barabasi & Albert, 1999) and the Watts-Strogatz small-world network model (Watts & Strogatz, 1998)). We also plan to investigate the use of centrality metrics (other than degree centrality) to compute maximal assortative matching and maximal dissortative matching (Meghanathan, 2016) for real-world network graphs.

Figure 1. Example to Illustrate the Computation of the Degree Centrality of the Vertices in a Graph
Figure 2. Example to Illustrate the Computation of the Eigenvector Centrality of the Vertices in a Graph
Figure 3. Example to Illustrate the Computation of the Betweenness Centrality of the Vertices in a Graph
Figure 4. Example to Illustrate the Computation of the Closeness Centrality of the Vertices in a Graph
Figure 5. Example to Illustrate the Calculation of the Assortativity Index based on Degree Centrality
Figure 7. Distribution of Assortativity Index Values for Real-World Networks (based on Betweenness Centrality)

Table 1. Range of Correlation Coefficient Values and the Corresponding Levels of Correlation
Table 3. Fundamental Properties of the Real-World Network Graphs used for Assortativity Analysis
Table 4. Fundamental Properties of the Real-World Network Graphs used for Assortativity Analysis
Table 5. Empirically Estimated Probability Values for the Assortative Level of a Real-World Network with respect to the Centrality Metrics

Selected real-world network descriptions:

20) Graph Drawing 2001 (GD01) Network (Batagelj & Mrvar, 2006): This is a network of 101 papers (vertices) that were cited as references in the papers published in the proceedings of the 2001 Graph Drawing (GD'01) conference; there exists an edge between two vertices if the corresponding papers have been co-cited in at least one paper published in the GD'01 conference.
21) Hypertext 2009 Network (HTN; Isella et al., 2011): This is a network of the face-to-face contacts of 115 attendees (vertices) of the ACM Hypertext 2009 conference held in Turin, Italy from June 29 to July 1, 2009.
Madrid Train Bombing Network (MTB; Hayes, 200…): This is a network of suspected individuals and their relatives (vertices) reconstructed by Rodriguez using press accounts in the two major Spanish daily newspapers (El Pais and El Mundo) regarding the bombing of commuter trains in Madrid on March 11, 2004. There existed an edge between two vertices if the corresponding individuals were observed to have a link in the form of friendship, ties to any terrorist organization, co-participation in training camps and/or wars, or co-participation in any previous terrorist attacks.
Mexican Political Elite Network (MPN; Gil-Mendieta & Schmidt, 1996): This is a network of 35 Mexican presidents and their close collaborators (vertices); there exists an edge between two vertices if the corresponding two people have ties that could be either political, kinship, friendship or business ties.
33) ModMath Network (MMN; Batagelj & Mrvar, 2006): This is a network of 30 school superintendents (vertices) in Allegheny County, Pennsylvania, USA during the 1950s and early 1960s. There exists an edge between two vertices if at least one of the two corresponding superintendents has indicated the other person as a friend in a research survey conducted to see which superintendents (who are in office for at least a year) are more influential in effectively spreading some modern Math methods among the school systems in the county.
34) US Politics Books Network (PBN; Krebs, 2003): This is a network of 105 books (vertices) about US politics sold by Amazon.com around the time of the 2004 US presidential election. There exists an edge between two vertices if the corresponding two books were co-purchased by the same buyer (at least one buyer).
35) Primary School Contact Network (PSN; Gemmetto et al., 2014): This is a network of children and teachers (238 vertices) used in a study published in BMC Infectious Diseases, 2014. There exists an edge between two vertices if the corresponding persons were in contact for at least 20 seconds during the observation period.
36) Prison Friendship Network (PFN; MacRae, 1960): This is a network of 67 prison inmates (vertices) surveyed by John Gagnon in the 1950s regarding their sociometric choice. There exists an edge between two vertices if the inmate corresponding to at least one of them has listed the inmate corresponding to the other vertex as one of his/her closest friends.
37) San Juan Sur Family Network (SJN; Loomis et al., 1953): This is a network of 75 families (vertices) in San Juan Sur, Costa Rica, 1948. There exists an edge between two vertices if at least one of the corresponding families has visited the household of the family corresponding to the other vertex once or more.
… (Batagelj & Mrvar, 2006): a network of the authors (vertices) involved in the production of 295 articles for the Social Networks Journal since its inception until 2008; there is an edge between two vertices if the corresponding authors co-authored at least one paper published in the journal.
31) Author Facebook Network (AFB): This is a network of the 171 friends (vertices) of the author in Facebook. There exists an edge between two vertices if the corresponding people are also friends of each other.
38) Scotland Corporate Interlocks Network (SDI; Scott, 1980): This is a network of multiple directors (a director who serves on multiple boards) and companies (a total of 230 vertices) during 1904-05 in Scotland. There exists an edge between two vertices v_i and v_j if any of the following are true: (i) both v_i and v_j correspond to two different multiple directors who are on the board of at least one company; (ii) one of the two vertices corresponds to a multiple director and the other vertex corresponds to one of the companies on whose board the person serves.
39) Senator Press Release Network (SPR; Grimmer, 2010): This is a network of 92 US senators (vertices) during the period from 2007 to 2010. There exists an edge between two senators if they issued at least one joint press release.
40) Slovenian Magazine Network (SMN; Batagelj & Mrvar, 2006): This is a network of 126 different magazines (vertices); there exists an edge between two vertices if at least one reader (among a total of 100,000 readers) indicated that s/he reads the corresponding two magazines as part of a survey conducted in 1999 and 2000.
41) Soccer World Cup 1998 Network (SWC; …)
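The planned future work on theoretical models can be previewed with a short simulation: an Erdos-Renyi random graph is expected to be neutral (assortativity index close to 0) with respect to the degree centrality metric. The sketch below is an illustration with arbitrary parameters (n = 500, p = 0.03), not part of the paper's analysis:

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Erdos-Renyi G(n, p): each of the n*(n-1)/2 possible edges exists with probability p
n, p = 500, 0.03
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if random.random() < p]

degree = [0] * n
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Degree-based assortativity index: Pearson correlation over edge endpoints,
# with each undirected edge counted in both orientations. Since xs and ys
# are the same multiset, they share a common mean and variance.
xs = [degree[u] for u, v in edges] + [degree[v] for u, v in edges]
ys = [degree[v] for u, v in edges] + [degree[u] for u, v in edges]
m = sum(xs) / len(xs)
cov = sum((x - m) * (y - m) for x, y in zip(xs, ys))
var = sum((x - m) ** 2 for x in xs)
a_index = cov / var
```

With these parameters the index falls near zero, consistent with the neutrality of random networks reported by Newman (1999).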
G6PD Deficiency Does Not Enhance Susceptibility for Acquiring Helicobacter pylori Infection in Sardinian Patients Background Subjects with glucose-6-phosphate dehydrogenase (G6PD) deficiency may be more susceptible to infections due to impaired leukocyte bactericidal activity. The disorder is common in the Mediterranean area. The aim of this study was to investigate whether G6PD deficiency may be a risk factor for acquiring H. pylori infection. Methods We performed a retrospective study. Data from clinical records of 6565 patients (2278 men and 4287 women, median age 51, range 7‒94) who underwent upper endoscopy between 2002 and 2014 were collected. H. pylori status, assessed by histology plus rapid urease test or 13C-urea breath test, and G6PD status were also reported. A multiple logistic regression model was used to investigate the association between G6PD deficiency and H. pylori infection. Results Enzyme deficiency was detected in 12% (789/6565) of the entire cohort, and more specifically in 8.3% of men and in 14.0% of women. Overall, the proportion of patients positive for H. pylori was 50.6% and 51.5% among G6PD-deficient and non-deficient patients, respectively (χ² = 0.271; p = 0.315). Moreover, among G6PD-deficient and normal patients the frequency of previous H. pylori infection was similar. After adjustment for age and gender, the risk for acquiring H. pylori infection was similar in G6PD-deficient and normal patients. Only age was a strong, statistically significant risk predictor. Conclusions These results demonstrate for the first time that G6PD deficiency does not enhance patients' susceptibility to acquire H. pylori infection in Sardinia.

Introduction

Helicobacter pylori infection affects more than 50% of the world population and is associated with various disorders of the proximal gastro-intestinal tract such as chronic gastritis and duodenal ulcers, gastric adenocarcinoma and gastric lymphoma [1].
The infection is usually acquired in childhood [2,3], and risk factors are related to socio-economic status and living conditions [2-5]. Gastric colonization by the microorganism is also influenced by host defences, including the reactivity of immune cells [6] and the efficiency of mechanisms against oxidative challenge [7,8]. Although some of these conditions have been investigated in detail, a number of them remain largely unexplored. D-glucose-6-phosphate dehydrogenase (G6PD, D-glucose-6-phosphate: NADP oxidoreductase; EC 1.1.1.49) is a cytoplasmic enzyme that catalyzes the first step of the pentose phosphate pathway, which plays a key role in producing the extra-mitochondrial coenzyme nicotinamide-adenine dinucleotide phosphate (NADPH) as well as the ribose necessary to synthesize DNA [9]. G6PD deficiency is the most common inherited enzymatic disorder of red blood cells in humans. The G6PD gene maps to the long arm of the X chromosome (Xq28 locus); hence, the deficiency is transmitted in an X-linked pattern [10]. Therefore, total enzyme deficiency is observed in hemizygous men and homozygous women, whereas variable levels of enzyme activity are detected in heterozygous women, resulting from partial compensation by the normal allele and random inactivation of one X chromosome [11]. Carriers of the G6PD-deficient allele appear to have a selective advantage against malaria caused by Plasmodium falciparum [12]. For this reason, in areas of the world where malaria is endemic, the frequency of G6PD deficiency can be considerably increased. Among hundreds of mutant alleles associated with G6PD deficiency, the Mediterranean mutation (C→T transition at nucleotide 563 of the G6PD gene) has a frequency of about 12-24% in Sardinia, the highest in Italy [13,14]. The majority of G6PD-deficient individuals are entirely asymptomatic. However, subjects carrying the most common G6PD variants are subject to recurrent acute hemolytic anemia.
Hemolysis is triggered by infections, ingestion of Vicia faba (favism), and treatment with several pro-oxidant drugs. Deficient enzyme activity may impair protection from oxidative injury in cells other than erythrocytes [15]. More specifically, leukocytes from G6PD-deficient subjects show a reduced response to bacterial challenge [16-21]. Granulocytes carrying the G6PD Mediterranean variant displayed a 25% to 33% reduction in function in most studies [22,23], suggesting an increased susceptibility to bacterial infection. However, population studies on the incidence and prevalence of bacterial infections, and more specifically of H. pylori infection, in G6PD-deficient patients are lacking. Therefore, in this study we investigate a possible association between H. pylori infection and G6PD deficiency in a large patient cohort from Northern Sardinia.

Study population

Clinical records of patients scheduled for upper endoscopy at the Department of Internal Medicine, University of Sassari, Italy, from January 2002 to December 2014 were collected. Demographic data including gender and age were available. In the case of multiple esophago-gastro-duodenoscopies (EGDs) for the same patient within the given time period, only results from the first procedure were included in the analysis. Inclusion criteria. Only patients with a known H. pylori status were included in the study. The infection was assessed according to histological findings on the gastric biopsies plus a positive 13C-urea breath test (13C-UBT) or rapid urease testing. A minimum of four gastric biopsy specimens were available for each patient: two from the antrum, one from the angulus, and one from the corpus of the stomach. The presence of H. pylori and of active chronic gastritis, follicular gastritis, intestinal metaplasia, atrophy and dysplasia was assessed by an expert gastrointestinal pathologist, based on tissue morphology [5].
Since the purpose of this study was to assess the susceptibility to H. pylori infection in G6PD-deficient subjects (regardless of the type of gastritis), active gastritis, intestinal metaplasia, follicular gastritis and atrophy were reported as present if they were observed in at least one biopsy, irrespective of the site [5]. H. pylori infection and active gastritis associated with intestinal metaplasia and/or atrophy were considered to be a consequence of H. pylori infection. The presence of intestinal metaplasia and/or atrophy in the gastric biopsies without H. pylori infection was considered evidence of a past infection.

Ethical Considerations

An Institutional Review Board approval was obtained from the Comitato di Bioetica, Azienda Ospedaliero-Universitaria di Sassari (Prot. N° 2099/CE, 2014). Since only pre-existing charts were used, all patient records were de-identified before the analysis and there was no need to gather informed consent from participants.

G6PD assay

G6PD activity was determined in all patients with a quantitative assay based on the G6PD/6PGD ratio in erythrocytes, using a standard routine enzymatic assay. Genotyping analysis was not performed in G6PD-deficient patients.

Statistical analysis

All patients were stratified by age in 10-year intervals. The distribution of patients according to gender and age (at the time of EGD) was calculated and expressed as a percentage. Based on histological features, patients were classified as having: i) active H. pylori infection (chronic active gastritis positive for H. pylori); ii) past H. pylori infection (metaplasia and/or atrophy negative for H. pylori infection); and iii) long-standing H. pylori infection (metaplasia and/or atrophy positive for H. pylori infection). Results of G6PD activity were recorded as a dichotomous variable (deficient/normal), and mild or moderate G6PD activity was considered normal for the analysis. The association between G6PD and H.
pylori infection was expressed as unadjusted odds ratios (ORs) with their 95% confidence intervals (CIs). The statistical power was calculated using the online DSS Research Statistical Power Calculator [24], on the basis of an alpha error level of 5%, while testing a 5% difference in frequency between G6PD-deficient and non-deficient subjects. Moreover, a multiple logistic regression model was fitted while controlling for potential confounding variables. For each covariate, the regression coefficient and its standard error were calculated, as well as the OR and its 95% CI using the Wald formula (95% CI = OR^(1 ± 1.96·SE/β), which is equivalent to exp(β ± 1.96·SE)). The adjusted R² statistic was used to assess model fit. All statistical analyses were performed using SPSS statistical software (version 16.0, Chicago, IL, USA); p-values <0.05 were considered statistically significant.

Results and Discussion

A total of 6565 clinical records were available for the analysis (Table 1). The proportion of women was 65.3% (4287). Mean age was comparable between G6PD-deficient patients and controls (50.4 ± 16.9 vs 50.6 ± 17.3, p = 0.789). The entire study cohort was white Caucasian. The prevalence of H. pylori infection according to age decade and G6PD status is shown in Fig 1. Enzyme deficiency was recorded in 12% (789/6565) of the entire cohort, and more specifically in 8.3% of men and in 14.0% of women. Overall, the proportions of patients positive for H. pylori (active, past and long-standing infection) were 50.6% and 51.5% among G6PD-deficient and non-deficient patients, respectively, with no statistical difference (χ² = 0.271; p = 0.315). The statistical power given by our sample size was 84%, large enough to detect a 5% difference around an H. pylori frequency of about 50%. Table 2 displays the unadjusted ORs and their 95% CIs for acquiring H. pylori infection among patients with and without G6PD deficiency.
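The Wald interval described above can be computed directly from a fitted coefficient. A minimal sketch; the coefficient and standard error here are illustrative values (chosen to roughly reproduce an adjusted OR of 1.02 with 95% CI 0.83-1.25), not the study's fitted model:

```python
from math import exp

def wald_or_ci(beta, se, z=1.96):
    """Odds ratio and 95% Wald CI from a logistic-regression coefficient.
    OR = exp(beta); CI = exp(beta -/+ z*se), which is algebraically the
    same as OR^(1 -/+ z*se/beta)."""
    or_ = exp(beta)
    return or_, exp(beta - z * se), exp(beta + z * se)

# Illustrative (made-up) coefficient and standard error:
beta, se = 0.02, 0.105
or_, lo, hi = wald_or_ci(beta, se)
```

An interval that straddles 1 (as here) indicates no statistically significant association at the 5% level.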
Interestingly, G6PD deficiency seems to entail a risk for active or past H. pylori infection similar to that of patients without the enzyme deficiency. Patients with G6PD deficiency had a slightly increased risk of long-standing infection (OR = 1.21 [95% CI 0.80-1.83]), although not significantly so. The multiple logistic regression analysis is reported in Table 3. After adjustment for all covariates, G6PD deficiency was not associated with a significantly increased risk for acquisition of H. pylori infection (OR = 1.02, 95% CI 0.827-1.252). This lack of association was further confirmed in the subgroup of patients with past infection (G6PD deficient 104/494, 21.1% vs G6PD normal 794/3598, 22.1%; p = 0.643), suggesting that the clearance of infection was not affected by G6PD status. Similarly, no difference was detected in the subgroup of patients with long-standing infection according to G6PD status (G6PD deficient 28/418, 6.7% vs G6PD normal 166/2964, 5.6%; p = 0.368). In the regression model, the only strong significant predictor of H. pylori infection was age (OR = 1.81, 95% CI 1.572-2.090). Several loss-of-function mutations in the G6PD gene resulting in decreased erythrocyte enzymatic activity have been described. The distribution of the G6PD Mediterranean mutation in Sardinia varies according to the historical and geographical pattern of malaria and altitude [25,26], and the allele occurs in about 12% of the general population [13]. Since G6PD is a housekeeping enzyme, a number of studies have focused on its role in different blood cells and tissues [27-29]. Cultured neutrophils, fibroblasts and lymphocytes displayed low levels of enzyme activity (8-15% of normal) and a marked reduction in the NADPH/NADP+ ratio [28,29]. Several studies indicated that G6PD deficiency in leukocytes can result in chronic granulomatous disease with impaired host defense mechanisms against bacterial or fungal infection [30,31].
For example, G6PD-deficient epithelial cells in vitro showed a reduced tolerance to Staphylococcus aureus [18]. In one of the few studies in humans, Clark et al. reported a frequency of G6PD deficiency among patients hospitalized for infection in Iran that was significantly higher than in non-infected control groups (p < 0.05) [32]. Previous studies reported a high prevalence of H. pylori infection in Sardinia [3,33], making the island the ideal scenario to investigate the association between G6PD deficiency and H. pylori infection. However, our study failed to detect a significant difference in the risk for acquiring H. pylori infection across subgroups of patients with normal or deficient G6PD enzyme activity. More specifically, no difference was observed between patients who acquired H. pylori infection at any time in their life (i.e., whether still active or a past infection) and patients who never had contact with the pathogen, as demonstrated by biopsy samples negative for the bacteria. The logistic regression model clearly showed that the major predictor of H. pylori infection was age, confirming the birth cohort effect [3]. The lack of association persisted after adjustment for age and gender. Our results are consistent with the observation that G6PD-deficient mice are not more susceptible to infection than animals with normal enzyme activity [34]. It has been postulated that, in cells other than erythrocytes, intra-mitochondrial NADPH generation may be sufficient to compensate for the lack of its production in the pentose phosphate pathway, making G6PD deficiency in these cells of little clinical significance [35]. More interestingly, subgroup analysis showed that the prevalence of G6PD deficiency and normal G6PD activity was similar in patients with a past H. pylori infection, indicating a similar capacity for clearing the infection. Several limitations of our study should be taken into consideration.
First, the patient cohort investigated might pose some problems of sample representativeness, as the patients were referred to a GI section for dyspeptic symptoms. However, the genetic and sociocultural homogeneity of the study population is great enough to minimize the effect of residual confounders. In addition, the availability of gastric histological samples gave us the opportunity to accurately classify the type of H. pylori-associated gastritis.

Conclusion

Our results demonstrate for the first time that G6PD deficiency does not enhance patients' susceptibility to acquire H. pylori infection and does not affect its eradication in Sardinia.
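The subgroup comparisons reported in the Results (e.g., past infection in 104/494 G6PD-deficient vs. 794/3598 normal patients, p = 0.643) can be sanity-checked with a pooled two-proportion z-test. This is only an approximation to the χ² test presumably used in the study, so the p-value comes out close to, but not identical to, the reported one:

```python
from math import sqrt, erf

def two_proportion_z_test(k1, n1, k2, n2):
    """Pooled two-proportion z-test; returns z and the two-sided p-value.
    Equivalent to a chi-square test without continuity correction."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Standard normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# Past H. pylori infection by G6PD status (counts from the text)
z, p = two_proportion_z_test(104, 494, 794, 3598)
```

Either way the difference is far from significance, consistent with the study's conclusion that infection clearance is unaffected by G6PD status.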
Trop-2 protein as a therapeutic target: A focused review on Trop-2-based antibody-drug conjugates and their predictive biomarkers Antibody-drug conjugates represent a new class of highly potent antineoplastic drugs built by attaching a small molecule of an anticancer drug (payload) or another therapeutic agent to an antibody recognizing an epitope on the targeted cells. Trophoblast cell-surface antigen-2 (Trop-2) was originally described in trophoblasts and fetal tissues, but subsequently its overexpression has been demonstrated in various solid malignancies. Sacituzumab govitecan (SG), a conjugate of anti-Trop-2 antibody and SN-38 payload (an active metabolite of irinotecan), is the first in the class that has been clinically validated and approved by the Food and Drug Administration for the treatment of metastatic triple-negative breast (2020) and urothelial carcinomas (2021). In the current review, we summarize and critically appraise the most recent advances with regard to SG, emphasizing the predictive biomarker analysis. Trop-2 expression was originally described in trophoblasts (placenta) and fetal tissues (e.g. lungs). Its expression was subsequently described in the normal stratified squamous epithelium of the skin, uterine cervix, esophagus, and tonsil crypts [6]. However, many normal tissues lack or show low Trop-2 protein expression (e.g. colon, kidney, liver, lung, prostate, and breast) [1]. Figure 1 shows the lack of Trop-2 expression in normal ductal epithelium whereas neoplastic breast cells strongly overexpress Trop-2. Aberrant Trop-2 overexpression has been described in various solid cancers, including those with low Trop-2 expression in their normal counterparts (e.g. colorectal, renal, lung, and breast carcinomas) (reviewed in Goldenberg et al. [1] and Shvartsur and Bonavida [7]). 
Trop-2 plays a role in tumor progression, given its active interplay with several key molecular pathways traditionally associated with cancer development and progression [1]. High Trop-2 expression usually confers a poor outcome [8]. In a meta-analysis by Zeng et al. that included 2569 cancer patients (reflecting 13 common solid malignancies), increased Trop-2 expression was particularly associated with poor overall survival (OS) and disease-free survival outcomes in patients with gastrointestinal and gynecological malignancies [8]. Despite the marked limitations of this study (inconsistency in Trop-2 assessment and definition of Trop-2 positivity), the authors concluded that a frequent Trop-2 expression in the majority of solid tumors and its association with a poor prognosis provided a good rationale to target Trop-2 for therapeutic purposes [8].

ADC

ADCs represent a new generation of highly potent antineoplastic drugs [19]. These drugs are built by attaching a small molecule of an anticancer drug (payload) or another therapeutic agent to an antibody, using either a permanent or a labile linker. The antibody targets a specific antigen that is preferably overexpressed on malignant cells [20,21]. The linker connects the cytotoxic drug (payload) with the monoclonal antibody and is responsible for the ADC's maintenance and stability in the circulation [21]. Although the first ADCs were synthesized >55 years ago (coupling cyclic chemotherapeutics to immune gamma globulins) [22], their clinical relevance was limited until 2000, when Gemtuzumab ozogamicin was approved for CD33-positive acute myelogenous leukemia [23]. A decade ago, other ADCs also entered clinical practice, such as Brentuximab vedotin (for Hodgkin lymphoma and anaplastic large cell lymphoma), Trastuzumab emtansine (for HER2-positive breast carcinoma), and Inotuzumab ozogamicin (for B-cell acute lymphoblastic leukemia), among others [20,23-26]. Very recently (December 2019), the Food and Drug Administration (FDA) granted accelerated approval for trastuzumab deruxtecan for the treatment of unresectable or metastatic HER2-positive breast cancer [27]. In addition, enfortumab vedotin was approved for the treatment of locally advanced or metastatic urothelial carcinoma [23] (an overview of selected ADCs approved for breast and urothelial carcinomas is provided in Table 1). ADCs are a rapidly expanding class of agents, with 160 drugs included in preclinical and >70 in clinical trials [20,23]. Nine ADCs have already entered clinical practice [28]. Two recently approved indications of anti-Trop-2 ADCs are discussed in the following paragraphs.

ADC USING TROP-2 AS A HOMING TARGET

Two different ADCs targeting the Trop-2 protein have been synthesized: Sacituzumab govitecan (SG) and RN927C. SG is a conjugate of an anti-Trop-2 antibody and SN-38, while in RN927C the anti-Trop-2 antibody is coupled to a microtubule-inhibiting auristatin derivative [1] (Table 1). SG is the first in the class that has been clinically validated and approved by the FDA for heavily pretreated metastatic triple-negative breast and urothelial carcinomas (Table 1). Several other anti-Trop-2-based drugs are currently available preclinically but have not yet entered clinical trials. SG consists of an anti-Trop-2 antibody that is conjugated through a hydrolyzable linker to SN-38, an active metabolite of irinotecan [1,30-32]. Irinotecan is a camptothecin that inhibits the nuclear topoisomerase I enzyme, thereby inducing double-stranded DNA breaks during the S-phase of the cell cycle [33] (Figure 2). Enhanced SG uptake by the cancer cells is achieved by the conjugation of a higher number of SN-38 molecules to the Ig. This leads to a drug-to-antibody ratio of 7-8:1, administration of higher doses (10 mg/kg), and repeated therapy cycles of SG (days 1 and 8 of 21-day cycles) [33].

BREAST CANCER

In April 2020, the FDA granted accelerated approval to SG (TRODELVY, Immunomedics, Inc.) for patients with metastatic TNBC treated with at least two prior treatment modalities for their metastatic disease [34]. It is the first ADC approved by the FDA for relapsed or refractory metastatic TNBC and is also the first FDA-approved anti-Trop-2 ADC. The FDA decision was based on the efficacy of SG demonstrated in the IMMU-132-01 (NCT 01631552) clinical trial [35]. NCT 01631552 was a multicenter, single-arm study that enrolled 108 patients with metastatic TNBC. All the enrolled patients had received at least two prior therapies for their metastatic disease (median: 3; range: 2-10 therapies). The patients received SG 10 mg/kg intravenously on days 1 and 8 every 21 days. Tumor imaging was done every 8 weeks, and patients were treated until disease progression or intolerance to therapy. The two primary outcomes were overall response rate (ORR) and response duration. The ORR was 33.3% (95% CI: 24.6-43.1), while the median response duration was 7.7 months (95% CI: 4.9-10.8) [35]. The recommended SG dose was 10 mg/kg administered by intravenous infusion on days 1 and 8 every 21 days until disease progression or unacceptable toxicity [35]. The most severe side effect reported was myelotoxicity (anemia and neutropenia, including febrile neutropenia that affected 9% of the treated patients) [35]. This trial led to the accelerated FDA approval, which was based on the ORR and response-duration outcomes [34]. Further verification of the clinical benefit of SG in TNBC has just been reported in a confirmatory, randomized, phase 3 trial [36]. The trial included 468 patients with metastatic TNBC, excluding brain metastases. The patients were randomly assigned to either the SG or the classical chemotherapy group.
The objective response rate was significantly higher in the SG group (35%) than in the chemotherapy group (5%). Consequently, the two groups differed significantly in median progression-free survival (PFS) (5.6 months with SG vs. 1.7 months with chemotherapy) and median OS (12.1 months with SG vs. 6.7 months with chemotherapy). Although myelosuppression and diarrhea were more prevalent in the SG group, the authors reported no deaths directly related to the SG treatment [36]. Trop-2 expression has also been described in estrogen receptor (ER)-positive breast cancers [1,37-39], although it appears to be lower than in TNBC. This was observed both in breast cancer cell lines [40] and in breast tumor samples [14,38,39,41-43] (Figure 1). Recent data also indicate a promising therapeutic activity of SG in patients with the luminal (ER+) subtype of breast cancer [44]. Thus, a phase I/II single-arm basket trial involving 54 heavily pretreated patients with ER+/HER2- breast cancer revealed convincing therapeutic effects of SG [44]. At a median follow-up of 11.5 months, the ORR was 31.5%, the median duration of response (DOR) was 8.7 months, and the median PFS was 5.5 months, while the median OS was 12 months [44]. A new phase III clinical trial (TROPiCS-02 study, NCT03901339), evaluating SG versus standard treatment in ER+/HER2- metastatic breast cancers, has also been initiated recently and is expected to provide data in the near future [45].

SG IN UROTHELIAL CARCINOMA

Just a year after the first approval for TNBC, in April 2021, the FDA granted another accelerated approval for SG [46]. The drug was approved for patients with locally advanced and/or metastatic urothelial carcinomas who had previously received platinum-containing chemotherapy or immune checkpoint inhibitors (against the programmed cell death receptor PD-1 or its ligand PD-L1). The efficacy and safety of SG were evaluated in the TROPHY-U-01 trial (IMMU-132-06; NCT03547973) [47].
Patients received SG 10 mg/kg intravenously on days 1 and 8 of a 21-day treatment cycle. The main efficacy endpoints were ORR and DOR, both evaluated by independent review using the Response Evaluation Criteria in Solid Tumors (RECIST) 1.1. The confirmed ORR was 27%, with six patients (5%) having complete responses and 25 patients (22%) having partial responses [47]. The median DOR was 7.2 months, while the median OS was 10.9 months. The most common adverse reactions (grade ≥3) in patients receiving SG were neutropenia (35%), leukopenia (18%), diarrhea (10%), and febrile neutropenia (10%). In addition, 6% of the patients had to discontinue the treatment due to severe side effects of SG [47]. Interestingly, this trial included a routine assessment of uridine diphosphate glucuronosyltransferase (UGT1A1) gene polymorphisms [47]. Among other functions, this isoform of the UGT1 enzyme glucuronidates (inactivates) SN-38 in the liver [48]. The presence of the UGT1A1*28 variant, which causes reduced UGT activity, is associated with an increased risk of irinotecan toxicity. The study confirmed that the presence of UGT1A1*28 was associated with an increased risk of neutropenia in patients with urothelial carcinoma [47]. The recommended SG dose was 10 mg/kg once weekly on days 1 and 8 of 21-day treatment cycles until disease progression or unacceptable toxicity. This indication is approved under accelerated approval based on tumor response rate and DOR. Continued approval of SG for this indication will largely depend on further verification and description of the clinical benefits of SG in a confirmatory phase III trial (TROPiCS-04 trial, NCT04527991) [49]. This trial was initiated in November 2020 and is expected to involve approximately 600 patients with advanced/metastatic urothelial carcinomas. No predictive testing is planned for this trial, which is currently in the recruitment phase [49].
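As a side note on the response rates quoted in the trials above: a 95% confidence interval such as the one reported for the 108-patient TNBC study (ORR 33.3%, 95% CI 24.6-43.1, i.e., 36 responders of 108) can be approximated directly from the raw counts. A minimal Python sketch using the Wilson score interval (the published bounds are exact Clopper-Pearson intervals, so the numbers differ slightly; the function name is ours):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# ORR in the 108-patient metastatic TNBC study: 36/108 = 33.3%
lo, hi = wilson_ci(36, 108)
print(f"ORR 95% CI: {100*lo:.1f}-{100*hi:.1f}%")  # ~25.2-42.7% (Wilson)
```

The Wilson bounds (about 25.2-42.7%) bracket the published exact interval (24.6-43.1) closely, illustrating that both methods are driven by the same counts.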
PREDICTIVE BIOMARKERS OF SG EFFICACY

The use of standard molecular marker testing for cancer patients helps guide targeted therapy decisions and advances personalized care for these patients. Leading professional bodies (e.g., ASCO, ESMO, and CAP) have been continuously putting efforts into developing and improving the standards of molecular testing in oncology. ASCO's recent report on advances in cancer research and treatment further highlighted the importance of biomarker testing (both tissue- and blood-based) in predicting the response, cancer control, side effects, and resistance [50]. There is also emerging evidence indicating that the efficacy of ADCs depends on how their components (antibody, linker, and payload) interact with the cancer cells and tumor microenvironment [28]. Experimental preclinical in vitro and in vivo data indicate that cell lines strongly overexpressing Trop-2 protein are highly sensitive to SG [51-55]. Further evidence of the experimental efficiency of SG in Trop-2-positive cells was provided in the study by Cardillo et al. [51]. The authors transfected TROP-2 cDNA into the MDA-MB-231 cell line (a TNBC cell line), which normally exhibits low Trop-2 protein expression and is unresponsive to SG. The transfection resulted in a four-fold increase in Trop-2 expression, followed by a significantly higher sensitivity of the breast cancer cells to SG [51]. A Phase 1 trial of SG enrolled 25 patients with diverse solid tumors and included predictive immunohistochemistry (IHC) testing for Trop-2 protein; 16 cases had tissue available for Trop-2 IHC, revealing no significant association between the tissue expression of Trop-2 and response to SG [56]. The lack of association in the reported trial could be due to the small sample size. The Phase I/II clinical trial of Bardia et al.
[57] that reported the therapeutic benefit of SG in TNBC also included a routine IHC assessment of Trop-2 protein expression, with a cutoff of 10%. In contrast to the Phase 1 trial, the authors reported a good association between the IHC expression of Trop-2 and the clinical response to SG [57]. These results are in line with preclinical studies. However, two later-phase clinical trials on the Trop-2 ADC, including the Phase III clinical trial [35,36], did not routinely test the cancer specimens for predictive markers (neither Trop-2 nor topoisomerase I). This may be acceptable given that TNBC is considered a Trop-2 protein-positive cancer, with positivity rates of ~85% as reported by Goldenberg et al. [1,37] and 73% in the study of Ambrogi et al. [38]. However, Khoury et al. [58], using the same threshold (10%) for Trop-2 positivity as in the Phase I/II trial, reported that only 56% of TNBC were positive for Trop-2 protein. Furthermore, this study found a substantial discrepancy in Trop-2 expression between primary (49%) and metastatic TNBC (64%) [58]. Intratumoral heterogeneity of Trop-2 expression was also highlighted in the study of Ambrogi et al. [38]. An initial small pilot study (IMMU-132) involving only six heavily pretreated patients with urothelial carcinoma included Trop-2 IHC testing on archival urothelial carcinoma specimens and revealed a strong Trop-2 protein expression with a positive correlation to therapy response [59]. A larger, Phase I/II study on urothelial carcinoma (IMMU-132) involved 45 patients with metastatic urothelial carcinoma but did not include predictive biomarker testing [60]. A subsequent TROPHY study did not include biomarker testing either. Apart from the publicly available data in the Human Protein Atlas [61], very limited information is available concerning the Trop-2 status in urothelial carcinoma. The only two studies available on PubMed/MEDLINE are those of Avellini et al. [62] and Stepan et al. [6].
The first one is based on a small number of invasive urothelial carcinomas (n = 10), revealing significantly higher Trop-2 protein expression in invasive carcinomas than in non-invasive urothelial carcinomas and normal urothelium [62]. The latter included various normal and cancer tissues of human and mouse origin, indicating a common Trop-2 overexpression in urothelial carcinoma [6]. The active component (payload) of SG is SN-38, which is itself an active metabolite of irinotecan. Irinotecan is a well-known anti-proliferative drug that has been used for the treatment of metastatic colorectal cancer (CRC) [63]. The key molecular target of irinotecan is the topoisomerase I (Topo-1) protein, which belongs to the topoisomerase family of enzymes that are essential in unwinding coiled DNA to facilitate replication and transcription [64] (Figure 2). Topo-1 is a nuclear enzyme that is required for replication, unwinding DNA while preventing lethal strand breaks [65,66]. SN-38 is a cytotoxic drug that stabilizes the Topo-1/DNA covalent complex formed in the CRC cells (Figure 2). It induces irreversible double-strand breaks, leading to S-phase arrest and cell death: SN-38 traps the complexes on DNA, where they block advancing replication forks and prevent repair of the double-strand breaks [66-68]. Topo-1 protein has been found to be frequently overexpressed in various solid malignancies, including breast cancer [69,70]. A large study by Heestand et al., based on >3,000 breast cancer samples, revealed Topo-1 protein positivity in about 64% of cases [69], while our study in TNBC revealed Topo-1 overexpression in about 70% of the cases [70]. Heestand et al. also reported that 56% of urothelial carcinomas overexpressed Topo-1 [69], while our comprehensive theranostic study on urothelial bladder carcinoma reported a Topo-1 positivity rate of 63% [71].
However, clinical trials failed to confirm a therapeutic benefit of irinotecan alone in patients with advanced/metastatic urothelial carcinomas that were previously treated with one systemic chemotherapy regimen (cisplatin or carboplatin) [72]. Although classical cytotoxic drugs are distinguished from targeted drugs and are generally given to patients without prior biomarker testing, it may be reasonable to further explore the tumor Topo-1 status for its capacity to optimize the response to drugs such as SG. Our literature survey revealed very limited information in this regard. Cardillo et al. [30] provided solid evidence that SG is more efficient in homologous recombination repair (HRR)-proficient cancer cells with a high Trop-2 expression, as well as in HRR-deficient cancers with low to moderate Trop-2 protein expression [30]. A clinical study by Khoury et al. [58], utilizing the expression of both Trop-2 and Topo-1 in a cohort of primary and metastatic TNBC, revealed that ~30% of Trop-2-positive cancers were Topo-1 negative. No study (either preclinical or clinical) is currently available regarding the co-expression of Topo-1 and Trop-2 in urothelial carcinoma or other carcinomas. This is an opportunity to develop predictive double-biomarker testing to optimize therapy with complex drugs such as SG and other ADCs.

CONCLUSION

ADC development and clinical utility represent one of the most rapidly expanding fields in oncology, with nine ADCs currently approved for clinical practice. In addition, approximately 160 drugs are already included in preclinical and >70 in clinical trials. Although several ADCs, including the anti-Trop-2 ADC, have already demonstrated marked therapeutic activity in various hard-to-treat cancers, such as metastatic TNBC, HER2-positive, or urothelial carcinomas, more attention should be paid to the identification and development of predictive biomarkers to enhance their efficacy.
In addition, a more in-depth and comprehensive understanding of these complex drugs, including the selection of the cell surface targets, antibodies, cytotoxic payload, and the linker technology, will definitely enhance and optimize the efficacy of these promising anticancer agents.
Planar photonic crystal cavities with far-field optimization for high coupling efficiency and quality factor Different types of planar photonic crystal cavities aimed at optimizing the far-field emission pattern are designed and experimentally assessed by resonant scattering measurements. We systematically investigate the interplay between achieving the highest possible quality (Q) factor and maximizing the in- and out-coupling efficiency into a narrow emission cone. Cavities operate at telecommunications wavelengths, i.e., around ∼1.55 μm, and are realized in silicon membranes. A strong modification of the far-field emission pattern, and therefore a substantial increase of the coupling efficiency in the vertical direction, is obtained by properly modifying the holes around L3, L5 and L7 type PhC cavities, as we predict theoretically and show experimentally. An optimal compromise yielding simultaneously a high Q-factor and a large coupling to the fundamental cavity mode is found for a L7-type cavity with a measured Q ≃ 62000, whose resonant scattering efficiency is improved by about two orders of magnitude with respect to the unmodified structure. These results are especially useful for prospective applications in light emitting devices, such as nano-lasers or single-photon sources, in which vertical in- and out-coupling of the electromagnetic field is required. © 2010 Optical Society of America

OCIS codes: (230.5750) Photonic crystals; (350.4238) Nanophotonics and photonic crystals.

Received 1 Jun 2010; revised 25 Jun 2010; accepted 5 Jul 2010; published 14 Jul 2010. Optics Express, Vol. 18, No. 15, p. 16064 (19 July 2010). © 2010 OSA.

References and links
1. J. D. Joannopoulos, S. G. Johnson, J. N. Winn, and R. D. Meade, Photonic Crystals: Molding the Flow of Light (Princeton University Press, Princeton, 2008).
2. A. Badolato, K. Hennessy, M. Atatüre, J. Dreiser, E. Hu, P. M. Petroff, and A. Imamoǧlu, "Deterministic coupling of single quantum dots to single nanocavity modes," Science 308, 1158–1161 (2005).
3. D. Englund, D. Fattal, E. Waks, G. Solomon, B. Zhang, T. Nakaoka, Y. Arakawa, Y. Yamamoto, and J. Vučković, "Controlling the spontaneous emission rate of single quantum dots in a two-dimensional photonic crystal," Phys. Rev. Lett. 95, 013904 (2005).
4. S. Noda, M. Fujita, and T. Asano, "Spontaneous-emission control by photonic crystals and nanocavities," Nat. Photon. 1, 449–458 (2007).
5. S. Strauf, K. Hennessy, M. T. Rakher, Y.-S. Choi, A. Badolato, L. C. Andreani, E. L. Hu, P. M. Petroff, and D. Bouwmeester, "Self-tuned quantum dot gain in photonic crystal lasers," Phys. Rev. Lett. 96, 127404 (2006).
6. M. Notomi, A. Shinya, S. Mitsugi, E. Kuramochi, and H. Ryu, "Waveguides, resonators and their coupled elements in photonic crystal slabs," Opt. Express 12, 1551–1561 (2004).
7. M. Notomi, A. Shinya, S. Mitsugi, G. Kira, E. Kuramochi, and T. Tanabe, "Optical bistable switching action of Si high-Q photonic-crystal nanocavities," Opt. Express 13, 2678–2687 (2005).
8. M. Belotti, J. Galisteo-Lopez, S. De Angelis, M. Galli, I. Maksymov, L. C. Andreani, D. Peyrade, and Y. Chen, "All-optical switching in 2D silicon photonic crystals with low loss waveguides and optical cavities," Opt. Express 16, 11624–11636 (2008).
9. K. Srinivasan and O. Painter, "Momentum space design of high-Q photonic crystal optical cavities," Opt. Express 10, 670–684 (2002).
10. D. Englund, I. Fushman, and J. Vučković, "General recipe for designing photonic crystal cavities," Opt. Express 13, 5961–5975 (2005).
11. L. C. Andreani, D. Gerace, and M. Agio, "Gap maps, diffraction losses, and exciton-polaritons in photonic crystal slabs," Photon. Nanostruct. Fundam. Appl. 2, 103–110 (2004).
12. Y. Akahane, T. Asano, B. S. Song, and S. Noda, "High-Q photonic nanocavity in a two-dimensional photonic crystal," Nature 425, 944–947 (2003).
13. B. S. Song, S. Noda, T. Asano, and Y. Akahane, "Ultra-high-Q photonic double-heterostructure nanocavity," Nat. Mater. 4, 207–210 (2005).
14. E. Kuramochi, M. Notomi, S. Mitsugi, A. Shinya, T. Tanabe, and T. Watanabe, "Ultrahigh-Q photonic crystal nanocavities realized by the local width modulation of a line defect," Appl. Phys. Lett. 88, 041112 (2006).
15. C. Sauvan, Ph. Lalanne, and J. P. Hugonin, "Slow-wave effect and mode-profile matching in photonic crystal microcavities," Phys. Rev. B 71, 165118 (2005).
16. S. Combrié, A. De Rossi, Q. V. Tran, and H. Benisty, "GaAs photonic crystal cavity with ultra-high Q: microwatt nonlinearity at 1.55 μm," Opt. Lett. 33, 1908–1910 (2008).
17. K. Srinivasan, P. E. Barclay, M. Borselli, and O. Painter, "Optical-fiber-based measurement of an ultra-small volume high-Q photonic crystal microcavity," Phys. Rev. B 70, 081306(R) (2004).
18. P. E. Barclay, K. Srinivasan, and O. Painter, "Nonlinear response of silicon photonic crystal microresonators excited via an integrated waveguide and fiber taper," Opt. Express 13, 801–820 (2005).
19. S. G. Johnson, S. Fan, A. Mekis, and J. D. Joannopoulos, "Multipole-cancellation mechanism for high-Q cavities in the absence of a complete photonic band gap," Appl. Phys. Lett. 78, 3388–3390 (2001).
20. S.-H. Kim, S.-K. Kim, and Y.-H. Lee, "Vertical beaming of wavelength-scale photonic crystal resonators," Phys. Rev. B 73, 235117 (2006).
21. F. Römer and B. Witzigmann, "Spectral and spatial properties of the spontaneous emission enhancement in photonic crystal cavities," J. Opt. Soc. Am. B 25, 31–39 (2008).
22. M. Larque, T. Karle, I. Robert-Philipp, and A. Beveratos, "Optimizing H1 cavities for the generation of entangled photon pairs," New J. Phys. 11, 033022 (2009).
23. N.-V.-Q. Tran, S. Combrié, and A. De Rossi, "Directive emission from high-Q photonic crystal cavities through band folding," Phys. Rev. B 79, 041101(R) (2009).
24. M. Toishi, D. Englund, A. Faraon, and J. Vučković, "High-brightness single photon source from a quantum dot in a directional emission nanocavity," Opt. Express 17, 14618–14626 (2009).
25. M. McCutcheon, G. W. Rieger, I. W. Cheung, J. F. Young, D. Dalacu, S. Frédérick, P. J. Poole, G. C. Aers, and R. Williams, "Resonant scattering and second-harmonic spectroscopy of planar photonic crystal microcavities," Appl. Phys. Lett. 87, 221110 (2005).
26. M. Galli, S. L. Portalupi, M. Belotti, L. C. Andreani, L. O'Faolain, and T. F. Krauss, "Light scattering and Fano resonances in high-Q photonic crystal nanocavities," Appl. Phys. Lett. 94, 071101 (2009).
27. P. Deotare, M. McCutcheon, I. Frank, M. Khan, and M. Loncar, "High quality factor photonic crystal nanobeam cavities," Appl. Phys. Lett. 94, 121106 (2009).
28. D. Gerace and L. C. Andreani, "Effects of disorder on propagation losses and cavity Q-factors in photonic crystal slabs," Photon. Nanostruct. Fundam. Appl. 3, 120–128 (2005).
29. L. C. Andreani and D. Gerace, "Photonic crystal slabs with a triangular lattice of triangular holes investigated using a guided-mode expansion method," Phys. Rev. B 73, 235114 (2006).
30. Commercial FDTD software from Lumerical Solutions Inc. has been partly used for the 3D FDTD simulations reported in this work.
31. L. O'Faolain, X. Yuan, D. McIntyre, S. Thoms, H. Chong, R. M. De La Rue, and T. F. Krauss, "Low-loss propagation in photonic crystal waveguides," Electron. Lett. 42, 1454–1455 (2006).
32. A. Witvrouw, B. Du Bois, P. De Moor, A. Verbist, C. Van Hoof, H. Bender, and K. Baert, "A comparison between wet HF etching and vapor HF etching for sacrificial oxide removal," Proc. SPIE 4174, 130–141 (2000).
33. D. Gerace, H. E. Türeci, A. Imamoǧlu, V. Giovannetti, and R. Fazio, "The quantum optical Josephson interferometer," Nat. Phys. 5, 281–284 (2009).
Introduction

Planar photonic crystal (PPhC) cavities [1] have become a fundamental tool in modern photonics research, either for investigating basic cavity quantum electrodynamics effects [2-5] or for developing prospective nanophotonic devices for all-optical integration [6-8]. One key feature of such nanocavities is the figure of merit represented by the ratio Q/V_eff between the cavity mode quality factor and its effective confinement volume. In fact, ultra-high Q-factors have been proposed [9-11] and experimentally achieved [12-14] with a variety of different PPhC cavity designs, together with unprecedentedly small (diffraction-limited) V_eff. However, even if such ultra-high-Q cavities are very well suited for in-plane applications on photonic chips, a major issue might be represented by their off-plane radiation pattern, which makes vertical in- and out-coupling difficult. Q-factor optimization mostly relies on the widespread strategy of reducing the Fourier components of the cavity mode profile within the light cone to achieve a "gentle confinement" [12] by means of a local geometry adjustment. Quite intuitively, this corresponds to reducing the coupling to radiative modes, which is the major source of losses in such systems. The Q-factor optimization can also be interpreted in terms of Bloch mode profile matching [15]. Typical cavity designs that have been proposed and have become widely used among the groups involved in nanophotonic research in the last few years are: Ln cavities [12], with n missing holes along the ΓK direction in a triangular lattice; heterostructure cavities [13]; and modulated-width cavities [14], in which a localized shifting of holes along a photonic crystal waveguide produces a strong field confinement in the propagation direction. This approach is particularly well suited for fully integrated devices in which efficient coupling of electromagnetic energy into the cavity region can be achieved through evanescent excitation
from an access waveguide [13,14,16] or a fiber taper [17,18]. However, many applications and research directions employing PPhC cavities require an optimized out-coupling efficiency (e.g., in emission experiments, such as photoluminescence from active media within the PPhC), an optimized in-coupling efficiency (e.g., in optically pumped nanophotonic devices) along the direction orthogonal to the slab plane, or both (e.g., when excitation and emission are collected through the same optical channel). Nevertheless, very few researchers have considered possible strategies for optimizing the far-field coupling of PPhC cavity modes, after some early attempts to achieve a high-Q cavity design by reducing the out-coupling efficiency and simultaneously manipulating the far-field profile [19]. Among them, Kim et al. [20] have discussed far-field optimization of hexapole modes in H1 cavities (i.e., a single missing hole in a triangular lattice) by properly placing a distributed Bragg reflector below the membrane to obtain constructive interference of the vertically emitted beam. To the best of our knowledge, Ref. [21] is the first systematic numerical study of simultaneous Q-factor and far-field optimization, mainly for the H1-type cavity (see, e.g., Fig. 3 in the latter work). Optimization of the H1-type cavity far-field has also been addressed more recently in Ref. [22]. However, H1-type cavities intrinsically suffer from a relatively limited maximum achievable Q-factor. Working on cavity modes with a larger theoretical Q, Tran et al. have proposed a grating approach to concentrate light emission from an L5 cavity around the vertical direction [23], thus enhancing the out-coupling efficiency (with excitation in the plane through an access waveguide). The same concepts have been used for an L3-type cavity in Ref.
[24] to demonstrate an efficient single-photon source with a single quantum dot. In the latter, a useful collection efficiency in the far-field could be achieved together with Q-factors on the order of 10^4.

Here, we experimentally verify a systematic approach to simultaneously achieving the highest possible Q-factor and an enhanced in- and out-coupling efficiency. We employ PPhC cavities of the Ln type fabricated on a standard silicon-on-insulator (SOI) chip, targeting operation at telecommunications wavelengths (i.e., around 1.55 μm). We elaborate on the simple idea described in Refs. [23,24], but explore the entire parameter space of the PPhC cavity design in order to find the best possible compromise. Our modelling is confirmed by an experimental characterization of both the Q-factor and the coupling efficiency through resonant scattering measurements [25-27].

The paper is organized as follows: in Sec. 2 we recall the design strategy for Q-factor and far-field optimization; in Sec. 3 we describe the sample fabrication and show our experimental results on the optical characterization of the fabricated devices, together with a discussion of the main outcome in Sec. 4. Finally, we analyze the implications of this work in Sec. 5.

Theoretical modelling: design and simulation

The principle of far-field optimization through the grating effect [23] relies on the consideration that Fourier components lying outside the light cone can be folded back to k = 0, i.e., around the direction normal to the sample surface, by superimposing a lattice with twice the periodicity of the underlying photonic crystal structure. This way, leakage will be mainly determined by the harmonic components oscillating with a wave-vector k ∼ π/a of the original lattice, which in the Brillouin zone of the modified lattice with period 2a are folded exactly at k = 0. For an L3-type cavity, the principle is illustrated in Fig.
1a. We start from a Q-optimized L3 cavity, with the nearby holes along ΓK shifted and shrunk by ∆x/a = 0.16 and (r′ − r)/a = −0.06, respectively (see also Ref. [11]). In Fig. 1a we show which holes around the cavity region can be modified in order to superimpose a second lattice with periodicity 2a. As anticipated in Ref. [23], this procedure of far-field optimization is very robust with respect to disorder [28]. In fact, the Fourier components of the L3 cavity mode are always partly folded to k = 0, no matter which kind of perturbation one performs on the selected holes. The far-field emission from the PPhC obviously reflects the Fourier spectrum of its real-space electric field intensity profile. We concentrate, in this Section and in the following ones, on modes of even parity with respect to mirror symmetry through the horizontal plane bisecting the PPhC membrane (TE-like modes) [11]. A systematic analysis of both the Q-factor and the far-field out-coupling efficiency of the fundamental cavity mode is required in order to find the structural parameters that realize the best compromise. To this end, we show in Fig. 1b the calculated Q-factor as a function of the modified holes' radius change, ∆r″ = r″ − r: positive values (∆r″ > 0) give larger holes, while negative values (∆r″ < 0) give smaller holes. The Q-factor calculations have been performed with a guided-mode expansion (GME) method [29], which allows a fast scanning of the structure parameters. For a selection of modified hole radii, we show in the same plot the calculated out-coupling efficiency in the far-field. For the latter simulation, we employed a commercial three-dimensional finite-difference time-domain (3D FDTD) software package [30]. We simulated the excitation of the cavity mode with an internal dipole source, recorded the near-field intensity at the sample surface, and applied a standard near-to-far-field projection [24,30]. A few normalized far-field patterns are shown in Fig.
1c, clearly displaying the evolution as a function of ∆r″/a. Finally, for the collection efficiency calculation we assumed a filter in the far-field corresponding to a numerical aperture NA = 0.5 (as usually employed in experiments), i.e., a collection angle θ ≃ ±30° around the direction normal to the PPhC surface. In practice, we simulate the collection efficiency of the objective by integrating over a definite solid angle around normal incidence, corresponding to the given NA, which leads to the quantity defined as η_out. We assume the filter to be orthogonally polarized with respect to the cavity axis, i.e., parallel to the dominant field component of the PPhC cavity mode.

The results show that modified PPhCs of the L3 type should improve the out-coupling efficiency by a factor of ∼3.5 as compared to a bare optimized L3 (corresponding to ∆r″/a = 0), and by about a factor of ∼7 as compared to the cavity with ∆r″/a ∼ −0.01, which is the one with the minimum out-coupling efficiency. Interestingly, and somewhat counter-intuitively, for the L3-type cavity the behavior of both the Q-factor and the out-coupling efficiency is slightly asymmetric with respect to ∆r″, showing a minimum collection (maximum Q-factor) for ∆r″ < 0. The latter effect is also evident from the far-field intensities of Fig. 1c. As a final comment on these results, we notice that the calculated out-coupling efficiency gain generally comes at the expense of a Q-factor reduction. A discussion of the figure of merit leading to the best trade-off between these two quantities will be presented in Sec. 4.
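The band-folding principle invoked in this Section can be illustrated with a toy numerical sketch: a one-dimensional cavity mode oscillating at the band-edge wave-vector π/a has essentially no Fourier weight near k = 0, but multiplying it by a weak 2a-periodic modulation (mimicking the grating perturbation) folds part of its spectrum back inside the light cone. The Gaussian envelope width, modulation depth, and light-cone radius below are arbitrary illustrative assumptions, not the paper's 3D FDTD calculation:

```python
import numpy as np

a = 1.0           # lattice constant (arbitrary units)
sigma = 4.0 * a   # toy Gaussian envelope of the cavity mode (assumption)
N, L = 4096, 80.0 * a
x = np.linspace(-L / 2, L / 2, N, endpoint=False)

# Toy cavity mode: standing wave at the band-edge wave-vector pi/a
envelope = np.exp(-x**2 / (2 * sigma**2))
mode = envelope * np.cos(np.pi * x / a)

# Superimposed 2a-periodic modulation (the "grating" perturbation)
eps = 0.2
mode_mod = mode * (1.0 + eps * np.cos(np.pi * x / a))

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
k_cone = 0.3 * np.pi / a   # toy light-cone radius (assumption)

def in_cone_fraction(field):
    """Fraction of spectral power with |k| inside the toy light cone."""
    P = np.abs(np.fft.fft(field))**2
    return P[np.abs(k) < k_cone].sum() / P.sum()

f0 = in_cone_fraction(mode)       # ~0: weight sits at k = +/- pi/a
f1 = in_cone_fraction(mode_mod)   # finite: weight folded back to k ~ 0
print(f0, f1)
```

The modulated profile acquires a component at k = 0 of relative amplitude eps/2, so its in-cone power fraction jumps from numerically negligible to roughly (eps/2)^2 of the total, in line with the qualitative folding argument above (the actual gain in a real cavity depends on the full 3D mode profile).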
Sample fabrication and optical measurements

PPhCs are fabricated on a standard SOITEC silicon-on-insulator wafer, with a nominal 220 nm device layer on a 2 µm buried oxide, using electron-beam lithography (hybrid ZEISS GEMINI 1530/RAITH ELPHY system) and reactive ion etching with a CHF3/SF6 gas mixture (see [31] for details). The buried oxide layer underneath the photonic-crystal slab was selectively underetched using a vapor hydrofluoric acid method [32] to leave the photonic crystal section as a suspended silicon membrane. The schematic structure designs and the holes to be modified for far-field optimization are represented in Fig. 2a. The lattice constant for all devices was 420 nm, with nominal hole radius r/a ∼ 0.28, and the dimensionless parameters ∆x/a and ∆r′/a were also held constant. The modified holes' radii have been reduced/increased in steps of 3 nm, i.e. from ∆r″ = −21 nm to ∆r″ = +21 nm. The exposure conditions were carefully chosen to allow such precise increments in hole radii. As L3-type PPhCs have maximum Q-factors on the order of 10^5, we have also designed and fabricated L5- and L7-type PPhC cavities (i.e., 5 and 7 missing holes along ΓK), which nominally have even larger Q-factors. For such cavity types, similar far-field optimization principles hold, promising useful coupling efficiencies at possibly higher Q-factors. Figure 2b shows some examples of the fabricated modified PPhCs. The modified holes are also visible in the SEM images. Optical characterization of the PPhC devices is performed by resonant scattering (RS) from the sample surface. The technique is illustrated in Fig. 3a and detailed in Ref.
[26]. Briefly, it consists of measuring reflectance at normal incidence from the PPhC in a crossed-polarization geometry defined by a polarizer (P) and an analyzer (A). The cavity must be oriented at 45° with respect to both P and A in order to achieve simultaneous coupling of the incoming and outgoing polarizations to the fundamental cavity mode, therefore maximizing the resonant signal over the background. Asymmetric Fano lineshapes are in general observed and can be fitted with the function

F(ω) = A_0 + F_0 [q + 2(ω − ω_0)/Γ]^2 / [1 + 4(ω − ω_0)^2/Γ^2],    (1)

where q is the Fano parameter, which determines the asymmetry of the lineshape, and A_0 and F_0 are constant factors. The quality factor is determined as Q = ω_0/Γ. Notice that for q ≫ (ω − ω_0)/Γ the Fano lineshape reduces to a symmetric Lorentzian. In this case, the quantity F_0 q^2 represents the intensity of the RS signal at resonance with the cavity mode. A typical RS spectrum is shown in Fig. 3b together with the Fano lineshape fit. The Q-factors directly extracted from the RS measurements are reported in Fig. 3c and compared to the GME calculations in Fig. 3d. Very good agreement can be noticed for all the measured devices. For the L3-type PPhC, the maximum Q-factor occurs for ∆r″ < 0, as already anticipated in Fig. 1 and here confirmed experimentally. The maximum theoretical Q-factor (Q ∼ 10^6) is for the L7-type cavity with ∆r″ = 0. This is confirmed experimentally with the measured Q ∼ 4 × 10^5, only a factor of ∼2 lower than the one predicted for the ideal L7 PPhC cavity. The RS measurements have also been used to give a qualitative estimation of the coupling efficiency of our devices. In fact, the quantity F_0 q^2 in Eq.
(1) is proportional to the light intensity that has been coupled to the cavity mode and reflected back to the detector in crossed polarizations [24]. To normalize this quantity, we determine the intensity I of the incident light by replacing the sample with a nearly ideal dielectric mirror and measuring the reflected intensity under the same focusing conditions but with parallel polarizations. Thus, we define the RS efficiency as F_0 q^2/I. The latter quantity is taken as a measure of the cross-polarized scattering due to resonant coupling with the cavity mode (see Fig. 3b).

A few representative RS spectra are shown in Fig. 4a for the L3-type PPhC with different hole modifications. The RS efficiencies, η_RS = F_0 q^2/I, are reported in Fig. 4b for the L3 cavities, together with the corresponding Q-factors. The latter figure should be compared to the theoretically predicted behavior shown in Fig. 1b, which is qualitatively well reproduced by the experimental curves. In particular, the minimum RS efficiency occurs in correspondence with the maximum Q-factor, as anticipated in Fig. 1b. For the L3-type cavity, this happens theoretically for ∆r″/a ∼ −0.003; in the experiment, it is the cavity with ∆r″ = −3 nm that simultaneously displays the largest Q-factor and the smallest RS efficiency.

In order to complete our analysis, we show in Fig. 5a the measured RS efficiencies for the whole series of L3, L5, and L7 devices. For all the devices, we refer to their fundamental TE-like cavity mode. In general, the behavior of the RS efficiency as a function of the hole modification is analogous for all three series of modified cavities, showing a pronounced minimum close to the unmodified cavity and a rapid increase for both positive and negative values of ∆r″. A quantitative comparison between the different devices to infer the best trade-off can be made directly by looking at the relevant figure of merit, i.e. the product of the Q-factor (data reported in Fig.
3c) and the RS efficiency, which is shown in Fig. 5b. From this plot, we can directly infer that an optimal compromise between Q-factor and RS efficiency is reached for the L7-type cavity with ∆r″ = 6 nm. For this device, we measured Q ∼ 62000 and an RS efficiency η_RS ∼ 16%, improved by about two orders of magnitude with respect to the unmodified L7 cavity (i.e., the one with ∆r″ = 0). The figure of merit plotted in Fig. 5b

Coupling efficiencies

A direct quantitative comparison between the measured RS efficiency and a theoretically modeled coupling coefficient is a nontrivial task, requiring specific simulation of the RS configuration with in- and out-going focused Gaussian beams, which implies a significant increase of convergence issues and computational effort. Moreover, even in the presence of such accurate modelling, extraction of the absolute coupling efficiency to the cavity mode from the RS measurements would not be straightforward. This is due to the combined effect of the specific experimental geometry and the polarization properties of the cavity mode itself, which depend nontrivially on the holes' modification, which introduces scattering of field components both parallel and perpendicular to the cavity axis (as we have experimentally verified). This more advanced analysis is beyond the scope of the present manuscript and is left for future work.

However, the key figure of merit quantifying the best trade-off between Q-factor and coupling coefficient to the cavity mode can be identified without the need for such an analysis. This is demonstrated in Fig. 5b, where the measured RS efficiency can be taken as an indicative measure of the real coupling efficiency from the external world to the cavity mode, along the lines already reported in previous work [24]. Thus, we can give a qualitative interpretation of the experimental data shown in Fig.
5 by using our FDTD results obtained by exciting the cavity mode through an internal dipole source. To this end, and to approximate the experimental situation as closely as possible, we have assumed a convolution of the normalized cavity mode far-field profile with a (normalized) Gaussian obtained from the near-to-far-field propagation of a spot corresponding to the NA used experimentally. The out-coupling efficiency is finally calculated by filtering this convolution at an angle corresponding to the same numerical aperture (standard NA = 0.5, i.e. θ ∼ ±30° around normal incidence), thus mimicking the finite spatial extension of the collection lens. This quantity is defined as the "filtered" out-coupling efficiency, η_FDTD. Results are shown in Fig. 6a for the simulated cavities, corresponding closely to the fabricated devices. Although a comparison of the absolute values reported in Figs. 5a and 6a is not truly justified, we immediately notice that the qualitative behaviors compare fairly well across the entire parameter range. In particular, pronounced minima occur close to the unmodified L5 and L7 cavities, the first cavity type showing an even lower coupling efficiency. The latter effect is very well evidenced both in experiment and in theoretical modelling. The figure of merit giving information on the best trade-off between Q-factor and out-coupling efficiency, Q × η_FDTD, is reported in Fig. 6b. Also in this case, an overall qualitative agreement between theoretical modelling and experimental data can be recognized. In particular, the L7 cavity shows the best trade-off for values of ∆r″ slightly larger than zero. For the L7 cavity parameters that yield an optimal compromise between Q-factor and RS efficiency (i.e. ∆r″ = 6 nm), theoretical modelling predicts Q ≃ 10^5 (see Fig.
3d) and a filtered out-coupling efficiency η_FDTD ∼ 50%. In summary, both experiment and theoretical modelling present the same qualitative behavior of the figure of merit for all the analyzed cavities, yielding concurring values for the cavity geometry giving the best trade-off between Q-factor and coupling efficiency.

Conclusions

We have designed, fabricated, and characterized a series of silicon PPhC cavities with modified geometry to improve the far-field coupling of cavity modes to an incoming/outgoing beam at telecom wavelengths. A systematic investigation of L3, L5, and L7 cavity geometries by means of guided-mode expansion and 3D FDTD simulations allows us to quantify the Q-factors and the out-coupling efficiency. Measurements of cavity modes by means of resonant light scattering with crossed polarizations yield the cavity Q-factors, which agree very well with the theoretical calculations. Such measurements also yield the RS efficiency, which is strongly enhanced for far-field optimized cavities with suitably modified surrounding holes. Our results demonstrate that far-field optimized PPhC cavities can simultaneously have high coupling efficiency and high quality factors. A new, relevant figure of merit has been considered to this end, namely the product of the experimentally determined Q-factor and RS efficiency. In particular, an optimal compromise was found for an L7 cavity with modified holes' radii increased by 6 nm, for which Q ∼ 60000 and an RS efficiency improved by more than 2 orders of magnitude with respect to the unmodified cavity were experimentally measured. The present results can be important for the realization of efficient nano-lasers and single-photon sources, as well as for the implementation of recent proposals with multi-cavity devices [33], in which high Q-factors and good in- and out-coupling efficiency are simultaneously required in a PPhC-based architecture.

Fig. 1.
(a) Schematic of the far-field optimized PPhC cavity of the L3-type. Holes with red edges are shrunk and shifted to optimize the Q-factor. Dark holes are modified to increase the vertical out-coupling. (b) Calculated Q-factor and out-coupling efficiency (η_out) as a function of the filled holes' radius enlargement. Parameters of the basic PPhC structure are: membrane thickness d = 220 nm, lattice constant a = 420 nm, photonic crystal holes' radius r/a = 0.265, refractive index of the dielectric slab n_diel = 3.46, red holes shift ∆x/a = 0.16, shrink ∆r′/a = −0.06. (c) A selection of calculated far-field patterns (electric field intensity profile, |E|^2) corresponding to the labeled numbers on the efficiency plot (see numbers in panel b). Field intensities are normalized to the total emitted power in the vertical half-space. Concentric circles correspond to θ = 20°, 30°, 40°, 50°, 60°, 90°, from the inner to the outer one, respectively.

Fig. 2. (a) Schematic pictures of the fabricated PPhC devices: 3, 5, and 7 missing holes define the L3-, L5-, and L7-type cavities, respectively. Red holes are shifted (∆x/a = 0.16) and shrunk (∆r′/a = −0.06) for Q-factor optimization, while dark holes are modified for far-field optimization. (b) SEM images of 3 fabricated devices on silicon membranes. Holes corresponding to the filled circles in (a) are enlarged by ∆r″ = 21 nm in these images. The lattice constant is a = 420 nm for all the investigated PPhC devices.

Fig. 4. (a) Sample spectra from resonant scattering measurements on the fundamental mode of the L3-type PPhC. (b) Q-factors and RS efficiencies extracted from the measured data in (a) and plotted as a function of ∆r″, to be compared to Fig. 1(b).

Fig. 6.
(a) Modelling of the collection efficiency for the L3, L5, and L7 devices (as obtained from 3D FDTD simulations) for an objective with NA = 0.5, filtered with a normalized Gaussian spot propagated into the far field whose divergence angle corresponds to the nominal NA (the result is defined as η_FDTD); (b) the corresponding figures of merit (Q × η_FDTD) obtained from the calculated Q-factors (Fig. 3d) for the experimentally realized values of ∆r″.
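The Fano fit of Eq. (1), from which Q = ω_0/Γ and the resonant RS intensity F_0 q² are extracted throughout the paper, can be sketched with a standard least-squares fit. A minimal sketch using scipy.optimize.curve_fit; the lineshape is written in the common Fano form consistent with the limits quoted in the text (an assumption, since the exact fit routine is not reproduced in the excerpt):

```python
import numpy as np
from scipy.optimize import curve_fit

def fano(omega, a0, f0, q, omega0, gamma):
    """Fano lineshape: constant background a0 plus an asymmetric resonance.
    At omega = omega0 it gives a0 + f0*q**2, and for large q it tends to a
    symmetric Lorentzian, matching the limits discussed in the text."""
    x = 2.0 * (omega - omega0) / gamma
    return a0 + f0 * (q + x) ** 2 / (1.0 + x ** 2)

def fit_rs_spectrum(omega, signal, p0):
    """Fit a resonant-scattering spectrum; return Q = omega0/gamma and the
    resonant intensity F0*q**2 used for the RS-efficiency normalization."""
    popt, _ = curve_fit(fano, omega, signal, p0=p0)
    a0, f0, q, omega0, gamma = popt
    return {"Q": omega0 / gamma, "F0q2": f0 * q ** 2}
```

Dividing the fitted F0q2 by the incident intensity I measured on a reference mirror then gives η_RS = F_0 q²/I as defined in the text.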
6,564.6
2010-07-19T00:00:00.000
[ "Engineering", "Physics" ]
Evaluation of power consumption of Spanke Optical Packet Switch

The power consumption of an Optical Packet Switch equipped with an SOA-technology-based Spanke switching fabric is evaluated. Sophisticated analytical models are introduced to evaluate the power consumption versus the offered traffic, the main switch parameters, and the used device characteristics. The impact of Amplified Spontaneous Emission (ASE) noise generated by a transmission system on the power consumption is investigated. As a matter of example, for 32 × 32 switches supporting 64 wavelengths and an offered traffic equal to 0.8, the average energy consumption per bit is 5.07 · 10^−2 nJ/bit, and it increases if the ASE noise introduced by the transmission systems is increased.

I. INTRODUCTION

Energy efficiency can be considered one of the biggest challenges in a large part of industrial and research fields. This arises from the need to reduce the energy-related expenses of enterprises, industries, and residential buildings, while keeping an eye on targets for the reduction of greenhouse gas emissions [1]. For example, as shown in [2], the energy consumption of the Telecom Italia network in 2006 reached more than 2 TWh (about 1% of the total Italian energy demand), increasing by 7.95% with respect to 2005 and by 12.08% with respect to 2004. Another explanatory example is represented by British Telecom, which absorbed about 0.7% of the total UK energy consumption in the winter of 2007, making it the biggest single power consumer in the nation [2]. Moreover, as evidenced in [3], about 10% of the UK's entire power consumption in 2007 was related to operating IT equipment. Research has been initiated in recent years for energy saving of the Internet. This effort has been called "Greening the Internet". Several promising technologies have been proposed, from the component level to the network level [4].
Other solutions are based on power-aware system design, which leads to a reduction of the power consumption of electronic routers. The optical switch [5]-[6] has long been considered the primary candidate for replacing electronic routers. Lacking optical buffers, bufferless Optical Packet Switches are promising nodes for reducing the power consumption [8]. To solve output packet contentions they use the wavelength domain. Contending packets are wavelength converted by using Wavelength Converters (WC). Due to the high power consumption of WCs, especially with increasing bit-rate, Optical Packet Switch (OPS) architectures with shared WCs have been proposed [8]. In some cases they allow us to reduce the power consumption by 80% with respect to an OPS in which no WC sharing is performed [8]. In this paper we propose an analytical model to evaluate the average energy consumption per bit of an Optical Packet Switch equipped with a Spanke switching fabric realized in Semiconductor Optical Amplifier (SOA) technology. Sophisticated analytical models are introduced to evaluate the power consumption of the devices, in particular the SOAs, needed to realize the switching fabric. The introduced models allow us to evaluate the impact that Amplified Spontaneous Emission (ASE) noise, generated by a transport system, has on the SOA power consumption due to SOA gain saturation. By means of these models, we are able to evaluate the average energy consumption per bit of the Spanke switch as a function of the main system and traffic parameters and versus the characteristics of the transmission system (length, number of amplifiers, . . . ). The remainder of the paper is organized as follows. Section II describes the Spanke switch. An analytical model evaluating the average energy consumption per bit in Spanke switches versus the offered traffic, the switch parameters, and the characteristics of the transmission system is described in Section III.
The main numerical results are illustrated in Section IV, where we provide some results on the power consumption of the Spanke switch. Finally, Section V provides some final remarks and concludes the paper.

II. SPANKE OPTICAL PACKET SWITCH

The studied general switching architecture is reported in Fig. 1. It is equipped with N input/output fibers (IF/OF), where each IF/OF supports M wavelength channels. Let λ_i (i = 0, . . . , M − 1) be the wavelengths carried on each OF. In order to save power, the OPS is equipped with fully shared Wavelength Converters (WC). Packets not requiring wavelength conversion are directly routed towards the Output Fibers (OF). On the contrary, packets requiring wavelength conversion are directed to a pool of WCs, wavelength converted, and next routed to the OF to which they are directed. An Optical Packet Switching architecture equipped with a Spanke switching fabric realized in Semiconductor Optical Amplifier (SOA) technology is studied. An example of a Spanke switch is illustrated in Fig. 2 for the case N = 2, M = 2, and r = 1. A pool of r Wavelength Converters is fully shared among the arriving packets. The selection of either an OF or a WC is realized by turning on one Semiconductor Optical Amplifier (SOA) of a 1×(N + r) Space Switching Module (SSM) of the 1st stage. Each SSM is composed of one splitter and N + r SOAs. The activated SOA allows the splitter loss to be overcome. One N M×1 SSM of the 2nd stage, composed of one coupler and one SOA, has the function of forwarding to the WC stage a packet to be wavelength converted. The SOA gain allows the coupler loss to be overcome. The converted packets are sent to the OFs by turning on one SOA of a 1×N SSM of the 3rd stage. Finally, each OF is equipped with an (N M + r)×1 SSM in the 4th stage, whose function is to address towards the OF the packets coming either from the Input Wavelength Channels or from the pool of shared WCs. III.
ANALYTICAL EVALUATION OF THE POWER CONSUMPTION IN THE SPANKE SWITCH

To evaluate the Spanke switch energy consumption, we use the model illustrated in [9] to evaluate the SOA power consumption P_SOA. According to this model, P_SOA can be expressed as follows: where V_b is the SOA forward bias voltage, I_b is the polarization current, Γ_SOA is the confinement factor, α_SOA is the material loss, L_SOA is the length, and i_t is the transparency current given by: w_SOA being the SOA active region effective width, d_SOA the active region depth, q = 1.6 × 10^−19 C the electronic charge, N_0 the conduction band carrier density required for transparency, and τ the carrier spontaneous decay lifetime. The amount of gain saturation G_s is a function of the SOA input power P_SOA^in, and the following nonlinear equation gives the unsaturated gain G_us required to produce the saturated gain G_s [10]: In evaluating the various power consumptions in the Spanke switch shown in Fig. 2, we notice that at time t:
• there are as many turned-on SOAs in the 1st stage as the number N_a(t) of forwarded packets;
• the number of turned-on SOAs in both the 2nd stage and the 3rd stage equals the number N_c(t) of converted packets;
• there are as many turned-on SOAs in the 4th SSM stage as the number N_d(t) of OFs to which at least one packet is directed;
• we assume that all of the r Wavelength Converters are turned on.
According to these remarks, we can write the following expression for the average power consumption P_Spanke_av,T of the Spanke switch shown in Fig. 2: wherein:
• C_SOA_i (i = 1, . . . , 4) is the power consumption of a turned-on SOA in the i-th stage;
• C_WC is the power consumption of a Wavelength Converter;
• C_SOA_off is the power consumption of a turned-off SOA; it is equal to V_b i_off, where i_off is the polarization current of an inactive SOA, needed to guarantee a high SOA switching rate [11].
In evaluating C_SOA_i (i = 1, . . . , 4), we notice that an accepted packet may follow either of the paths reported in Fig. 3.a and Fig.
3.b, according to whether or not it is wavelength converted. In the figures we report both the losses introduced by the splitters and couplers and the SOA saturated gain. From Fig. 3.b, and by using the expression for the power consumption of a cascade of SOAs and passive elements reported in [13]-[14], the expressions in Eqs. (6)-(8) can be obtained for C_SOA_i (i = 1, 2, 3), wherein:
• P_s^in is the input signal power;
• P_ASE^in is the Amplified Spontaneous Emission (ASE) noise power;
• P_se = n_sp p_eff h ν_c B_0; each SOA is assumed to emit ASE with constant spectral density within the optical bandwidth B_0; ν_c is the center frequency, h is the Planck constant, n_sp is the excess spontaneous emission factor [15], and p_eff is a factor which ranges from 1 for a device which amplifies only one polarization to 2 for a polarization-insensitive device.
The power consumption C_SOA_4 is evaluated in [14] and reported in Eq. (9). Finally, notice that inserting Eqs. (5)-(9) into Eq. (4) shows how the ASE noise generated at the switch input may influence the power consumption of the Spanke switch through SOA gain saturation. We perform the analysis under the following assumptions:
• The SOA power consumption model illustrated in [9] is adopted, allowing us, according to Eq. (1), to express the SOA power consumption as a function of the main SOA parameters (V_b, i_b, w_SOA, . . . ). The A1 commercial SOA [12], produced by manufacturer A, is used to implement the switching fabric. The A1 parameter values are reported in Table I.
• As Wavelength Converter, the Delayed Interference Signal Wavelength Converter (DISC) proposed in [16] is used. The DISC utilizes an SOA and an Optical Bandpass Filter placed at the amplifier output. It can be constructed by using commercially available fiber-pigtailed components. It has a simple configuration and allows photonic integration.
Its power consumption has been evaluated in [16] for commercial SOAs produced by several manufacturers. In particular, we consider the A2 SOA, characterized by a Multiple Quantum Well (MQW) structure and produced by manufacturer A; its power consumption, reported in [16], equals 187 mW when the WC is operating at bit-rate B = 40 Gb/s.
• We assume that the ASE noise P_ASE^in is generated by a Wavelength-Division Multiplexing (WDM) transmission system comprising S identical stages, each of length L (i.e., the EDFA amplifier spacing), as illustrated in Fig. 4. The total length of the transmission system is L_tot = SL. Each stage in Fig. 4 is modeled by a fiber-attenuation block with power loss D_fiber = e^{αL}, where α is the power attenuation per unit length of the fiber, and by an amplifier gain block with power gain G_EDFA equal to the loss per stage (i.e., G_EDFA = D_fiber). At each wavelength, the ASE noise P_ASE^in is given by the following expression [17]: where n_sp^EDFA is the excess spontaneous emission factor of each EDFA amplifier.
We choose the switch parameters N = 32, M = 64, and offered traffic p = 0.8. The operating bit-rate is B = 40 Gb/s and the optical bandwidth is B_0 = 100 GHz. The used SOAs are characterized by the parameters n_sp = 3.5, p_eff = 2. Power consumption is not taken into account for the turned-off SOAs (i_off = 0). The energy consumptions are reported in Fig. 5, varying the number S of stages of the transmission system generating ASE noise at the switch input from 0 to 30. Each stage has length L = 60 km with attenuation α = 0.2 dB/km, and each EDFA is characterized by n_sp^EDFA = 1.
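The per-channel ASE power entering the switch from the S-span link can be sketched with the textbook single-polarization expression P_ASE ≈ S · n_sp^EDFA · hν_c · (G_EDFA − 1) · B_0. This form is an assumption standing in for the paper's expression from [17], whose exact polarization and bandwidth conventions are not reproduced in the excerpt:

```python
import math

H_PLANCK = 6.62607015e-34  # Planck constant, J*s

def ase_power_watts(num_spans, span_km, alpha_db_per_km, nsp_edfa,
                    nu_hz, b0_hz):
    """Per-channel ASE power after S identical spans, each followed by an
    EDFA that exactly compensates the span loss (G_EDFA = D_fiber).
    Textbook single-polarization form; the paper's exact expression may
    include additional polarization factors."""
    # Linear EDFA gain equals the linear span loss: 10^(alpha*L/10)
    gain = 10.0 ** (alpha_db_per_km * span_km / 10.0)
    return num_spans * nsp_edfa * H_PLANCK * nu_hz * (gain - 1.0) * b0_hz
```

With the paper's parameters (S = 30, L = 60 km, α = 0.2 dB/km, n_sp^EDFA = 1, B_0 = 100 GHz, ν_c near 193 THz) this yields a few microwatts per channel, enough to push the SOA gates toward gain saturation.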
The case S = 0 corresponds to the case in which no ASE noise is generated, because electrical regeneration is performed before the switching operation. We also report in Fig. 6 the SOA power consumption versus the number r of WCs for S = 0 and S = 20. From Fig. 5 we can notice how the increase in ASE noise makes the Spanke switch less energy efficient. For instance, for the Spanke switch with r = 320 WCs, E_Spanke_av,T increases from 3.85 · 10^−2 nJ/bit to 5.02 · 10^−1 nJ/bit when S increases from 0 to 30. That is a consequence of the increase in ASE noise, which saturates the SOA gain, leading to the need to increase the power consumption, as indicated by Eqs. (5)-(9). The increase in power consumption due to the ASE noise is confirmed in Fig. 7, where we report the average energy consumption.

We have proposed a sophisticated analytical model to evaluate the average energy consumption per bit of the Spanke switch. In the evaluation of the energy consumption we take into account the ASE noise generated by the transmission system, which can degrade the power-consumption performance because of the gain saturation of the SOA gates needed to realize the switching fabric. We have verified that the ASE noise generated by a transmission system may strongly degrade the switch performance in terms of power consumption. As a matter of example, if a switch with N = 32 and M = 64 is considered and the offered traffic p equals 0.8, the average energy consumption per bit is 1.53 · 10^−1 nJ/bit in the case of a transmission system of total length 2100 km and S = 30 spans.
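The gain-saturation mechanism invoked above (the unsaturated gain G_us needed to deliver a saturated gain G_s at a given input power, Eq. (3)) can be sketched as follows, assuming the standard homogeneous saturation relation G_s = G_us · exp(−(G_s − 1)P_in/P_sat). This form is a stand-in, since the excerpt does not reproduce the paper's actual equation:

```python
import math
from scipy.optimize import brentq

def unsaturated_gain(g_s, p_in, p_sat):
    """G_us required to produce saturated gain g_s at input power p_in,
    under the assumed relation G_s = G_us * exp(-(G_s - 1) * p_in / p_sat)."""
    return g_s * math.exp((g_s - 1.0) * p_in / p_sat)

def saturated_gain(g_us, p_in, p_sat):
    """Numerically invert the same relation for G_s given G_us.  Higher input
    power (signal plus accumulated ASE) lowers the achievable gain, which is
    why ASE noise raises the bias power needed to keep the gain constant."""
    f = lambda g: g_us * math.exp(-(g - 1.0) * p_in / p_sat) - g
    return brentq(f, 1e-9, g_us)  # the root lies between 0 and G_us
```

The round trip (solve for G_s, then recover G_us) is consistent by construction, which makes the pair easy to sanity-check in isolation.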
3,284.4
2011-09-01T00:00:00.000
[ "Engineering", "Physics" ]
Pentoxifylline/Chitosan Films on Wound Healing: In Vitro/In Vivo Evaluation

This study aimed to develop films of chitosan (CSF) associated with pentoxifylline (PTX) for healing cutaneous wounds. These films were prepared at two concentrations, F1 (2.0 mg/mL) and F2 (4.0 mg/mL), and the interactions between the materials, structural characteristics, in vitro release, and morphometric aspects of skin wounds in vivo were evaluated. The formation of the CSF film with acetic acid modifies the polymeric structure, and the PTX demonstrates interaction with the CSF, in a semi-crystalline structure, for all concentrations. The release for all films was proportional to the concentration, with two phases: a fast one of ≤2 h and a slow one of >2 h, releasing 82.72 and 88.46% of the drug after 72 h, being governed by the Fickian diffusion mechanism. The wounds of the mice demonstrate a reduction of up to 60% in the area on day 2 for F2 when compared to CSF, F1, and the positive control, and this characteristic of faster healing for F2 continues until the ninth day, with wound reductions of 85%, 82%, and 90% for CSF, F1, and F2, respectively. Therefore, the combination of CSF and PTX is effective in their formation and incorporation, demonstrating that a higher concentration of PTX accelerates skin-wound reduction.

Introduction

Wounds can place a significant burden on healthcare systems, and treating chronic wounds is estimated to cost over USD 25 billion annually worldwide [1]. In addition, more than 180,000 deaths occur each year from burns, of which 75% involve bacterial infection in the affected region [2,3]. The definition of treatment is based on the assessment of multiple variables, such as the depth, extent, and location of the burn; the age of the patient; and trauma associated with diseases. Additionally, most clinical treatments use antimicrobial agents [4].
It is important to highlight that, in the wound environment, there is a loss of function of the innate barrier of the skin and the dermal appendages, which facilitates microbial colonization, since microbes find optimal conditions (temperature, humidity, and nutrients) for their growth.

Materials and Methods

Pentoxifylline (PTX) was kindly donated by Fagron (São Paulo, SP, Brazil) with 99% purity. Chitosan (CS) was purchased from Sigma Aldrich, St. Louis, MO, USA (degree of deacetylation > 85%, Mw = 30,000), and was used as a film-forming polymer. Other solvents, such as glacial acetic acid from Labsynth (São Paulo, SP, Brazil), ethanol from Neon (São Paulo, SP, Brazil), and purified water, which was prepared using an ultrapure water system, were used. All chemical solvents were of analytical grade and were used without purification.

Film Preparation

Chitosan films (CSF) and films containing the drug PTX (PTXCSF) were prepared at different concentrations using the solvent evaporation (casting) method. For chitosan films (CSF), a 1% (w/v) solution was prepared by dissolving chitosan in 1% (v/v) acetic acid. This solution was subjected to magnetic stirring for 24 h without interruption. After this period, it was vacuum filtered (qualitative filter paper, 125 mm, with retention of particles greater than 12 µm) to remove any insoluble material, thus obtaining a polymeric solution of CS. Then, the solution was poured into a Petri dish (diameter of 5.5 cm, in a volume of 5 mL) and subjected to drying for solvent evaporation in an air circulation oven (TE-394/2 TECNAL, Piracicaba, São Paulo, Brazil), at a constant temperature of 50 °C for 24 h. The PTXCSF was prepared by solubilizing PTX in water. Then, 1 mL of the PTX solution was added to the CS solution, obtaining final concentrations of 2 mg/mL (F1) and 4 mg/mL (F2).
These solutions were continuously stirred for 24 h and then filtered, poured into Petri dishes, and dried under the same conditions described above. For all systems, photomacrographs were obtained with a digital camera (Nikon® D5300 24.2 MP + Tamron® lens 16-300 mm F/3.5-6.3 Di II VC PZD MACRO).

Characterization of Films

All analyses were performed in triplicate for the correct characterization of the films produced for this research. Analyses of CS, PTX, CSF, F1, and F2 were performed.

Scanning Electron Microscopy (SEM)

Photomicrographs of the PTX, CSF, F1, and F2 samples were obtained with a TESCAN VEGA 3 microscope (Tescan Analytics, Fuveau, France) using secondary electron images at 2.0 kV for PTX and 8.0 kV for the film formulations, at a 20 µm scale, to examine their surface morphology and internal structure. A platinum layer several micrometers thick was deposited on the surface of the samples to prevent charging and to protect the underlying surface from being damaged by the ion beam.

Fourier Transform Infrared Spectroscopy (FTIR)

Fourier transform mid-infrared spectroscopy analysis was performed using a Perkin Elmer® Spectrum 400 FT-IR/FT-NIR Spectrometer (Waltham, MA, USA) with a universal attenuated total reflectance accessory (UATR-FTIR) with a diamond crystal on its upper base and a zinc selenide focusing element. The samples were scanned from 4000 to 600 cm−1 with a resolution of 4 cm−1.

X-ray Diffraction (XRD)

The diffractograms were obtained using a Shimadzu diffractometer (model XRD 6000, Tokyo, Japan) equipped with a copper anode. The samples were prepared on a glass support with a thin layer of powder material and were analyzed with angular scanning from 2° to 45° at a scanning speed of 0.5°/min, using Cu Kα1 radiation.

Differential Scanning Calorimetry (DSC)

Differential scanning calorimetry (DSC) curves were obtained on a DSC Q20 (TA Instruments, New Castle, DE, USA).
The samples (2.000 ± 0.005 mg) were placed in hermetically sealed aluminum crucibles. The analysis was run from 25 to 335 °C at a heating rate of 10 °C/min, under a nitrogen atmosphere with a flow of 20 mL/min.

In Vitro Release Test with Franz Cells on a Synthetic Membrane
The in vitro release of the prepared F1 and F2 films was evaluated in Franz-type vertical diffusion cells with a diffusion area of 0.7539 cm² and receptor chambers of approximately 7 mL, using artificial hydrophilic cellulose acetate membranes with a pore diameter of 0.45 µm (Millipore, Barueri, Brazil). The receptor compartment was filled with pH 7.4 phosphate-buffered saline (PBS) and ethanol as cosolvent in a 60:40 ratio, in a system of six individual cells connected to a thermostated bath at 37 ± 0.5 °C, under constant stirring at 100 rpm with a magnetic stirrer, for a period of 72 h. The acetate membranes were placed on top of the receptor cells, and the F1 and F2 films were arranged directly on the membranes in the donor compartment, adjusted to the diffusional area, resulting in actual PTX amounts of 1.37 and 2.74 mg for F1 and F2, respectively; the system was then closed. Receptor solution samples were collected at predetermined times of 0.25, 0.5, 1, 1.5, 2, 4, 6, 8, 12, 18, 24, 30, 36, 42, 48, and 72 h. All the solution in the receptor compartment was collected and immediately replaced with fresh PBS to maintain the system's sink conditions. The cumulative amount of PTX released through the membrane was calculated per unit area (µg/cm²), with the results plotted as a function of time. The samples were quantified in a UV-Vis spectrophotometer (SHIMADZU 1800, Kyoto, Japan) at λ = 273 nm.
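The cumulative-release bookkeeping implied by this sampling scheme can be sketched as follows (a minimal illustration with made-up concentration values; only the cell geometry comes from the Methods). Because the entire receptor volume is withdrawn and replaced at every time point, no dilution-correction term is needed: the cumulative release is simply the running sum of the drug found in each withdrawn sample.

```python
# Sketch of cumulative release per unit area for a fully-replaced Franz cell.
# Hypothetical concentrations; the volume and area are from the Methods.

RECEPTOR_VOLUME_ML = 7.0      # approximate receptor chamber volume
DIFFUSION_AREA_CM2 = 0.7539   # diffusion area of the Franz cell

def cumulative_release(concentrations_ug_per_ml):
    """Return cumulative amount released per unit area (ug/cm^2) at each
    sampling time, given the PTX concentration measured in each fully
    replaced receptor sample."""
    cumulative = []
    total_ug = 0.0
    for c in concentrations_ug_per_ml:
        total_ug += c * RECEPTOR_VOLUME_ML   # drug removed in this sample
        cumulative.append(total_ug / DIFFUSION_AREA_CM2)
    return cumulative

# Example with made-up concentrations (ug/mL) at three successive time points:
profile = cumulative_release([10.0, 8.0, 5.0])
```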
PTX release kinetics were evaluated by fitting the in vitro release data to five theoretical mathematical models (Shah, 2017): zero-order (µg/cm² versus time), first-order (log µg/cm² versus time), Higuchi (µg/cm² versus the square root of time), Korsmeyer-Peppas (log µg/cm² versus log time), and Hixson-Crowell. The model that best described the release of the drug from the films was identified by the value of the correlation coefficient (r), using the following equations (Equations (1)-(5)):

• Zero-Order Model: Qt = Q0 + k0·t (1)
• First-Order Model: log Ct = log C0 − k1·t/2.303 (2)
• Higuchi Model: Qt = kH·t^(1/2) (3)
• Korsmeyer-Peppas Model [37]: Mt/M∞ = k·t^n (4)
• Hixson-Crowell Model [38]: Q0^(1/3) − Qt^(1/3) = kHC·t (5)

where Qt is the amount of drug released at time t, Q0 is the initial amount of drug in solution, k0 is the zero-order release constant, C0 is the initial drug concentration, Ct is the drug concentration in solution at time t, Mt/M∞ is the fraction of the drug released at time t, n is the release exponent, and k, k1, kH, and kHC are the respective rate constants.

Animals
To evaluate the wound healing efficiency of the films, the F1 and F2 formulations were tested in male and female Swiss mice weighing between 25 and 35 g. The animals were housed in polypropylene boxes suitable for rodents and kept at a temperature of 22 ± 2 °C and a relative humidity of about 60 ± 15%, with a 12 h light/dark cycle, and fed standard laboratory chow and water ad libitum. All experiments were conducted following the protocols approved by the Ethics Committee for the Use of Animals (ECUA) of the Faculty of Medical Sciences of Campina Grande/PB (FCM/CESED) (Protocol number: 0076022022018).

Wound Healing Assay: Treatment Groups, Clinical and Morphometric Analysis
To produce the experimental skin wounds, the animals were anesthetized by an intraperitoneal injection of ketamine 100 mg·kg⁻¹ and xylazine 0.05 mg·kg⁻¹. The dorsal region was shaved, and two 7 mm diameter circular skin excisions were made with the aid of a dermatological punch; the skin fragments were removed, leaving the panniculus carnosus exposed.
[39][40][41][42]. The treatment groups are described in Table 1. Topical treatments were applied daily (once a day) from the second day after skin-wound formation (the day of experimental wound formation was considered D0) until the penultimate day of the experiment. On days 2, 4, 6, and 9, the wounds were photographed to support the clinical and morphometric assessments. The total number of animals was 35. To evaluate the evolution of the clinical aspects of wound healing, a macroscopic analysis was performed, observing the following phlogistic signs: edema, hyperemia, and formation of crust and exudate, on days 2, 4, 6, and 9 after the surgical wounds were made [43]. In addition, the macroscopic efficacy of the treatments was evaluated by determining the percentage of wound closure throughout the experiment. Therefore, on days 2, 4, 6, and 9 of the study, the wounds were photographed using a digital camera (Nikon® D5300 24.2 MP + Tamron® 16-300 mm F/3.5-6.3 Di II VC PZD MACRO lens) at a fixed distance of 15 cm from the animal. The high-definition images of the wounds were obtained together with the image of a graduated scale kept next to the wound, which allowed the wound areas (mm²) to be measured using the ImageJ® software (National Institutes of Health, Bethesda, MD, USA) [44][45][46]. The residual area of each wound was calculated using Equation (6).

Statistical Analysis
Data were expressed as mean ± standard deviation (SD). One-way analysis of variance (ANOVA) followed by the post hoc Newman-Keuls test was used for comparisons between groups. Differences with * p < 0.05 were considered significant.

Film Development and Characterization
The morphology and surface characteristics of the films are shown in Figure 1, with photomicrographs analyzed by SEM and photomacrographs. The PTX powder (Figure 1a) has an acicular shape with irregular crystal sizes and a polydisperse size distribution.
The shapes have a jagged-edge structure and do not stick together. The CSF (Figure 1b) presents itself as a matrix with a homogeneous, non-porous surface with irregularly shaped folds [47,48]. Macroscopically, it is completely translucent, with flexible and elastic characteristics (Figure 1b(i)). On the other hand, films F1 and F2 (Figure 1c,d) showed surface modification when PTX was incorporated into the system, as well as modification of the distribution in the polymer matrix, with no irregular crystal sizes being observed, which is explained by the amorphization of the drug. In F2, the degree of irregularity was higher than in F1 because the higher PTX concentration in this film altered the surface morphology. This becomes evident in the photomacrographs in Figure 1c(ii),d(iii), which exhibit a slightly yellowish coloration and apparent turbidity due to the incorporation of the drug into the polymeric matrix.
FTIR Spectroscopy Analysis
Possible interactions between the materials were examined using FTIR, and the spectra are shown in Figure 2. The CS showed a broad absorption band at 3352 cm⁻¹ attributed to the stretching of the hydroxyl groups (O-H), a C-H axial stretching band at 2872 cm⁻¹, a C=O deformation band of amide I at 1649 cm⁻¹, and a symmetric in-plane angular deformation of amide II at 1589 cm⁻¹; finally, at 1062-1025 cm⁻¹, bands specific to the C-O and C-O-C glycosidic bonds were observed [49][50][51][52]. For the CSF formed with acetic acid as the solvent, the characteristic amide II band at 1589 cm⁻¹ was shifted to 1535 cm⁻¹ with increased intensity, which suggests the presence of -NH in this film, or even the insertion of new amino groups into the chitosan structure, due to the electrostatic interaction between chitosan and acetic acid and the reconstruction of a new network of CS hydrogen bonds during film formation, as previously verified by Qiao et al. and Zhang et al. [52][53][54][55]. In addition, a band shift at 1404 cm⁻¹ related to the vibrations of the carboxylate ion -COO⁻ is observed, along with an increase in the intensity of the bands at 1062 cm⁻¹ and a shift at 1016 cm⁻¹, which, due to the rearrangement of the network, affects the bonds between the monosaccharides [54,55].
PTX has characteristic bands at 2945 cm⁻¹ (C-H stretching), at 1700 cm⁻¹ and 1658 cm⁻¹ (C=O stretching vibrations of the amide and the ketone, respectively), at 1545 cm⁻¹ (amide N-H bending), at 1354 cm⁻¹ (amide C-N stretching), and an intense absorption band around 753 cm⁻¹ characteristic of aromatic C-H out-of-plane bending [34,56]. The spectra of the films (F1 and F2) showed vibrational bands typical of PTX overlapping those of chitosan. The F2 spectrum has bands at the same wavenumbers as the PTX spectrum, while in F1 the band intensities were reduced, owing to the lower PTX concentration in the film, and the main bands were displaced, specifically at 1712 cm⁻¹, which may indicate some interaction of these groups with the polymer.

X-ray Diffraction (XRD)
The X-ray diffractograms are shown in Figure 3. For CS, due to the presence of NH2 groups in the polymeric chain, which cause the formation of hydrogen bonds between the polymeric chains [57,58], two main diffraction peaks are present, at 2θ of 10.52° in the (020) plane and 19.94° in the (110) plane, demonstrating a semi-crystalline structure, as described in the literature [52,[59][60][61]. However, upon solubilization in acetic acid to form the CSF, the degree of crystallinity was reduced, with the 19.94° peak decreasing and shifting to 22.51° and halos forming. This indicates an amorphous state, since the interaction of the acid with the NH2 groups can reduce the spaces between the polymer chains and, in a way, participate in the partial rupture of the hydrogen-bond network, consequently decreasing the crystallinity [52,62].
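As a side note on the XRD numbers, the reported 2θ peak positions can be converted to lattice d-spacings via Bragg's law, nλ = 2d·sin θ. A small illustrative sketch, assuming the standard Cu Kα1 wavelength of 0.15406 nm (the Methods specify Cu Kα1 radiation but not the wavelength):

```python
import math

# Convert 2-theta peak positions (degrees) to d-spacings via Bragg's law,
# n*lambda = 2*d*sin(theta). Assumes the standard Cu K-alpha1 wavelength.

CU_KALPHA1_NM = 0.15406

def d_spacing_nm(two_theta_deg, order=1):
    theta = math.radians(two_theta_deg / 2.0)
    return order * CU_KALPHA1_NM / (2.0 * math.sin(theta))

# The two main chitosan reflections reported above:
d_020 = d_spacing_nm(10.52)   # (020) plane, roughly 0.84 nm
d_110 = d_spacing_nm(19.94)   # (110) plane, roughly 0.44 nm
```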
The same was verified for F1 and F2, which showed the CS reflections in different crystalline planes, while the characteristic reflections of PTX were not observed; the pure drug confirmed its crystalline structure through well-defined reflections at 7.51°, 12.64°, 15.16°, 24.03°, and 27.93° [63], all of which were absent in the films, suggesting the dispersion of PTX in the CSF matrix. In addition, the possible interaction between CS and PTX in this drug dispersion may have favored a tendency to form organized planes, as observed in a peak at 8.27° in F1. The same reflection is present in F2, where the higher concentration gave rise to another plane at 9.53°, both planes corresponding to a semi-crystalline structure.

Differential Scanning Calorimetry
The DSC curves are shown in Figure 4.
For CS, an endothermic peak at Tpeak = 96.12 °C associated with polymer melting and an exothermic peak at Tpeak = 305 °C indicating the thermal decomposition of the polymer chain can be observed [52,64,65]. These peaks were shifted when the CSF was prepared in acetic acid, to Tpeak = 126.05 °C and Tpeak = 289.01 °C, respectively. These observations indicate that CS and CSF are semi-crystalline, as previously verified and confirmed by the presence of crystalline peaks in the X-ray diffractograms. Likewise, the films showed values associated with CS melting at Tpeak = 118.91 °C and 123.93 °C for F1 and F2, respectively. The PTX sample showed an endothermic event at 105.11 °C corresponding to the melting of the drug [66][67][68]. In the films, however, this event was shifted to Tpeak = 78.04 °C due to the interaction of PTX with CS, with a peak in the same temperature range for both the F1 and F2 films.
In Vitro Release Study
The release study results are presented in Figure 5, which shows the PTX release profile of the F1 and F2 films. The release from both formulations increased over time, with a higher rate in the first 2 h, during which 45.98% and 48.58% were released, corresponding to 842.83 µg/cm² and 1766.41 µg/cm² for formulations F1 and F2, respectively. This initial release may be associated with the amount of amorphous drug present on the surface of the film. Thereafter, the profile demonstrated a controlled release of the drug into the medium due to the swelling of the polymer, in which the drug is released through the polymer mesh into the medium, influenced by the concentration. In F2, which has the higher concentration, a greater amount of PTX was released during the study period. The maximum amounts released during the 72 h of the study were 1459.67 and 3128.89 µg/cm² for F1 and F2, respectively, corresponding to 82.72% and 88.46% of the drug in the F1 and F2 films. Thus, the developed films showed a release rate proportional to the increase in drug concentration, with the F2 film releasing a greater amount than the F1 film, presuming a greater amount of PTX available to exert its action during wound treatment [69].

Five different mathematical models were fitted to the in vitro release data by linear regression analysis to determine the drug release mechanism and select the model that best described the mass transfer phenomena. Table 2 shows the results of the fitting. In both stages (t ≤ 2 h and t ≥ 2 h), the Korsmeyer-Peppas model showed r² > 0.99 for both formulations F1 and F2.
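The Korsmeyer-Peppas fit is linear in log-log space, since log(Mt/M∞) = log k + n·log t, so n and k can be recovered by ordinary least squares. A minimal illustrative reimplementation (not the authors' code), checked on synthetic Fickian-type data:

```python
import numpy as np

# Korsmeyer-Peppas fit, Mt/Minf = k * t**n, via linear regression on
# log-transformed data. r is the correlation coefficient used to compare
# models, as in the kinetic analysis described above.

def fit_korsmeyer_peppas(t, fraction_released):
    """Return (k, n, r) for the fit Mt/Minf = k * t**n."""
    log_t = np.log10(np.asarray(t, dtype=float))
    log_f = np.log10(np.asarray(fraction_released, dtype=float))
    n, log_k = np.polyfit(log_t, log_f, 1)
    r = np.corrcoef(log_t, log_f)[0, 1]
    return 10.0 ** log_k, n, r

# Synthetic Fickian-type data (n = 0.45) to check that the fit recovers
# a release exponent below 0.5:
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
frac = 0.30 * t ** 0.45
k, n, r = fit_korsmeyer_peppas(t, frac)
```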
According to these data, the fitted release exponent n < 0.5 indicates Fickian diffusion-mediated release of PTX from the polymer matrix. There is a rapid release at the beginning that slows over time, describing a linear relationship between the square root of time and the cumulative amount of PTX released from the films, suggesting that this release was controlled by the usual molecular diffusion mechanism of the drug, in which the transfer rate per unit area is proportional to the concentration gradient, or chemical potential difference, between the two sides of the diffusion layer [70,71].

In Vivo Study
Clinical Aspects and Morphometric Analysis of Skin Wounds
The healing potential was established from the daily analysis performed to verify the clinical aspects of the lesions in the different groups (test and control). The wounds were photographed every 2 days (without the films), as shown in Figure 6a,b, which illustrates the healing process and wound reduction.
The injured mice showed a tendency toward wound reduction up to day nine; however, the healing rate and effect differed between groups. At the beginning of treatment (D0), all wounds in all groups had a similar appearance (bright red color, reflecting the blood caused by the physical trauma). From the second post-wound day, the appearance began to differ between the groups studied. From day 2 to day 4, there was a reduction in wound size without perilesional redness and the appearance of a brown crust with contraction of the edges, which proved to be thicker and stiffer in the groups treated with the films; the wound-area reduction reached up to 60% for F2, a significantly higher value than for F1, which reached 30%. For CSF, the decrease was only 20%, as seen in Figure 6. It is important to highlight that the occurrence of mild inflammation in the initial phases of healing is beneficial, as it allows the induction of tissue healing [72]. Meanwhile, the positive control group BD + GS O did not show a significant reduction (p < 0.05) in wound size during the first 4 days; only after the 6th day was there a 20% reduction in the wound area. By that day, the reduction for F2 had reached 70%, followed by 50% and 40% for F1 and CSF, respectively. In all groups, a brown crust with greater rigidity could be observed. By the end of the ninth day, F1, F2, and CSF had achieved reductions of 82%, 90%, and 85%, respectively, with detachment of the brown crust and the underlying tissue pink, undamaged, and healthy. This result differs from the BD + GS O group, which showed only a 50% reduction, with tissue exposure, incomplete healing, a brown crust, and contraction of the edges. Similar observations were reported in previous studies in the field of tissue regeneration [72][73][74][75][76][77].
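The wound-area reduction percentages quoted above follow directly from the ImageJ-measured areas relative to day 0. A minimal sketch using the standard closure formula (the exact form of the paper's Equation (6) is not reproduced in the text):

```python
import math

# Percentage wound closure from measured areas, relative to day 0.
# This is the standard closure formula used in comparable studies; the
# paper's own Equation (6) is not shown in the text.

def percent_closure(area_day0_mm2, area_dayn_mm2):
    """Percentage reduction of the wound area relative to day 0."""
    return 100.0 * (area_day0_mm2 - area_dayn_mm2) / area_day0_mm2

# A 7 mm diameter punch wound has an initial area of pi * (3.5 mm)^2,
# about 38.5 mm^2. A wound shrunk to 10% of that area on day 9 corresponds
# to 90% closure, matching the best result reported for F2:
a0 = math.pi * 3.5 ** 2
closure_f2_day9 = percent_closure(a0, 0.10 * a0)
```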
In addition, it is important to highlight that the film, unlike the ointment, has micropores that allow greater circulation of oxygen at the lesion site and greater adherence to the affected area. This allows not only the carrying of the drug but also its controlled release by diffusion, as seen above, in addition to facilitating the removal of exudates [25]. The CS film with the highest amount of PTX (F2) increased the speed and percentage of wound reduction, owing to the dispersion of the drug in the polymeric mesh. PTX has multiple mechanisms of action: it improves blood flow, and it may be responsible for the reduction of neutrophils, free radicals, and proteolytic enzymes that degrade the extracellular matrix, as well as TNF-α [33]. Furthermore, it increases the expression of genes encoding the tissue inhibitor of metalloproteinase (TIMP-1) and reduces the gene expression of matrix metalloproteinase 1 (MMP-1) and MMP-3, as demonstrated in a wound healing model in normoglycemic rats, which received an incision wound on the back and, in the experimental group, 25 mg/kg of PTX twice daily for seven days by the systemic route, followed by polymerase chain reaction (PCR) testing [78]. In addition, a study by Aghjani et al. [34] evaluated the healing of wounds treated with PTX in niosomes in mice; incisions were made in the neck area, and 500 mg of the conventional and niosomal formulations were applied twice a day, completely covering the wounds. After 12 days of treatment, the niosomal PTX group had a significantly greater wound reduction, with a wound surface area of only 1%. In that study, a histopathological evaluation demonstrated that, for this formulation, there was a decrease in the number of inflammatory cells at the wound site, indicating an early onset of the healing process.
Furthermore, the differentiation of keratinocytes was confirmed by the appearance of a layer of keratin above the layers of nucleated epithelial cells, with the formation of granulation tissue below the neoepidermis; this tissue was completely differentiated for this formulation compared with the other groups without PTX, with the fibroblasts fully stretched and aligned. It is important to emphasize that oxidative stress can damage cells and tissues, compromising healing, and PTX may support healing by reducing this stress [79]. Some studies demonstrate that PTX reduces oxidative stress by reducing the production of reactive oxygen species (ROS) and increases the activity of antioxidant enzymes in the tissue, such as superoxide dismutase (SOD) and catalase, in addition to modulating the expression of genes involved in oxidative stress and inflammation, which may further contribute to its effects on healing [79][80][81][82][83][84]. Compared with drugs such as quinacrine, vitamin C, and vitamin E, PTX demonstrated superior antioxidant activity, being more effective against oxidative stress and in preventing lipid peroxidation [85,86]. In addition to the activity of PTX, CS also aids the healing and antimicrobial processes. This polymer assists in controlling bleeding by promoting the aggregation of platelets and erythrocytes and inhibiting the dissolution of fibrin in the hemostatic stage; secondly, it helps eliminate bacteria from the wound through an antimicrobial activity that interferes with the inflammation phase. Finally, it accelerates the proliferation of the skin, promoting the growth of granulation tissue and stimulating the proliferation stage [25].
Regarding the antimicrobial activity of CS, two mechanisms of action have been proposed. The first stems from its cationic nature and consists of the interaction between the -NH3+ groups and negatively charged groups on the surface of the bacterial cell wall, preventing the transport of vital solutes and reducing cell viability. The second involves the diffusion of CS into the cell nucleus, inhibiting bacterial synthesis [87]. Additionally, compared with other polymers that have been used to prevent bacterial growth, CS has shown superior activity, particularly against gram-negative bacteria, and has even been shown to have antibacterial activity comparable or superior to silver nanoparticles against bacterial species including E. coli, S. aureus, and P. aeruginosa [88][89][90]. A limitation of the present study is that it remains necessary to understand the process of epithelialization, to quantify neutrophils and lymphocytes to determine the tissue regeneration phase, and to determine the angiogenic indices and the state of histological maturation. Additionally, it is important to note that CS films with PTX are not necessarily a solid alternative to existing commercial wound healing drugs, as commercial products for this purpose may contain a combination of different active ingredients, and their effectiveness may depend on the type of wound and patient characteristics. Factors such as wound size, location, and depth, as well as patient comorbidities and medication use, can affect the appropriateness of this treatment approach.

Conclusions
PTX was dispersed in the polymeric mesh of the CS, changing the crystalline state of both materials. The choice of solvent (acetic acid) for the formation of the films provided a semi-crystalline material; that is, the choice of solvent directly influences the performance of the CS-containing films (CSF, F1, and F2).
The interaction between PTX and CS influences the crystallinity of the material, producing different organizational structures. All materials showed a positive effect on wound healing, with a minimum of 82% and a maximum of 90% wound reduction after nine days of treatment, and the formulation with the highest concentration of PTX (F2) showed the highest healing speed, due to the synergistic effect of these materials, which benefit the treatment and can act through different healing pathways. Thus, the association of CS and PTX offers improved healing when the two are combined, being easy to apply and providing a protective barrier that can help keep the wound clean and moist; moreover, CS is a biocompatible polymer well tolerated by the body and is used here, in an unprecedented way, in association with PTX. However, more information is needed on the potential side effects or irritation that PTX or chitosan may cause in humans when combined. Overall, the films prepared in this study have innovative and promising potential for the production of wound dressings; however, more research is needed to fully evaluate their efficacy and safety compared with existing commercial wound healing products.
Degraded Hyaluronic Acid-Modified Magnetic Nanoparticles
Superparamagnetic iron oxide nanoparticles (SPIONs) conjugated with hyaluronic acid (HA) functional groups have potential applications as cell-targeting materials. However, SPIONs incubated with high-molecular-weight HA can agglomerate severely. In this work, we found that when modified with degraded HA (hyaluronan oligosaccharides, oHAs), the nanoparticles were uniformly dispersed with small hydrodynamic sizes, and the oHA-modified SPIONs exerted minimal cytotoxicity. With the same functional groups as HA, the oHA-modified SPIONs may have various biomedical applications.

Introduction
Superparamagnetic iron oxide nanoparticles (SPIONs) have many uses in biomedical research, such as drug delivery [1], magnetic resonance imaging [2], and cancer cell targeting [3], owing to their size-related properties [4], magnetism [5], and optical performance [6]. Several SPIONs have been approved by the U.S. Food and Drug Administration (FDA) as potential agents for diagnosis and treatment [7]. The surface coating affects the application potential of SPIONs, and the molecular structure, modification methods, and modification agent proportions lead to significantly different properties [8,9]. A hydrophilic polymer coating can guarantee the colloidal stability of nanoparticles through electrostatic or steric repulsion, reduce uptake by reticuloendothelial cells, and extend the duration of action of the nanoparticles in vivo. Modification with polyethylene glycol (PEG) helps nanoparticles escape systemic uptake and thus reach cells and start drug release [10], while glutathione (GSH) can lower cytotoxicity and enhance T1 MRI characteristics [11,12]. Hyaluronic acid (HA) plays an important role in cell proliferation, embryonic development, tumor cell migration, and wound repair [13,14].
Currently known cell surface HA receptors include CD44 [15], the receptor for hyaluronan-mediated motility (RHAMM) [16], the lymphatic vessel endothelial HA receptor (LYVE-1) [17], layilin [18], and the hyaluronan receptor for endocytosis (HARE) [19]. RHAMM receptors are distributed on the cell surface, cytoskeleton, and mitochondria; when the cells are appropriately stimulated, the RHAMM receptors stored in the cells are transported to the cell membrane [19]. Therefore, HA-modified SPIONs are expected to have great potential in cell-targeting applications, yet little work has been reported on the modification of SPIONs with HA. In this work, we found that HA-modified nanoparticles became heavily aggregated; however, degrading high-molecular-weight HA via chemical methods [20][21][22] produced hyaluronan oligosaccharides (oHAs), and the oHA-modified SPIONs were uniformly dispersed with small hydrodynamic sizes. oHA is reported to increase the flexibility of the extracellular matrix and to increase cell growth and mobility during division and differentiation [23,24]. Therefore, oHA-SPIONs with low toxicity may have applications in various biomedical fields.

2.1. SPION Preparation and Characterization. The raw materials used and their synthesis were reported in our previously published work [25]. In brief, SPIONs were synthesized by decomposing 0.7 g of Fe(acac)3 (Tokyo Chemical Industry, Japan) in 15.0 g of polyethylene glycol (PEG; Aladdin, China) mixed with 0.3 g of polyetherimide (PEI; Aladdin) at 260 °C for 1 h in an argon atmosphere. The reactants were cooled to 60 °C, then washed three times successively with toluene and acetone. The SPIONs were then collected using a magnet placed under the container.

2.2. HA Degradation. HA (80 mg) was dissolved in 10 mL of water and left to dissolve fully at 4 °C overnight. Next, 1 mL of 16% sodium hypochlorite was added every 6 h over the following 24 h; then, the pH was adjusted to approximately 7.0 using 0.1 M HCl.

2.3.
Modification of HA and oHA on the SPIONs. HA (20 mg, undegraded) or oHA (10, 20, or 40 mg, corresponding to m(SPIONs) : m(oHA) ratios of 2 : 1, 1 : 1, and 1 : 2) was mixed with 20 mL of a 1 mg/mL SPION aqueous dispersion (Scheme 1). The reaction was carried out for 5 h in a shaker at 60 rpm and 4°C. After incubating overnight at 4°C in a refrigerator, the mixture was dialyzed against deionized water for 120 h (MWCO 100,000 dialysis bag, SpectrumLabs, USA). Samples were kept in a refrigerator at 4°C for later use. 2.4. Material Characterization. The samples were characterized using a Zetasizer Nano ZS90 (Malvern Instruments), a Quantum Design MPMS XL-7 superconducting quantum interference device, a PL-GPC 50 gel permeation chromatograph, a JEM-2100F transmission electron microscope (TEM), X-ray photoelectron spectroscopy (XPS), and a thermal gravimetric analyzer. The crystal structure of the nanoparticles was confirmed using a PANalytical X'Pert PRO powder X-ray diffractometer (XRD) with CuKα radiation (λ = 0.15406 nm) with a scanning step of 0.017° in the 2θ range of 20-60° at room temperature. X-ray photoelectron spectroscopy (XPS) of the dried sample was performed in vacuum on a Thermo ESCALAB 250 to further characterize the coating materials. XPS detects surface elements to a depth of up to about 10 nm for organic materials or about 3 nm for inorganic materials. Results and Discussion 3.1. Molecular Weight Measurement. Figure 1 shows the molecular structure of hyaluronic acid. After degrading high-molecular-weight HA in sodium hypochlorite, the molecular weight change of the HA was measured in aqueous solution on the PL-GPC 50 (Figure 2). The molecular weight of HA before degradation was between 200 kDa and 2.5 MDa, the distribution was broad, and the solution was highly viscous. The molecular weight of oHA after HA degradation was concentrated around 26 kDa, and the solution had low viscosity and good fluidity. 
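The m(SPIONs) : m(oHA) ratios in the modification step follow directly from the fixed SPION dose (20 mL × 1 mg/mL = 20 mg) and the three oHA masses. A minimal arithmetic check, assuming only the masses quoted in the text:

```python
from fractions import Fraction

def mass_ratio(spion_mg: int, oha_mg: int) -> str:
    """Reduce the SPION:oHA mass ratio to lowest integer terms."""
    r = Fraction(spion_mg, oha_mg)
    return f"{r.numerator}:{r.denominator}"

# SPION dose is fixed by the dispersion: 20 mL x 1 mg/mL = 20 mg
spion_mg = 20

# oHA masses used in the three modification batches (from the text)
for oha_mg in (10, 20, 40):
    print(f"m(SPIONs):m(oHA) = {mass_ratio(spion_mg, oha_mg)}")
```

This reproduces the 2 : 1, 1 : 1, and 1 : 2 labels used for the oHA-SPION samples below.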
3.2. Hydrodynamic Size Distribution Profiles and Zeta Potentials. Figure 4 shows the hydrodynamic sizes of the SPIONs, HA-SPIONs, and oHA-SPIONs. The high-molecular-weight HA-modified nanoparticles showed severe agglomeration (Figure 4(a)). 3.3. XPS. The spectrum decomposition was performed using the XPSPEAK41 program with Gaussian functions after subtraction of a Shirley background; the ratio between the Lorentzian and Gaussian functions was 60%. Figure 5(a) shows the XPS spectra of the oHA-SPIONs-2 : 1, oHA-SPIONs-1 : 1, and oHA-SPIONs-1 : 2, showing peaks at 711.0 eV, 530.0 eV, 400.0 eV, and 286.0 eV, corresponding to Fe 2p, O 1s, N 1s, and C 1s [29], respectively. Figure 5(b) shows the fitted peaks of N 1s for the oHA-SPIONs-2 : 1, oHA-SPIONs-1 : 1, and oHA-SPIONs-1 : 2. The peaks at 399.2 eV, 400.69 eV, and 401.1 eV correspond to the C-N, C=O-N, and C-NH3 groups, respectively. The XPS results showed that the binding energies of oHA were the same as those of HA, indicating that their structures were not obviously changed after degradation. The C=O-N group indicated that the SPION surface was modified with oHA [30,31]. As the weight proportion of the oHA increased during modification, the C-N bond proportion decreased (Figure 5(b)), and the C=O-N and C-NH3 group proportions increased accordingly. Conclusion We synthesized superparamagnetic iron oxide nanoparticles conjugated with HA and oHA. Modifying the nanoparticle surface with high-molecular-weight HA caused severe agglomeration, but the degraded HA did not affect the dispersibility of the nanoparticles. The nanoparticles were uniformly dispersed with a high degree of modification and low cytotoxicity, which satisfies the requirements for surface modification of the nanomaterials. This work provides a reference for modifying nanoparticles with functional groups of highly viscous materials, such as HA, while maintaining a small hydrodynamic size. 
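The N 1s peak decomposition described above (XPSPEAK41, Shirley background, 60% Lorentzian-Gaussian ratio) corresponds to a pseudo-Voigt line shape in one common parameterization: a linear mix of unit-height Gaussian and Lorentzian components. A minimal sketch; the 1.2 eV FWHM is an assumed illustrative value, not from the paper:

```python
import math

def gaussian(x, x0, fwhm):
    """Unit-height Gaussian centered at x0."""
    sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))
    return math.exp(-((x - x0) ** 2) / (2 * sigma ** 2))

def lorentzian(x, x0, fwhm):
    """Unit-height Lorentzian centered at x0."""
    gamma = fwhm / 2
    return gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

def pseudo_voigt(x, x0, fwhm, lg_ratio=0.6):
    """Linear Gaussian-Lorentzian mix; lg_ratio = 0.6 matches the 60%
    ratio quoted in the text. Shirley background subtraction is assumed
    to have been done beforehand (not shown)."""
    return lg_ratio * lorentzian(x, x0, fwhm) + (1 - lg_ratio) * gaussian(x, x0, fwhm)

# Example: the N 1s component reported at 399.2 eV (C-N), FWHM assumed 1.2 eV
print(round(pseudo_voigt(399.2, 399.2, 1.2), 3))  # unit height at the peak center
```

Because both components are unit height with the same FWHM, the mixed profile also falls to half its maximum exactly one half-width from the center, regardless of the mixing ratio.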
Data Availability All data are provided in full in the results section of the manuscript. Conflicts of Interest The authors declare no competing financial interest. Statistical significance was evaluated using one-way analysis of variance followed by post hoc comparison of the means using Fisher's least significant difference test, n = 6. * P < 0.05, compared with the control group.
Preparation of 6N,7N High-Purity Gallium by Crystallization: Process Optimization In this study, a seed-induced radial crystallization purification method was proposed for preparing 6N and 7N ultra-high-purity gallium. The effects of cooling temperature on the morphology of the crystal seed, and of cooling water temperature, cooling water flow rate, and the number of added crystal seeds on the crystallization process, were explored, and the best purification process parameters were obtained as follows: temperature of crystal seed preparation, 278 K; temperature and flow rate of the cooling water, 293 K and 40 L·h−1, respectively; and number of added crystal seeds, six. The effects of the temperature and flow rate of the cooling water on the crystallization rate were investigated. The crystallization rate decreased linearly with increasing cooling water temperature, but increased exponentially with increasing cooling water flow. The governing equation of the crystallization rate was experimentally determined, and three purification schemes were proposed. When 4N crude gallium was purified by Scheme I, 6N high-purity gallium was obtained, and 7N high-purity gallium was obtained by Schemes II and III. The purity of the high-purity gallium prepared by Schemes I, II, and III was 99.999987%, 99.9999958%, and 99.9999958%, respectively. Introduction In the 1970s, compounds of gallium (a Group IIIA element) were discovered to have excellent semiconductor properties. Since then, gallium (Ga) has been gradually used in the semiconductor industry as a raw material [1]. 
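The "6N"/"7N" grade labels simply count the leading nines in the purity figure; a quick check against the purities quoted in the abstract (passing the percentage as a string avoids float rounding):

```python
def nines_grade(purity_percent: str) -> int:
    """Count the leading nines in a purity figure, e.g. '99.9999' -> 6 ('6N')."""
    count = 0
    for d in purity_percent.replace(".", ""):
        if d != "9":
            break
        count += 1
    return count

# Purities reported for Schemes I, II, and III in the abstract
for purity in ("99.999987", "99.9999958", "99.9999958"):
    print(f"{purity}% -> {nines_grade(purity)}N")
```

This confirms that 99.999987% meets the 6N grade (≥ 99.9999%) and 99.9999958% meets the 7N grade (≥ 99.99999%), as stated.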
In recent years, with the continuous development of science and technology and the pursuit of a low-carbon economy and green energy, the application of Ga has developed fully, and it has become an important raw material in the fields of modern semiconductors (approximately 80% of total gallium consumption), solar energy (approximately 10%), magnetic materials (approximately 5%), and catalysts, and has been widely applied in defense, optical fiber communication, aerospace, and other fields [2,3]. At present, the production technology of low-grade gallium (purity ≤ 99.99%) has matured steadily [4][5][6]. Statistics from the US Geological Survey (USGS) in 2018 showed that [7] global low-grade primary gallium production in 2017 was ~315 tons, a 15% increase over the 274 tons produced in 2016. According to a 2015 USGS survey report [8], global demand for Ga was estimated to increase 20-fold by 2030; moreover, with the development of higher-performance semiconductor devices, the demand for high-purity gallium has been increasing, because even very small amounts of impurities such as Cu, Pb, Fe, Mg, Zn, and Cr, which are always present in current large-scale commercial-quality gallium, can degrade or limit the electrical properties [9]. Conventional refining methods such as electrolytic refining [10,11], zone melting [12], vacuum distillation [13], and the single-crystal drawing method [14] have been applied for the preparation of high-purity gallium, and electrolytic refining is currently the most widely used high-purity gallium production technology in industry. However, these traditional methods have many problems, such as high energy consumption, environmental unfriendliness, low production efficiency, and inconvenient automation control. 
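All of the crystallization-based refining routes above ultimately rely on impurities partitioning unequally between solid and liquid. A toy model of repeated fractional crystallization, with an assumed effective distribution coefficient k_eff < 1 (the value below is purely illustrative, not from this work), shows why a 4N feed can in principle reach 6N-7N territory after a few passes:

```python
def impurity_after_passes(c0_ppm: float, k_eff: float, passes: int) -> float:
    """Idealized repeated crystallization: each pass retains a fraction
    k_eff of the impurity in the crystal, the rest leaving with the
    discarded residual liquid. Liquid trapping and finite crystal yield
    are neglected."""
    c = c0_ppm
    for _ in range(passes):
        c *= k_eff
    return c

# 4N feed (~100 ppm total impurities); k_eff = 0.1 is purely illustrative
for n in (1, 2, 3):
    print(n, "passes ->", impurity_after_passes(100.0, 0.1, n), "ppm")
```

The geometric decay per pass is the reason the repeated melt-crystallize cycles described below are effective, provided each pass really grows layer by layer rather than trapping residual liquid.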
Therefore, developing an advanced purification technology is of great significance to the contemporary semiconductor and solar industries. We have systematically investigated the purification and refining of gallium [15][16][17]. Based on the traditional crystallization purification method, a radial crystallization purification method based on seed-induced crystallization was proposed. The method has the advantages of low energy consumption, simple equipment, convenient operation, and a short production cycle. In this study, crystallization experiments were used to explore the effect of cooling temperature on the morphology of the crystal seed and the effects of cooling water temperature, cooling water flow rate, and the number of added seed crystals on the crystallization process. The process parameters were explored in order to optimize the purification process, determine the governing equation of the crystallization rate, and prepare high-purity (6N and 7N) gallium metal under the optimized conditions. Process Design Figure 1a shows the preparation process of 6N,7N high-purity gallium by the seed-induced radial crystallization purification method. The main steps and operation methods are as follows: (1) Cleaning the crystallizer and assembling the purification device First, the crystallizer was rinsed with ultrapure water (resistivity ≥ 16 MΩ·cm) to remove dust on the surface. Then, it was cleaned using an ultrasonic cleaning device containing ultrapure water for 2 h to remove residual contaminants on the surface. The purification device was assembled, as shown in Figure 1b. (2) Pretreatment of 4N crude gallium The 4N crude gallium, along with its packaging bottle, was placed on a hot plate, and the heating temperature was set to 335 K. After the gallium had melted, the molten crude gallium was transferred to a polytetrafluoroethylene beaker and mixed with 200 mL of 3 mol/L HCl at 335 K for 2 h. 
The hydrochloric acid was then suctioned out using a plastic pipette, 200 mL of 3 mol/L HNO3 was added to the beaker, and the mixture was stirred for 2 h. After the acid washes, the crude gallium was rinsed with ultrapure water three times. The hydrochloric acid and nitric acid used in the acid treatment were both of high-purity grade, and ultrapure water was used to prepare the acid solutions. (4) Circulating cooling water was introduced into the water jacket of the crystallizer. The cooling water was supplied using a low-constant-temperature water tank with a built-in circulating water pump; the temperature range was 263-373 K, the temperature control accuracy was ±0.1 K, and the flow rate of the cooling water was controlled using a glass rotameter. (5) When the temperature of the liquid gallium dropped to the critical point of crystallization, crystal seeds were added, and the cooling water was circulated. The seed crystals were prepared before the start of the purification experiment using 7N gallium as the raw material, as follows: a polytetrafluoroethylene (PTFE) beaker containing molten 7N gallium was placed in a low-constant-temperature water bath to cool and crystallize, and the liquid gallium was continuously stirred with a PTFE rod to disperse the crystal nuclei and improve the nucleation rate. Crystallization of the liquid gallium was observed during stirring, and when a grain of the desired size (0.5 cm) had formed, it was picked out with PTFE tweezers for later use. (6) When the liquid gallium had crystallized to a preset crystallization ratio, the introduction of cooling water was stopped, and the residual liquid gallium was discharged from the crystallizer. (7) A three-way switch was switched, and circulating hot water was introduced into the cooling/melting zone of the crystallizer. 
After the solid gallium had completely melted, the three-way switch was switched, and the circulating cooling water was reintroduced into the cooling/melting zone. The hot water was supplied using a constant-temperature water tank with a built-in circulating water pump; the temperature range was 278-373 K, and the temperature control accuracy was ±1 K. (8) Steps (4)-(7) (as shown in Figure 1c) were repeated for a predetermined number of crystallizations, and after the completion of purification, the product quality was determined. Detection Method In the experiment, the impurity contents of the 4N raw gallium material and the purified 6N,7N high-purity gallium were detected by high-resolution glow discharge mass spectrometry (HR-GDMS; Evans Materials Technology (Shanghai) Co., China), and the purity of the product was calculated by the difference method. Argon was used as the discharge gas. The mass spectrometry parameters were as follows: discharge current, 1.9 mA; discharge voltage, 1 kV; beam current of gallium ions, 1 × 10−6 mA; insulating layer, aluminum; and resolution, ≥3600. Before data collection, the ion source of the HR-GDMS was cooled to liquid nitrogen temperature (90 K) to reduce ion interference from the background gas. 
Then, the surface of the tested sample (0.2 × 2 mm²) was pre-sputtered for 5 min at liquid nitrogen temperature to remove contaminants from the sample surface. The pre-sputtering conditions were kept constant, and data collection was started. During data collection, the integration time was set to 80 ms. Effect of Cooling Temperature on Seed Morphology The appearance morphology of the seed crystals prepared at cooling temperatures in the range 265-295 K is shown in Figure 2, indicating that at 265 K, the solidified structure comprised many fine crystal grains, and the grains were interspersed with a large amount of liquid gallium. When the solidified structure was removed, a large amount of liquid gallium was attached to the surface, resulting in an extremely irregular seed crystal shape; at 265 K, the large degree of supercooling increased the growth rate of the crystal nuclei after nucleation, leading to the emergence of a large number of dendrites. The rapid growth of dendrites not only trapped a large amount of liquid phase inside the solidified structure, but also left many hollows on its surface. 
When the preparation temperature was 273 K, the solidified structure exhibited geometric polyhedron characteristics, indicating that with decreasing supercooling, the growth rate of the crystal nuclei decreased and the growth mode transitioned from dendritic growth to lamellar growth. When the preparation temperature was 278 K, the solidified structure showed a regular hexahedral shape, suggesting that with increasing temperature, the supercooling of the growth front was further reduced after the formation of the crystal nucleus, and the growth mode changed to vertical layered growth. When the preparation temperature was 295 K, the supercooling of the solid-liquid interface decreased further after the crystal nucleus formed, hindering the release of latent heat of crystallization. 
At this time, in order to release the latent heat of crystallization at a faster rate, the growth direction of the crystal nucleus changed to side growth, distorting its geometric shape. By comparing the morphological characteristics of the seed crystals prepared at the four temperatures, the optimal preparation temperature of the crystal seed was finally determined to be 278 K. Effect of Temperature of Cooling Water on Crystallization Process When the flow rate of the cooling water was 40 L·h−1 and its temperature was in the range 288-298 K, 2.9774 kg of crude gallium obtained by pickling pretreatment was cooled to the critical point of crystallization, followed by the addition of crystal seeds for 15 min. The corresponding crystal growth morphology is shown in Figure 3. 
Figure 3 shows that when the temperature of the cooling water was 288 or 290 K, the crystal growth mode of the liquid gallium was mainly dendritic after adding the crystal seed, and the crystal branches bridged with each other, trapping liquid gallium inside the crystal. This was because at lower cooling water temperatures, the temperature gradient inside the liquid gallium was higher, and the growth rate of the crystal was faster after adding the seed crystal. Although a positive temperature gradient was formed, the temperature at the front of the solid-liquid interface was higher in the radial direction of the crystallizer, hindering the release of latent heat of crystallization in that direction and thus slowing crystal growth in that direction. However, to facilitate the release of the latent heat of crystallization, the growth orientation of the crystal changed, and the crystal grew rapidly in the form of dendrites, eventually bridging the crystal branches and entrapping the liquid phase. The entrained liquid-phase impurities cannot be removed under this growth mode, which compromises the purification. When the temperature of the cooling water was 293 K, the liquid gallium grew into a single crystal after adding the seed crystals. 
In order to further analyze the growth law of the liquid gallium during crystallization, the crystal morphology at different times after adding the crystal seeds was investigated by the dynamic timing observation method, with a cooling water flow of 40 L·h−1 and a temperature of 293 K. The result is shown in Figure 4. 
Figure 4 shows that after the addition of the crystal seed, the gallium crystal block gradually grew with increasing crystallization time, and the crystal growth mode of the liquid gallium exhibited typical layer-by-layer push growth, indicating that the temperature-gradient environment formed by the cooling water at 293 K could release the latent heat of crystallization generated during crystal growth to the front of the solid-liquid interface, where it was transferred and released outward along the direction of the temperature gradient. This layer-by-layer crystal growth mode was beneficial to the enrichment of impurity elements from the solid-liquid interface into the liquid phase, thereby affording solid Ga metal of higher purity. The supercooling of the growth tip was the largest during crystal growth, so the liquid gallium atoms at the solid-liquid interface preferentially attached to the growth tip, and heat was transferred outward from the crystallized solid gallium along the positive temperature gradient in the crystallizer. Therefore, during the crystallization process, crystal growth always advanced in a pyramid-shaped, layer-by-layer manner. According to crystal growth kinetics and thermodynamics, the layer-by-layer growth mode was conducive to increasing the surface area of the crystals, facilitating the release of latent heat of crystallization and ensuring continuous, steady crystal growth during the crystallization process. 
Moreover, according to the separation and coagulation theory of impurities in the crystallization process, the layer-by-layer growth mode was favorable to the enrichment of impurity elements from the solid-liquid boundary into the liquid phase and avoided the inclusion impurities caused by liquid-phase envelopment under irregular crystal growth directions. 
Figure 4 shows that with increasing crystallization time, the pyramid tip of the crystal became more and more obvious, as did the layered steps of crystal growth, attributed to the fact that as crystallization continued, impurity elements constantly accumulated in the liquid phase, and the impurity content at the solid-liquid interface increased, which enhanced the probability of impurity elements attaching to the crystal growth tip. Owing to the differences in atomic radius and electronegativity between Ga and the impurity elements, the impurity atoms attached to the growth tip entered the Ga lattice or lattice gaps, causing growth defects in the Ga crystal [18][19][20]. This indicated that the removal of impurity elements decreased as crystallization progressed, consistent with the literature data [16]. Effect of Cooling Water Flow on Crystallization Process In a previous study, the effect of cooling water flow on the crystallization process was investigated [16]. The results revealed that when the cooling water flow rate was 30 L·h−1, the growth rate of the gallium crystal near the outlet of the crystallizer was slightly lower than that in other regions. 
When the cooling water flow rate was 50 L·h−1, the growth rate of the gallium crystal in the lower part of the crystallizer was slightly larger than that in the upper part, and the growth rate near the crystallizer inlet was the largest. When the cooling water flow rate was 40 L·h−1, the growth rate of the gallium crystals in all regions of the crystallizer was basically the same, and neither too-fast nor too-slow local growth was observed. In order to further explore the effect of this process parameter on the crystallization process, the crystal morphology of the liquid gallium at different cooling water flow rates was observed, and the results are shown in Figure 5. 
Figure 5 shows that at a cooling water flow rate of 40 L·h−1, the crystal morphology of the gallium exhibited a distinct "shell pattern" with uniform grain spacing. This indicated that at this flow rate, the gallium crystal grew in a layer-by-layer manner, which was favorable for impurity removal. At a cooling water flow rate of 30 L·h−1, the crystal growth rate at the outlet side of the crystallizer was slightly slower than that in other areas, and the crystal morphology was the same as that at 40 L·h−1, also displaying a distinct "shell pattern". This suggested that under this flow condition, the gallium crystals also grew in a layer-by-layer manner, which was favorable for impurity removal; however, the crystal growth there was slower than in the surrounding region, so the possibility of liquid-phase envelopment at this point as crystallization progressed cannot be ruled out. However, at a cooling water flow rate of 50 L·h−1, owing to the higher heat transfer efficiency at the bottom of the outlet side of the crystallizer, the driving force of gallium crystal growth was larger and the crystal growth rate was faster, changing the crystal morphology and producing a large number of irregular growth steps. 
It can be deduced that the crystals at the site were not completely in the way of layer-by-layer promotion growth, and the crystal growth process may be accompanied by dendrite or peritectic formation, leading to envelope liquid phase, entrap impurities, and reduce purification effect in solid gallium. Figure 5 shows that at a cooling water flow rate of 40 L·h −1 , the crystal morphology of gallium exhibited a distinct "shell pattern" with the uniform grain spacing. This indicated that under that flow rate, the gallium crystal grew in the layer-by-layer manner and was favorable to remove the impurity. At a cooling water flow rate of 30 L·h −1 , the crystal growth rate at the outlet side of the crystallizer was slightly slower than that in other areas, and its crystal morphology was the same as that at a cooling water flow of 40 L·h −1 , also displaying a distinct "shell pattern". This suggested that under this flow condition, the gallium crystals also grew in the layer-by-layer manner, which was favorable for the removal of impurity; however, the crystal growth rate here was slower than its surrounding region, and thus the possibility of enveloping the liquid phase with the progress of crystallization at this point cannot be ruled out. However, at a cooling water flow rate of 50 L·h −1 , due to the higher heat transfer efficiency at the bottom of the outlet side of the crystallizer, the driving force of gallium crystal growth was larger and the crystal growth rate was faster, changing the crystal morphology and presence of a large number of irregular growth steps. It can be deduced that the crystals at the site were not completely in the way of layer-by-layer promotion growth, and the crystal growth process may be accompanied by dendrite or peritectic formation, leading to envelope liquid phase, entrap impurities, and reduce purification effect in solid gallium. 
Effect of Seed Number on Crystallization Process
When the cooling water flow rate was 40 L·h−1 and the temperature was 293 K, the liquid gallium was cooled to the critical point of crystallization, and 3, 4, 5, or 6 crystal seeds were added. The morphology after crystallization reached a certain proportion is shown in Figure 6, indicating that the number of seeds added determined the shape of the uncrystallized region. When three crystal seeds were added, the uncrystallized region exhibited a triangular shape; when four crystal seeds were added, it exhibited a quadrilateral shape. However, with 3 or 4 seeds, the shape and size of the uncrystallized area were not consistent, showing a funnel shape with a large top and a small bottom. As crystallization continued, this easily led to intersection of the crystal growth at the bottom of the crystallizer, which caused envelopment of the liquid phase and entrapment of impurities, thus degrading the purification effect. When five seed crystals were added, the uncrystallized region presented a pentagonal shape, and the large-top, small-bottom problem improved. With six seed crystals, the uncrystallized region displayed a regular hexagonal shape of uniform size, which was most beneficial for controlling the overall growth direction of the crystal during the purification of crude gallium. Therefore, the optimal number of seeds was determined to be six when 4N crude gallium was purified using the self-made crystallizer.

Effect of Process Parameters on Crystallization Rate
In the actual crystallization solidification process of liquid gallium, the crystallization rate (i.e., the crystal growth rate of gallium after adding the crystal seed) depended on the supercooling degree of the solid-liquid interface. With the other process conditions held constant, the supercooling degree of the solid-liquid interface was a function of the cooling water temperature and flow. In the experiment, the relationships between the crystallization rate and the cooling water temperature and flow were measured by the control variable method, and empirical control formulas for the crystallization rate were obtained by analyzing the experimental data. To reduce experimental error and improve the accuracy of the empirical control formulas and their adaptability to the actual production process, each measurement was repeated four times and the average value was taken. The crystallization rate measured in the experiment was the average rate during the complete solidification of the liquid gallium after adding the crystal seed, calculated as follows:

v = m/t (1)

where v is the average rate, kg/h; m is the total mass of liquid gallium, kg; and t is the time required for the complete solidification of the liquid gallium, h. The effects of the temperature and flow rate of the cooling water on the crystallization rate are shown in Figure 7. Figure 7a shows that with increasing cooling water temperature, the crystallization rate gradually decreased, with an obvious linear relationship between the two. The empirical control formula for the effect of the cooling water temperature on the crystallization rate was obtained by fitting in Origin:

v(T) = −0.09T + 27 (2)

where T is the temperature of the cooling water, K; the linear correlation coefficient of the fit was R² = 0.997. Figure 7b shows that with increasing cooling water flow rate, the crystallization rate increased, with a significant exponential relationship between the two. The empirical control formula for the effect of the cooling water flow on the crystallization rate was obtained by fitting in Origin:

v(Q) = −96.73e^(−Q/4.94) + 0.66 (3)

where Q is the flow rate of the cooling water, L/h; the correlation coefficient of the fit was R² = 0.997.

Analysis of Purification Results
Based on the above studies, the optimum technological parameters for the crystallization purification of 4N raw gallium were determined as follows: seed preparation temperature, 278 K; cooling water temperature, 293 K; cooling water flow, 40 L·h−1; and number of seed crystals added, six. Combined with our previous research results [16], three purification schemes were determined under the optimized process parameters, as listed in Table 1. The impurity contents in the high-purity gallium prepared by the three schemes were tested and compared with the raw gallium, and the removal rates of the impurities were calculated; the results are shown in Table 2. Table 2 shows that for Scheme I, after purification, the mass fraction of Al impurity in the raw material fell below the detection limit of HR-GDMS, and the other six main impurities were also well removed.
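The two fitted control formulas can be cross-checked numerically. The sketch below (assuming the exponential in formula (3) reads e^(−Q/4.94), as reconstructed from the fitted constants) evaluates both empirical formulas at the optimized operating point (T = 293 K, Q = 40 L·h−1), where they agree at roughly 0.63 kg/h:

```python
import math

def rate_from_temperature(T_K):
    """Empirical fit (2): crystallization rate (kg/h) vs. cooling water temperature (K)."""
    return -0.09 * T_K + 27.0

def rate_from_flow(Q_L_per_h):
    """Empirical fit (3): crystallization rate (kg/h) vs. cooling water flow (L/h)."""
    return -96.73 * math.exp(-Q_L_per_h / 4.94) + 0.66

# At the optimized operating point (T = 293 K, Q = 40 L/h) the two
# independent fits give a consistent rate of roughly 0.63 kg/h.
v_T = rate_from_temperature(293.0)
v_Q = rate_from_flow(40.0)
print(f"v(T=293 K)  = {v_T:.3f} kg/h")
print(f"v(Q=40 L/h) = {v_Q:.3f} kg/h")
```

The near agreement of the two independent fits at the recommended operating condition is a useful internal consistency check on the regression.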
The removal rates were: Fe-87.1%, Pb-95.9%, Zn-89.9%, Mg-97.9%, Cu-98.8%, and Cr-93.3%, and the mass fraction of the main gallium metal in the refined high-purity Ga, calculated by the difference method, was 99.999987%. For Scheme II, the removal rates of the six main impurities were Fe-93.8%, Pb-98.8%, Zn-95.6%, Mg-99.6%, Cu-99.8%, and Cr-97.6%, and the mass fraction of the main Ga metal was 99.9999958%. For Scheme III, the removal rates of the six major impurities further increased, and the removal rates of Mg and Cu exceeded 99.9%. Although the removal rate of Fe was the lowest, it still reached 97%. The mass fraction of the main Ga metal was 99.9999958%.

Conclusions
In conclusion, the impurity removal of Ga was investigated in detail, and a radial crystallization purification method under seed crystal induction is proposed. The effects of the cooling temperature on the crystal morphology, as well as of the cooling water temperature, flow rate, and number of crystal seeds added on the crystallization process, were investigated. The optimum purification process was obtained, the control equations for the crystallization rate were determined, and high-purity (6N and 7N) gallium was prepared under these technological conditions. The main conclusions of this study are as follows: (1) The optimum process parameters for the crystallization purification of 4N raw gallium are as follows: seed preparation temperature, 278 K; cooling water temperature, 293 K; cooling water flow, 40 L·h−1; and number of seed crystals added, six. (2) The crystallization rate decreased linearly with increasing cooling water temperature and increased exponentially with increasing cooling water flow. The control formulas for the effects of the cooling water temperature T and flow Q on the crystallization rate v are v(T) = −0.09T + 27 and v(Q) = −96.73e^(−Q/4.94) + 0.66, respectively. (3) The three proposed purification schemes effectively removed the impurity elements.
When using Scheme I to purify the 4N crude gallium, high-purity gallium with a purity of 6N was obtained; with Schemes II and III, 7N high-purity gallium was obtained. The purities of the high-purity gallium prepared by Schemes I, II, and III were 99.999987%, 99.9999958%, and 99.9999958%, respectively. The seed-crystal-induced radial crystallization purification method proposed in this study has the advantages of simple operation, a straightforward process flow, low energy consumption, and environmental friendliness, and it readily permits automatic control of the purification process, providing a new approach for the large-scale industrial production of ultra-high-purity gallium.
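The purity and removal-rate figures above follow from two simple formulas: the difference method subtracts the summed impurity mass fractions from 100%, and the removal rate compares the impurity content before and after purification. A minimal sketch, using hypothetical impurity contents for illustration only (the per-element numbers below are not the measured values from Table 2):

```python
def purity_by_difference(impurities_ppb):
    """Main-metal mass fraction (%) via the difference method:
    100% minus the summed impurity mass fractions."""
    total_ppb = sum(impurities_ppb.values())
    return 100.0 - total_ppb * 1e-7  # 1 ppb = 1e-7 %

def removal_rate(c_raw_ppb, c_product_ppb):
    """Impurity removal rate (%) relative to the raw material."""
    return (1.0 - c_product_ppb / c_raw_ppb) * 100.0

# Hypothetical residual impurity contents (ppb), summing to 130 ppb.
residuals = {"Fe": 40, "Pb": 10, "Zn": 25, "Mg": 5, "Cu": 5, "Cr": 15, "other": 30}
print(f"Ga purity: {purity_by_difference(residuals):.6f}%")  # 130 ppb total -> 99.999987%
print(f"Fe removal: {removal_rate(310.0, 40.0):.1f}%")       # hypothetical raw Fe = 310 ppb
```

With 130 ppb of total residual impurities, the difference method reproduces a 6N-grade purity of 99.999987%, matching the order of magnitude reported for Scheme I.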
Emerging Technologies for Real-Time Intraoperative Margin Assessment in Future Breast-Conserving Surgery

Abstract
Clean surgical margins in breast-conserving surgery (BCS) are essential for preventing recurrence. Intraoperative pathologic diagnostic methods, such as frozen section analysis and imprint cytology, have been recognized as crucial tools in BCS. However, the complexity and time-consuming nature of these pathologic procedures still inhibit their broader applicability worldwide. To address this situation, two issues should be considered: 1) the development of nonpathologic intraoperative diagnosis methods that have better sensitivity, specificity, speed, and cost; and 2) the promotion of new imaging algorithms to standardize data for analyzing positive margins, as represented by artificial intelligence (AI), without the need for judgment by well-trained pathologists. Researchers have attempted to develop new methods or techniques; several have recently emerged for real-time intraoperative management of breast margins in live tissues. These methods include conventional imaging, spectroscopy, tomography, magnetic resonance imaging, microscopy, fluorescent probes, and multimodal imaging techniques. This work summarizes the traditional pathologic and newly developed techniques and discusses the advantages and disadvantages of each method. Taking into consideration the recent advances in analyzing pathologic data from breast cancer tissue with AI, the combined use of new technologies with AI algorithms is proposed, and future directions for real-time intraoperative margin assessment in BCS are discussed.

However, there is currently no established global standard for real-time and fast intraoperative margin management in BCS. Intraoperative pathologic methods, which include frozen section analysis and imprint cytology, are the traditional choices for intraoperative diagnosis during BCS.
[5] These methods have the potential to lower rates of positive margins (Figure 1). [6] However, in order to generalize the method, we need to solve several problems, such as the complexity and time-consuming nature of these pathologic procedures and the demanding workload placed on pathologists. Recently, researchers have been seeking new techniques, and several methods have emerged for real-time intraoperative management of breast margins using live tissue. These methods include conventional specimen radiography (SR), [7] intraoperative ultrasonography (IOUSG), [8] radio-frequency spectroscopy (MarginProbe device), [9] bioimpedance spectroscopy (ClearEdge device), [10] microcomputed tomography (micro-CT), [11] optical coherence tomography (OCT), [12] ex vivo magnetic resonance imaging (ex vivo MRI), [13] ultraviolet photoacoustic microscopy (UV-PAM), [14] microscopy with ultraviolet surface excitation (MUSE), [15] chemistry-based fluorescent probes, [16] and a multimodal imaging technique combining tissue autofluorescence and Raman spectroscopy with both macro (tissue-level) and micro (cell-level) detection. [17] In this Review, we describe the traditional pathologic and newly developed techniques. We discuss the advantages and disadvantages of each method. We highlight clinical needs and potential value in terms of margin management in breast cancer. Taking into consideration the recent advances in analyzing pathologic data in breast cancer tissue with deep learning models, [18] we propose the combined use of new technologies with artificial intelligence (AI) algorithms. We also discuss future directions and prospects for real-time intraoperative margin assessment in BCS.

Pathologic Methods
Frozen section analysis and imprint cytology are the traditional pathologic methods for real-time intraoperative margin assessment in BCS. [5] Many intraoperative examples have been reported for pathologic analysis.
These methods have the highest diagnostic accuracy in terms of both sensitivity and selectivity and are currently recognized as the most promising methods for lowering the rates of positive margins during BCS. [6] The reoperation rate for patients who undergo imprint cytology or intraoperative frozen section margin assessment is lower than that of patients who do not receive any intraoperative margin status assessment. [5e,19] In addition, the routine use of intraoperative frozen section analysis can be cost-effective for both the patient and the hospital. [20]

Frozen Section Analysis
In rapid frozen section analysis, breast tissue samples, i.e., stump specimens from the surgical resection, are embedded in optimal cutting temperature compound, frozen, and cut into slices. The slices are then put on a glass slide and fixed with paraformaldehyde for immunohistochemical staining. Next, light microscopy of sections stained with hematoxylin and eosin (H&E) is used to perform the pathologic evaluation. A significant advantage of H&E staining analysis of the frozen section is that this method can determine the presence of cancer in the surgical samples of interest as well as diagnose the type of various cancers. Thus, further surgery can be guided by whether morphological analysis shows that the sample consists of DCIS, invasive ductal carcinoma (IDC), ductal hyperplasia (DH), or normal breast gland (NBG).
Nevertheless, these methods are currently not used for BCS in hospitals worldwide. Reasons include complicated sample preparation, which requires an additional skilled technician to cut the specimens for frozen section interpretation, and tedious pathologic analysis that requires ≈30 min for a single assessment. [5b,c,e] If additional examination is needed, tremendous effort and time are necessary. A worldwide shortage of trained pathologists, which is more severe than the shortage of physicians in general, is another limiting factor for routine frozen section analysis as part of intraoperative margin assessment in BCS.

(Figure caption: frozen section analysis has been used for decades as the gold standard during breast-conserving surgery (BCS) but is complicated and time-consuming; the efficiency of newly emerging methods in the category column (rapid, simple, cost-effective, morphology, margin assessment) was evaluated by comparison with frozen section analysis; a recently reported in-cell reaction-based fluorescent probe diagnoses cancerous breast tissue morphology in a rapid, simple, and cost-effective procedure.)

Imprint Cytology
Imprint cytology analysis, on the other hand, is a simpler method used during BCS in some hospitals. Live tissue samples are rubbed onto a glass slide. The attached cells are immediately fixed with ethanol. Next, H&E or Papanicolaou staining is performed on the ethanol-fixed cells. Imprint cytology analysis is based on the idea that malignant cells will adhere to the slides, whereas adipose cells will not.
Imprint cytology analysis examines the entire surface of the resected tissue, unlike the spot checks that occur with frozen section analysis. Many reports on BCS show that imprint cytology can achieve diagnostic accuracy similar to that of frozen section analysis. [5a,d,e] However, this method can only determine the presence of cancer; it cannot analyze morphology. Rapid and accurate interpretation requires a professional trained in cytopathology in the operating room in addition to the regular surgical team. [21] The increase in operative time resulting from the pathologic procedure and the increased workload for pathologists inhibit the broader applicability of this method as an established global standard for intraoperative diagnosis. Thus, although frozen section analysis and imprint cytology are the most reliable methods currently available, only a limited number of hospitals in some developed countries actually use them for rapid intraoperative diagnosis during BCS.

H&E Pathologic Assessment by Deep Learning and AI Algorithms
To circumvent the shortage of and workload burden on pathologists and improve the diagnostic accuracy of intraoperative margin assessment in BCS, significant attention has been focused on automated deep learning algorithms. An exciting international competition (CAMELYON16) to assess the effectiveness of automated deep learning algorithms in diagnosing H&E sections of axillary lymph node metastases was conducted from November 2015 to November 2016. [18a] Thirty-two algorithms and 12 pathologists (with and without time constraints) were tested with 129 whole-slide images (49 with and 80 without metastasis); the task was to classify images as definitely normal tissue, probably normal tissue, equivocal, probably tumor, or definitely tumor.
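Classifier performance in such a task is typically summarized by the area under the receiver operating characteristic curve (AUC), which equals the probability that a randomly chosen positive slide receives a higher score than a randomly chosen negative one. A minimal sketch with toy scores (illustrative only, not CAMELYON16 data):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive outscores a
    randomly chosen negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy tumor-probability scores from a hypothetical slide classifier.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(auc(labels, scores))  # 11/12 ≈ 0.917
```

An AUC of 1.0 means perfect ranking of tumor over normal slides, and 0.5 means chance-level performance, which is why the reported values of 0.994 and 0.960 indicate near-perfect discrimination.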
The algorithms developed by Harvard Medical School and the Massachusetts Institute of Technology achieved the highest score (true-positive fraction, 72.4%), which was comparable to scores from pathologists without time constraints. It is noteworthy that the best algorithms, i.e., those with an area under the receiver operating characteristic curve (AUC) of 0.994, performed significantly better than pathologists under a time constraint for diagnosis (AUC, 0.810). The top five algorithms had a mean AUC of 0.960, comparable with pathologists without time constraints (AUC, 0.966). These data showed that the deep learning algorithms exhibited better diagnostic performance than the 12 pathologists taking part in a simulation exercise designed to mimic the routine pathology workflow for axillary lymph nodes, and performance comparable to that of an expert pathologist interpreting H&E-stained images under no time constraints. Thus, deep learning algorithms have significant potential to circumvent the problems associated with pathologic methods. However, other than for axillary lymph nodes in breast cancer, there have been no reports to date on using new algorithms to interpret pathologic information for margin assessment in BCS. Alternatively, these AI algorithms could be combined with other newly emerging rapid and convenient techniques that could give the same pathologic information for diagnosis during BCS in the future. Recently, Tsirigos and co-workers developed a deep learning model using publicly available whole-slide images in the Cancer Genome Atlas to accurately and automatically classify histopathologic images of non-small cell lung cancer from different cohorts collected at their institution.
[18b] They demonstrated that a convolutional neural network, such as Google's Inception v3, can be used to assist in the diagnosis of lung cancer from histopathologic slides, reaching sensitivity and specificity comparable to those of a pathologist. Furthermore, by analyzing only the pathology images, the network was also able to predict the most commonly mutated genes in lung adenocarcinoma. These findings suggest that deep learning and AI algorithms have the potential to predict gene mutations in various kinds of cancers.

Conventional Imaging Methods
Although BCS has been used as the primary treatment for early-stage breast cancer, more accurate techniques are needed to assess resection margins during surgery to avoid the need for re-excision and reoperation. Intraoperative specimen imaging methods such as conventional SR and IOUSG provide timely information on whether re-excision of a cavity shave margin is indicated during routine BCS.

Conventional Specimen Radiography (SR)
SR is used for immediate assessment of tissue samples following biopsy or surgical excision. Conventional SR involves X-ray imaging of excised tissue using mammography or a specimen radiography system (Figure 2A). SR is performed on a nonpalpable lesion to exploit the X-ray projection of the imaged tissue and produce contrast based on beam attenuation through the tissue. The standard of care involves using X-ray projections to localize the center of the visible tumor and verify that the specimen contains the observed lesion. Three methods commonly used for preoperative localization of nonpalpable tumors are radioactive seed localization, wire-guided localization, and radio-guided occult lesion localization.
[7d,e] Compared to conventional SR, which requires significant time for transporting the surgical specimen from the operating room to the diagnostic imaging room, intraoperative digital specimen mammography (IDSM), which can be performed in the operating room, enables immediate radiography of the specimen. One study comparing tumor localization and margin estimation with conventional SR and IDSM reported that IDSM did not significantly reduce overall operative times, but it did lead to a significant reduction in the positive margin rate. [7c,f] However, a clear margin width for specimen radiography has not yet been defined, and the low sensitivity and specificity associated with conventional SR methods remain problematic.

Intraoperative Ultrasonography (IOUSG)
IOUSG imaging of specimen margins allows for visualization of structural features and associated heterogeneity (Figure 2B). Surgeons locate the tumor in the breast using ultrasound and compare findings with preoperative digital images. After excision, the surgeon can use ultrasound to examine the specimen ex vivo to confirm that it resembles the candidate lesion targeted preoperatively. In a multicenter, randomized controlled trial, IOUSG-guided surgery significantly lowered the proportion of tumor-involved resection margins compared with palpation-guided surgery, thus reducing the need for re-excision, mastectomy, and boost radiotherapy. [7b,8a,c] Since mammography has difficulty imaging through dense breast tissue, IOUSG may be a better alternative than specimen radiography. Moreover, IOUSG is much faster and more cost-efficient than more commonly used radiography techniques. [8d] However, the necessity of larger margins, low sensitivity, and the requirement that specimens be scanned make IOUSG unlikely to be a full solution to the margin status problem.
Newly Emerging Diagnosis Methods
Given the disadvantages of pathologic analysis, newly emerging technologies have been developed that are advantageous in terms of speed, cost, and reliability, in addition to diagnostic accuracy. We describe methods based on computed tomography, MRI, spectroscopy, and chemical approaches, and discuss the advantages and disadvantages of these techniques for improving future real-time intraoperative margin assessment in BCS.

Optical Coherence Tomography
Optical coherence tomography (OCT) is the optical analog of ultrasound imaging, applying a light wave instead of a sound wave to an entire live BCS specimen. The application of near-infrared light yields a high-resolution, real-time, multidimensional image of a cancer tissue sample up to 2 mm beneath the tissue surface (Figure 3A). [12a] The OCT light penetrates the live specimen and is scattered back to the detector. Since cancer typically has a higher nuclear-to-cytoplasm ratio, higher cellular density, and higher nuclear density than the fibrous and fatty tissue of normal mammary regions, cancer tissue has higher scattering properties. Adipocytes are imaged at depths of up to 2 mm, but tumors are imaged to depths of 200 to 1000 µm. Thus, normal gland tissue and cancer tissue can be differentiated with OCT imaging. Boppart and co-workers have tested the surgical margins of lumpectomy specimens using their custom needle-based OCT probe, in which the depth of field of the lens (1.47 mm) closely matches the penetration depth of OCT in the entire BCS specimen. [12a] Their apparatus also includes a high-resolution scanner providing enhanced images. When OCT-based breast cancer surgical margin data were compared with data from the pathologic method, the sensitivity was 100% and the specificity was 82% (9 true positives, 9 true negatives, 2 false positives, and 0 false negatives).
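The quoted sensitivity and specificity follow directly from those confusion-matrix counts; a minimal sketch:

```python
def sensitivity(tp, fn):
    """Fraction of true positives among all actual positives."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of true negatives among all actual negatives."""
    return tn / (tn + fp)

# Counts reported for the needle-based OCT margin study:
# 9 true positives, 9 true negatives, 2 false positives, 0 false negatives.
tp, tn, fp, fn = 9, 9, 2, 0
print(f"sensitivity: {sensitivity(tp, fn):.0%}")  # 100%
print(f"specificity: {specificity(tn, fp):.0%}")  # 82%
```

Zero false negatives drives the 100% sensitivity, while the two false positives among eleven margin-negative cases give 9/11 ≈ 82% specificity.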
These results show the potential of OCT imaging, but further examples and applications of OCT have not been reported. In addition, surgeons need training to be able to distinguish nonsuspicious from suspicious areas for margin management using OCT images, with final histology as the reference standard. [12b,c] Imaging protocols and evaluation criteria need to be standardized for real-time intraoperative margin assessment in BCS.

Microcomputed Tomography (Micro-CT)
Micro-CT is a promising method for measuring the size of tumors in three dimensions in entire live BCS specimens (Figure 3B). Smith and co-workers used a tabletop micro-CT device, Skyscan 1173 (Skyscan, Kontich, Belgium), to measure the size of tumors in 50 invasive breast cancer specimens from 50 patients (42 IDC, 6 invasive lobular carcinoma (ILC), and 2 other invasive cancers). [11b] To measure accuracy, they compared the micro-CT data with data from preoperative mammography, ultrasound, MRI, and pathologic analysis (H&E staining). Compared with the largest dimension of the tumor on pathologic analysis, micro-CT had the best correlation coefficient (r = 0.82, p < 0.001), followed by MRI (r = 0.78, p < 0.001), ultrasound (r = 0.61, p < 0.001), and mammography (r = 0.40, p < 0.01). In other words, mammography and ultrasound underestimated the largest tumor dimension, while MRI and micro-CT overestimated it more frequently. Moreover, micro-CT could provide 3D shape analysis with sufficient spatial resolution. Thus, it has the potential to be used as a predictor of which margins are most likely to be positive. [11a] Future studies could make micro-CT technology applicable for brief intraoperative margin assessment, although such macroscale analysis cannot diagnose the detailed morphological features of various cancers, which is the most critical capability for which new technologies should be developed for BCS.
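The correlation coefficients reported in the micro-CT comparison are Pearson r values, which can be computed directly from paired measurements; a minimal sketch (the measurement pairs below are hypothetical, for illustration only):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical largest-tumor-dimension measurements (mm):
# pathology (reference) vs. micro-CT for six specimens.
pathology = [12, 18, 25, 9, 30, 15]
micro_ct = [14, 19, 27, 10, 33, 14]
print(f"r = {pearson_r(pathology, micro_ct):.2f}")
```

An r near 1 indicates that the imaging modality tracks the pathologic reference closely, which is why micro-CT's r = 0.82 compares favorably with mammography's r = 0.40.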
Ex Vivo Magnetic Resonance Imaging (Ex Vivo MRI)

MRI has the potential to reveal characteristic pathologic features of both benign and malignant breast and lymphatic tissue (Figure 3C). [13b] Agresti and co-workers have reported that ex vivo MRI of entire live BCS specimens is a promising method for intraoperative diagnosis. [13a] They injected gadolinium-diethylenetriaminepentaacetic acid (Gd-DTPA) as the MRI contrast reagent in 39 patients with breast cancer 1 min before skin incision. After BCS was conducted with the support of preoperative MRI, the surgical specimens were further analyzed using ex vivo MRI with a dedicated surface coil and spectral attenuated inversion recovery sequences for suppression of fat signal intensity. All MRI-enhancing lesions were included within the surgical specimen and efficiently visualized with ex vivo MRI. A significant advantage of ex vivo MRI is that the signals in the surgical sample were enhanced compared with preoperative MRI signals. Ex vivo MRI visualized tumor regions more clearly than NBG and benign lesions. It is noteworthy that the 12 malignant tumors, including tumors from breast cancer type 1 susceptibility protein (BRCA1) mutation carriers, were all detected by ex vivo MRI but undetected with conventional preoperative imaging. It should be noted again that MRI is an imaging modality for detecting macro-sized cancers. The long scan times for MRI and the high cost of the equipment might limit the use of this method in settings such as small hospitals or developing countries. Nevertheless, an established procedure for re-enhancing breast lesions within a surgical specimen would be a powerful technique for intraoperative diagnosis during BCS.

Radiofrequency Spectroscopy: MarginProbe Device

MarginProbe (Dune Medical Devices Ltd, Caesarea, Israel) was developed to measure the local electrical properties of lumpectomy margins in the radiofrequency range (Figure 4A).
Such electrical properties depend on the membrane potential, nuclear morphology, cellular connectivity, and vascularity of live tissues. Thus, cancerous and normal regions can be efficiently discriminated. The threshold between positive and negative margins has already been determined by comparing data to pathologic results. Therefore, if applicable, surgeons could use this method during routine operations. Practically, the six surfaces of a main lumpectomy specimen can be measured with MarginProbe within 20 min (five to eight measurements for each surface). Many clinical trials have been completed with the MarginProbe device, including the MAST study in Israel, the US Pivotal Study in the United States, and a multicenter study in Germany. [9a,b] Schnabel and co-workers conducted prospective clinical trials on real-time intraoperative assessment of lumpectomy margins in 596 patients with breast cancer. [9c] After removing the margins during surgery, patients were randomized to the device or control arms. In the device arm, MarginProbe was used to examine the main lumpectomy specimen and guide additional direct excision of positive margins. In the control arm, the cancer regions to be removed were evaluated by surgeons without any devices, the current standard of care in most hospitals in America and Europe. The false-negative rates were 24.8% and 66.1% and the false-positive rates were 53.6% and 16.6% in the device and control arms, respectively. Based on this intraoperative analysis, 62% of the positive main specimens were addressed in the device arm compared with 22% in the control arm (p < 0.001). As a result, 19.8% of patients in the device arm underwent a re-excision procedure compared with 25.8% in the control arm. As the authors note, the adjunctive use of the MarginProbe device during BCS would help with intraoperative cancer assessment, thus reducing the need for re-excision. However, while performing BCS, the surgeon should also consider adverse cosmetic effects.
These studies have led to Food and Drug Administration (FDA) approval of MarginProbe as a device to help surgeons identify positive margins. It is approved for use in the United States to assess the adequacy of surgical breast margins intraoperatively. However, the MarginProbe device, which works based on user-guided spot scanning, also has severe drawbacks such as low sensitivity, low specificity, and high false-positive rates. All the new technologies described to date (OCT, micro-CT, ex vivo MRI, and MarginProbe) can measure the macro-size of tumor tissues, but cannot analyze cancer morphology or provide accurate localization of the lesion in live tissues. Regarding practical use during intraoperative BCS, it is necessary to detect cancer at the micro-size level in lumpectomy margins, which can be accomplished with standardized pathologic methods. The following sections discuss trials addressing these challenges.

Bioimpedance Spectroscopy: ClearEdge Device

ClearEdge is a handheld portable imaging device that uses bioimpedance spectroscopy to detect differences in the tissue dielectric properties of the resected specimen during BCS (Figure 4B). This device performs a baseline measurement from the patient's NBG and uses the normalized data to scan all margins of the excised tissue specimen. A randomized trial of intraoperative margin status assessment during BCS reported that the re-excision rate was lower for patients treated with ClearEdge versus specimen radiography. In addition, ClearEdge can complete a full scan in less than 5 min. Bioimpedance spectroscopy is promising and straightforward. However, the lack of sensitivity and specificity still needs to be addressed. [10]

Raman Spectroscopy Combined with Autofluorescence Microscopy

Raman spectroscopy measures the vibrational frequencies of molecules in tissues that are excited by a laser.
Although Raman spectroscopy has been used to diagnose breast cancer with high sensitivity and specificity, one drawback of this method is the amount of time required. Raman spectroscopy cannot image small areas of residual cancer across the full tissue surface of BCS specimens with sufficient accuracy within the limited timeframe possible for intraoperative diagnosis. Notingher and co-workers recently developed a multimodal imaging technique combining tissue autofluorescence (excitation at 405 nm, detection at 450-520 nm) and Raman spectroscopy (excitation at 785 nm, Raman shift detection at 600-1800 cm−1), which they called multimodal spectral histopathology (Figure 5). [17b] They extensively optimized the sampling and data processing algorithms to use autofluorescence images to guide Raman measurements and achieve high spatial and spectral information in just 12-24 min, even when analyzing large tissue surfaces up to 4 cm × 6.5 cm. Analysis of 121 surgical margin specimens from 107 patients, although not under real-time intraoperative assessment conditions, could discriminate IDC and DCIS from NBG with 95% sensitivity and 82% specificity. They reported that cancer lesions smaller than 1 mm² could be analyzed. However, these analyses used a low-power field, and detailed morphological analysis with a high-power field, i.e., micro-sized analysis, has not been reported. Mahadevan-Jansen and co-workers investigated the feasibility of a 3D scanner that relies on Raman spectroscopy to assess all of the margins of a resected specimen within a clinically feasible time. They demonstrated the potential of this device for automated breast tumor margin assessment, which could minimize repeat invasive surgeries. [17a]

Figure 4. A-i) Intraoperative use of the MarginProbe device with a breast lumpectomy specimen. Measurement is performed by applying the tip of the probe to a point on the resected lumpectomy specimen. During each measurement, radiofrequency signals are transmitted from the probe to the tissue, reflected, and collected by the console. The reflected signals are analyzed based on an algorithm. The device readings are displayed as "positive" or "negative." A-ii) The MarginProbe device output display for a typical patient. Data accumulate on the screen from left to right and from top to bottom. The most recent measurement is highlighted on the top left. Blue bars represent negative readings and red bars represent positive readings. Yellow frames and labels outline the margins from which readings are obtained. Adapted with permission. [9a] Copyright 2008, Elsevier. B-i) The ClearEdge device is a portable, battery-operated, handheld imaging device equipped with a sterile head for use in a single patient. B-ii) ClearEdge color-coded image display. Each scan produces a color-coded image on the device's screen display. Green and yellow pixels are considered to be characteristic of normal tissue and red pixels are considered to indicate abnormal tissue. Adapted with permission. [10] Copyright 2016, Elsevier.

Ultraviolet-Photoacoustic Microscopy (UV-PAM) and Microscopy with Ultraviolet Surface Excitation (MUSE)

Recently, an innovative microscopic technique has been developed to image cellular structures and their organization in tissue samples, i.e., at the morphological and cellular levels. Photoacoustic tomography is a rapidly growing imaging modality that can provide volumetric images with high resolution. Photoacoustic tomography, which is based on optical absorption contrast with appropriate wavelength illumination, is highly specific for a particular target within cells. Cheng and co-workers reported a multispectral photoacoustic tomography system for breast cancer margin management using fat and hemoglobin as contrasts. The system can analyze tissue depths of ≈3 mm with an axial resolution of ≈125 µm.
[14a] Wang and co-workers developed PAM combined with UV laser illumination. [14b] The advantage of UV laser illumination is that it highlights the nuclei, so that their PAM technique could provide images similar to those obtained with H&E staining, a conventional and reliable method used for real-time intraoperative margin assessment in BCS (see Section 2.1). In their house-made photoacoustic microscopy device depicted in Figure 6A, a UV laser beam was designed to focus onto breast tissue specimens. Laser-induced rapid thermoelastic expansion induces acoustic waves that can then be transduced into electric signals, amplified, and recorded. With optimized experimental parameters and automated algorithms, this method could accurately compute and diagnose the size, internuclear distance, and packing density of nuclei. It provides images similar to those obtained with pathologic techniques (Figure 6B). However, the analysis requires a few hours; further optimization is necessary before use in real-time intraoperative margin assessment in BCS. Levenson and co-workers reported that MUSE could also generate shape and color-contrast information (Figure 6C). [15] MUSE exploits the low penetration depth of UV light to excite fluorophores at the surface of stained tissue. MUSE relies on UV wavelengths of ≈280 nm to restrict the excitation of conventional fluorescent stains to tissue surfaces. MUSE has the potential to improve the efficiency of patient care in both state-of-the-art and low-resource settings and to provide opportunities for rapid histology.

Chemical Probes

Chemists are currently actively investigating chemical probes to diagnose cancer based on fluorescence properties that are highly sensitive and use the optimal wavelength to avoid background signals from live samples based on rational design. In general, these strategies use the fluorescence switching properties of chemical probe structures, which are activated by cancer-specific enzymes.
In addition, an alternative approach of applying synthetic transformation selectively in cancer cells has also been reported recently.

Figure 6C. Images from the cut surfaces of formalin-fixed invasive lobular carcinoma (ILC) tissues briefly stained with Hoechst stain, rhodamine, eosin, and propidium iodide, captured with a color camera after white balancing. Middle: Images converted to virtual H&E staining. Right: digital images captured using a whole-slide scanner from conventional slides with H&E staining of the same specimens after paraffin embedding and sectioning. Scale bar = 100 µm. Adapted with permission. [15] Copyright 2017, Springer Nature.

Fluorescence "Switch-On" Probe by Cancerous Enzymatic Activity

Urano and co-workers are pioneers in the development of fluorescence "switch-on" probes. They have designed various innovative fluorogenic probes that can be activated by a cancer-specific enzyme. [16a] For example, γ-glutamyltranspeptidase (GGT) is overexpressed on cancer cell surfaces but not on normal cell surfaces. They developed γ-glutamyl hydroxymethyl rhodamine green (γGlu-HMRG), which is fluorescently inactive but can be selectively activated by an enzymatic reaction on cancer cell surfaces (Figure 7). γGlu-HMRG has been used to discriminate IDC and DCIS from NBG. The sensitivity and specificity of this method were 92% and 94%, respectively. This method could detect cancer regions smaller than 1 mm. The fluorescence signals were obtained within 5 min after treatment of live tissues. However, their method relies on a time-dependent increase in fluorescence. Thus, sample-to-sample reproducibility, i.e., cancer selectivity affected by high fluorescence background and fluorescent spreading, needs to be improved before use during actual BCS. More importantly, the fluorescently activated molecule is generated on the cell surface, not in the cancer cells.
Thus, the fluorescence gradually spreads during the analysis and the fluorescence images become ambiguous. They compared their data with those of pathologic analysis, but morphological information has not been deduced from their fluorescence images. These methods are thought to be useful for actual BCS; however, since the method cannot be used to determine cancer morphology in live tissues, it is unlikely to replace pathologic methods used on frozen tissue specimens.

Fluorescence "Switch-On" Probe Activated upon Cancerous Protein Binding

Fan et al. developed a near-infrared (NIR) fluorescent probe that can be efficiently activated by interaction with neutral cholesteryl ester hydrolase 1 (KIAA1363), which is highly overexpressed in various invasive breast cancers (Figure 8). [16b] The AX11890 compound, which is the ligand of KIAA1363, is linked with Nile blue (NB) to quench the fluorescence of AX11890 via an efficient photoinduced electron transfer (PET) mechanism in aqueous buffer solution. However, when the "silent" probe interacts with KIAA1363, AX11890 becomes separated from the NB dye by a certain distance and recovers NIR fluorescence ("switch-on"). This probe was found to be highly selective for KIAA1363 among the biomolecules investigated. The fluorescence recovery was quick, with a detection limit of 0.58 µg mL−1 (3δ/k). The probe was used to selectively stain human breast cancer tissue within 5 min. Red fluorescent signals could be obtained at a depth of 0-980 µm with excitation at 635 nm (emission recorded at ≈700 nm). This probe has excellent NIR fluorophore properties for assessing tumors inside thick tissue samples as well as reducing background signals in human tissue samples. However, the ambiguous and generalized expression of the KIAA1363 protein in heterogeneous cancer tissues from human patients does not motivate surgeons to use this method during BCS.
It should be noted that, during BCS, various cancerous tissues from patients with breast cancer need to be stained or imaged, not tissues from cell lines or animal models in experimental laboratories that always express specific tumor-associated enzymes or antigens (in the case of antibodies). Previous trials of immunostaining clinical tumor samples have failed for this reason. Currently, none of the newly emerging fluorescent probes, which rely on cancer-specific proteins, antigens, or enzymes, have been successfully applied to real-time intraoperative margin management in BCS.

New Modality Involving Fluorescent Labeling of Live Cancerous Tissues through an "In-Cell" Cascade Reaction with Acrolein

As discussed above, the main problem associated with fluorescence-based methods is generalizability to human patients, because the target enzymes or proteins that activate the fluorescent switch might not be expressed in all types of cancer. In addition, the activated fluorescence on cancer tissues can only be evaluated in a time-dependent manner, which leads to issues with reproducibility. Therefore, unlike conventional H&E sectioning analysis, morphological analysis of cancer, which is the critical analysis in BCS, cannot be performed, although the fluorescent techniques are up-and-coming in terms of sensitivity and specificity. On the other hand, a novel fluorescence-based concept using acrolein-initiated cascade reactions in live breast cancer tissues has recently been developed. [16c,22] Acrolein, a highly toxic α,β-unsaturated aldehyde, [23] has been reported to be a biomarker associated with various types of disorders related to oxidative stress. Acrolein is produced through the enzymatic oxidation of polyamines [24] and during reactive oxygen species (ROS)-mediated oxidation of highly unsaturated lipids. [25] The authors developed an "in-cell" reactivity probe 1 for detecting acrolein based on an acrolein/azide click reaction (Figure 9A).
[26] In contrast to previously reported methods for detecting acrolein, [27] this method can sensitively detect the presence of acrolein within live cells even at the nM level. The azide functionality in probe 1 participates smoothly in the 1,3-dipolar cycloaddition reaction with acrolein, generated by cells, to give triazoline derivatives (Figure 9B). Under conditions of lower intracellular pH, the triazoline derivatives then decompose into the corresponding diazo compounds, which are immediately and nondiscriminately conjugated with cell constituents to anchor fluorescence within the cell. Therefore, in clear contrast to previously developed fluorescence switch-on methods, cancer can be labeled and imaged at the cellular level. This new method is noteworthy for its simplicity. It has been reported that cancer cells are under oxidative stress conditions associated with increased production of ROS. [28] Based on a previous report that cancer cells produce acrolein, [27] we have used probe 1 to label different cancer cells (Figure 10). [16c,22] In other words, cancer cells produce a significant amount of acrolein, and this cellular acrolein can be used as a new cancer marker. We then applied probe 1 to live tissue samples (20 IDC, 10 DCIS, 30 NBG, and 5 DH) from patients with breast cancer who underwent breast surgery at Osaka University Hospital, Osaka, Japan, from March 2017 to March 2018. The live tissues were cut to a flat surface, immersed in a 20 × 10−6 M solution of probe 1 for 5 min, and rinsed with buffer (Figure 11A). [16c] The resulting tissues were then directly analyzed using a Keyence BZ-X710 fluorescence microscope equipped with an optical sectioning system to obtain both gross images and double fluorescence-stained images. [29] The mean fluorescence intensities of the IDC and DCIS samples were statistically significantly higher than those of the DH and NBG samples (Figure 11B), with a sensitivity and specificity of 97% and 97% for tumors, respectively.
Since all cancerous cells produce a high level of acrolein, probe 1 could label breast cancer tissues with a similar mean fluorescence intensity, regardless of subtype such as estrogen receptor (ER), progesterone receptor (PR), or human epidermal growth factor receptor 2 (HER2) status. Of note, when the fluorescence-labeled IDC and DCIS images were magnified by 200×, the morphology of the cancers could be imaged and discriminated (Figure 11C). This method could visualize the morphology of IDC and DCIS as well as ILC, DH, and papilloma. Blind testing of the fluorescent images by pathologists with an anonymized data set showed good agreement with H&E-stained images of the same sections. This notable feature of probe 1, which enables the visualization of cancer morphology, comes from its ability to selectively label the cellular contents only in cancer cells within live tissues (Figure 9B). Thus, this click-to-sense method, which is not a conventional pathologic method, can accurately identify residual carcinoma by identifying morphology, even at the cellular level. Moreover, this chemistry-based diagnosis method only requires 5 min. With this promising method, live tissue morphology in patients with breast cancer can be easily identified, providing future support for BCS.

Figure 9. A) Phenyl azide reaction with acrolein. B) "Click-to-sense" method and mechanism. Fluorescence-labeled phenyl azide smoothly reacts with acrolein generated by cancer cells through a 1,3-dipolar cycloaddition reaction (azide/acrolein click reaction). The triazoline decomposes into diazo compounds, which react with cellular constituents to anchor the fluorescence label within cells. The concentration of acrolein is analyzed based on the fluorescence readout at the whole-cell level. FL = tetramethylrhodamine (TAMRA).

Meta-Analysis: What Method Is Better?
A systematic review of 106 scientific papers from 1995 to July 2016 concluded that frozen section analysis, imprint cytology, and ultrasound-guided lumpectomy can lower positive margin rates compared with palpation guidance. [6c] However, the effects of specimen radiography on positive margin rates have not been evaluated to date. Cavity shave margins and the MarginProbe device similarly decrease the positive margin rate, but at the same time, we should take into account that these methods might be associated with negative cosmetic effects. [6c] Alternatively, a meta-analysis of 35 studies on intraoperative margin assessment in BCS, from a search conducted in January 2016, focused on cancer sensitivity and specificity and the AUC for frozen section analysis (n = 9), imprint cytology (n = 11), IOUSG (n = 4), SR (n = 9), and OCT (n = 3). [6b] Frozen section analysis (sensitivity, 86%; specificity, 96%; AUC, 0.96) and imprint cytology (91%, 95%, 0.98) have excellent diagnostic accuracy compared with IOUSG (59%, 81%, 0.78), SR (53%, 84%, 0.73), and OCT (85%, 87%, 0.88). OCT seems to have high diagnostic accuracy, but data are scant (n = 3). A global standard for this method and its analysis should be established for wider applicability in intraoperative margin assessment. [6b]

Summary and Future Prospects

As discussed in the introduction, intraoperative pathologic diagnostic techniques such as frozen section analysis and imprint cytology have gradually gained recognition as conventional tools in BCS that lower the positive margin rate and eliminate repeat surgery, as demonstrated by a number of studies and meta-analyses. However, in practice, only a small number of hospitals, mainly in developed countries, routinely use these methods. Unfortunately, these methods cannot be used worldwide with BCS.
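The accuracy figures from the meta-analysis [6b] quoted above are easier to compare when tabulated and ranked by AUC; a small Python sketch using only the values quoted in the text:

```python
# Diagnostic accuracy figures from the 2016 meta-analysis [6b]:
# method -> (sensitivity, specificity, AUC). Ranked by AUC, descending.
methods = {
    "Frozen section analysis": (0.86, 0.96, 0.96),
    "Imprint cytology":        (0.91, 0.95, 0.98),
    "IOUSG":                   (0.59, 0.81, 0.78),
    "SR":                      (0.53, 0.84, 0.73),
    "OCT":                     (0.85, 0.87, 0.88),
}

for name, (sens, spec, auc) in sorted(methods.items(), key=lambda kv: -kv[1][2]):
    print(f"{name:25s} sens={sens:.0%} spec={spec:.0%} AUC={auc:.2f}")
```

Sorting makes the article's point explicit: the two pathologic methods (imprint cytology and frozen section analysis) top the ranking, with OCT third.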
Tedious pathologic procedures, e.g., making frozen sections and collaborating with pathologists in close proximity to operating rooms, require significant time, ≈30 min for a single analysis. This workload does not motivate surgeons to perform additional intraoperative diagnostic procedures. These problems have long been the bottleneck for BCS in almost all hospitals, regardless of how many patients with breast cancer are waiting for treatment. To circumvent such problems, provide patients with improved surgical options and outcomes, and avoid re-excision, two points should be taken into consideration. One is the development of nonpathologic methods for intraoperative use with consideration of resolution, sensitivity, specificity, speed, convenience, and cost. Another is the development of new imaging algorithms to standardize data for analyzing positive margins, as represented by automatic deep learning and AI algorithms without the need for judgment by well-trained pathologists. Regarding the new imaging methods, in this Review we introduced various techniques and trials involving OCT, [12] micro-CT, [11] ex vivo MRI, [13] MarginProbe, [9] ClearEdge, [10] UV-PAM, [14] MUSE, [15] a multimodal imaging technique combining tissue autofluorescence and Raman spectroscopy, [17] and chemistry-based fluorescent methods. [16] It is essential to mention that analyzing human breast cancer tissues is more complicated than analyzing other cancers. For instance, gastrointestinal cancers, such as gastric cancer, colorectal cancer, and pancreatic cancer, mainly consist of adenocarcinoma. By contrast, breast cancer consists of multiple histopathologic features and heterogeneous borderline lesions. Analyzing and classifying breast cancer pathology is undoubtedly more important and more challenging than classifying other cancers.
Methods that do not evaluate the morphology of breast cancer tissue during BCS, which corresponds to visualizing cancer cells in live tissues, can hardly be used for intraoperative diagnostic purposes. While the apparatus and techniques for visualizing the size of cancer lesions at the macro level can be used for intraoperative diagnosis of peritoneal metastasis during laparotomy for gastrointestinal cancer, it is difficult to apply them to breast cancer surgery. Thus, despite being significant technological advances, these new techniques and time-dependent switch-on fluorescence probes need improvement. From this perspective, our click-to-sense method can rapidly discriminate between cancer and normal cells, requiring only staining of live tissues for 5 min. Moreover, it can visualize cancer morphology and provide localization in a way almost equivalent to frozen sectioning. [16c] Our method is not affected by sample conditions, fluorescence background, or fluorescence spreading. Moreover, it is not dependent on the timing of enzymatic reactions or enzyme expression by cancer cells since it focuses explicitly on the overexpression of endogenously generated acrolein in various cancer cells. The rapid in-cell cascade reactions selectively anchor the fluorescence label onto the cellular constituents in tumors. Our chemistry-based method has the potential to become a new, highly selective margin management method for live tissues; it could be used as a discriminative, low-cost, and easy-to-perform method for cancer sensing during surgery. The technique will be confirmed in a prospective clinical study including intraoperative assessment of resection stumps in patients with breast cancer. The clinical significance of our probe in evaluating morphological and pathologic features deserves further investigation in hospitals worldwide. It would also be useful to consider applying automatic deep learning and AI algorithms.
As discussed in Section 2.3, AI has shown potential usefulness as a diagnostic tool in pathologic diagnosis; AI outperformed pathologists in diagnosing even small amounts of cancer in frozen sections that had spread to lymph nodes in patients with breast cancer. Once efficient intraoperative techniques have been developed, AI can automatically and rapidly evaluate cancer margins. Of note, our click-to-sense method could be used to diagnose cancer tissues during BCS more quickly than the conventional H&E method. Combining this method with AI, namely, analyzing the fluorescent morphological features with established AI algorithms, would lead to the next generation of real-time intraoperative assessment. While intraoperative diagnosis of breast cancer, a heterogeneous disease, is one of the most challenging endeavors, close collaboration among scientists in different fields is worthwhile for improving our quality of life.
Star formation shut down by multiphase gas outflow in a galaxy at a redshift of 2.45

Large-scale outflows driven by supermassive black holes are thought to have a fundamental role in suppressing star formation in massive galaxies. However, direct observational evidence for this hypothesis is still lacking, particularly in the young universe where star-formation quenching is remarkably rapid [1-3], thus requiring effective removal of gas [4] as opposed to slow gas heating [5,6]. Although outflows of ionized gas are frequently detected in massive distant galaxies [7], the amount of ejected mass is too small to be able to suppress star formation [8,9]. Gas ejection is expected to be more efficient in the neutral and molecular phases [10], but at high redshift these have only been observed in starbursts and quasars [11,12]. Here we report JWST spectroscopy of a massive galaxy experiencing rapid quenching at a redshift of 2.445. We detect a weak outflow of ionized gas and a powerful outflow of neutral gas, with a mass outflow rate that is sufficient to quench the star formation. Neither X-ray nor radio activity is detected; however, the presence of a supermassive black hole is suggested by the properties of the ionized gas emission lines. We thus conclude that supermassive black holes are able to rapidly suppress star formation in massive galaxies by efficiently ejecting neutral gas.

log M*/M⊙ = 10.9, and is one of the 17 objects that are classified as quiescent according to the rest-frame UVJ colors [13]. This is confirmed by the JWST/NIRSpec spectrum, shown in Figure 1, which covers the rest-frame range from 3000 Å to 1.4 µm and features several stellar absorption lines and relatively weak emission lines from ionized gas.
We fit stellar population models to the observed spectroscopy together with the photometry, which covers a wider wavelength range from the ultraviolet to the mid-infrared. The best-fit spectral model is shown in red in Figure 1, and includes stellar light, absorption by dust, and re-emission by dust at longer wavelengths, but does not include the contribution of gas. By analyzing the difference between the data and the model we detect several emission lines due to warm ionized gas: [O II], [Ne III], Hβ, [O III], Hα, [N II], [S II], [S III], He I. We also detect three absorption lines, Ca II K, Ca II H, and Na I D, which are not correctly reproduced by the stellar model. Unlike the other absorption lines visible in the spectrum, these three are resonant lines, meaning that they can be produced both by stars and by intervening gas, because they involve transitions out of the atomic ground level. The detection of extra absorption in these lines thus reveals the presence of cold gas. Since the energy required to ionize Na I and Ca II (5.1 and 11.9 eV, respectively) is smaller than that required to ionize hydrogen (13.6 eV), these lines probe gas in the neutral atomic phase (i.e., where hydrogen atoms are neutral). We fit a Gaussian profile to each gas emission and absorption line, obtaining a measure of their flux, line width σ, and velocity offset ∆v with respect to the systemic velocity of the galaxy. We show a selection of lines in the left column of Figure 2: a remarkable diversity of kinematics is apparent, with a wide range of measured line widths and velocity offsets. The presence of blueshifted lines, both in absorption and in emission, with velocity offsets of hundreds of km/s is the unmistakable sign of a gas outflow.
We use a simple model of a biconical outflow, shown in the right column of Figure 2, to qualitatively explain the kinematics of all the observed lines. The five rows of the figure illustrate five different types of lines, classified according to their kinematics, which we discuss here from top to bottom. 1) Low-ionization lines such as [O II] have kinematics in agreement with those of the stellar population (i.e., ∆v ∼ 0 km/s and σ ∼ 300 km/s), suggesting that they originate in the galaxy and not in the outflow. 2) The neutral absorption lines Na I D and Ca II K are significantly blueshifted with ∆v ∼ −200 km/s, and are therefore tracing the foreground gas that is in the approaching side of the outflow. Since this gas must remain neutral, it is likely farther out compared to the ionized gas, and we assume it has the shape of a thin shell. 3) Emission lines with a relatively high ionization energy, such as [O III], are also blueshifted (∆v ∼ −200 to −400 km/s), and are thus likely to originate in the approaching side of the outflow. 4) A special case is [S III], which is also a high-ionization emission line but is observed to be at roughly systemic velocity (∆v ∼ 0 km/s) and with a line width that is too broad to be produced by the gas in the galaxy (σ ∼ 600 km/s); this emission is likely tracing both the approaching and the receding side of the outflow. The difference with the other high-ionization emission lines is due to the redder rest-frame wavelength of [S III], which makes it less prone to dust attenuation and allows us to see the full velocity distribution. The [O III] emission is thus blueshifted not because the outflow is asymmetric, but because its far, redshifted side is hidden by dust attenuation. 5) Finally, He I is the only emission line that is redshifted (∆v ∼ +400 km/s) compared to the systemic velocity of the galaxy, implying that we are seeing the receding side of the outflow but not the approaching side. This peculiar behavior (often seen for Lyα) is due
to resonant scattering, because the He I transition involves a meta-stable state and can therefore be self-absorbed. Since the meta-stable level can only be populated via recombination, the He I line traces the ionized gas. The overall picture emerging from the observations is that of an outflow that is present both on the foreground and on the background of the galaxy: this could be either a biconical or a spherical outflow. From the observed line profiles we derive high outflow velocities for the ionized gas, reaching up to ∼ 1700 km/s in the case of [O III], which strongly suggests the outflow is driven by an active galactic nucleus (AGN). The presence of an AGN is confirmed by the high [N II]/Hα and [O III]/Hβ line ratios [14]. Following standard modeling for the ionized [15,16] and neutral [17] phases we measure M_out, the mass of gas in the outflow. The derived outflow masses are particularly sensitive to the assumptions made in the derivation, such as the electron number density (for the ionized phase) and the dust depletion together with the opening angle (for the neutral phase; see Methods for details). The uncertainties are therefore dominated by systematics, which we estimate to be about 0.7 dex for both the ionized and the neutral outflow mass. Dust depletion is particularly uncertain for calcium, and so we can use the Ca II K line only to derive a lower limit on the outflow mass. The resulting outflow velocities and masses are shown in Figure 3a. The ionized outflow masses derived from four different emission lines are in remarkable agreement, which validates some of the assumptions made in the modeling. Moreover, we find that the neutral outflow is slower but has a substantially larger mass, in qualitative agreement with observations of local galaxies [18].
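The five-way classification described above can be sketched as a simple decision rule on (∆v, σ). This is only an illustration of the scheme: the ±100 km/s shift threshold and the 450 km/s "broad" threshold are hypothetical choices, not values from the analysis.

```python
def classify_line(dv, sigma, absorption=False):
    """Toy classifier for the five kinematic line types described in the text.

    dv: velocity shift relative to systemic (km/s); sigma: line width (km/s).
    The 100 km/s shift threshold and 450 km/s width threshold are
    illustrative assumptions, not values taken from the paper.
    """
    SHIFT, BROAD = 100.0, 450.0  # hypothetical thresholds
    if absorption:
        # blueshifted resonant absorption traces foreground neutral gas
        return "neutral gas in the foreground" if dv < -SHIFT else "galaxy"
    if dv < -SHIFT:
        return "ionized gas in the foreground"            # e.g. [O III]
    if dv > SHIFT:
        return "ionized gas in the background"            # e.g. He I (resonant)
    if sigma > BROAD:
        return "ionized gas in foreground and background" # e.g. [S III]
    return "ionized gas in the galaxy"                    # e.g. [O II]
```

For instance, a line at systemic velocity with σ ∼ 300 km/s maps to the galaxy, while a systemic line with σ ∼ 600 km/s maps to both sides of the outflow.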
Knowing the outflow mass and velocity allows us to calculate the mass outflow rate, Ṁ_out = M_out · v_out / R_out, where R_out is the size of the outflow. We estimate R_out ∼ 3 kpc from the NIRSpec data, in which we are able to spatially resolve the blueshifted [O III] emission. We then find Ṁ_out ∼ 35 M⊙/yr for the neutral outflow and Ṁ_out ∼ 1 M⊙/yr for the ionized outflow. The ratio between the two phases is large, but within the range measured in local outflows [18-22]. However, the mass outflow rate of COSMOS-11142 is an order of magnitude larger than the typical values measured in local star-forming galaxies [23,24]. At high redshift, measurements of neutral outflows from Na I D absorption have only been obtained for a few quasars [25,26], but observations of UV or sub-millimeter lines tracing neutral gas reveal high mass outflow rates in star-forming galaxies and AGN systems [11].

[Figure 2 panel labels: 1) Narrow emission: ionized gas in the galaxy. 2) Blueshifted absorption: neutral gas in the foreground. 3) Blueshifted emission: ionized gas in the foreground (strong dust attenuation). 4) Broad emission: ionized gas in the foreground and background (weak dust attenuation). 5) Redshifted emission: ionized gas in the background (resonant scattering).]

[Figure 3 caption: a) The Ca II K outflow mass is derived assuming no depletion onto dust, and is therefore a lower limit. The diagonal lines are at constant mass outflow rate assuming Ṁ_out = M_out v_out / R_out. b) Star formation history (red line) and 95% credible region (shaded area) derived from fitting the spectroscopic and photometric data. The mass outflow rate is shown for neutral and ionized gas at a lookback time of zero (i.e., the epoch at which the galaxy is observed). The mass outflow rate for the neutral gas is substantially higher than the residual star formation rate, implying that the outflow is able to strongly suppress the star formation activity.]
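The conversion Ṁ_out = M_out v_out / R_out involves only a unit change from a crossing time to years; a minimal sketch, where the outflow mass value (∼ 5 × 10⁸ M⊙) is a hypothetical input chosen for illustration, not a measurement from the paper:

```python
KM_PER_KPC = 3.0857e16   # kilometres in one kiloparsec
SEC_PER_YR = 3.156e7     # seconds in one year

def mass_outflow_rate(m_out_msun, v_out_kms, r_out_kpc):
    """Mdot_out = M_out * v_out / R_out, returned in Msun/yr."""
    crossing_time_yr = (r_out_kpc * KM_PER_KPC / v_out_kms) / SEC_PER_YR
    return m_out_msun / crossing_time_yr

# Hypothetical neutral-outflow mass of 5e8 Msun; with v_out = 200 km/s and
# R_out = 3 kpc this gives a few tens of Msun/yr, the order of magnitude
# quoted in the text for the neutral phase.
rate = mass_outflow_rate(5e8, 200.0, 3.0)   # ~34 Msun/yr
```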
To understand the role of the outflow in the evolution of COSMOS-11142, in Figure 3b we show the star formation history of the galaxy, derived from our spectro-photometric fit. We find that the system is in a "post-starburst" phase: it formed most of its stellar mass in a rapid and powerful starburst ∼ 300 Myr before the observations, and then experienced a rapid quenching of the star formation rate by two orders of magnitude. These remarkably rapid formation and quenching timescales are not seen in the local universe, but are common among massive systems at z ∼ 1–2 [1-3], and represent the only way to form quiescent galaxies at even higher redshift [27] due to the younger age of the universe. According to the star formation history, the rate at which COSMOS-11142 is currently forming stars is between 1 and 10 M⊙/yr; we obtain consistent estimates from ionized emission lines and infrared emission. The system is therefore in the middle of quenching, about 1 dex below the main sequence of star formation [28], but still above the bulk of the quiescent population [29,30]. By comparing the mass outflow rate to the current star formation rate, we conclude that the ionized outflow is weak, while the neutral outflow is very strong. This comparison shows that the ionized outflow is irrelevant in terms of gas budget, whereas the neutral outflow is able to substantially affect the star formation rate by ejecting cold gas before it can be transformed into stars. We thus conclude that the observed outflow likely plays a key role in the rapid quenching of COSMOS-11142. Given the low outflow velocity, v_out ∼ 200 km/s, it is possible that most of the neutral gas is not able to escape the galaxy. In this case, heating of the halo gas by radio-mode AGN feedback is likely required to maintain this galaxy quiescent over a Hubble timescale. However, this does not change our main conclusion, since radio-mode AGN feedback alone is unable to explain the observed rapidity of quenching.
Despite the highly effective feedback in action, COSMOS-11142 is not detected in publicly available X-ray or radio observations. This suggests that AGN samples selected at those wavelengths do not necessarily probe the galaxy population in which feedback is being most effective. Such samples are usually biased towards powerful AGNs, which tend to live in gas-rich, star-forming galaxies [31,32]. Strong neutral outflows similar to the one detected in COSMOS-11142 may in fact be present in the majority of massive galaxies [33], which often have emission line ratios consistent with AGN activity despite the lack of X-ray emission [7,34]. Among massive galaxies, AGN-driven outflows may be particularly important for the post-starburst population, which is very likely to host emission lines with high [N II]/Hα ratio [3,30]. Moreover, the detection of blueshifted gas absorption in post-starburst galaxies at z ∼ 1 [35,36] suggests that neutral outflows are frequent during this specific evolutionary phase. The rapid quenching of massive galaxies at z > 1 may thus be fully explained by the AGN-driven ejection of cold gas.

JWST Spectroscopy

The Blue Jay survey is a Cycle-1 JWST program (GO 1810) that observed about 150 galaxies at 1.7 < z < 3.5 in the COSMOS field with the NIRSpec Micro-Shutter Assembly (MSA). Each galaxy was observed with the three medium-resolution gratings, covering the 1–5 µm wavelength range at a spectral resolution of R ∼ 1000. The target sample was drawn from a catalog [37] based on Hubble Space Telescope (HST) data. The selection was designed in a way to obtain a roughly uniform coverage in redshift and mass, and the sample is unbiased above a redshift-dependent stellar mass of 10^8.7–10^9.3 M⊙. COSMOS-11142 (the ID is from ref.
[37]) was observed in December 2022 for a total of 13 hours in G140M, 3.2 hours in G235M, and 1.6 hours in G395M. A slitlet made of four MSA shutters (shown in Extended Data Figure 1) was placed on the target, and the observations employed a 2-point A-B nodding pattern along the slit. We reduced the NIRSpec data using a modified version of the JWST Science Calibration Pipeline v1.10.1, with version 1093 of the Calibration Reference Data System. Prior to combining the data we visually inspected and excluded any remaining artefacts in the individual exposures. The galaxy 1D spectrum was then optimally extracted. For more details on the Blue Jay survey and the data reduction, see Belli et al. (in preparation).

JWST Imaging

JWST imaging of COSMOS-11142 is available from the PRIMER survey (GO 1837, PI: J. Dunlop) in several bands: F090W, F115W, F150W, F200W, F277W, F356W, F410M, and F444W with NIRCam; and F770W and F1800W with MIRI. We performed aperture photometry on the MIRI data and applied a point-source correction derived from WebbPSF [38]. For the NIRCam data, which have a higher resolution and sensitivity, we fit the surface brightness profile of COSMOS-11142, independently in each band, using Forcepho (Johnson et al., in preparation). We model the galaxy using a single Sersic profile, convolved with the point spread function, and explore the posterior distribution via Markov Chain Monte Carlo (MCMC). This yields a multiband set of photometric and structural measurements. Extended Data Figure 1 shows the data and the model for F200W, which is the short-channel band with the highest signal-to-noise ratio (SNR=212), where we measure an effective (half-light) radius R_e = 0.075", corresponding to 0.6 kpc, a Sersic index n = 2.6, and an axis ratio q = 0.5. While the formal errors are small, and the measurements are likely dominated by systematic uncertainties, we can robustly conclude that the galaxy is compact (yet well resolved in the NIRCam data) and elongated. These results
are qualitatively unchanged when considering the other NIRCam bands. The residuals are small, implying that a single Sersic profile is a good description of the galaxy morphology and ruling out the presence of a major merger, a close companion, or bright point-source emission from a Type-1 AGN.

Spectral Fitting

We characterize the stellar population and dust properties of COSMOS-11142 by fitting models to the observed spectroscopic and photometric data. We use the Bayesian code Prospector [39] and follow the approach explained in ref. [3,40]. We adopt the synthetic stellar population library FSPS [41,42], the MIST isochrones [43], the C3K spectral library [44], and the Chabrier initial mass function [45]. The galaxy stellar population is described by stellar mass, redshift, velocity dispersion, metallicity, and a non-parametric star formation history with 14 bins. The bins are logarithmically spaced except for the youngest one (0–30 Myr) and the oldest, which is a narrow bin placed at the age of the universe, providing the possibility of a maximally old population. We adopt a continuity prior that disfavors abrupt changes in the star formation history (see [46] for details). The model also includes attenuation by dust, described by three parameters (A_V, dust index, and extra attenuation towards young stars; [47,48]), and dust emission, implemented with three free parameters describing the infrared emission spectrum [49]. We assume that the total amount of energy absorbed by dust is then re-emitted in the infrared. We do not include the contribution from gas or AGN.
In order to fit a single model to both the JWST spectroscopy and the multi-wavelength photometry, it is necessary to include important systematic effects, as described in [39]. We add one parameter describing the fraction of spectral pixels that are outliers, and one "jitter" parameter that can rescale the spectroscopic uncertainties when necessary to obtain a good fit. The best-fit value for the jitter parameter is 2.05, suggesting that the NIRSpec data reduction pipeline underestimates the spectral uncertainties. In our subsequent analysis of the emission and absorption lines, we apply this jitter term to the error spectrum, in order to obtain a more accurate estimate of the uncertainties on our results.

We also adopt a polynomial distortion of the spectrum to match the spectral shape of the template, to allow for imperfect flux calibration and slit loss corrections (particularly important in this case since the shutter covers only a fraction of the galaxy). In practice, this is equivalent to normalizing the continuum and only considering the small-scale spectral features such as breaks and absorption lines. In turn, this procedure yields an accurate flux calibration for the JWST spectrum, if we assume that the emission probed by the MSA shutter is just a rescaling of the emission probed by the photometry (i.e., we are neglecting strong color gradients). This yields a slit-loss correction of ∼ 2, with a small dependence on wavelength. The spectrum shown in Figure 1 has been calibrated in this way, and we adopt this calibration also in subsequent analysis of absorption and emission lines.
The model has a total of 25 free parameters, and to fully explore the posterior distribution we employ the nested sampling package dynesty [50]. We fit the model to the observed NIRSpec spectroscopy and to the broadband data (shown in Extended Data Figure 2). In the spectrum we mask ionized gas emission lines, including Hα and Hβ, and the resonant absorption lines Na I D, Ca II H, and Ca II K. For the photometry we make use of our measurements from MIRI and NIRCam data (excluding F356W and F410M because the data reduction pipeline flagged most of the galaxy pixels as outliers). We also adopt archival photometry from HST, measured from data taken with the ACS and WFC3 instruments [37]. These measurements are clearly offset from the JWST NIRCam photometry, likely because they have been measured using a different method (photometry within a fixed aperture). From comparing the HST F160W and the JWST F150W bands, which cover a very similar wavelength range, we determine that the HST fluxes are overestimated by 26%. We correct all the HST points for this offset, and add in quadrature a 26% relative error to the HST uncertainties.

The best-fit model is shown in red in Figure 1 and Extended Data Figure 2: the same model is able to simultaneously reproduce the spectroscopy and the photometry. The fit yields a stellar mass 3. The galaxy is relatively dusty, with A_V = 1.5 ± 0.1, which is higher than what is found in quiescent systems and may be related to the rapid quenching phase in which it is observed. We find that the main results of our analysis do not change when excluding the HST photometry, using a smaller wavelength range for the spectroscopic data, or changing the order of the polynomial distortion.
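The HST correction described above amounts to dividing the fluxes by 1.26 and inflating the uncertainties. A minimal sketch, where the flux and error values are hypothetical, and we assume the 26% relative error is taken with respect to the corrected flux:

```python
import math

OFFSET = 0.26  # fractional overestimate of HST fluxes relative to NIRCam

def correct_hst(flux, err):
    """Rescale an HST flux for the 26% offset and inflate its uncertainty.

    The corrected flux divides out the overestimate; a 26% relative error
    (here assumed relative to the corrected flux) is added in quadrature
    to the original uncertainty.
    """
    f = flux / (1.0 + OFFSET)
    e = math.sqrt(err**2 + (OFFSET * f)**2)
    return f, e

# Hypothetical flux/error in arbitrary units:
f, e = correct_hst(1.26, 0.05)
```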
Absorption and Emission Lines

To analyze the gas emission and absorption lines, we first mask these features when running Prospector, and then fit the residuals using Gaussian profiles convolved with the instrumental resolution. The results of the Gaussian fits are listed in Extended Data Table 1.

We show the resonant absorption lines Ca II H, K and Na I D in Extended Data Figure 3. These lines are also present in the best-fit stellar model, but the observed absorption is both much stronger and clearly blueshifted, making it easier to study the neutral gas. We also note that the Ca II H line lies on top of the Balmer Hϵ absorption line, which, however, is not resonant and is therefore present only in the stellar spectrum.

Since the effect of neutral gas absorption is multiplicative, when fitting Gaussian profiles we consider the ratio of the observed spectrum to the stellar model. We model Ca II H and Ca II K with one Gaussian profile each, assuming the same width and velocity offset. Due to the faintness of Ca II H it is difficult to measure the doublet ratio, so we fix it to 2:1, appropriate for optically thin gas [51], which seems to reproduce the data well. We independently fit the Na I D doublet, which is unresolved, using two Gaussians with the same width and velocity offset, and fixing their equivalent width ratio to the optically thin value of 2:1. We verified that the results do not change in a substantial way when leaving the doublet ratio free.
The observed line profiles do not warrant the modeling of multiple kinematic components; however, we consider the possibility that the observed absorption consists of the sum of a blueshifted and a systemic component. For example, this could be the case if our model underestimates the stellar absorption. We use the Alf stellar population synthesis code [52,53] to assess how the observed absorption lines vary when changing the abundance of individual elements in the stellar populations. We conclude that this effect is negligible: for a Na abundance that is twice the solar value, the extra absorption in Na I D would be only 8% of the equivalent width we observe in COSMOS-11142. A systemic component could also arise from neutral gas that is in the galaxy and not in the outflow. We find this unlikely because of the increased importance of the molecular phase, at the expense of the neutral phase, for the gas reservoir of galaxies at high redshift [54], and because of the results of the Blue Jay survey, where neutral gas is mostly associated with outflows and not with the gas reservoir, even in star-forming galaxies [33]. Nonetheless, we repeat the fit for Ca II K, which is well detected and not blended, adding a Gaussian component fixed at systemic velocity. In this case we find that the equivalent width of the blueshifted component would be reduced by (41 ± 9)%. Finally, an additional source of systematic uncertainty could be the presence of resonant redshifted emission from Na I D, which would fill up the red side of the Na I D absorption line and thus yield a slight underestimate of the equivalent width and a slight overestimate of the outflow velocity. Resonant Na I D emission is rarely seen in the local universe [19], but the presence of resonant He I emission suggests this could be a possibility in COSMOS-11142.

Similarly to what was done for the absorption lines, we fit Gaussian profiles to all the detected emission lines, which are shown in Figure 2 and Extended Data Figure 4.
Given the additive nature of the emission, here we consider the difference between the observed spectrum and the stellar model. The emission line kinematics present a wide diversity and thus require a sufficiently flexible model. For this reason we try to fit each line independently, imposing physically motivated constraints only when necessary because of blending and/or low signal-to-noise ratio. The most complex case is the blend of Hα with the [N II] doublet. We model all three lines simultaneously, assuming that they have identical velocity offset and dispersion, and we also fix the [N II] doublet ratio to the theoretical value of 3:1. The resulting best fit, shown in Extended Data Figure 4 (panel a), approximately reproduces the data, even though some discrepancies in the flux peaks and troughs are visible. If we add a broad Hα component to the model we obtain a marginally better fit (panel b). The broad component has a velocity dispersion σ ∼ 1200 km/s, and may be due to a faint broad line region [27]. However, we obtain a nearly identical fit if we adopt a broad component for the [N II] lines as well, in which case the broad component has σ ∼ 800 km/s and would be due to the outflow. We conclude that the lines are too blended to robustly constrain their kinematics (while the line fluxes remain consistent across the different fits, within the uncertainties), and we adopt the simplest model, consisting of three single Gaussian profiles. The velocity obtained with this model is large (465 km/s), suggesting the presence of an outflow component in these lines. Other blended doublets in our spectrum are [O II] and [S II]; for each of them we fix the doublet flux ratio to 1:1, which is in the middle of the allowed range, and assume that the two lines in the doublet have identical kinematics. Finally, given the low signal-to-noise ratio of the Hβ and [S II] lines we also decide to fix their kinematics to those derived for Hα and [N II], since they all have similar ionization
energy. We note that if we instead fit the Hβ line independently we measure a blueshift, which would be consistent with the emission originating in the approaching side of the outflow, but the line is too faint for drawing robust conclusions. To summarize, we assume the same kinematics for the low-ionization lines. We fit [Ne III] independently, similarly to [O III], although the latter has a much lower velocity shift. This discrepancy may simply be due to the low signal-to-noise ratio of [Ne III]; we also try to fix its kinematics to that of [O III] but the fit is substantially worse. Interestingly, for [S III] we do not measure a statistically significant blueshift, but the velocity dispersion is very large, likely tracing the contribution of both the receding and the approaching sides of the outflow. Finally, He I presents a clear redshift, which is the sign of resonant scattering. We note that [S III] and He I have similar wavelength and require similar ionization energy, meaning that they probe the same type of gas conditions. One important difference is that the blue side of He I experiences resonant scattering; however, the red side is not affected by this phenomenon. The red side of [S III] and He I must therefore trace almost exactly the same type of gas, which is found in the receding outflow. Our interpretation of the observed kinematics for these two lines is thus self-consistent.
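The Hα + [N II] blend model described above (shared velocity offset and dispersion, [N II] ratio fixed to 3:1) can be sketched as a sum of three Gaussians. This is a minimal illustration, not the fitting code used in the paper; rest wavelengths are the standard air values, and amplitudes are in arbitrary units.

```python
import math

# Rest-frame (air) wavelengths in Angstrom
HA, NII_B, NII_R = 6562.80, 6548.05, 6583.45
C_KMS = 2.998e5

def gauss(lam, center, amp, sigma_kms):
    # convert the km/s dispersion to Angstrom at this line's center
    s = center * sigma_kms / C_KMS
    return amp * math.exp(-0.5 * ((lam - center) / s) ** 2)

def blend_model(lam, f_ha, f_nii, dv, sigma_kms):
    """Halpha + [N II] blend with shared kinematics.

    All three lines share the velocity offset dv and dispersion sigma_kms,
    and the [N II] doublet amplitude ratio is fixed to 3:1 (6583/6548),
    as in the fit described in the text.
    """
    shift = 1.0 + dv / C_KMS
    return (gauss(lam, HA * shift, f_ha, sigma_kms)
            + gauss(lam, NII_R * shift, f_nii, sigma_kms)
            + gauss(lam, NII_B * shift, f_nii / 3.0, sigma_kms))

# Evaluate at the shifted Halpha center for hypothetical amplitudes:
peak = blend_model(HA * (1.0 + 465.0 / C_KMS), 1.0, 0.5, 465.0, 300.0)
```

In a real fit, `dv`, `sigma_kms`, and the two amplitudes would be the free parameters passed to an optimizer.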
Clearly, the emission lines in COSMOS-11142 present a remarkable complexity. Our goal is to estimate the properties of the ionized outflow in order to understand its role in the evolution of the galaxy; a detailed study of the ionized emission lines is beyond the scope of this work. For this reason we avoid a decomposition into broad and narrow components for the brightest emission lines, and choose to fit each line independently, when possible. We have also tested a global fit, where all lines except for He I and [O III] have the same kinematics, and a model with separate kinematics for high- and low-ionization lines. The result of these tests is that the fluxes for the lines used in the calculation of the ionized outflow mass ([O III], [Ne III], [S III], Hβ) are remarkably stable, with different assumptions causing differences in the fluxes that are always smaller than the statistical uncertainties.

Star Formation Rate

The star formation rate of COSMOS-11142 is a key physical quantity, since it is used both to confirm the quiescent nature of the system and as a comparison to the mass outflow rate. We employ several methods to estimate the star formation rate, obtaining consistent results. The Prospector fit gives a star formation rate of ∼ 3 M⊙/yr in the youngest bin (0–30 Myr), with an uncertainty of a factor of 3; considering the average star formation over the last 100 Myr we obtain an upper limit of 10 M⊙/yr. An alternative, independent method relies on the hydrogen recombination emission lines; because of the contribution from the outflow to the observed flux, this method can only yield an upper limit on the star formation rate. We first correct the measured emission line fluxes for dust attenuation using the result of the Prospector fitting (including the extra attenuation towards young stars); we then use the standard conversion [55] applied to the measured Hα flux, obtaining an upper limit of 10 M⊙/yr. A similar upper limit is obtained using the observed
Hβ flux. It is possible that this method misses heavily dust-obscured regions hosting a starburst, as is sometimes found in local post-starburst galaxies [56]. We check for this possibility by using redder hydrogen emission lines, which can be used to probe deeper into the dust: while we do not detect any of the Paschen lines in emission, we can place a 3-σ upper limit to the Paγ flux of 2.2 × 10^−18 erg s^−1 cm^−2, yielding an upper limit on the star formation rate of 28 M⊙/yr. The absence of substantial star formation in dust-obscured regions is also confirmed by the lack of strong mid-infrared emission. COSMOS-11142 is not detected in the Spitzer 24 µm observations from the S-COSMOS survey [57], yielding a 3-σ upper limit of 38 M⊙/yr, according to the measurements and assumptions detailed in [58]. However, this estimate assumes that dust is heated exclusively by young stars, a bias known to lead to an overestimate of the star formation rate for quiescent galaxies, in which most of the heating is due to older stars [59]. The more sensitive JWST/MIRI observations detect the galaxy at 18 µm, and the measured flux is fully consistent with the best-fit Prospector model, as shown in Extended Data Figure 2, thus confirming the lack of substantial star formation hidden by dust.

Ionized Outflow

We detect clear signs of an ionized outflow as blueshifted emission in [O III] and [Ne III], and as broad emission in [S III]. We can independently estimate the outflow properties using each of these emission lines. For a generic element X, the outflow mass can be written as a function of the observed line luminosity L [15,16]:

M_out = m_p L / (10^[X/H] (n_X/n_H)_⊙ n_e j),

where m_p is the proton mass, n_e is the electron density, [X/H] is the logarithmic elemental abundance in the gas relative to the solar value (n_X/n_H)_⊙ (which we take from ref.
[60]), and j is the line emissivity. In this relation we neglect a factor ⟨n_e²⟩/⟨n_e⟩², and we assume that all the atoms of the element X are found in the ionization stage responsible for the observed line (this is consistent with the observation of strong [O III] emission in the outflow but nearly absent [O II]; the other outflow-tracing emission lines have comparable excitation energy). We calculate the emissivity for each line using pyNeb [61] with standard assumptions (density 500 cm^−3 and temperature 10^4 K). We further assume that the gas in the outflow has solar metallicity, given the high stellar mass of the galaxy and the result of the Prospector fit. The electron density cannot be reliably derived from the flux ratio of the poorly resolved [S II] and [O II] doublets. We make the standard assumption of n_e = 500 cm^−3 [16]; however, we note that this value is highly uncertain. Since local studies using different methods find a wide range of values, log n_e ∼ 2–3.5 [62], we assign a systematic uncertainty of 0.7 dex (a factor of 5 in each direction) to the assumed value of n_e. For the blueshifted lines we multiply the observed luminosity by two to account for the receding side hidden by dust. The resulting outflow masses are listed in Extended Data Table 2 (see also Figure 3a). We also include an estimate of the outflow mass based on the Hβ line, assuming it is entirely originating in the outflowing gas. The four estimates of the outflow mass agree with each other to better than a factor of two, which is remarkable given that the mass derived from Hβ does not depend on assumptions on the ionization stage and the gas metallicity. However, all four lines depend in the same way on n_e. The uncertainty on the outflow mass measurement is therefore dominated by the assumed n_e.
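The emissivity-based mass estimate can be sketched numerically. The luminosity, emissivity, and abundance values below are purely illustrative placeholders (the paper takes emissivities from pyNeb and abundances from its ref. [60]); the emissivity is defined so that L = j · n_e · n_X · V.

```python
M_P_G = 1.6726e-24   # proton mass [g]
MSUN_G = 1.989e33    # solar mass [g]

def ionized_outflow_mass(L_line, j_line, abund_solar, x_over_h=0.0, n_e=500.0):
    """M_out = m_p L / (10^[X/H] (n_X/n_H)_sun n_e j), in solar masses.

    L_line: line luminosity [erg/s]; j_line: emissivity [erg s^-1 cm^3],
    defined via L = j * n_e * n_X * V; abund_solar: solar (n_X/n_H);
    x_over_h: logarithmic abundance relative to solar (0 = solar).
    """
    n_x_volume = L_line / (j_line * n_e)                  # = n_X * V
    n_h_volume = n_x_volume / (10**x_over_h * abund_solar)
    return M_P_G * n_h_volume / MSUN_G

# Illustrative numbers only (not measurements from this paper):
# L = 1e41 erg/s, j = 3e-21 erg s^-1 cm^3, solar O/H = 4.9e-4.
m_ion = ionized_outflow_mass(1e41, 3e-21, 4.9e-4)
```

Note the 1/n_e dependence: the 0.7 dex systematic on n_e propagates directly into a 0.7 dex systematic on the mass.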
In order to calculate the mass outflow rate we assume that the ionized gas is distributed, on each side of the outflow, in a mass-conserving cone that is expanding with velocity v_out, independent of radius [17]. In this case the mass outflow rate can be easily derived to be Ṁ_out = M_out v_out / R_out. If the outflow is in a narrow cone directed toward the observer, then the velocity shift observed in blueshifted lines ([O III] and [Ne III]) corresponds to the outflow velocity: v_out = |∆v|. On the other hand, if the emitting gas spans a range of inclinations, then the intrinsic outflow velocity corresponds to the maximum observed value, which is often defined as v_out = |∆v| + 2σ [10,17]. Since we do not know the opening angle and inclination of the outflow, we adopt an intermediate definition: v_out = |∆v| + σ, and take σ as a measure of the systematic uncertainty, so that both scenarios are included within the error bars. For emission lines that are not blueshifted ([S III] and Hβ), instead, we simply take v_out = 2σ and use σ as a measure of the systematic uncertainty.
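The adopted velocity definitions and their systematic brackets can be written compactly. The example values below (∆v = −400 km/s, σ = 650 km/s) are hypothetical inputs chosen for illustration, not the paper's measured line parameters.

```python
def outflow_velocity(dv, sigma, blueshifted=True):
    """Adopted outflow velocity and its systematic bracket (km/s).

    For blueshifted lines: v_out = |dv| + sigma, bracketed by the
    narrow-cone limit |dv| and the wide-angle limit |dv| + 2*sigma.
    For non-blueshifted lines: v_out = 2*sigma, with sigma as the
    systematic uncertainty (bracket [sigma, 3*sigma]).
    """
    if blueshifted:
        return abs(dv) + sigma, (abs(dv), abs(dv) + 2.0 * sigma)
    return 2.0 * sigma, (sigma, 3.0 * sigma)

# Hypothetical blueshifted-line parameters:
v, (lo, hi) = outflow_velocity(-400.0, 650.0)
```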
To constrain the outflow size R_out we analyze the 2-D NIRSpec spectrum around the location of the [O III]λ5008 line. We first construct the spatial profile by taking the median flux along each spatial pixel row; then we subtract this profile, representing the stellar continuum, from the data, revealing a clearly resolved [O III] line (see Extended Data Figure 5). The emission line morphology is complex, consisting of a fast component in the galaxy center and two slower components extending several pixels along the spatial direction on either side of the galaxy; all components are blueshifted, and are therefore not associated with gas in the galaxy. A similar analysis of the Hα emission line (Extended Data Figure 5) confirms the presence of ionized gas ∼ 0.4" from the center. We interpret this as the maximum extent of the ionized outflow, and we thus adopt R_out ∼ 0.4" ∼ 3 kpc. The line morphology in the 2-D spectrum is consistent with our physical model of the outflow: most of the emission comes from the central regions because they are denser (in a mass-conserving cone the gas density scales inversely with the square of the radius [17]), and the line blueshift is progressively weaker in the outer regions due to projection effects [16]. However, it is also possible that we are seeing one fast, small-scale outflow together with a separate slow, large-scale outflow. If the size of the fast outflow is smaller than our estimate, the ionized outflow rate would then be correspondingly larger. Also, if the outflow is strongly asymmetric, then the slit loss correction derived for the stellar emission may not be appropriate, which would introduce a bias in our line flux measurements and gas masses. With the adopted value for the outflow size we are able to derive ionized mass outflow rates, which are in the range 0.2–1 M⊙/yr. The outflow properties estimated from the four different lines are shown in Figure 3a: it is clear that the different tracers give consistent results, particularly when considering the large systematic
uncertainties in the outflow mass. Also, the order of magnitude of the mass outflow rate appears to be nearly independent of the exact definition of the outflow velocity: for any reasonable value of the velocity, the mass outflow rate of the ionized gas is unlikely to go substantially beyond ∼ 1 M⊙/yr.

Despite the large size of the outflow, most of the emission comes from the central, denser regions, and can therefore be heavily attenuated by the dust present in the galaxy. We can estimate the effect of dust attenuation on different lines, using the modified Calzetti extinction curve with the best-fit parameters from spectral fitting. It is difficult to estimate the extra attenuation towards nebular emission because of the complex morphology of the outflow; we consider a wide range from 1 (i.e., stars and gas are attenuated equally) to 2, which is the canonical value for starburst galaxies. At the [O III] wavelength, the emission from gas behind the galaxy is attenuated by a factor of ∼ 5–27, explaining why we do not observe a redshifted component. The attenuation decreases to a factor of ∼ 3–9 at the Hα wavelength, and is only a factor of ∼ 2–3.5 at the [S III] wavelength (which indeed has an asymmetric profile, visible in Figure 2, with some flux missing in the redshifted part likely due to differential dust extinction).
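The quoted attenuation factors can be approximated with the standard (unmodified) Calzetti 2000 curve and A_V = 1.5; the paper's modified curve with a free dust index will differ in detail, so this is a sketch, not a reproduction of the published numbers.

```python
def calzetti_k(lam_um):
    """Calzetti (2000) starburst attenuation curve k(lambda), 0.12-2.2 um."""
    x = 1.0 / lam_um
    if lam_um < 0.63:
        return 2.659 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + 4.05
    return 2.659 * (-1.857 + 1.040 * x) + 4.05

def attenuation_factor(lam_um, a_v=1.5, extra=1.0):
    """Flux suppression factor 10^(0.4 * A_lambda) behind the dust screen.

    'extra' is the additional attenuation towards nebular emission
    (1 = gas attenuated like the stars, 2 = canonical starburst value).
    """
    a_lam = extra * a_v * calzetti_k(lam_um) / calzetti_k(0.55)
    return 10 ** (0.4 * a_lam)

f_oiii = attenuation_factor(0.5007)   # ~4.6 for extra = 1
f_ha = attenuation_factor(0.6563)     # ~3.1
f_siii = attenuation_factor(0.9533)   # ~2.0
```

With `extra = 2` the [O III] factor rises to ~21, bracketing the ∼ 5–27 range quoted in the text for the redshifted side.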
Neutral Outflow

We use the observed Na I D and Ca II K lines to constrain the properties of the neutral outflow. The first step is to derive the Na I and Ca II column densities from the observed equivalent widths (listed in Extended Data Table 1), which can be done easily in the optically thin case [51], yielding N_Na I = 2.2 × 10^13 cm^−2 and N_Ca II = 3.6 × 10^13 cm^−2. These should be considered lower limits, as even small deviations from the optically thin case can substantially increase the column density corresponding to the observed equivalent width. If the outflow is clumpy, and its covering fraction is less than unity, then the true equivalent width would be larger than the observed one. The observed depths of the two Ca II lines can be used to constrain the covering fraction [63]: in our case the data are consistent with a covering fraction of unity, but with a large uncertainty due to the low signal-to-noise of Ca II H. A hard lower limit can also be obtained from the maximum depth of any of the neutral gas absorption lines; thus, the covering fraction must be larger than 50%. For simplicity, here we assume a covering fraction of unity; if the true value were smaller by a factor of two, then the column density would be larger by a factor of two, and the mass outflow rate, which depends on their product, would remain the same.
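The optically thin equivalent-width-to-column-density conversion is the standard curve-of-growth linear relation, N [cm^−2] = 1.13 × 10^20 · EW[Å] / (f · λ[Å]²). A sketch for Ca II K, where the equivalent width of 3 Å is a hypothetical input (the measured value is in Extended Data Table 1) and the oscillator strength is approximate:

```python
def column_density(ew_angstrom, lam_angstrom, f_osc):
    """Optically thin column density from an absorption equivalent width.

    N [cm^-2] = 1.13e20 * EW[A] / (f * lambda[A]^2)
    Valid only on the linear part of the curve of growth; otherwise this
    is a lower limit, as noted in the text.
    """
    return 1.13e20 * ew_angstrom / (f_osc * lam_angstrom**2)

# Hypothetical equivalent width of 3 A for Ca II K (3934 A, f ~ 0.63):
n_ca = column_density(3.0, 3933.7, 0.63)
```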
The next step consists in inferring the hydrogen column density. For Na I we can write [19,64]: N_H = N_Na I / [(1 − y) · 10^{log(n_Na/n_H)⊙} · 10^b], where y is the sodium ionization fraction, and 10^b represents the depletion of sodium onto dust. For consistency with local studies we make the standard assumption 1 − y = 0.1 [64], meaning that only 10% of the sodium is in the neutral phase; it is likely that the true value is even lower [19], which would increase the derived column density and therefore the outflow mass. We also assume solar metallicity for the gas, and take the canonical values [65] for the solar abundance, log(n_Na/n_H)⊙ = −5.69, and the dust depletion in the Milky Way, b = −0.95. We obtain a hydrogen column density N_H = 9.6 × 10^20 cm^−2. The systematic uncertainty on this result is dominated by the observed scatter in the dust depletion value, which is 0.5 dex [66]. The calcium column density is more difficult to interpret because calcium, unlike sodium, presents a highly variable depletion onto dust as a function of the environment [67,68]. Observations of Milky Way clouds show very high dust depletion (up to 4 dex) for calcium at the high column density that we measure, which would imply a hydrogen column density that is ∼20× higher than that measured from Na I D. This discrepancy is probably caused by the presence of shocks in the outflow, which can destroy dust grains and decrease the depletion of calcium. Thus, we can only derive a lower limit on the hydrogen column density by assuming that calcium is not depleted at all, and we obtain N_H > 1.8 × 10^19 cm^−2. We have neglected the ionization correction since most of the calcium in the neutral gas is expected to be in the form of Ca II [69].
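Plugging the stated assumptions into the Na I relation reproduces the quoted hydrogen column density exactly, which is a useful sanity check on the conversion:

```python
N_NaI = 2.2e13          # Na I column density (cm^-2), optically thin lower limit
neutral_frac = 0.1      # 1 - y: fraction of sodium in the neutral phase
logNaH_sun = -5.69      # solar abundance log(n_Na/n_H)
b_depl = -0.95          # Milky Way dust depletion of sodium (dex)

# N_H = N_NaI / [(1 - y) * 10^log(Na/H)_sun * 10^b]
N_H = N_NaI / (neutral_frac * 10**logNaH_sun * 10**b_depl)
print(f"N_H = {N_H:.2e} cm^-2")   # ~9.6e20, matching the quoted value
```

The 0.5 dex scatter in b then propagates linearly in the log: shifting b by ±0.5 shifts N_H by a factor of ∼3 in either direction.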
In order to derive the outflow mass we assume that the neutral gas forms an expanding shell outside the ionized outflow. This is consistent with local observations [19] and with the idea that the neutral gas should be further out from the ionizing source. In principle, a direct measurement of the neutral outflow size would require a background source extending far beyond the galaxy size; such a source does not exist in our case. However, this rare configuration has been observed for J1439B, a galaxy with mass, redshift, and star formation rate similar to those of COSMOS-11142, which happens to be located near the line of sight to a background luminous quasar [70]. Neutral gas associated with J1439B, likely belonging to an AGN-driven outflow, has been observed in absorption in the spectrum of the quasar, which lies 38 kpc from the galaxy in projection. This rare system suggests that AGN-driven neutral outflows from quenching galaxies can extend significantly beyond the stellar body of their host galaxies. Given the large size of the outflow compared to the size of the stellar emission, we are likely detecting neutral gas that is moving along the line of sight, as shown in the cartoon in Figure 2, and therefore we assume v_out = |Δv|. For a shell geometry, the outflow mass and mass rate are [17]: M_out = (Ω/4π) · 4π R_out² μ m_H N_H and Ṁ_out = M_out v_out / R_out, where Ω is the solid angle subtended by the outflow. Based on the results of local studies [17] and on the incidence of neutral outflows in the Blue Jay sample [33], we assume an opening angle of 40% of the solid sphere, i.e., Ω/4π = 0.4. We consider the systematic uncertainty on Ω to be a factor of 2.5 in each direction, ranging from a narrow opening to a spherical and homogeneous outflow. Combined with the uncertainty on the calcium dust depletion, this gives a total uncertainty on the outflow mass of ∼0.7 dex, similar to that on the ionized outflow mass (but due to a different set of assumptions).
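A quick numerical sketch of the shell-geometry estimate described above. The mean atomic weight μ = 1.4 and the outflow velocity are assumptions of this sketch (the measured |Δv| is not quoted in this excerpt); with v_out ≈ 200 km/s the quoted ∼35 M⊙/yr is recovered to within ∼10%.

```python
import numpy as np

M_SUN, M_H, KPC, YR = 1.989e33, 1.6726e-24, 3.0857e21, 3.156e7  # cgs units

def shell_outflow(N_H, R_kpc, v_kms, omega_frac=0.4, mu=1.4):
    """Thin-shell outflow mass (Msun) and mass rate (Msun/yr):
    M = (Omega/4pi) * 4pi R^2 * mu * m_H * N_H ;  Mdot = M * v / R."""
    R = R_kpc * KPC
    M = omega_frac * 4 * np.pi * R**2 * mu * M_H * N_H
    Mdot = M * (v_kms * 1e5) / R * YR
    return M / M_SUN, Mdot / M_SUN

# v_out is an illustrative assumption here, not the measured |Delta v|
M, Mdot = shell_outflow(N_H=9.6e20, R_kpc=3.0, v_kms=200.0)
```

This makes the error budget transparent: N_H, Ω, and v_out each enter linearly, so the ∼0.7 dex systematic quoted in the text follows directly from the stated ranges on depletion and opening angle.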
The neutral mass outflow rate estimated from the Na I D line is 35 M⊙/yr, between one and two orders of magnitude larger than that measured for the ionized phase. In addition to the 0.7 dex uncertainty due to the opening angle and the dust depletion, there are two additional assumptions that could influence this result. First, we ignored a possible contribution to Na I D from a systemic component of neutral gas not associated with the outflow, which would decrease the measured column density by 41% (adopting the value derived for Ca II K). Second, we assumed that the radius of the neutral outflow coincides with that of the ionized outflow; in principle, the neutral outflow radius could be as small as the galaxy effective radius, which is 0.6 kpc (it cannot be smaller than this because the covering fraction must be larger than 50%, meaning that the neutral gas must be in front of at least 50% of the stars in the galaxy). If the neutral outflow were this small, the gas would be detected in the full range of inclinations, and the outflow velocity would be v_out = |Δv| + 2σ. If we make the most conservative choice on both the systemic component of Na I D and on the outflow radius, we obtain a mass outflow rate for the neutral phase of Ṁ_out = 11 M⊙/yr. This is still one order of magnitude larger than the ionized mass outflow rate.

Extended Data Figure 6: Emission line diagnostics. Line ratio diagrams [14,71] comparing COSMOS-11142 (red symbol) to a sample of local galaxies (from the Sloan Digital Sky Survey [72], black points), high-redshift galaxies (1.9 < z < 2.7, from the MOSDEF survey [73], blue points) and a sample of high-redshift AGNs observed with JWST/NIRSpec (3 < z < 3.7, from the GA-NIFS survey [74], orange points). The uncertainty on the [N II]/Hα ratio of COSMOS-11142 includes a systematic contribution from the adoption of a specific profile when fitting the emission lines. Red curved lines represent the theoretical maximum starburst locus [75], while the red straight line in panel b) shows an empirical separation between Seyferts and LINERs [76].

Physical Nature of the Outflow and Properties of the AGN Due to the low star formation rate of COSMOS-11142, it is unlikely that the outflow is driven by star formation activity. We can also rule out a star-formation-driven fossil outflow: the travel time to reach a distance of R_out ∼ 3 kpc at the observed velocity is less than 10 Myr, which is much smaller than the time elapsed since the starburst phase, according to the inferred star formation history. Moreover, the ionized gas velocity we measure is substantially higher than that observed in the most powerful star-formation-driven outflows at z ∼ 2 [77]. Another possibility could be that the outflowing material actually consists of tidally ejected gas due to a major merger [78,79]. However, the lack of tidal features or asymmetries in the near-infrared imaging, together with the high velocity of the ionized gas, rules out this scenario.
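The fossil-outflow travel-time argument is simple ballistic arithmetic, t = R/v. A minimal check, using an illustrative velocity of 1000 km/s (the measured velocity is not quoted in this excerpt):

```python
KPC, MYR = 3.0857e21, 3.156e13  # cm per kpc, seconds per Myr

def travel_time_myr(R_kpc, v_kms):
    """Time for gas moving at constant speed v to reach radius R."""
    return (R_kpc * KPC) / (v_kms * 1e5) / MYR

t = travel_time_myr(3.0, 1000.0)  # ~2.9 Myr, comfortably below 10 Myr
```

Even a much slower flow of a few hundred km/s still reaches 3 kpc in under 10 Myr, so the conclusion is insensitive to the exact velocity adopted.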
The only reasonable explanation is therefore that the observed outflow is driven by AGN feedback. This is confirmed by the emission line flux ratios, which we show in Extended Data Figure 6. COSMOS-11142 occupies a region of the standard optical diagnostic diagrams [14,71] that is exclusively populated by AGN systems, both at low and high redshift. We note that some of the measured emission lines (notably [O III] and Hβ) trace the outflow rather than the galaxy, which may complicate the interpretation of the line ratios. Nonetheless, the line ratios measured in COSMOS-11142 are fully consistent with those measured in the outflows of high-redshift quasars [25,80]. Despite the rather extreme line ratios, the AGN activity in COSMOS-11142 is relatively weak, leaving no trace other than the ionized gas emission lines. We do not detect AGN emission in the UV-to-IR broadband photometry (which is well fit by a model without an AGN contribution), in the mid-infrared IRAC colors (using the criterion proposed by ref.
[81]), in the rest-frame optical morphology (no evidence for a point source in the NIRCam imaging), in X-ray observations [82] (L_X < 10^44 erg/s at 2-10 keV), or in radio observations [83] (S < 3 × 10^23 W/Hz at 3 GHz). We estimate the bolometric luminosity of the AGN from the [O III] luminosity [84], obtaining log L_bol/(erg s^−1) ∼ 45.3. Using AGN scaling relations [85,86], we find that the upper limit on the radio emission is above the expected level for a typical AGN of this luminosity, while the X-ray upper limit is about equal to the value we would expect for a typical AGN of this luminosity. However, the scaling relations present a large scatter, which may be due to the fact that AGN activity is highly variable, and different tracers respond on different timescales [87,88]. We thus conclude that, at z > 2, current radio and X-ray surveys can only probe the brightest AGNs and are not sensitive to the emission from less extreme systems such as COSMOS-11142. Finally, we point out that the bolometric luminosity we derived from [O III] should be considered an upper limit, since part of the observed line luminosity may come from shock excitation. The actual bolometric luminosity may be an order of magnitude lower; even so, the mass outflow rate that we measure for the ionized phase would not be substantially off from the known AGN scaling relations [10]. Interestingly, the scaling relations would then predict a molecular outflow with a mass rate similar to what we measure for the neutral phase. This suggests that the neutral and molecular phases may carry comparable amounts of mass. However, with the current data we are not able to directly probe the molecular gas in COSMOS-11142, and we point out that this system is substantially different from most of the galaxies used to derive the scaling relations, due to its gas-poor and quiescent nature.
Figure 1: JWST/NIRSpec spectrum of COSMOS-11142. The best-fit model, which includes the contribution of stars and dust, is shown in red. Data and model are inverse-variance smoothed using a 2-pixel window. Important absorption lines due to hydrogen (violet) and metals (black), and emission lines due to ionized gas (blue), are marked. The discrepancy between the stellar model and the data reveals the presence of substantial neutral gas (absorption by Ca II and Na I) and ionized gas (emission by O II, O III, N II, and other species).

Figure 2: Kinematics of gas emission and absorption lines. Each row illustrates a different kinematic component of the gas, with the observations on the left and a cartoon on the right showing which spatial regions (highlighted in color) are probed by the observations. In the left panels, Gaussian fits are shown in purple, with fits to the individual lines in doublets ([O II] and Na I D) shown in a lighter color. For the emission lines, the difference between the observed flux and the best-fit stellar model is shown, while for the absorption line the ratio of the observed flux to the best-fit stellar model is shown. The zero of the velocity scale corresponds to the redshift of the stellar component measured from spectral fitting, and the dashed vertical lines mark the expected rest-frame location of the emission or absorption lines.
Extended Data Figure 1: Observed and modeled surface brightness distribution. a) HST F160W data used for designing the observations, with the footprint of the four open MSA microshutters. b-d) Data, model, and residual for the F200W NIRCam observations. The model is a single Sersic profile obtained with ForcePho, and is able to reproduce the data well.

Extended Data Figure 2: Spectral energy distribution. Points show the observed photometry from the ACS and WFC3 instruments onboard HST (circles), and from the NIRCam and MIRI instruments onboard JWST (pentagons). The observed NIRSpec spectrum is shown in blue, and the red line represents the best-fit model from Prospector, with the shaded red region marking the central 95% credible interval.

Extended Data Figure 3: Absorption lines from neutral gas. a-b) Observed spectrum (blue) and best-fit stellar model (red). c-d) Ratio of the observed spectrum to the stellar model (blue) and best-fit Gaussian components describing absorption by neutral gas (purple). The expected positions of the resonant lines (at the systemic velocity of the galaxy) are shown in gray, while the positions of the Balmer lines, which are only present in the stellar spectrum, are shown in green. The absorption by neutral gas is clearly blueshifted.
Extended Data Figure 4: Emission lines from ionized gas. a) Fit to the Hα and [N II] lines, using three single-Gaussian profiles. b) Fit to the Hα and [N II] lines, using three single-Gaussian profiles and one additional broad Hα component (shown in red). c-e) Fit to the [Ne III], Hβ, and [S II] emission lines. In all panels, the spectra shown are obtained by subtracting the best-fit stellar model obtained with Prospector from the observed spectrum. Gaussian fits are shown in purple, with lighter lines showing fits to the individual lines when multiple components are fit simultaneously. The dashed vertical lines mark the expected rest-frame location of the emission lines. Other emission lines are shown in Figure 2.

...lines Hα, Hβ, [N II], [S II], obtaining a dispersion consistent with the presence of an outflow. The other low-ionization line, [O II], represents an exception, since it has a much smaller line width that does not show evidence of outflowing material; however, we analyze the [O II] morphology in the 2D spectrum and find the clear presence of a faint outflow component (as discussed below). For the high-ionization lines [O III], [Ne III], [S III] we leave the kinematics free when fitting. We measure a blueshift for [O III] and [Ne III]...

Extended Data Figure 5: Two-dimensional JWST/NIRSpec spectrum centered on the [O III]λ5008 emission line. a) Observed trace, which is mostly due to stellar emission. b) Spatially resolved [O III] emission after subtraction of the median stellar continuum. The vertical dashed line marks the expected position of [O III] at the systemic velocity.

Extended Data Table 1: Measured properties of absorption and emission lines. Quantities marked with † have been fixed during the fit. The kinematics for Hα and [N II] are assumed to be identical.
12,105.8
2023-08-10T00:00:00.000
[ "Physics" ]
Comparison of Acoustic Characteristics of Date Palm Fibre and Oil Palm Fibre This study investigated and compared the acoustic characteristics of two natural organic fibres, date palm fibre and oil palm fibre, which are eligible materials for acoustic absorption. During the processing stage, both fibre sheets were treated with latex and then compressed. Circular samples (100 mm and 28 mm in diameter, based on the measurement tube requirements) were cut out of the sheets. The density of the date palm fibre sheet is 150 kg/m³ for a 50 mm thickness and 130 kg/m³ for a 30 mm thickness. In contrast, the density of oil palm fibre is 75 kg/m³ for a 50 mm thickness and 65 kg/m³ for a 30 mm thickness. An impedance tube was used to test both sample thicknesses based on international standards. The results show that the date palm fibre exhibits two Acoustic Absorption Coefficient (AAC) peaks: 0.93 at 1356 Hz and 0.99 at 4200-4353 Hz for the 50-mm-thick sample. In contrast, the 30-mm-thick sample has a single AAC peak of 0.83 at 2381.38-2809.38 Hz. However, the 50-mm-thick oil palm fibre has an AAC peak of 0.75 at 1946.88-2178.13 Hz, and the 30-mm-thick oil palm fibre has an AAC peak of 0.59 at 3225-3712.5 Hz. Thus, date palm fibre has a higher acoustic absorption coefficient at both high and low frequencies than oil palm fibre. Both fibres are promising for use as sound-absorbing materials to protect against environmental noise pollution. INTRODUCTION There is increasing interest in organic natural fibres for various uses in many applications, such as insulation materials and barriers. Iraq and Malaysia have large amounts of date palm fibre and oil palm fibre agricultural waste products, respectively. These trees are similar in nature but differ in their filament fibres, as shown in Fig.
1. Date palm fibres have thin, smooth filaments, unlike oil palm fibres, which have thick, rough filaments. The advantages of these fibres are their renewability, abundance and low cost. These fibres are also more effective than industrial materials in terms of their reduced health hazards and the protection required during processing. Acoustic applications have widely used mineral fibres, such as rock wool, glass fibre or asbestos, especially in power plants to insulate pipes. However, these materials are no longer used extensively because they are associated with health risks. Asbestos is more hazardous than other mineral fibres; thus, protection must be used during use or handling. Barrier panels made from agricultural wastes have been manufactured but have received the attention of only a small number of investigators (Davern, 1977). These fibres consist of cellulose and can very easily be made into chip particles (Ingard, 1994). Furthermore, this crude fibre can be used to reduce the noise emitted from power plants, e.g., as barriers in boiler steam. The testing was conducted via impedance tubes to obtain the acoustic absorption coefficient. The tube was fabricated based on the international standards ISO 10534-2 and ASTM E1050-98. This study shows that the local natural fibres in Iraq and Malaysia have good potential to compete with manufactured products. Many researchers have succeeded in using agricultural wastes to produce absorption materials. New particle boards manufactured using durian peel and coconut coir fibres have been created to achieve the lowest thermal conductivity and thereby decrease the heat transferred into a space (Khedari et al., 2003). In terms of heat reduction, these agricultural wastes are an economical and interesting potential insulator for ceilings and walls. A year later, the same researchers developed a particle board of low thermal conductivity manufactured using a mixture of durian peel and coconut coir (Khedari et al., 2004). Zulkifli et al.
(2008) studied the transmission loss index and acoustic absorption coefficient of natural organic fibre perforated panels with and without filler. Meanwhile, Ersoy and Küçük (2009) investigated the sound absorption of industrial tea-leaf waste formed into three different layers, with or without a single backing layer of woven textile cloth, to test the experimental sound absorption properties. Other researchers (Yang et al., 2003) studied the absorption coefficient of four fibre assemblies: cashmere, goose down and kapok, which are natural fibres, and an acrylic fibre. The natural fibres had distinctive internal structures that influence the sound absorption coefficient. Improvements in the acoustic characteristics and in the perforation plate design are used in the panel structure. The density of the porous substance and the porosity of the perforated panel significantly alter the acoustic impedance and the acoustic absorption coefficient (Puglia et al., 2005). For porous materials, sound absorption is less a function of the material type and more a function of airflow resistivity and how well the material construction can be executed to achieve desirable properties for sound absorbers (Hong et al., 2007). In addition, organic fibres are used in applications to decrease noise transmission within interior spaces and to the exterior (Zulkifli et al., 2009; Ayub et al., 2009).
This research was conducted to understand the potential use of date palm fibre and oil palm fibre, instead of industrial fibres, in acoustic absorption applications. This study validated the acoustic absorption coefficient of two organic fibre samples to understand their acoustical properties. The most important parameter for determining acoustic characteristics is the acoustic absorption coefficient. To determine this parameter from the tests conducted in the impedance tube, the SCS software programme was used to calculate the coefficient of absorption for the two types of materials. MATERIALS AND TEST METHODS In this study, the two types of fibres used for sound absorption were prepared as samples of the same thicknesses, 50 and 30 mm, as shown in Fig. 2 and 3, respectively. Crude palm fibres were prepared to test the sound-absorbing capability of the material. The fibres were compressed to form the samples; therefore, each sample contains almost the same ingredients (including the granular matrix part) as when it was collected. The samples were produced as large rectangular panels and then cut into suitable circular shapes to fit into the impedance tube. For this purpose, the preliminary processing includes several sequential steps. Fibre chopping cuts up the raw natural fibre material, producing a mixture of small pieces of palm fibre filaments and small pieces of wood and fibre powder that resemble dust. Next, the fibre extraction process separates the fibre filaments, isolating them from the rest of the materials and removing impurities. The fibre compression process is then performed manually: the fibre filaments are arranged in the mould, and each layer is manually pressed to ensure that the fibre distribution, in terms of density, is even over the entire mould.
The palm fibres formed properly when a latex treatment was used. The latex treatment involves coating all surfaces well using a spray gun so that the sample retains its shape. Finally, after the latex spraying, the sample is spread over the plate of a high-precision hot-press moulding machine, after the plate has been sprayed with a chemical treatment (RS 33 silicone release agent). The machine was run for 30 min before operation to reach 100°C. The piece is surrounded on both sides by rods whose height corresponds to the required thickness. Next, the sample is covered with a plate. The pressing period was exactly 5 min at a pressure of 170 kg/cm². The sheets for each type of fibre are 400×400×50 mm and 400×400×30 mm in dimension; Table 1 lists the properties of each type. The latex treatment thus provides cohesion and arrangement of the fibres, and circular samples were cut from the rectangular sheets for testing. The latex had no effect on the acoustic absorption characteristics but did affect the physical properties. Date and oil palm fibres are non-hazardous, non-degradable substances.
Test: Acoustic Absorption Coefficient (AAC): An empirical investigation using an impedance tube was performed to calculate the acoustic absorption coefficient (AAC). The test was performed according to ISO 10534-2 and ASTM E1050-98. The impedance tube setup consists of two steel tubes (100 mm in diameter for low frequencies and 28 mm in diameter for high frequencies), an SCS 9020 B/K measuring system for calibration, two ¼" GRAS 26AK microphones, a G10 white-noise generator and a type 4206 loudspeaker. The calibrator was a GRAS type 42 AB (Cal 21, 01 dB). For testing, the small-diameter tube can be mounted for high-frequency measurements and may also stand alone with its own loudspeaker case. The microphones are connected to the PC, which also includes a random noise generator. The other instruments used include a loudspeaker case and audio amplifier, microphones and power supply sets. Alternatively, the larger-diameter tube can be mounted for low-frequency measurements. To translate the sound waves into digital signals, SCS 8100 software was used to save the output signal data. Figure 4 shows the impedance tube setup, and Fig. 5 shows photographs of the two types of materials cut into circular shapes with diameters of 100 and 28 mm. Figure 6 compares the two thicknesses (30 and 50 mm) of date palm fibre, whereas Fig. 7 shows the same for oil palm fibre. Figures 8 and 9 compare the two types of fibres at 30 and 50 mm thicknesses. RESULTS Comparisons between the two sample thicknesses (30 and 50 mm) of date palm fibre and oil palm fibre are shown in Fig. 6 and 7, respectively. The maximum values of the acoustic absorption coefficient (AAC) are listed in Table 2. We can see that date palm fibre gives a higher acoustic absorption coefficient value than oil palm fibre. DISCUSSION Effect of fibre thickness: In Fig.
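The ISO 10534-2 measurement described above is the two-microphone transfer-function method: the complex transfer function H12 between the two microphones yields the reflection coefficient, and from it the absorption coefficient. A minimal sketch follows, checked against a synthetic pressure field built from a hypothetical reflection factor (the geometry values are illustrative, not those of the tubes used here).

```python
import numpy as np

def absorption_coefficient(H12, freq, s, x1, c=343.0):
    """Normal-incidence absorption from the two-microphone transfer
    function H12 = p(mic nearer sample)/p(mic farther), per ISO 10534-2.
    s = microphone spacing, x1 = sample-to-farther-mic distance (metres)."""
    k = 2 * np.pi * freq / c                 # wavenumber
    H_I = np.exp(-1j * k * s)                # incident-wave transfer function
    H_R = np.exp(+1j * k * s)                # reflected-wave transfer function
    R = (H12 - H_I) / (H_R - H12) * np.exp(2j * k * x1)  # reflection coeff.
    return 1.0 - np.abs(R) ** 2

# Sanity check: build H12 from a known (hypothetical) reflection factor
f, s, x1, c = 1000.0, 0.05, 0.10, 343.0
k = 2 * np.pi * f / c
R_true = 0.4 * np.exp(1j * 0.3)              # hypothetical sample reflection
p = lambda x: np.exp(1j * k * x) + R_true * np.exp(-1j * k * x)  # field at x
H12 = p(x1 - s) / p(x1)
alpha = absorption_coefficient(H12, f, s, x1)  # recovers 1 - |R_true|^2 = 0.84
```

The method fails near frequencies where ks is a multiple of π (microphones half a wavelength apart), which is why two tube diameters with different spacings are used to cover the low- and high-frequency ranges.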
6 and 7, the acoustic absorption coefficients of date palm fibre and oil palm fibre are shown for different thicknesses. Increasing the fibre layer thickness increases the absorption and moves the absorption peak towards lower frequencies in both cases. Increasing the fibre thickness enhanced the acoustic absorption at lower frequencies, as shown in Table 2. This finding indicates that the absorption increases because the impinging wave must travel a longer distance through the fibre and loses its energy. According to the absorption phenomena inside porous materials, the longer dissipative processes for the viscosity and thermal conditions in the fluid inside the material, due to the increased thickness, increase the absorption (Kino and Ueno, 2007). Effect of bulk density: Figures 8 and 9 compare the two sample types for each thickness. The absorption coefficients for the 30 mm thickness at various densities are shown in Fig. 8 to illustrate the effect of density on the acoustic absorption of the two panels. This figure shows that increasing the density of the porous material enhances the absorption of the fibre and moves the peaks toward lower frequencies, as shown in Table 2; here the density of the date palm fibre is 130 kg/m³ and that of the oil palm fibre is 65 kg/m³. The absorption enhancement occurs because the flow resistivity grows with bulk density, and the bulk density of date palm fibre is greater than that of oil palm fibre (Nor et al., 2004). Similarly, the 50-mm-thick samples are compared in Fig. 9. These results illustrate considerable variability in the AAC due to the large difference between the densities of the two types; thus, increasing the density enhances the absorption considerably due to the increased flow resistivity. The effect is therefore greater considering the porosity, because the fibre layer thickness is the same. Figure 9 also shows a significant decline in high-frequency absorption with increased density, but this decline occurs at slightly lower frequencies in Fig.
8, where the additional bulk density increases the absorption. Thus, increasing the bulk density can be considered a way to increase the absorption coefficient. As long as no additional layer (perforated plate or air space) is added to the existing layer, no further change occurs in the absorption value. The outcomes indicate that a low-density absorber has only a small effect compared with a high-density absorber, which may enhance absorption depending on how the sound wave is distributed through the material (Li et al., 2007). On the other hand, a sound wave might be reflected by the surface of the material rather than being absorbed by it. Both materials are porous, although the flow resistivity of date palm fibre is greater than that of oil palm fibre, because, for a given porosity, the flow resistivity of a porous material is inversely proportional to the fibre filament diameter (Hosseini et al., 2010). CONCLUSION This research was conducted to investigate the acoustic absorption characteristics of two fibre materials available as agricultural waste in Iraq and Malaysia. This study indicates that the use of an impedance tube to determine and compare the acoustic absorption coefficient (AAC) of date palm fibre and oil palm fibre has been successful. The results illustrate that date palm fibre has a higher acoustic absorption coefficient than oil palm fibre due to the differences in thickness and density. Experimental testing, using thicknesses of 30 and 50 mm for each panel, revealed different densities for each thickness and for each fibre type.
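The thickness and flow-resistivity trends discussed above can be reproduced with the empirical Delany-Bazley model for fibrous absorbers, a standard sketch for hard-backed porous layers; the flow resistivity value below is an illustrative assumption, not a measured property of either palm fibre.

```python
import numpy as np

def alpha_delany_bazley(f, sigma, d, rho0=1.204, c0=343.0):
    """Normal-incidence absorption of a hard-backed fibrous layer of
    thickness d (m) and flow resistivity sigma (Pa.s/m^2), using the
    Delany-Bazley empirical characteristic impedance and wavenumber."""
    X = rho0 * f / sigma
    Zc = rho0 * c0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    kc = (2 * np.pi * f / c0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
    Zs = -1j * Zc / np.tan(kc * d)           # surface impedance, rigid backing
    R = (Zs - rho0 * c0) / (Zs + rho0 * c0)  # plane-wave reflection coefficient
    return 1 - np.abs(R) ** 2

f = np.linspace(200, 5000, 400)
a30 = alpha_delany_bazley(f, sigma=20000.0, d=0.03)  # 30 mm layer
a50 = alpha_delany_bazley(f, sigma=20000.0, d=0.05)  # 50 mm layer
# thicker layers absorb more at low frequency, as the measurements show
```

Increasing either d or sigma in this sketch shifts the first absorption peak toward lower frequencies, mirroring the behaviour reported for the denser, thicker date palm samples.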
The effects of physical parameters, such as density and thickness, are described, illustrating the variations in the acoustic behaviour of the two fibres with changes in these parameters. Date palm fibre has good performance and is a promising industrial product for acoustic absorption applications over various frequency ranges. The two types of fibres are eco-friendly, are plentiful as agricultural waste and have the advantages of renewability, abundance and low cost. Therefore, these organic materials are alternatives to the industrial products used in barriers and mufflers. Finally, the properties of the two types of fibres suggest that they are more likely than synthetic materials to be used to protect the environment from noise pollution. The use of green technology for sound absorption via organic materials is the best current approach. Fig. 1: Photographs of a date palm tree in Iraq: (a) the frond, (b) the fibre. Fig. 4: Impedance tube instrument and set-up for acoustic properties. Fig. 9: Comparison between date palm fibre and oil palm fibre at a thickness of 50 mm. Table 1: The properties of date palm fibre and oil palm fibre. Table 2: The maximum value of the acoustic absorption coefficient (AAC) for each sample thickness over the range of frequencies.
3,369.8
2014-02-27T00:00:00.000
[ "Materials Science" ]
Change of Parameters of the Koiwa-Hasiguti Dynamic Dislocation Relaxation in Nanostructured and Polycrystalline Zirconium after Severe Plastic Deformation and Annealing The temperature dependences of the acoustic properties of nanostructured and polycrystalline zirconium are investigated in the temperature range of 100-340 K. The effect of severe plastic deformation and subsequent annealing on key parameters of the Koiwa-Hasiguti acoustic relaxation in zirconium is studied in detail. It is established that, due to intensive plastic deformation, the relaxation strength considerably increases, and the temperature and the width of the corresponding relaxation peak systematically decrease with reduction of the mean grain size in the samples. Annealing leads to a partial recovery of the relaxation strength and the peak temperature back to the initial values in undeformed samples, but the width of the relaxation peak shows an additional decrease. The majority of the effects observed can be explained by changes in the dislocation subsystems of the samples during intensive plastic deformation and annealing. An influence of a random scatter of the relaxation time on the main parameters of the Koiwa-Hasiguti peak is established using a statistical analysis based on the lognormal distribution. It is shown that the parameter β of the lognormal distribution determines the width, height, and asymmetry of the peak and also allows estimating the relaxation strength from the peak height. An algorithm for retrieving the parameter β from experimental data is presented.
Introduction The study of the physicomechanical properties of highly fragmented polycrystals with a mean grain size of order 100 nm is of interest from both the fundamental and technological points of view. Fragmentation of polycrystalline metallic materials and their transition into the nanostructured (NS) state essentially change such important physical and technological characteristics as the elastic moduli, strength and plasticity, the Debye temperature, the Curie temperature, corrosion resistivity, etc. In some cases, NS metals have much higher operational characteristics in comparison with coarse-grained polycrystalline metals and may be regarded as promising structural materials. Severe plastic deformation (SPD) is one of the simplest and most accessible methods of obtaining highly fragmented metals. High levels of plastic deformation may be achieved using equal-channel angular pressing, high-pressure torsion, accumulative roll-bonding, hot and cold rolling, drawing, hydroextrusion, forging, and a variety of their combinations [1,2]. SPD methods allow obtaining large and practically pore-free bulk samples that are not accessible when using other techniques, for example, the compaction of superfine powders. However, samples prepared with the help of SPD techniques are not in thermodynamic equilibrium due to a huge number of deformation defects, first of all a high density of dislocations. This can be considered one of the main reasons for the significant change of the physicomechanical properties of polycrystals during formation of the NS states as well as at subsequent thermal and/or mechanical treatments. Instability of NS materials is a serious restriction for their wide application as elements of constructions that are subjected to extreme working loads and temperatures. Therefore, the study of changes in the microstructures of NS metals and alloys at various stages of their preparation and post-SPD treatments is an important and topical problem in modern materials
science and technology.

Experimental study of the elastic and inelastic properties of NS metals in a wide temperature range using acoustic spectroscopy methods may provide important information on the dynamic properties of dislocations in metals. These methods offer a nondestructive way of obtaining the elastic and inelastic characteristics of materials and possess high structural sensitivity, selectivity, and reproducibility. In the present work, the acoustic properties of intensively deformed zirconium are investigated in the temperature range 100–340 K. Thanks to a number of important physical parameters and their combination, which is optimal for practical applications (resistance to radiation damage, small cross section for scattering of thermal neutrons, plasticity and strength, high corrosion stability), zirconium finds wide application in nuclear power engineering. It was established that SPD fragmentation of Zr polycrystals essentially improves the physicomechanical characteristics of pure zirconium, especially in the range of low temperatures [3]; however, the evolution of the parameters of the dislocation structure at different stages of the preparation and post-SPD processing of NS zirconium has not been investigated in detail.

The main purpose of the present work is a comparative study of the dislocation structure evolution in pure zirconium subjected to various SPD manufacturing schemes and post-SPD annealing by investigating the changes in the main parameters of the low-temperature Koiwa–Hasiguti (KH) dislocation acoustic relaxation.

Experimental

Samples of polycrystalline iodide zirconium were studied. The starting material was subjected to double electron-beam remelting. The grain size in the initial ingots was ∼1 mm, and the integral purity of the material was characterized by the relative change of the resistivity between 293 K and 4.2 K, ρ_293/ρ_4.2 ≈ 40. Fragmentation of the grain structure of the samples was achieved during SPD according to several technological schemes.
The value of plastic deformation at extrusion, drawing, upsetting, and squeezing was characterized by the value of the "true" plastic deformation e = ln(S_0/S), where S_0 and S are the initial and final cross sections of the samples. The use of the multistage SPD schemes was aimed both at reducing the mean grain size d and at achieving a more uniform grain-size distribution and a higher degree of grain equiaxiality. The homogeneity of the grain-size distribution was characterized by the coefficient of variation k_v = s/d, where s is the standard deviation from the value of d. The mean grain size and other characteristics of the structure of the samples were determined by means of histograms. Details of the sample preparation procedure and the methods for determining their structural characteristics are described in [4–6], and the main characteristics of the samples studied are given in Table 1.

Acoustic measurements were carried out by the two-component composite vibrator technique with piezoelectric excitation [8]. Longitudinal standing waves were excited in the samples at the fundamental frequency f ∼ 73 kHz and also at the 3rd and 5th harmonics of the quartz transducer (∼220 and ∼365 kHz, respectively). The samples were cut from the initial billets using spark erosion cutting. Then the end faces of the samples were polished with abrasive materials to achieve the required length (∼30 mm), flatness, and parallelism. The measurements were carried out in the temperature range of 100–340 K. The rate of the temperature change was about 1 K/min. Near the acoustic anomalies, the temperature variation steps were from 1 to 3 K, and at other temperatures, they were 5 K.
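The two structural measures just defined are simple arithmetic. The following minimal sketch illustrates them with made-up sample dimensions and grain sizes, not data from the paper:

```python
import math

def true_strain(S0, S):
    """'True' plastic deformation e = ln(S0/S) from the initial and final cross sections."""
    return math.log(S0 / S)

def coeff_of_variation(grain_sizes):
    """k_v = s/d: standard deviation of the grain sizes over their mean."""
    d = sum(grain_sizes) / len(grain_sizes)
    s = (sum((g - d) ** 2 for g in grain_sizes) / len(grain_sizes)) ** 0.5
    return s / d

# Hypothetical numbers for illustration only.
e = true_strain(S0=100.0, S=5.0)                  # cross sections in mm^2 -> e = ln(20)
kv = coeff_of_variation([80, 100, 120, 90, 110])  # grain sizes in nm
```

A smaller k_v corresponds to a more homogeneous grain-size distribution, which is what the multistage SPD schemes aim at.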
The temperature dependences of the logarithmic decrement δ(T) and the resonant frequency f(T) of the composite vibrator were measured. The resonant frequency was used to determine the dynamic Young's modulus E(T) of the samples. The measurements were carried out in the amplitude-independent region at the ultrasonic strain amplitude of ε_0 ∼ 1 · 10^−7. To obtain information on the structural stability of the samples, they were annealed in vacuum at 425 K for 1 hour.

Results and Discussion

3.1. Physical Nature of the Dynamic Relaxation. Temperature dependences of the logarithmic decrement δ(T) measured at frequencies of f ∼ 73 kHz are shown in Figure 1. For comparison, the results obtained earlier for the well-annealed coarse-grained polycrystalline zirconium at the frequency f ≈ 87 kHz [7, 9] are also presented. A pronounced peak of internal friction is found at the peak temperature T_P ≈ 255 K. (In Table 1, A means annealing, with the annealing temperature and duration given in parentheses; HE means "hot" extrusion at the temperature 795 K; D means drawing at room temperature; CD means cyclic deformation at room temperature; CE means cryoextrusion at 77 K; and k_v is the coefficient of variation characterizing the homogeneity of the grain-size distribution. In parentheses, the values of the "true" plastic deformation e are presented; see Section 2.)

Advances in Materials Science and Engineering

It is established that the peak shifts towards higher temperatures when increasing the frequency, which indicates the thermally activated relaxation nature of the peak.
In the simplest description of thermally activated dynamic relaxations, it is assumed that all the elementary relaxation entities (relaxators) are identical and, at each temperature, are characterized by a unique relaxation time, whose temperature dependence τ(T) is described by the Arrhenius expression

τ(T) = τ_0 exp(U_0/kT), (1)

where U_0 is the activation energy of the relaxation, τ_0 is the attempt period, and k is the Boltzmann constant. According to the Debye approximation, the frequency–temperature dependence of the relaxation component of the logarithmic decrement δ_r^D(ω, T) has the form

δ_r^D(ω, T) = Δ_M^D ωτ/(1 + (ωτ)^2), (2)

where ω = 2πf is the angular oscillation frequency, Δ_M^D = (M_U − M_R)/M_U is the maximum modulus defect (relaxation strength) associated with the Debye relaxation, and M_U and M_R are the unrelaxed and relaxed elastic moduli, respectively. In view of (2), the Debye peak should be observed when the condition

ωτ(T_P^D) = 1 (3)

is satisfied. Notice that the maximum value of the relaxation component of the decrement δ_r,max^D (the Debye peak height) and the relaxation strength Δ_M^D are related by

δ_r,max^D = Δ_M^D/2. (4)

The activation energy U_0 and the attempt period τ_0 can be determined from the peak shift with the frequency change:

U_0 = k ln(ω_2/ω_1) / [1/T_P^D(ω_1) − 1/T_P^D(ω_2)], (5)

τ_0 = (1/ω_1) exp(−U_0/kT_P^D(ω_1)), (6)

where T_P^D(ω_1) and T_P^D(ω_2) are the peak temperatures in the temperature dependences measured at the frequencies ω_1 and ω_2, respectively. The activation energy obtained in this way turned out to be U_0 ≈ 0.32–0.37 eV, and the attempt period τ_0 ∼ 10^−13 s. The experimentally determined values of the activation parameters allow us to attribute the peak to the family of so-called Koiwa–Hasiguti peaks (KH peaks) [10] caused by thermally activated unpinning of dislocations from point defects (impurities and/or vacancies). Then, the parameter U_0 has the meaning of the binding energy, and the parameter τ_0 is the period of oscillations of the microscopic element of the dislocation line directly interacting with the point defect.
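A minimal numerical sketch of this estimate, following Equations (5) and (6). The 73 kHz fundamental and T_P ≈ 255 K are taken from the text; the peak temperature at the 5th harmonic (284 K) is a hypothetical value chosen only to illustrate the procedure:

```python
import math

k_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_params(f1, T1, f2, T2):
    """Estimate U0 (eV) and tau0 (s) from the peak temperatures T1, T2 (K)
    measured at two frequencies f1, f2 (Hz), assuming an Arrhenius relaxation
    time and the Debye peak condition omega * tau(T_P) = 1."""
    U0 = k_B * math.log(f2 / f1) / (1.0 / T1 - 1.0 / T2)
    tau0 = math.exp(-U0 / (k_B * T1)) / (2.0 * math.pi * f1)
    return U0, tau0

# f1 = 73 kHz with T_P ~ 255 K (from the paper); the 365 kHz peak
# temperature of 284 K is a hypothetical illustration.
U0, tau0 = activation_params(73e3, 255.0, 365e3, 284.0)
```

With these inputs the estimate lands in the paper's quoted range, U_0 ≈ 0.32–0.37 eV and τ_0 ∼ 10^−13 s.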
The values of U_0 and τ_0 obtained according to the simple Debye approximation ((5) and (6)) should be considered only as preliminary estimates. In fact, the frequency–temperature dependence of the relaxation component of the decrement δ_r^KH(ω, T) near the KH peaks is described by a more complicated expression than the Debye peak [10] (Equation (7)), where Δ_M^KH is the maximum value of the modulus defect (relaxation strength) associated with the given dynamic relaxation, and δ_r,max^KH is the height of the experimentally observed KH peak (Equation (8)). It should be noted that, in this case, the maximum modulus defect Δ_M^KH is very large in comparison with the peak height δ_r,max^KH, unlike the Debye peak (cf. (4) and (8)). It was noted in [10] that the difference between these two relaxations consists in the fact that, in the case of the KH relaxation, the behavior of pinned dislocations is asymmetric with respect to the application and the release of external stresses. Thus, a solid that contains pinned dislocations as elements of inelastic strain does not behave like a standard linear solid, for which the Debye approximation is valid. According to [10], in order to observe the KH peak in the temperature dependence of the decrement, it is necessary that ωτ(T_P^KH) differ from unity (Equation (9)), in contrast to the condition ωτ(T_P^D) = 1 for the Debye relaxation. Within the single relaxation time approximation, the activation energy U_0 can still be determined from the frequency shift of the peak temperature using (5), but for the estimation of the attempt period τ_0, (6) should be modified accordingly (Equation (10)). Thus, the use of (9) instead of (3) does not affect the evaluation of the activation energy U_0 and only slightly increases the estimate of the attempt period τ_0. This allows us, as before, to consider the thermally activated unpinning of dislocation segments from point defects as the basic microscopic mechanism of this low-temperature relaxation.
Influence of SPD on the Parameters of the Dynamic Relaxation. As a result of the intensive plastic deformation, the temperature dependences of the decrement δ(T) underwent a number of essential changes: (1) the background values of the decrement δ_BG(T) and the height of the relaxation component of the peak δ_r,max^KH = (δ − δ_BG)(T_P^KH) increased significantly; (2) the peak temperature T_P^KH systematically decreased; (3) the peak width Δ(1/T∓0.5)^KH, defined as the difference of the inverse temperatures 1/T_(−0.5) and 1/T_(+0.5) corresponding to the peak half-height level 0.5 δ_r,max^KH, slightly decreased. These changes are shown in Figures 2(a)–2(c) and in Table 2. It is assumed that the background losses depend exponentially on temperature [7], δ_BG(T) = δ_BG(5 K) + A exp(BT), where δ_BG(5 K) are the background losses at 5 K, and A and B are adjustable parameters.

The Peak Height. Figures 1 and 2(a) show that, at the first stages of SPD application, the peak height increases by more than an order of magnitude compared to the undeformed and annealed samples. Complicating the SPD schemes and increasing the total plastic deformation e (and, correspondingly, decreasing the mean grain size d) did not lead to a further increase in the height of the relaxation peak. Moreover, there was a tendency to some decrease in peak height for the most fragmented samples.

The height of the relaxation peak δ_r,max^KH is determined by the number of elementary relaxators excited in the material and by the individual contribution of each relaxator to the inelastic strain of the crystal. The significant increase in the peak height in the deformed samples may indicate a substantial increase in the number of relaxators and/or an increase in their individual contributions. The elementary act of the dynamic KH relaxation is the thermally activated unpinning of dislocation segments from weak pinning centers (impurities and/or vacancies) under the joint action of the external alternating stress and thermal activation. The significant increase in the dislocation density Λ (the total length of dislocation lines in a unit volume) as a result of the SPD may lead to an increase in the number of successful acts of unpinning of dislocation segments from local pinning centers. The contribution of an individual unpinning act to ultrasound absorption is determined by the additional inelastic strain provided by each unpinning. While the analysis of the magnitude of this contribution is a rather difficult
task even for coarse-grained metals or metallic single crystals [11], it becomes even more complicated in the case of NS metals obtained by SPD methods. When the value of plastic deformation reaches several hundred percent, all possible modes of plastic deformation act in the metal: dislocation sliding and twinning, processes at the grain boundaries are involved, and dynamic recovery is observed. A complex subsystem of crystal defects with a high level of internal stresses is formed in the samples in this case. The dislocation density reaches values of Λ ∼ 10^14–10^15 m^−2. Along with the density of intragranular dislocations, the density of grain-boundary dislocations considerably increases. The situation in HCP metals is even more complicated in view of the presence of essentially different slip systems: basal, prismatic, and two pyramidal [12]. Under these conditions, it is difficult to assess the degree of influence of SPD on the contribution of the individual relaxator to the ultrasound absorption. We can only state a significant increase in the total relaxation strength as a result of the application of the SPD schemes chosen. It should be emphasized that, even in the most fragmented samples with a mean grain size of about 100 nm, there is a sufficiently large number of dislocation segments capable of thermally activated unpinning from local pinning centers and contributing to the dynamic KH relaxation.

It should be noted that the use of SPD does not always lead to an increase in the background absorption and the relaxation strength. In [13], some lowering of the damping capacity was observed in a comparative study of coarse-grained and ECAP-refined UFG samples of an Fe-13Cr-2Al-1Si ferromagnetic alloy. This was attributed to the pinning effect introduced by SPD crystal defects (dislocations, subgrain boundaries, etc.) on the movement of magnetic domain walls.

The Peak Temperature.
The most interesting effects registered in this work are the systematic lowering of the peak temperature T_P^KH and the decrease of the peak width Δ(1/T∓0.5)^KH (peak narrowing) with the decrease of the mean grain size in the samples (Figures 2(b) and 2(c) and Table 2). Combining (1) and (9), the KH peak position along the temperature axis can be determined (Equation (12)). Equation (12) shows that, at a constant oscillation frequency ω (which is essentially the case in our experiment), the shift of the peak towards low temperatures as a result of the SPD can be attributed to a decrease in either the activation energy U_0 or the attempt period τ_0. It should be noted that the dependence of T_P^KH on τ_0 is logarithmic and thus much weaker. Some additional experimental evidence and arguments are required in order to make the final choice between these two possible factors affecting the peak temperature (see below).

The Peak Width. As noted above, the width of the experimentally observed KH peak Δ(1/T∓0.5)^KH decreases systematically with decreasing mean grain size in the samples.
The value of Δ(1/T∓0.5)^KH in the single relaxation time approximation can be obtained by equating the right-hand side of (7) to the half-height 0.5 δ_r,max^KH from (8) and finding the roots y_1,2 ≡ (ωτ)_1,2 of the resulting equation (13), where y_1,2 are the values of ωτ at which the half-height 0.5 δ_r,max^KH of the peak is reached. The numerical solution of (13) shows that the half-height of the KH peak is reached at ωτ_1 = 6.178 and ωτ_2 = 0.405. Assuming that the values of ω, U_0, and τ_0 in the interval between ωτ_1 and ωτ_2 remain unchanged, it is easy to obtain

Δ(1/T∓0.5)^KH = (k/U_0) ln(ωτ_1/ωτ_2) ≈ 2.72 k/U_0. (14)

The analogous expression for the width of the Debye peak can be obtained analytically under the same assumptions and differs from (14) only by a numerical coefficient [14]:

Δ(1/T∓0.5)^D = (k/U_0) ln[(2 + √3)/(2 − √3)] ≈ 2.63 k/U_0. (15)

It follows from (14) and (15) that the widths of the relaxation peaks of both types are inversely proportional to the activation energy U_0. According to (12), the peak temperature T_P^KH is directly proportional to this value. In our experiments, both the peak temperature and the peak width decreased with decreasing mean grain size. It is clear that these effects cannot be explained simultaneously by a change (decrease or increase) of U_0. On the other hand, according to (14) and (15), the peak width does not depend on τ_0. This allows us to suppose that a decrease in τ_0 may be the reason for both effects.

Relaxation Time Distribution. The registered changes in the parameters of the KH peak due to SPD are apparently caused by changes in the effective values of the activation parameters of the KH relaxation process. An important consequence of SPD is the appearance of significant internal stresses in the metal, which may either facilitate or hinder unpinning of dislocation segments from the pinning centers, thus changing the effective values of the activation parameters U_0 and/or τ_0 and, hence, the relaxation time τ.
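The half-height conditions quoted above are easy to check numerically. The following sketch verifies the Debye roots ωτ = 2 ± √3 and the width coefficients entering (14) and (15); the KH roots 6.178 and 0.405 are taken from the text as given, since the full expression (7) is not reproduced here:

```python
import math

# Debye kernel: delta_r / Delta_M = w / (1 + w^2), with w = omega * tau.
def debye(w):
    return w / (1.0 + w * w)

# The Debye peak height is 1/2 at w = 1; the half-height roots solve
# w / (1 + w^2) = 1/4, i.e. w^2 - 4w + 1 = 0  ->  w = 2 +/- sqrt(3).
w1_D, w2_D = 2.0 + math.sqrt(3.0), 2.0 - math.sqrt(3.0)

# Width coefficients in units of k/U0 (cf. Equations (14) and (15)):
coef_D = math.log(w1_D / w2_D)      # ~ 2.63 for the Debye peak
coef_KH = math.log(6.178 / 0.405)   # ~ 2.72, using the KH roots quoted in the text

ratio = coef_KH / coef_D            # the KH peak is slightly wider than the Debye peak
```

Both coefficients confirm that, for a single relaxation time, the peak width scales as k/U_0 and does not involve τ_0.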
These changes may be different for relaxators acting at different locations in the sample. This should lead to the formation of spectra (discrete or continuous) of the activation parameters. The large width of the relaxation peaks may serve as evidence of the existence of relaxation time spectra.

The temperature dependences of the normalized relaxation components of the decrement δ_r^KH/δ_r,max^KH for three samples are presented in Figure 3. The dashed curve is drawn using (7) for the case of the single relaxation time approximation. It is clearly seen that the experimentally measured internal friction peaks are much broader than predicted by (7).

In the literature, several ways of taking into account the distribution of the relaxation time have been proposed for describing broadened relaxation peaks. We consider the use of the normal Gaussian distribution for the value of ln(τ) (the lognormal distribution) the most reasonable [15, 16]. In [15], such an analysis was used to investigate the influence of the relaxation time distribution on the width and height of Debye peaks. In this paper, we apply the same approach to the analysis of the properties of the KH peaks.
The relaxation component of the logarithmic decrement, when taking the distribution in ln(τ) into account, can be written as an integral over the spectrum (Equation (16)), where Ψ(ln(τ)) d(ln(τ)) is the relative number of elementary processes contributing to δ_r^KH for which the logarithm of the relaxation time falls within the interval between ln(τ) and ln(τ) + d(ln(τ)). The function Ψ(ln τ) is a normalized distribution function:

∫ Ψ(ln τ) d(ln τ) = 1. (17)

Let us change the absolute value of τ to a normalized one and introduce the variable

z = ln(τ/τ_m), (18)

where τ_m is the mean value of the relaxation time at the given temperature. Then the Gaussian distribution function for z can be written as

Ψ(z) = (1/(β√π)) exp(−(z/β)^2), (19)

where β is the Gaussian distribution parameter, which determines the half-width of the distribution at the level Ψ(z)/Ψ(z_m) = 1/e, Ψ(z_m) is the maximum value of Ψ, and z_m ≡ z(τ_m) = 0 is the mean of the distribution. Substituting (19) into (16) and introducing the variable x = ln(ωτ_m), one obtains the expression for the relaxation component of the decrement taking into account the Gaussian distribution in ln(τ/τ_m):

δ_r^KH(ω, T) = Δ_M^KH f_2^KH(x, β), (23)

where f_2^KH(x, β) denotes the convolution of the single-relaxation-time peak shape (7) with the distribution (19); the latter expression uses the notation introduced in [15]. The quantity β in (23) determines the change of the main characteristics of the KH peak in the presence of the relaxation time distribution. For β = 0, the Gaussian function becomes a Dirac δ-function, and (23) degenerates into (7) for the case of a single relaxation time. In Figure 4, the normalized curves for β = 0, 2, 5, and 10 are presented.
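The effect of β on the peak can be reproduced numerically. Since the full KH expression (7) is not reproduced here, this sketch uses the Debye kernel ωτ/(1 + (ωτ)^2) as the single-relaxation-time shape; the qualitative conclusions (broadening, lowered height, conserved area) are the same:

```python
import math

def broadened_peak(x, beta, n=2001, zmax=25.0):
    """f(x, beta): the Debye kernel broadened by the Gaussian (19) in z = ln(tau/tau_m),
    with x = ln(omega * tau_m). For beta = 0 this is the single-time Debye shape."""
    if beta == 0.0:
        w = math.exp(x)
        return w / (1.0 + w * w)
    dz = 2.0 * zmax / (n - 1)
    total = 0.0
    for i in range(n):
        z = -zmax + i * dz
        w = math.exp(x + z)  # omega * tau for this relaxator
        psi = math.exp(-(z / beta) ** 2) / (beta * math.sqrt(math.pi))
        total += psi * w / (1.0 + w * w) * dz
    return total

h0 = broadened_peak(0.0, 0.0)   # Debye height 0.5 at the peak
h5 = broadened_peak(0.0, 5.0)   # noticeably lower peak for beta = 5

# The area under the peak in x is independent of beta (cf. the text's result):
def area(b):
    return sum(broadened_peak(-20.0 + 0.05 * i, b) * 0.05 for i in range(801))
```

Running this shows the peak height dropping roughly threefold between β = 0 and β = 5 while the area stays fixed, mirroring the behavior shown in Figure 4 and Figure 7(c).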
An analysis of (23) allows us to draw some conclusions about the changes of the basic parameters of the KH peak in the presence of the distribution of relaxation times (in other words, about their dependence on the distribution parameter β). First of all, a considerable increase in the width of the relaxation peak with increasing β is observed. It was demonstrated in [15] that the ratio

r_2 = Δ(1/T∓0.5)^KH(β) / Δ(1/T∓0.5)^KH(0) (24)

is a useful characteristic of the peak width (here Δ(1/T∓0.5)^KH(β) and Δ(1/T∓0.5)^KH(0) are the widths of the peaks with β > 0 and β = 0, respectively). The value Δ(1/T∓0.5)^KH(0) can be obtained using (14). Numerical calculations give an empirical relation between the values r_2 and β. In Figure 5, the dependences β_KH(r_2) for the KH peak and β_D(r_2) for the Debye peak are presented. Solid lines show the corresponding approximations for β_KH and β_D ((25) and (26), respectively). These approximations can be used for obtaining the distribution parameter β from the experimental data by calculating r_2 in accordance with (14), (15), and (24). Figure 6(a) shows the dependences of the experimentally measured (β > 0) and calculated according to (14) (β = 0) KH peak widths Δ(1/T∓0.5)^KH on the mean grain size d. An analysis of (23) shows that, along with the broadening of the KH peak with increasing β, a significant decrease in the height of its relaxation component f_2^KH(0, β) takes place (Figure 7(a)). The relationship f_2^KH(0, β) may be approximated by the expression

f_2^KH(0, β) ≈ 0.03 + 0.37/(1 + 0.21 β^1.65).
(27)

The latter approximation is shown in Figure 7(a) by a solid line. The relaxation strength Δ_M^KH can be derived from the height of the relaxation peak δ_r,max^KH (Equation (28)). The value Δ_M^KH can also be obtained independently from the measurements of the elastic moduli, which are usually carried out together with the internal friction measurements:

Δ_M^KH = (E_U^KH − E_R^KH)/E_U^KH, (29)

where E_U^KH and E_R^KH are the values of the unrelaxed and relaxed elastic moduli, respectively. The applicability of this method is limited by the correct determination of E_U^KH and E_R^KH from the experiment. The value of E_R^KH should be taken in the temperature range where all the relaxators of the given type already contribute to the modulus defect (a high-temperature limit). Likewise, it is not easy to obtain the value of E_U^KH, since it assumes that, in the same specimen and in the same temperature range, all the relaxators of this type are completely immobilized (or absent) and do not contribute to the modulus defect at all. In most cases, it is impossible to fulfill these concurrent conditions simultaneously, first of all due to the superposition of two (or several) relaxation processes with close activation parameters. Usually, the value of E_R^KH is taken in the temperature range T ≫ T_P. When studying relaxation processes initiated by plastic deformation, the values of the elastic moduli in well-annealed samples are usually taken as the values of E_U^KH. With such a choice of E_U^KH and E_R^KH, the risk of underestimation of Δ_M^KH remains quite high.

In Figure 8, the dependences of Δ_M^KH on the mean grain size d obtained by both methods are shown.
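A small sketch of this recipe for estimating the relaxation strength. The relation Δ_M = δ_r,max / f_2(0, β) is our reading of Equation (28), whose body is not reproduced in the text, and all input numbers below are hypothetical:

```python
def f2_height(beta):
    """Approximation (27) for the normalized KH peak height f2(0, beta)."""
    return 0.03 + 0.37 / (1.0 + 0.21 * beta ** 1.65)

def strength_from_peak(delta_max, beta):
    """Delta_M estimated from the measured peak height, assuming
    delta_max = Delta_M * f2(0, beta) (our reading of Equation (28))."""
    return delta_max / f2_height(beta)

def strength_from_moduli(E_U, E_R):
    """Delta_M = (E_U - E_R) / E_U from the unrelaxed/relaxed Young's moduli (29)."""
    return (E_U - E_R) / E_U

# Hypothetical illustration: peak height 1e-3 with beta = 5,
# and Young's moduli 97.0 / 96.5 GPa.
d1 = strength_from_peak(1e-3, 5.0)
d2 = strength_from_moduli(97.0, 96.5)
```

Because f_2(0, β) < 0.5 for any β, the inferred Δ_M is always several times larger than the raw peak height, in line with the text's remark that Δ_M^KH greatly exceeds δ_r,max^KH.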
As the unrelaxed E_U^KH(T), the temperature dependence E(T) in the well-annealed Zr-CG sample [17] was used in (29). Qualitatively, the dependences obtained by the two independent methods correlate with each other, although the values of Δ_M^KH obtained from the measurements of the dynamic Young's modulus appeared to be smaller than those obtained from the peak height. It should be noted that, despite the underestimation, the values of Δ_M^KH obtained from the Young's modulus measurements are more than twice (from 2.2 to 2.8 times) higher than the peak height δ_r,max^KH. The significant magnitude of the modulus defect is additional evidence of the fact that this peak is a KH peak [10].

Figure 8, as well as Figure 2(a), shows that the relaxation strength of the KH relaxation decreases with an increasing degree of fragmentation of the samples. This means that, in the most fragmented samples, the total number of successful acts of thermally activated unpinning of dislocations from the pinning centers gradually decreases, which can be explained by a decrease in the mean length of dislocation segments L_c with a decrease in the mean grain size in submicrocrystalline samples.

The Shape of the Peak.
A more detailed analysis of the behavior of the KH peak in the presence of the relaxation time distribution shows that an increase in β significantly changes the shape of the KH peak. Contrary to the Debye peak, the KH peak at β = 0 is asymmetric: the slope of its high-temperature branch is noticeably steeper than that of the low-temperature one. As a measure of the peak asymmetry, the absolute value of the ratio K of the derivatives at the inflection points on both sides of the peak may be used [19] (Equation (30)). While the Debye peak in the coordinates chosen is symmetric with respect to x = 0 and Κ_D = 1 regardless of the value of β, the KH peak at β = 0 has a noticeable asymmetry, and Κ_ΚΗ = 1.464. When β increases, the quantity Κ_ΚΗ decreases rather rapidly and tends to unity in the limit of large β; that is, the KH peak becomes almost symmetrical (Figure 7(b)). This result does not agree with the conclusion made in [19], where it is stated that the shape of the KH peak remains unchanged (i.e., K = const) and the peak retains its asymmetry when the dispersion of the activation energy increases.

Despite the changes in the width, height, and shape of the KH peak, the area under the peak, ∫ f_2^KH(x, β) dx taken from −∞ to ∞, does not depend on the distribution parameter and remains unchanged for any β (Figure 7(c)). This result agrees with the analogous result obtained in [15] for the Debye peak.

Effect of Annealing. To test the stability of the structures created in the samples during SPD and to detect a possible recovery of the acoustic properties, the samples were annealed at the temperature T_ann = 425 K for 1 hour in vacuum. The annealing temperature was chosen close to the lower boundary of the primary recrystallization temperature T_cryst ≈ 0.2 T_m (T_m = 2098 K is the melting point of zirconium).
It is well known that significant changes can occur in the structure of the samples during annealing. Even at temperatures below T_cryst, the parameters of the dislocation structure begin to change; in particular, the dislocation density Λ decreases, and the mean length of dislocation segments L_c changes. When T_cryst is reached, growth of small grains occurs, and the mean grain size increases. That may be accompanied by the formation of annealing textures (a predominant growth of grains of certain orientations is observed). Moreover, significant changes of the internal stresses can occur at elevated temperatures due to the appearance of additional channels for their relaxation and the increase in the rate of diffusion processes. All these changes should affect the main parameters of the internal friction peaks observed.

Changes in the Peak Temperature. The peak temperature T_P^KH increased after annealing; that is, a partial recovery of this KH peak parameter is observed (Table 3). The largest recovery is observed in the most fragmented samples. It means that the structures of the NS samples created with the application of the more complicated SPD schemes proved to be the least resistant to the post-SPD heat treatment. According to (12), the value T_P^KH is determined by a combination of the values U_0 and τ_0. Hence, the observed recovery of T_P^KH as a result of the recrystallization may be regarded as evidence of an influence (direct or indirect) of the average grain size on the activation parameters of the KH relaxation. However, the nature of such influence still remains unclear, and a further study of this problem is required.
Changes in the Peak Width. Unlike the peak temperature, the peak width (or the distribution parameter β) did not undergo a recovery upon annealing and continued to decrease (Table 3). This indicates that, at least at the initial stage of recrystallization, the scatter of the activation parameters of the dynamic KH relaxation does not increase and even somewhat decreases with increasing fragmentation of the samples. Obviously, the reason for this decrease is different from that leading to the decrease of β due to fragmentation of the samples. While the decrease in β with decreasing average grain size d may be associated with a narrowing of the dislocation segment length distribution at the expense of the longest segments with lengths L > d, the decrease in β during annealing may indicate further homogenization of this distribution due to a decrease in the dislocation density and a partial relaxation of internal stresses.

Changes in the Relaxation Strength. The values of the relaxation strength obtained both from the peak height (28) and from the measurements of the dynamic Young's modulus (29) decrease after annealing in all the samples (Table 3); that is, this parameter of the KH relaxation, like the peak temperature, has a tendency to recovery. The decrease in the relaxation strength is apparently due to a decrease in the number of successful elementary relaxation acts as a consequence of the decrease in the total dislocation density.
For zirconium, we provide some examples showing that the normalized theoretical dependence f_2^KH(x, β)/f_2^KH(0, β) agrees rather well with the experimental dependences δ_r^KH(x, β)/δ_r,max^KH(0, β) obtained on different samples, both in the deformed and the annealed states (Figure 9). To perform this procedure, instead of the variable x given by (20), another variable was chosen as the abscissa axis [20]:

x = (U_0/k)(1/T − 1/T_P). (31)

Equation (20) can be used in the analysis of experimental data obtained at a constant temperature (i.e., when τ_m = const) and a changing oscillation frequency f = ω/2π. However, in our experiment, another limiting case was realized: at a practically constant oscillation frequency (f ≈ const), the temperature T was varied. Correspondingly, the relaxation time τ_m varied exponentially with T within a rather wide range. The values of U_0 and T_P in (31) were taken from Table 2. The values of β_ΚΗ were obtained according to (14), (24), and (26) and the data from Table 2. Equation (31) is valid under the assumption that the activation parameters and β_ΚΗ of the given relaxation process do not depend substantially on temperature.

Figure 9 shows a satisfactory agreement between the theoretical and experimental curves, in particular concerning the width of the peak. It should be noted, however, that there is a significant asymmetry in the experimentally observed peaks, which is larger than that predicted by (30). This could be caused by various reasons. The main one is the presence of other relaxation processes in Zr in the temperature range T < T_P^KH (see [6, 18] for more details), which deform the low-temperature branch of the KH peak.

In addition to the distortion of the peak shape, the low-temperature relaxation processes of a different nature may lead to an overestimation of the values Δ(1/T∓0.5)^KH, r_2^KH,
β_ΚΗ, and Δ_M^KH determined from the height of the peak. We established that annealing leads to a substantial decrease of the contribution of the low-temperature relaxation processes. Apparently, this can explain the fact that the best agreement between the theoretical and experimental dependences is observed in the annealed samples (Figure 9).

Conclusions

In this work, a detailed experimental study of the influence of severe plastic deformation (SPD) and post-SPD heat treatments on the parameters of the low-temperature Koiwa–Hasiguti dynamic relaxation in coarse-grained and nanostructured Zr samples is carried out. Experimental data on the changes of the relaxation peak temperature T_P^KH, the relaxation strength Δ_M^KH, and the relaxation time distribution parameter β_ΚΗ with the mean grain size d are obtained. The analysis of our results allows us to conclude the following.

(1) As a result of SPD, the sound absorption in all the samples increases significantly, both the background decrement δ_BG(T) and the KH peak height δ_r,max^KH. This effect is due to a significant increase in the dislocation density in the samples during the SPD processes. The largest increase in the decrement is observed at the first stages of SPD. With further accumulation of the total plastic deformation and decreasing mean grain size, a tendency to a certain decrease in the peak height is observed, which may be caused by a decreasing probability of dislocation unpinning owing to the significant fragmentation of the grain structure.
(2) The experimentally measured peak width Δ(1/T∓0.5)^KH turned out to be much larger than that predicted by the theoretical consideration of this relaxation under the assumption of a single relaxation time for all the relaxators. The broadening of the peak is due to a dispersion of τ around the mean value of the relaxation time τ_m(T). Based on the lognormal distribution of the value ln(τ), a statistical analysis of the possible influence of the distribution on the main parameters of the KH relaxation peak is made for the first time. It is shown that the distribution parameter β determines the width, height, and asymmetry of the peak and also allows us to estimate the relaxation strength from the peak height. In the paper, the algorithm for determining β from the experimental data is given.

(3) With decreasing mean grain size d, the peak temperature T_P^KH and the peak width Δ(1/T∓0.5)^KH in the highly fragmented samples systematically decreased. The first effect is due to a decrease of the relaxation time τ at each given temperature, in other words, due to a decrease in the activation energy U_0 and/or the attempt period τ_0. As the peak width should be inversely proportional to the activation energy U_0, the reason for the decrease of Δ(1/T∓0.5)^KH should be a decrease in τ_0 with reduction of the mean grain size.

(4) As a result of annealing, the peak temperature T_P^KH increases; that is, a partial recovery of this parameter is observed. This effect is more pronounced in the most fragmented samples, which indicates that the structures of the submicrocrystalline samples prepared by the application of the more complicated SPD schemes are less resistant to the heat treatment used.
(5) The relaxation strength values Δ_M^KH decrease after annealing in all the samples, i.e., this parameter of the KH relaxation, like the peak temperature, has a tendency to recovery. The decrease in the relaxation strength is apparently a consequence of the decrease in the dislocation density and, correspondingly, in the number of the operating relaxators.

(6) Unlike the peak temperature, the peak width does not show a recovery during annealing. Moreover, the value of β continues to decrease. It means that, at least at the initial stage of recrystallization, the dispersion of relaxation times does not increase but even somewhat decreases, especially in the most fragmented samples. The decrease in β with lowering mean grain size may be explained by a narrowing of the dislocation length distribution at the expense of the longest dislocation segments L > d. The further decrease in β during annealing indicates further homogenization of the dislocation structure because of the reduction of the total dislocation density and the lowering of the level of internal stresses.

(1) Background values of the decrement δ_BG(T) and the height of the relaxation component of the peak δ_r,max^KH = (δ − δ_BG)(T_P^KH) increased significantly. (2) The peak temperature T_P^KH systematically decreased. (3) The peak width Δ(1/T_∓0.5)^KH = 1/T_−0.5 − 1/T_+0.5, defined as the difference between the inverse temperatures corresponding to the peak half-height level 0.5 δ_r,max^KH, slightly decreased. The changes in the indicated relaxation peak parameters are shown in Figures 2(a)-2(c) and in the table.

Figure 2: Dependences of (a) the peak height δ_r,max^KH, (b) the peak temperature T_P^KH, and (c) the peak width Δ(1/T_∓0.5)^KH in Zr on the mean grain size d. Full symbols are the data of this work; empty symbols are taken from [7].
Figure 3: The normalized dependences of the relaxation components of the decrement on the inverse temperature, δ_r^KH/δ_r,max^KH (1/T) (for better clarity, the data for three samples are given). The dashed line shows the normalized dependence of the relaxation component of the decrement obtained for the sample Zr-02CE in the single relaxation time approximation (7).

Figure 6(b) shows a change of the distribution parameter β when decreasing the mean grain size d.

Figure 4: The influence of the distribution parameter β on the width and height of the KH peak (Equation (23)).

Figure 5: The relationship between the value of r_2 and the distribution parameters for the Koiwa-Hasiguti peaks β_KH (the present work) and for the Debye peaks β_D (the data from [14] were processed). The points correspond to numerical calculations; the solid lines show the approximations β_KH(r_2) and β_D(r_2) ((25) and (26), respectively).

Figure 6: Dependences of (a) the experimentally measured width Δ(1/T_∓0.5)^KH of the KH peak and that calculated according to (14) for the case β_KH = 0, and (b) the distribution parameter β_KH on the mean grain size d. Full symbols are the data of the present work; open symbols are the processed data of the work [7].

Figure 8: Dependences of the relaxation strength Δ_M^KH on the mean grain size d obtained from the KH peak height (Equation (28)) and from the measurements of the dynamic Young's modulus (29).

Figure 9: Experimental (points) and theoretical (solid lines) dependences of the normalized relaxation components of the decrement in the samples shown in Figure 3. The variable x = (U_0/k)(1/T − 1/T_P) is chosen as the abscissa axis for the experimental curves (31); the values of U_0 and T_P are taken from Table 2.

Table 1: Preparation schedules and basic characteristics of the samples.

Table 2: The main parameters of the KH peak in the as-prepared samples.
d is the mean grain size; T_P is the peak temperature; δ is the measured value of the decrement; δ_r,max = (δ − δ_BG)_max; δ_BG is the background decrement; U_0 is the activation energy; Δ(1/T_∓0.5)^KH is the peak width; β is the parameter of the relaxation time distribution.

Table 3: Change of the KH peak parameters after annealing for 1 hour at 425 K.
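The broadening effect of the relaxation-time distribution described in the conclusions can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual algorithm: it superposes single-relaxation-time (Debye-type) loss peaks over a lognormal distribution of ln(τ) of width β, assuming an Arrhenius relaxation time τ(T) = τ_0 exp(U_0/kT); all parameter values (U_0, τ_0, the measurement frequency) are illustrative and not fitted to the Zr data.

```python
import numpy as np

def debye_loss(omega_tau):
    # Single-relaxation-time (Debye-type) loss factor; maximum 0.5 at omega*tau = 1.
    return omega_tau / (1.0 + omega_tau**2)

def broadened_peak(T, U0, tau0, omega, beta, n=201):
    """Relaxation decrement at temperature T for a lognormal distribution
    of ln(tau) of width beta; beta = 0 recovers the single-time peak."""
    k = 8.617e-5                          # Boltzmann constant, eV/K
    tau_m = tau0 * np.exp(U0 / (k * T))   # mean (Arrhenius) relaxation time
    if beta == 0:
        return debye_loss(omega * tau_m)
    z = np.linspace(-4.0 * beta, 4.0 * beta, n)   # z = ln(tau / tau_m)
    g = np.exp(-0.5 * (z / beta) ** 2)            # Gaussian weight in ln(tau)
    g /= np.trapz(g, z)
    return np.trapz(g * debye_loss(omega * tau_m * np.exp(z)), z)

# Illustrative (not fitted) parameters: U0 = 0.1 eV, tau0 = 1e-13 s, f ~ 90 kHz.
T = np.linspace(20.0, 120.0, 500)
omega = 2.0 * np.pi * 9.0e4
sharp = np.array([broadened_peak(t, 0.1, 1e-13, omega, 0.0) for t in T])
broad = np.array([broadened_peak(t, 0.1, 1e-13, omega, 3.0) for t in T])
# The distribution widens the peak and lowers its height, as the text states.
print(broad.max() < sharp.max())  # -> True
```

With β > 0 the same relaxation strength is spread over a wider temperature interval, which is why the peak height alone underestimates the relaxation strength unless β is known, in line with point (2) above.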
9,574.6
2018-03-19T00:00:00.000
[ "Materials Science", "Physics" ]
Moth (Lepidoptera: Heterocera) diversity of Bhubaneswar, Odisha, India: a preliminary study

A preliminary checklist has been compiled to study the moth diversity of Bhubaneswar, Odisha, an eastern state of India. The present study has recorded a total of 154 species belonging to 129 genera and 19 families. The highest diversity of moths was recorded in the family Crambidae (48 species, 38 genera), followed by the families Erebidae (42 species, 37 genera), Geometridae (15 species, 12 genera), Noctuidae (13 species, 11 genera) and others. The study was conducted over a period of 18 months from May 2019 to October 2020. Here we present an illustrated checklist of 154 moth species from Bhubaneswar, which improves our insight into the lesser-known lepidopterans from the state of Odisha. This shall further help us strengthen our knowledge about the importance of moths in our environment and contribute towards their conservation at large.

Introduction

Moths are biologically, economically (Sharma and Bisen, 2013) and aesthetically a very important group of insects (Devoto et al., 2011; Le Croy et al., 2013; Dey et al., 2015). They are one of the most heterogeneous groups of insects (Soggard, 2009), consisting of around 127,000 species identified around the world as estimated by Hamlyn in 1969 (Alfred et al., 1998) and around 12,000 species reported from India alone (Chandra and Nema, 2007). India lies in the Indo-Malayan biogeographic realm of the world and is listed amongst the 17 mega-biodiverse countries. It contains four biodiversity hotspots, which indicates the uniqueness of its flora and fauna. It shelters around 6.5% of the species known across the globe on 2.4% of the world's total area (Faunal Diversity of India, 2020; http://www.zsienvis.nic.in/).
Odisha is unique in its geographic location, with a major part of the state falling in the Deccan Peninsula, including the Chhota Nagpur Province and Eastern Highlands, while it is guarded by a 480 km long coastline on its east. Since a considerable part of the Eastern Ghats falls within the territory of Odisha, it is speculated that the diversity of moths will be unique and interesting to investigate. In Odisha, the earliest works on moths were contributed by Hampson (1892, 1894, 1895, 1896) in the Fauna of British India. The State Fauna of Odisha (Part-III) by ZSI (Mandal and Maulik, 1991) reported 87 species under 3 families. There have been several records of moths as pest insects from various studies done in crop fields. Of these, some prominent works are those on paddy (Arora, 2000; Rath et al., 2020), brinjal (Kar et al., 2020), tomato (Sridhar and Srinivas, 2019) and teak (Tripathy et al., 2018); but no compiled work on the diversity of moths has yet been done in the present study area in the capital city of Odisha. However, in a recent work, Jena et al. (2018) reported 30 species from Gupteswar of Koraput district. In the present study, we have investigated the moth diversity primarily of Bhubaneswar city and adjoining urban areas under Khordha district, Odisha, India. A preliminary checklist containing 154 species under 19 families is presented here from the survey of ten study sites over a period of 18 months from May 2019 to October 2020.

Materials and Methods

The biodiversity documentation of moths has been primarily done in the urban areas of Bhubaneswar (20.2961° N, 85.8245° E) and its outskirts from May 2019 to October 2020 (Figure 1). The state lies in the tropical region and experiences a tropical savanna climate. It witnesses an average annual rainfall of about 1451.2 mm (Envis Centre of Odisha, 2020; http://www.orienvis.nic.in/).
The district of Khordha has mostly open forests with some moderately dense forests and scrub vegetation. Bhubaneswar is enveloped on one side by Chandaka, with semi-evergreen forests, and surrounded mostly by dry deciduous forests on its other boundaries. The selected study sites were Acharya Vihar (S1), Jaydev Vihar (S2), BJB Nagar (S3), Saheed Nagar (S4), Khandagiri (S5), Pokhariput (S6), Ghangapatna (S7), Dhauli (S8), Dalua (S9) and Raghunathpur (S10), as detailed with GPS locations in Table 1 and Figure 2. The regions prominently have urban habitat with fragmented vegetation. Khordha district has a geographical area of 2813 sq. km, of which 456 sq. km has forest cover (Envis Centre of Odisha, 2020; http://www.orienvis.nic.in/).

Figure 2: Study site photographs (S1-S10)

Moths were found by random sampling, opportunistic sightings and by setting up light traps in some of the mentioned locations. The study areas were searched extensively in the morning (06:00-08:00 hrs) and evening (16:00-19:00 hrs). Net sweeping was done with a standard-sized butterfly net for the day-flying moths and during the evenings to allow photography from closer angles. Each study site was visited for around 20 days in every season. The light traps were set in selected study sites using 100 W bulbs, placed in front of a 15 ft × 5 ft white cloth supported by a wall, for about 15 nights in every season (Figure 3). Standard tungsten bulbs were used for moth trapping. Efforts were made to create the least disturbance to the creatures in their natural environment while resting, feeding etc., except when an individual had to be caught for photography. Moths were photographed using DSLR cameras (Nikon D5300, 18-55 mm and 70-300 mm lenses; and Canon EOS 80D, Tamron 90 mm lens) and smartphone cameras.
Identification was done by referring to the available literature (Hampson, 1892-1896; Bell and Scott, 1937; Holloway, 1985-2011; Shubhalaxmi et al., 2011; Kononenko and Pinratana, 2013; Dey et al., 2018). Some online sources, like the Moths of India database (Sondhi et al., 2021; http://www.mothsofindia.org/), the India Biodiversity Portal database (Vattakaven et al., 2016; https://indiabiodiversity.org/), the Natural History Museum database (HOSTS, 2020; https://www.nhm.ac.uk/), the National Bureau of Agricultural Insect Resources database (Insect Pests, 2020; https://databases.nbair.res.in/) and the iNaturalist database (iNaturalist, 2020; https://www.inaturalist.org/), were quite helpful in the process of identification apart from the published references. Museum collections in the Lepidoptera section of the Regional Museum of Natural History, Bhubaneswar were also referred to for the identification of some of the macrolepidopteran moths. For the present study, none of the moths was collected or killed; therefore, live photography of the moths was done, as presented in the image plates. Due to several constraints, the identification was primarily based on external morphological characters, and no sophisticated methods such as genitalia dissection, DNA barcoding etc. were used to identify the moth species. The system of classification detailed by Van Nieukerken et al. (2011) has been followed for assigning moths to families. This system mostly follows the classification of Kristensen (1999) and Kristensen et al. (2007), as well as the recent developments by Zahiri et al. (2010, 2011). A few of the moths have been assigned only to genus, as morphological identification was not sufficient for many individuals to designate them to species level. There have been repeated observations of the same moth species in different survey sites. In such cases, only one observation has been taken into consideration.
The map has been created in ArcGIS, using reference from NIC (Khordha Web Portal, 2021; https://khordha.nic.in/).

Results and Discussion

We examined major studies on moths from the eastern region of India in the post-Victorian era. Saha and Raychaudhuri (1998) reported 31 moths from West Bengal, while Gurule and Nikam (2013) reported that Ghosh in 2003 documented 260 moths in the family Geometridae alone from the same state. Further, Sanyal et al. (2012) compiled 707 moths from West Bengal. Chandra and Nema (2007) reported 142 moths from Madhya Pradesh and Chhattisgarh, and Singh and Ranjan (2016) added 23 new species from the superfamily Noctuoidea to the list of 138 moths from Dalma Wildlife Sanctuary. Singh et al. (2018) reported 140 species of moths from Koderma, Jharkhand. From the information available about the moth fauna of Odisha state, it is understood that only scanty studies have been done and few non-pest moth species have been reported to date from the state. The study by Mandal and Maulik (1991) reported 87 species of moths in the Fauna of Orissa (Part-III) by ZSI, of which only six moth species were found in the present study. Seven moths found in this study were also reported by Jena et al. (2018). Although the moth Glyphodes bicolor has been reported by Jena et al. (2018), it appears to be a case of misidentification, which, as per the pictures provided in that paper, suggests the same to be Glyphodes bivitralis Guenée, 1854. This was identified from various online resources like the Moths of India database (Sondhi et al., 2021; http://www.mothsofindia.org/) and the iNaturalist database (iNaturalist, 2020; https://www.inaturalist.org/), and confirmed from other available literature. Although there have been some scattered works on the pest moths of various crops from the state, the present study is an attempt to come up with a compiled checklist to enlist the diversity of moth fauna from Bhubaneswar.
In the present study, a total of 154 moths have been identified out of the several individuals recorded, belonging to 19 families and 12 superfamilies, from surveys in ten different study sites across Bhubaneswar city and its outskirts, as presented in Table 2, Plates 1-5. All the photographs have been contributed by the authors unless credited otherwise. In this study, we have recorded 19 moth families from the state of Odisha, comprising 154 species under 129 genera. Out of these, 34 moths have been identified only up to the genus level, while the rest have been identified up to species level as indicated in Table 2. In the study, the family Crambidae dominated in species diversity, composing 31.2% of the total species (48 species, 38 genera), followed by Erebidae composing 27.3% (42 species, 37 genera), Geometridae making up 9.7% (15 species, 12 genera) and Noctuidae at 8.4% (13 species, 11 genera). The other families, found in smaller numbers, were in the following order of species diversity: Sphingidae with seven species in six genera (4.5%), Pyralidae with seven species in five genera (4.5%) and Nolidae with six species in five genera (3.9%). Further, the families Limacodidae, Tortricidae and Uraniidae were represented by two species in two genera each, Eupterotidae had one genus with two species, and the remaining eight families (Tineidae, Scythrididae, Lasiocampidae, Attevidae, Thyrididae, Bombycidae, Hyblaeidae and Lecithoceridae) were found with a single species each (Figure 4).

Figure 4: Graph denoting genus and species diversity in observed moth families

The study reveals a specific pattern of presence of moth families across various months in a year. Moths from the family Crambidae were found throughout the year, followed by Erebidae and Geometridae, which were recorded in around ten months across the year.
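The family percentages quoted above follow directly from the reported species counts; a minimal check, using only numbers stated in the text:

```python
# Species counts per family, as reported above (154 species in total).
counts = {"Crambidae": 48, "Erebidae": 42, "Geometridae": 15, "Noctuidae": 13,
          "Sphingidae": 7, "Pyralidae": 7, "Nolidae": 6}
TOTAL_SPECIES = 154

for family, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    share = 100.0 * n / TOTAL_SPECIES
    print(f"{family}: {n} species ({share:.1f}%)")  # e.g. Crambidae -> 31.2%
```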
Noctuidae, Pyralidae and Sphingidae were observed in seven different months of the year. Nolidae and Bombycidae moths were seen in around four to six months across different seasons. The families of moths found less frequently were recorded in only one or two months of the whole year. These were Uraniidae, Eupterotidae, Lasiocampidae, Scythrididae, Lecithoceridae, Thyrididae, Tineidae, Hyblaeidae, Attevidae, Limacodidae and Tortricidae (Table 3). While most moths found were crepuscular in their time of activity and presence, day-flying moths like Episteme sp. and Dysphania militaris were also recorded amongst the macrolepidoptera. The month of August recorded the highest diversity of moths, with 11 different families out of all 19 families reported in the study. July and October recorded a considerably higher number of moths, with ten families reported in each month. Moths from the families Crambidae, Geometridae and Erebidae were found across most seasons, while others like Limacodidae, Thyrididae and Lecithoceridae were only seen during autumn and winter. The families represented by a greater number of moth species were mostly found around the monsoon (Figure 5). Hence, from the study it can be said that the diversity of moths is quite rich in Odisha. Since the present inventory relied mostly on opportunistic findings and seasonal surveys over 18 months, yet reports a diversity of 19 families with 154 species from the single district of Khordha, it is contemplated that further detailed studies with intensive light-trapping sessions can reveal the actual diversity of the eastern state of Odisha.

Conclusions

The study compiles a preliminary moth diversity of the city of Bhubaneswar and its adjoining outskirts, recording a total of 154 moths in 19 families.
It can be said that the presence of various moth species in any particular landscape is related to the different types of vegetation of a region, cropping seasons, the flowering of plants and various other factors controlling their diversity and abundance. Hence, this suggests that the moth diversity of the state is quite rich, as evident from a preliminary survey in a single district, and needs to be studied extensively to gather more information about their present status for further conservation. Many species found in the study could be keyed only to the genus level, while many other unidentified moths await proper taxonomic study and documentation. As the state of Odisha is rich in forest cover and has diverse biogeographic zones from the East Coast to the Deccan Peninsula, including tropical dry deciduous and semi-evergreen forest types, it can easily be speculated that the moth fauna of the state is unique and rich, as found from the present sample study of one district. It is evident that with further intensive studies in other parts of the state, the moth diversity can be explored in greater detail in relation to the biogeographic regions and vegetation types across the state. The results of the present survey indicate a diverse population of moths in the landscape of Odisha, with 19 families reported, characterized majorly by Crambidae, Erebidae, Geometridae and Noctuidae. The presence of less frequently encountered families like Tineidae, Attevidae, Lecithoceridae and Hyblaeidae also indicates that moths can be considered as bioindicators for particular regions when their presence is correlated with particular forest types or habitats. We also suggest that, since inventorying is necessary for the conservation of a taxon, more biodiversity assessments need to be done on these largely nocturnal lepidopterans.
Along with natural history documentation, scientific records of the same can also reveal more information about interactions with plants and the vital role these moths play in the ecosystem as indicators, pollinators and pests, beyond the usual importance given to a few silk moths for economic benefits. It would be further interesting to compare the diversity of urban areas like the present study locations with forested areas that stand unaffected by city light pollution, which strongly affects moths and their natural navigation.
3,430
2021-09-02T00:00:00.000
[ "Biology", "Environmental Science" ]
Analysis of ADCP data above a bottom observatory

A 300-kHz ADCP was set on GEOSTAR, a deep-sea observatory designed for depths down to ∼6000 m. It was operated with cells of 80 cm during a three-week test experiment at 42-m water depth in the Northern Adriatic sub-basin. Although it provided valuable data on the horizontal current field over most of the water column, it also specified the wake disturbances induced by the observatory. These disturbances are characterised by vertical velocities that are significant up to ∼20 m above the seafloor (echo intensity data suggest that the wake can even reach the surface), and by inclinations of the bottom nepheloïd layer (as deduced from beam-to-beam differences in echo intensity). Our analysis is validated by consistent relationships between the horizontal current direction and speed on one side and the characteristics of both dynamic (vertical velocity) and non-dynamic (echo intensity) parameters on the other. It is in good agreement with the simulations from a numerical model, and hence specifies the sensitivity (especially with respect to echo intensity) and accuracy of an instrument usually operated within fields of currents and scatterers not disturbed by the device supporting it. In addition, the error velocity parameter displays specific characteristics that easily specify the thickness of the layer disturbed by the observatory, thus providing a technique to validate the quality of data acquired in similar conditions.

Mailing address: Dr.
Claude Millot, Laboratoire d'Océanographie et de Biogéochimie (LOB), BP 330, F-83507 La Seyne-sur-Mer, France; e-mail<EMAIL_ADDRESS>

Introduction

ADCPs have now proven their capability to sample phenomena ranging from surface waves, to small-scale internal waves, to mesoscale eddies and to the general circulation. However, ADCPs are generally operated in current fields that can be assumed homogeneous at a given level and at beam-spacing scale; hence, they generally work at their best, which does not reveal their utmost performance. As shown hereafter, less favourable conditions, for instance when a relatively huge structure prevents the mean current field around it from being homogeneous, reveal unexpected aspects of their performance.

The GEOSTAR group has developed a prototype observatory aimed at being deployed with a carrier as deep as ∼6000 m, mainly to study geophysical phenomena (Beranzoli et al., 2000, fig. 1). The observatory is a three-ton, semi-cubic (2.5×2.5×1.0 m³) structure on which is set, among other oceanographic and geophysical instruments (CTD, seismometer, gravimeter, etc.), a 300-kHz ADCP (Workhorse Sentinel from RD Instruments) whose beams (20° angle) are unobstructed by the docking/undocking tetrahedral armature on top of the observatory. This relatively short-range (∼150 m) ADCP was set instead of the long-range (∼500 m, 75 kHz) one initially planned because of funding cuts, even though the observatory is obviously too massive to allow the accurate study of near-bottom dynamic phenomena. Although wake disturbances induced on the near current field by such an observatory were anticipated, we did not expect the ADCP to be able to specify them and only expected to get data noisier than usual.
During a three-week test, from August 13 to September 1, 1998, at a depth of ∼42 m in the Northern Adriatic sub-basin, eastern basin of the Mediterranean Sea, the ADCP was checked for oceanographic purposes (since we did not imagine being able to specify the wake disturbance). Hence, it was set to record Earth-coordinates velocities calculated from 200 pings evenly transmitted during 10 min every hour (had it been operated to specify the wake disturbances around the observatory, it would have been set to record single-ping values). Figure 2a specifies the beam numbers and the orientation of the ADCP with respect to the observatory and to magnetic north; the ADCP axis was roughly vertical (pitch and roll ∼0.2°). Profiles were set to 50 cells 80 cm thick. Due to the ADCP location and to the blank zone, cell #1 was ∼4.4 m above seafloor (asf). Due to the secondary-lobe reflection on the sea surface, data are available up to a depth of ∼5 m (cell #42). Figure 2b provides a sketch of the general experimental conditions.

Fig. 1. The GEOSTAR Observatory during deployment. The conical upper part, the base of which is protected by a black rubber fender, is the mobile docker that is removed when the observatory is set on the bottom; it masks the upper part of the docking/undocking tetrahedral armature. The docker is then used only for recovery, hence as a female, to clamp the observatory via a pin system set on top of the four inclined tubes. The two vertical arms terminated by yellow spheres that contain magnetometers will be lowered onto the bottom, i.e. away from the observatory. During the experiment, the ADCP is thus nearly on top of the observatory. Also shown is the deployment location.
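The profile geometry described above (50 cells of 0.8 m, cell #1 centred ∼4.4 m asf, in ∼42 m of water) can be sketched with the standard linear bin-mapping formula; the constants come from the text, and the function names are illustrative:

```python
# Profile geometry from the text: 50 cells of 0.8 m, cell #1 centred
# ~4.4 m above seafloor (asf), in ~42 m of water; surface-lobe
# contamination limits usable data to cell #42.
WATER_DEPTH = 42.0      # m
FIRST_CELL_ASF = 4.4    # m
CELL_SIZE = 0.8         # m

def cell_height_asf(n):
    """Height above the seafloor (m) of cell #n (1-based)."""
    return FIRST_CELL_ASF + (n - 1) * CELL_SIZE

def cell_depth(n):
    """Approximate depth (m) of cell #n below the surface."""
    return WATER_DEPTH - cell_height_asf(n)

print(round(cell_depth(42), 1))  # shallowest usable cell, ~5 m as stated -> 4.8
```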
The overall circulation in the study area is usually thought to be southwards along the large-scale isobaths (Orlic et al., 1992). The circulation encountered during the experiment in the deep layer below the seasonal pycnocline, as illustrated by the progressive vector diagrams from cells #1-30 (fig. 3), was thus unexpected and rather complex. Diagrams from cells #31-42 are

As indicated by vertical profiles from a ship-handled transmissometer operated at the beginning and end of the experiment and by the time series from a transmissometer mounted on the observatory, a bottom nepheloïd layer several metres thick was continuously present during the experiment. However, for the sake of simplicity, we simply consider hereafter that the concentration in suspended sediments decreases upwards. As schematised in fig. 2b, this layer is obviously uplifted by the station; just above the ADCP, one can thus easily expect that the layer will generally be inclined, so that echo intensity must differ from beam to beam in a way related to the current direction and speed.

These expected characteristics of dynamic (vertical velocity) and non-dynamic (echo intensity) parameters will be analysed thoroughly. In addition, we have analysed the error velocity parameter, which represents the heterogeneity of both the horizontal currents and the sums of beam-paired vertical velocities. We show that the error velocity distribution displays characteristics that allow specifying the depth range over which the current field is made significantly heterogeneous by the observatory or by a similar bottom irregularity. We thus provide criteria to check the current homogeneity, hence to validate bottom ADCP measurements.

To understand the data analysis more easily, the current field around the observatory simulated with a numerical model is described first (Section 2). The ADCP characteristics and the speed data screening steps are presented in Section 3.
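A progressive vector diagram, as used in fig. 3, is simply the time integral of the velocity record at a fixed point, plotted as a pseudo-trajectory. A minimal sketch with hourly samples (the function name and the synthetic record are illustrative):

```python
import numpy as np

def progressive_vector_diagram(u, v, dt=3600.0):
    """Pseudo-trajectory from a fixed-point current record: cumulative
    displacement (m) from velocity components u, v (m/s) sampled every dt s."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    x = np.concatenate(([0.0], np.cumsum(u * dt)))
    y = np.concatenate(([0.0], np.cumsum(v * dt)))
    return x, y

# A steady 0.1 m/s southward current advects roughly 8.6 km per day.
x, y = progressive_vector_diagram([0.0] * 24, [-0.1] * 24)
print(x[-1], y[-1])  # ~0 m east, ~-8640 m north
```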
The dynamic and non-dynamic data, as well as the error velocity parameter, are analysed in Sections 4 and 5, respectively.

The numerical current field around the observatory

Numerical simulations of the current field around the observatory were performed using the commercial CFD (Computational Fluid Dynamics) software Fluent (2000). The effect of turbulence was considered by means of the RNG (Re-Normalization-Group) code for a κ-ε model derived from the instantaneous Navier-Stokes equations. The analytical derivation results in a model with constants different from those in the standard κ-ε model, and additional terms and functions in the transport equations for κ and ε. The near-wall layer was approximated by a standard logarithmic function approach. The observatory was schematised by a block of 2.5×2.5×1.0 m³ and by the 4 booms of the docking/undocking frame on top, which gives a total height of ∼2.7 m asf.

Fig. 4a. 2D current in a plane through the centre of the observatory with a far-field speed of 300 mm/s along a diagonal. Note that the current is coming from the left.

Several sets of experiments were performed: two values of the mean speed (100 and 300 mm/s), two configurations for the observatory orientation (mean current parallel to a side or to a diagonal) and two domains (30 m in x, the direction of the mean current, 20 m in y, 20 m in z; and 45 m in x, 20 m in y, 40 m in z) were investigated. Parameters were adjusted (e.g. seafloor roughness down to 0) to obtain vertical velocity values as large as possible (i.e. more similar to the observed ones). Although the simulated vertical velocities were still lower than the observed ones, they reached the surface, consistently with the measured echo intensity data.
The results illustrated in fig. 4a,b correspond to the smaller domain (to provide more visible details of the current field around the observatory) and a mean current of 300 mm/s parallel to a diagonal. Although the ADCP is then located at ±∼1 m from the observatory centre in both x and y, the simulations are considered in a vertical plane through the centre of the observatory. Although the current vectors at some distance from the observatory do not seem strongly modified in fig. 4a (due to the large difference between the vertical and horizontal components), significant vertical velocities are in fact induced over the whole water depth (fig. 4b). Maximum vertical velocity values that can be compared with those from the ADCP, i.e. at ∼4-5 m asf (cell #1), reach ∼10 mm/s, while values of ∼1 mm/s are still encountered at ∼15 m asf. Slopes of the simulated current lines (i.e. lines of iso-concentration in suspended sediments) are thus maximum (∼1/30) at ∼4-5 m asf. There, they correspond to a difference in depth along two opposite beams of ∼6 cm, which is much lower than the cell size (80 cm), and thus a priori hard to resolve. This is a fortiori valid for the upper cells, since W decreases more rapidly than the beam separation increases.

The ADCP data processing

For the basic assumption of the ADCP technique to be verified, the 3D current field must be horizontally homogeneous, i.e. over ∼2 m for cell #1 and ∼15 m for cell #40. From the four beam velocities, each pair of opposite transducers (1-2 and 3-4) gives one horizontal and one vertical component. The two horizontal components are directly associated with U and V. With a homogeneous current field, the two vertical components (W12 and W34) are identical; hence, the 3D current profile is defined with some redundancy. In practice, a mean vertical velocity W = (W12 + W34)/2 is computed, as well as an error velocity EV = C·(W34 − W12), where C = 1/(√2·tan(20°)) is a normalisation constant (RDI Technical Booklet, 1998). Another parameter to deal with is the echo intensity (EIb, in dB) for each beam (#b) and cell, which represents the amount of energy echoed at given distances.

The Workhorse Sentinel model uses the broadband method based on the propagation delay computed from the correlation between one ping and its echo (Gordon, 1996). In fact, single-ping beam velocities are generally not accurate enough, so that ensemble averages are performed (we used 200 pings emitted during 10 min every hour). In the Earth-coordinates mode, and for each single ping, there are three kinds of screening of velocity data: the correlation test, the fish rejection algorithm and the error velocity test.

The correlation test compares the correlation level and results to thresholds. Beam velocities either associated with a poor signal-to-noise ratio (which occurs in waters too depleted in scatterers) or larger than a pre-defined ambiguity velocity are rejected. These problems were not often encountered.
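The W and EV definitions above can be written out explicitly. The sketch below assumes a simple opposing-beam sign convention for illustration (the actual RDI beam-to-Earth transformation differs in sign conventions and ordering); the constant C matches the text:

```python
import math

THETA = math.radians(20.0)                    # beam angle from the vertical
C = 1.0 / (math.sqrt(2.0) * math.tan(THETA))  # normalisation constant from the text

def pair_vertical(b_a, b_b):
    """Vertical velocity seen by one opposing-beam pair (illustrative sign
    convention; RDI's actual beam-to-Earth matrix differs in detail)."""
    return (b_a + b_b) / (2.0 * math.cos(THETA))

def w_and_ev(b1, b2, b3, b4):
    w12 = pair_vertical(b1, b2)
    w34 = pair_vertical(b3, b4)
    w = 0.5 * (w12 + w34)    # mean vertical velocity, W = (W12 + W34)/2
    ev = C * (w34 - w12)     # error velocity, EV = C * (W34 - W12)
    return w, ev

# Homogeneous flow: both pairs see the same vertical velocity, so EV vanishes.
w, ev = w_and_ev(0.01, 0.01, 0.01, 0.01)
print(ev)  # -> 0.0
```

With heterogeneous flow across the beam spread, the two pairs disagree and EV grows, which is exactly the property exploited later to delimit the disturbed layer.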
The fish rejection algorithm detects an abnormal beam velocity for a specific cell, or an echo much greater from one beam than from the others, which would make the strong echo from the «fishy» beam heard by the others and the same velocity computed for all beams. The large concentration and gradient of suspended particles in the bottom nepheloïd layer, and the sometimes-large inclination of this layer, often led to such problems. In some cases, more than 80% of single-ping beam velocities were rejected.

In a heterogeneous current field, W12 and W34 are in general markedly different, so that EV is large. This often occurred for the lower cells, resulting in uncertainties larger than usual on the horizontal components. However, it is «frustrating» to eliminate all horizontal current values associated with an EV larger than a threshold, and not convenient to have gaps in a time series (i.e. for a given cell; we used the option «3-and-4-beam solution»). An analysis of the EV characteristics will show that it is possible to estimate, in a statistical way, the thickness of the layer significantly disturbed by such an observatory.

The dynamic parameters

Up to 15-20 m asf, W (fig. 5) reached relatively large upward values (up to 40-50 mm/s), while EV (fig. 5) displayed variations that were similar both in time and on the vertical. The similarity between large W (i.e. large W12 + W34) and large EV (i.e. markedly different W12 and W34) is consistent with what was expected.
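The idea of using EV statistics to delimit the disturbed layer can be sketched as follows; the threshold and exceedance fraction are illustrative choices, not values from the paper:

```python
import numpy as np

def disturbed_layer_top(ev, threshold=0.02, frac=0.1):
    """Highest cell still significantly disturbed, from an error-velocity
    record ev of shape (n_ensembles, n_cells). A cell counts as disturbed
    when more than `frac` of its ensembles exceed `threshold` (m/s);
    both values are illustrative, not the paper's."""
    exceed = np.mean(np.abs(ev) > threshold, axis=0)
    disturbed = np.flatnonzero(exceed > frac)
    return int(disturbed.max()) + 1 if disturbed.size else 0  # 1-based cell #

# Synthetic record: large EV in the lowest 15 cells, instrument noise above.
rng = np.random.default_rng(0)
ev = rng.normal(0.0, 0.005, size=(500, 40))
ev[:, :15] += rng.normal(0.0, 0.05, size=(500, 15))
print(disturbed_layer_top(ev))  # -> 15
```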
The relationship between W and the horizontal current speed M = [U² + V²]^{1/2} is also noteworthy. The regression lines for some specific cells and all directions (fig. 6) indicate that W ∼ 0 when M = 0 everywhere, that W increases with M, and that this increase diminishes with the cell number (i.e. distance from the observatory). For cell numbers >15, W values are relatively small and no longer depend on M. These features are consistent with the deformation of the schematised (fig. 2b) and simulated (fig. 4) current field. Simulated Ws can also be compared with the measured ones (fig. 7). Obviously, the measured Ws are more scattered than the simulated ones, due to the ranges in both speed and direction of the in situ currents, and to the inaccuracies in the W computations, especially those due to the heterogeneity of the current field. Therefore, only mean measured Ws were plotted, showing that they are ∼3 times larger than the simulated ones. However, the mean curves have a roughly similar shape and they all reach ∼0 at ∼20 m asf. We are unable to specify which kind of curve is the most realistic one. Furthermore, vertical velocities are in fact induced up to the surface, as shown hereafter (although not just above the observatory, i.e. non-measurable by the ADCP and not in the domain considered for the simulations). Since inaccuracy in the W computation does not necessarily mean large W, we are tempted to think that measured Ws are, on average, more realistic than simulated Ws.
The non-dynamic parameters

As schematised in figs. 2b and 4a,b, the nepheloïd layer, being uplifted and inclined, should induce a marked heterogeneity in the echo intensity EIb along every beam and for every cell. In the open sea, EIb mainly provides information on the amount and distribution of zooplankton. In the present case, this information mainly concerns i) the zooplankton diurnal migration on the vertical, and ii) the concentration and thickness of the bottom nepheloïd layer above the observatory, which partially depends on the current speed and direction. It can be assumed that «natural» EIb variations (due to both the nepheloïd layer per se, i.e. undisturbed by the observatory, and the zooplankton vertical migrations) are horizontally homogeneous. In order to eliminate them, we consider EIb variations along a specific beam relative to variations along the other beams (EIbs must first be normalised since the transducers are not of equal quality). Figure 8 illustrates, for the 4 beams (b), the difference DIb between a given EIb and the average over the 3 other EIbs as a function of the current direction (north is upward) and depth (represented by cells #1-42). DIb thus represents a relative echo intensity.

Mainly for cells close to the observatory (cells #1-15) and for upper cells, a specific DIb appears to be larger or lower than the others depending on the current direction. For instance, DI4 is relatively large when the current is directed towards south to west. For most cells, this corresponds to a nepheloïd layer denser along beam #4 than along the other beams above the observatory. As expected from figs. 2a,b and 4a,b, the nepheloïd layer is higher along beam #4 than along the other beams for currents towards south to west. Similar features occur for the other beams. The analysis of relative echo intensity signals thus reveals slopes of the bottom nepheloïd layer that, according to the simulation, were not expected to be measurable.
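The relative echo intensity defined above can be sketched in a few lines (illustrative code, not the authors' processing chain; the echo intensities are assumed already normalised):

```python
def relative_echo_intensity(ei):
    """DIb: for each of the 4 beams of one cell, the (normalised) EIb
    minus the average of the EIbs of the 3 other beams."""
    total = sum(ei)
    return [ei[b] - (total - ei[b]) / 3.0 for b in range(4)]
```

A horizontally homogeneous nepheloïd layer gives DIb = 0 on every beam; a layer denser along one beam makes that beam's DIb positive and the three others slightly negative.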
Especially for DI4 and DI2, large values are encountered over the whole water depth (∼42 m), showing that the wake of the observatory sometimes reached the surface, thus probably inducing changes in particle concentration, i.e. in colour, at the surface, eventually downstream from the zone sampled by the ADCP. The reverse (DIb relatively low) occurs, for all beams, in the opposed and perpendicular directions (i.e. where one of the other DIbs is relatively large). Also to be noticed is that the negative parts of DI2 and DI4 are more negative than those of DI1 and DI3. These asymmetrical features are related to the fact that more beams are in the wake of the observatory (i.e. EIbs are more similar and DIbs are lower) when the current is towards east than when it is towards west. This is due to the fact that the ADCP is located on the eastern side of the observatory, and is consistent with the distribution of the data computed from either 4 beams (P4) or 3 beams only (P1), as indicated in fig. 9a-c for a given cell (#15). Indeed, more data are computed from the 4 beams when the current is towards west (hence not strongly disturbed above the ADCP) than when it is towards east (the ADCP then being in the wake of the observatory). It can thus be concluded that: i) the wake induced by the observatory can extend several tens of metres above it, and ii) the slightly off-centre location of the ADCP (∼1 m) is consistently reflected in the wake signature (over several tens of metres).
The concentration in particles within the nepheloïd layer per se increases downwards and varies with time. Hence, it is impossible (without any measurement at some distance from the observatory) to identify and separate the influences, on the DIbs, of either the concentration in particles or the inclination of the nepheloïd layer. In order to deal with a somewhat synthetic parameter representative of the EIb heterogeneity, we computed, for each cell and over time, the EIb standard deviation [1/4·Σb(EIb − EIm)²]^{1/2}, where EIm is the average of the EIbs over b. It is clear that the EIb standard deviation (fig. 5c) and W (fig. 5a) have very similar distributions, especially in the 15-20-m bottom layer where both can be large. This direct correspondence between a priori independent (one non-dynamic and one dynamic) parameters clearly validates figs. 2a and 4a,b and shows that the larger the inclination of the nepheloïd layer, the larger the vertical velocity.

It must be emphasised that the EIb standard deviation, and consequently the inclination of the surfaces of equal concentration in suspended particles, is computed from differences between EIbs in cells that are separated horizontally by a few metres only (e.g., ∼2 m for cell #1 and ∼15 m for cell #40). Whatever the current field homogeneity is, and thus whatever the quality of the current computations, this analysis demonstrates that the echo intensity is a very sensitive signal that provides significant information (not perceptible with a homogeneous current field).

Fig. 10. Spectra of U (red), V (blue), W (black), EIm (orange) and DI4 (green) at cell #5, averaged over 2 pieces to give 4 degrees of freedom. The confidence interval is the 90% one.
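The synthetic heterogeneity parameter is simply the population standard deviation of the four echo intensities of a cell; a minimal sketch:

```python
import math

def echo_intensity_std(ei):
    """EIb standard deviation over the 4 beams of one cell:
    [1/4 * sum_b (EIb - EIm)^2]^(1/2), with EIm the beam average."""
    eim = sum(ei) / 4.0
    return math.sqrt(sum((e - eim) ** 2 for e in ei) / 4.0)
```

Identical echo intensities on the four beams (homogeneous layer) give zero; an inclined layer, with beams sampling different concentrations, gives a positive value.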
Spectral analysis

Figure 10 displays the spectra of dynamic (U, V, W) and non-dynamic (EIm, DI4) parameters at cell #5 (representative of the 15-20-m bottom layer). Classically, the U and V spectra show a similar amount of energy at the inertial frequency (clockwise-polarised circular oscillation at ∼0.06 cph), while the semidiurnal (∼0.08 cph) and diurnal (∼0.04 cph) frequencies show up mainly on the V spectrum (north-south rectilinear currents in the Northern Adriatic). Since EIm and W are purely integrated (over beams) parameters, they do not change markedly under a circularly polarised current (the inertial frequency will not appear on their spectra), while they change under a rectilinear current (at twice the corresponding tidal frequency). Both EIm and W are also sensitive to the zooplankton diurnal migration, which is not a purely diurnal signal, so that energy is also introduced at the semidiurnal frequency (and at higher harmonics as well). Therefore, the fact that only tidal frequencies, i.e. not the inertial frequency, show up on both EIm and W is consistent with our analysis.

In addition, all peaks (diurnal, inertial, semidiurnal) must appear on the spectrum of a parameter such as DI4 (insensitive to zooplankton diurnal migration), since tidal currents and inertial currents induce an inclination of the nepheloïd layer at the corresponding frequencies. As shown by fig. 10, a peak (at the inertial frequency) on the spectrum of a non-dynamic variable (DI4) coming from a purely dynamic phenomenon (the inertial oscillations) definitely validates our analysis.
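The frequency-doubling argument for beam-integrated parameters can be checked numerically: a rectilinear current at the M2 frequency, passed through a sign-insensitive (integrated-like) quantity such as its absolute value, moves its spectral peak to twice that frequency. This is an illustrative sketch with synthetic hourly data, not the authors' spectral computation:

```python
import numpy as np

fs = 1.0                    # one sample per hour, as in the deployment
n = 1024
t = np.arange(n) / fs       # time in hours
f_m2 = 1.0 / 12.42          # semidiurnal M2 frequency (~0.08 cph)
v = np.cos(2.0 * np.pi * f_m2 * t)   # rectilinear tidal current

freqs = np.fft.rfftfreq(n, d=1.0 / fs)

def peak_frequency(x):
    """Frequency of the largest spectral line, ignoring the mean."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    return freqs[np.argmax(spec[1:]) + 1]

f_v = peak_frequency(v)            # ~f_m2: the current itself
f_abs = peak_frequency(np.abs(v))  # ~2*f_m2: a sign-insensitive parameter
```

The same mechanism explains why W and EIm respond to rectilinear tidal currents at twice the tidal frequency, while a circularly polarised (inertial) current leaves them unchanged.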
More on the error velocity parameter

Theoretically, a homogeneous current field leads to an error velocity EV = 0. Practically, EV ≠ 0 indicates that the data are not accurate and/or that the current field is heterogeneous. Let us consider an ADCP suitably mounted (nearly centred) on a symmetrical (with respect to the current direction) structure, and collecting data at best (in terms of, e.g., ping averaging). The analysis below shows that the EV characteristics allow specifying which cells are markedly influenced by the structure, hence validating the data from a statistical point of view.

With a heterogeneous current field, and for a given cell, Ub and Wb (b = 3, 4) define the actual current components from beams 3 and 4, and Vb and Wb (b = 1, 2) the actual current components from beams 1 and 2 (fig. 11a). F being the angle of the beams from the ADCP axis, the radial velocities Rb and the computed vertical velocities follow from the beam geometry (for more details see van Haren et al., 1994). In case the actual current field is homogeneous, U3 = U4 = U, W3 = W4 = W34, V2 = V1 = V, and W1 = W2 = W12, with the consequence that the U-V-W field is specified with measurement inaccuracies that are quantified by EV/C = (W34 − W12). In case the current field is heterogeneous, with DU = U3 − U4 and DV = V1 − V2, it results in EV/C = (DU − DV)·tanF + (W3 + W4) − (W1 + W2).
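The heterogeneous-field relationship can be turned into a small numerical sanity check (an illustrative sketch; the variable names are ours):

```python
import math

def ev_over_c(u3, u4, v1, v2, w1, w2, w3, w4, f_deg=20.0):
    """EV/C = (DU - DV)*tanF + (W3 + W4) - (W1 + W2), with
    DU = U3 - U4 and DV = V1 - V2 (heterogeneous current field)."""
    du, dv = u3 - u4, v1 - v2
    return (du - dv) * math.tan(math.radians(f_deg)) + (w3 + w4) - (w1 + w2)
```

A homogeneous field (U3 = U4, V1 = V2, identical Wb on all beams) gives EV = 0, as stated above; swapping the same shear between the U-pair and the V-pair flips the sign of EV, which is the symmetry exploited in the next paragraph.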
Assuming a simplified cubic observatory oriented northwards (fig. 11b), and an ADCP located on its eastern side, two relationships can be expected. For currents of similar speed towards directions A and A′ that are symmetrical with respect to the east-west axis, identities between the Ub and Vb of both situations lead to EVA = −EVA′. For currents towards either east or west, W2 = W4, W1 = W3, U3 = V1 and U4 = V2; hence EV = 0 (also obvious from the previous relationship). If the ADCP were centred, EV would have been 0 for currents towards either north or south, and EVA = EV−A for currents towards opposed directions (A and −A). In a given data set, any indication of such relationships will account for a significant heterogeneity of the current field. Conversely, if EV does not display any specific distribution (i.e. is only noisy), the current field can be considered homogeneous.

Considering these relationships over depth can specify, from a statistical point of view, the thickness of a significant wake disturbance. Verifying these specific EV relationships with our data set can be hampered by the fact that both the speed and the direction are not homogeneously distributed. To get a distribution as homogeneous as possible, we considered all speed-direction couples from cells #1-20 for the whole experiment (fig. 12a) and their associated EV (fig. 12b) and W (fig. 12c) values. We have neglected the difference between north and the 3° actual orientation. Figure 12b shows that, whatever the horizontal speed is, the EV sign changes when crossing the east-west axis, which validates the corresponding theoretical relationship (EV = 0 for currents towards east or west). Also, for all directions and speeds, the theoretical relationship EVA = −EVA′ is clearly validated. To be noticed is the (sole) other sign change near directions towards southeast and northeast, i.e.
the north-south axis is clearly not a symmetry axis, as would have been the case for an ADCP located in the centre of the observatory. In such a case, the relationships (EV = 0 for north and south directions, and EVA = EV−A for opposed directions) are clearly not validated. Therefore, the EV distribution in fig. 12b clearly shows that i) the current field is heterogeneous and ii) the ADCP is not centred.

Although the interpolation procedure is rough for very large speeds, due to the low number of points, it can be considered that the larger/lower the speed, the larger/lower the EV absolute values. As for W, most values are positive and, the larger the speed, the larger W. Also, W values are larger for currents towards west than for currents towards east. These results are consistent with what was intuitively thought and previously demonstrated with the simulation (fig. 4b) and the data analysis (fig. 6).

It can thus be concluded that, in case heterogeneity of the current field is induced by some semi-cubic structure set on the bottom, the EV distribution actually displays those characteristics that are expected from the algorithms. Analysing this parameter's distribution can thus allow specifying the thickness of the layer modified by such a structure. Obviously, these remarks also apply to a structure set at intermediate depths and to the current field above and below it. They can also be extended to structures that are parallelepipedic and to currents that are symmetric with respect to the ADCP location.

Conclusions

A 300-kHz ADCP set on a semi-cubic (2.5 × 2.5 × 1.0 m³) bottom observatory has allowed specifying the wake disturbances induced on the current field.
With a far-field general current of 20-40 cm/s, upward vertical velocities of 40-50 mm/s were computed in a 15-20-m bottom layer. Although these experimental values are noisier than usual, mainly due to the heterogeneity of the current field above the observatory, they are consistent with (although larger than) the simulated values, and hence considered significant. In such conditions, the wake of the observatory can probably extend ∼40 m above the seafloor at least (even if not directly above the observatory).

Differences in echo intensities along the beams have demonstrated that surfaces of isoconcentration in suspended particles were inclined over the observatory, consistently with what was expected from the general current direction and speed.

The relationships between the horizontal current direction and speed on one side, and the characteristics of both dynamic (vertical velocity) and non-dynamic (mean echo intensity as well as relative echo intensities along the beams) parameters on the other side, are also consistent with the oscillatory character of mesoscale (i.e. inertial and tidal) currents. Indeed, the spectral characteristics expected for all these parameters, in association with currents polarised at the inertial frequency as well as with rectilinear currents at diurnal and semidiurnal frequencies, are clearly retrieved.

Finally, we have shown that the error velocity parameter displays specific characteristics that specify, from a statistical point of view, the thickness of the layer disturbed by such a semi-cubic observatory. This parameter can thus be used to check the validity of ADCP data collected in such a way, hence providing a criterion for qualifying a given time series.
More generally, our analysis demonstrates how sensitive (especially as concerns echo intensity) and accurate an instrument can be that is usually operated in more favourable conditions (such as on a mooring line or directly on the bottom with a profiled mooring frame), conditions which do not allow its utmost performance to be appreciated.

Considering the importance of the induced vertical velocities compared to that of the undisturbed horizontal currents (up to O(10⁻¹) within a depth range corresponding to a few times the observatory height), such a massive observatory is obviously not adequate for a fine study of currents close to the bottom. Such an observatory can, however, efficiently host long-range ADCPs devoted to the study of the general circulation and of mesoscale processes within bottom layers several hundreds of metres thick.

Fig. 2a,b. a) Actual orientation of the ADCP with respect to the observatory and magnetic north. b) Beam configuration and cell numbers (by tens) as seen when looking towards north; the upper and lower blank zones are delimited by dashed lines; the arrows represent a homogeneous current flowing towards east while the dotted lines schematise the current lines close to the bottom, and thus the structure (in both shape and concentration of suspended particles) of the nepheloïd layer as disturbed by the observatory.

Fig. 4b. Vertical velocity in a plane through the centre of the observatory with a far-field speed of 300 mm/s along a diagonal. Note that the current is coming from the right.

Fig. 5. Vertical velocity W (top), absolute value of the error velocity EV (middle) and standard deviation of the echo intensity EI in arbitrary units (bottom) as a function of depth (i.e. cell #) and time (Julian days).
Fig. 7. Simulated Ws (+) for mean currents of 100 (black) and 300 (red) mm/s parallel to a diagonal are those obtained in a prism centred on the observatory and having a 4-m side at ∼20 m asf. To get a sufficient number of measured Ws to compare with, we considered altogether currents towards sectors of 45° roughly aligned with the diagonals (∼45°, 135°, 225° and 315°) and in the ranges of 80-120 mm/s (solid black) and 250-350 mm/s (solid red). Since measured values are much more scattered than simulated ones, we just reported the corresponding averaged profiles. The blue dashed line represents the averaged Ws whatever the mean current direction and module.

Fig. 8. Differences DIb between the normalised EIb and the average over the 3 other EIbs versus the current direction (north is upward) and cell number. The diagrams can be analysed as if the observatory were in between them.

Fig. 11a,b. a) Beam and current-component configuration in the case of a heterogeneous current field; b) horizontal components of currents towards directions A and A′ symmetrical with respect to east-west, and component correspondence.

Fig. 12a-c. a) Distribution of samples for cells #1-20 as a function of the current direction and speed up to 200 mm/s (north is upward). b) Distribution of the error velocity (scale on the left) for cells #1-20 versus current direction and speed. This diagram displays neither the number of samples (shown in a) in a specific area of the diagram, nor the variance in that specific area. The error values that are indicated are interpolated from the errors associated with each sample, so that errors for areas with few samples are less significant. c) The same for the vertical velocity (scale on the right).
Protective Effects and Mechanisms of Dendrobium nobile Lindl. Alkaloids on PC12 Cell Damage Induced by Aβ25-35

Background. Abnormal deposition of Aβ in the mitochondria can damage the mitochondrial respiratory chain and activate the mitochondria-mediated apoptosis pathway, resulting in AD-like symptoms. Objective. To observe the protective effects of Dendrobium nobile Lindl. alkaloids (DNLA) on Aβ25-35-induced oxidative stress and apoptosis in PC12 cells and to explore the possible protective mechanisms. Methods. PC12 cells were treated with DNLA at different concentrations (0.035 mg/L, 0.35 mg/L, and 3.5 mg/L) for 6 h, followed by administration of Aβ25-35 (10 μM) for 24 h. MTT assays and flow cytometry were used to observe the effect of DNLA on Aβ25-35-induced cytotoxicity and apoptosis of PC12 cells. To study the antiapoptotic effect of DNLA in this model, and its relationship with oxidative stress, through the mitochondrial apoptosis pathway, flow cytometry was used to detect the level of reactive oxygen species (ROS), and ELISA kits were used to detect superoxide dismutase (SOD) activity and glutathione (GSH) content in the cells. JC-1 fluorescent staining was used to observe the effect of DNLA on the mitochondrial membrane potential (MMP) with an inverted immunofluorescence microscope. Western blot was used to detect the levels of the mitochondrial apoptosis pathway-related proteins and their major downstream proteins Bax, Bcl-2, cleaved caspase-9, and cleaved caspase-3. Results. DNLA significantly improved the viability and reduced the apoptosis rate of PC12 cells damaged by Aβ25-35. It also reduced the elevated intracellular ROS content and restored the MMP, while SOD activity and GSH content increased significantly. The expression of the apoptosis-related proteins Bax, cleaved caspase-9, and cleaved caspase-3 decreased, while Bcl-2 protein expression increased significantly. Conclusion. These findings suggest that DNLA can significantly inhibit the apoptosis of PC12 cells damaged by Aβ25-35.
The mechanism may be a reduction of the level of cellular oxidative stress and thus an inhibition of the mitochondria-mediated apoptosis pathway.

Introduction

Alzheimer's disease (AD) is the most common type of dementia, accounting for 60%-70% of the overall incidence of dementia [1]. It is a severe neurodegenerative disease caused by the degeneration and loss of neurons in the brain [2]. The pathological features of AD are senile plaques (SP), caused by abnormal deposition of β-amyloid (Aβ) outside the cell, and neurofibrillary tangles (NFTs), caused by hyperphosphorylation of tau protein inside the cell [3,4]. However, the pathogenesis and mechanism of AD have not yet been fully clarified, and among the many hypotheses about the onset of AD, the Aβ cascade hypothesis is currently the most widely accepted. Aβ is a common pathway for many factors that induce AD and a critical factor in the pathogenesis and development of AD [5,6]. Studies have found that Aβ1-40 and Aβ1-42 are the main polypeptides that form SP; they are produced from the amyloid precursor protein (APP) under the combined cleavage action of β-secretase and γ-secretase [7-9]. However, Aβ1-42 is more hydrophobic than Aβ1-40 and more prone to aggregation, leading to the formation of senile plaques [10]. The neurotoxic effects of Aβ are mainly related to mitochondria-mediated apoptosis, disturbance of intracellular calcium homeostasis, and abnormal metabolism of reactive oxygen species (ROS) [10,11]. Aβ25-35 is a synthetic fragment containing the recognized toxic amino acid sequence of Aβ and is often used as a model drug in experiments. It can be incubated with traditional methods to cause fibre aggregation so as to simulate the AD model [12,13]. AD, as a neurodegenerative disease, is characterized by long-term neuronal death. Cell death mainly takes two forms, necrosis and apoptosis [14,15].
Mitochondria are the site of energy metabolism and oxidative stress in nerve cells, and mitochondria-mediated apoptosis is the route by which internal cell signals trigger apoptosis [16]. Therefore, mitochondria-mediated apoptosis is likely to be an important part of the formation of AD. Spuch [17] found that the production of ROS and the expression of Bax increased, while the expression level of Bcl-2 decreased, in the AD model induced by Aβ. Aβ can damage the mitochondrial respiratory chain and increase ROS production to activate the mitochondria-mediated apoptosis pathway, resulting in neuronal functional decline and neuronal death. It can also cause AD symptoms such as memory loss, mental disorders, and movement disorders through the corresponding cascade reactions [18,19]. Hence, Aβ plays a vital role in the early onset of AD, and it is of great significance to use Aβ as a target to find effective prevention and treatment measures for the early stage of AD [20].

Dendrobium nobile Lindl. (DNL) is a precious Chinese medicinal material that was first recorded in "Shen Nong Materia Medica" and later in "Compendium of Materia Medica"; it has the functions of nourishing the stomach and kidney, strengthening the brain and eyesight, and nourishing yin and the lungs. The main components of DNL are alkaloids, polysaccharides, amino acids, sesquiterpenes, phenols, volatile oils, etc., among which Dendrobium nobile Lindl. alkaloids (DNLA) are the main active ingredients. Previous animal studies found that DNLA can effectively improve the learning and memory function of APP/PS1 transgenic mice, and that it also protects neurons against loss in the Aβ25-35-induced model by promoting the expression of neurotrophic factors [21].
We also found that DNLA has a neuroprotective effect: it can improve Aβ25-35-induced neuronal damage and reduce amyloid protein aggregation by inducing autophagy [22,23]. Therefore, Aβ25-35-induced PC12 cells were used to establish an AD model in order to study the protective mechanism of DNLA.

2.4. Cell Culture of PC12. PC12 cells were cultured in DMEM high-glucose medium containing 10% fetal bovine serum and 1% penicillin-streptomycin mixture, planted in a 25 cm² cell culture flask, and placed in a 37°C incubator with 5% CO2. The cells were subcultured or seeded when they reached 80% confluence at the bottom of the flask. When passaging, the culture medium was discarded first, and the cells were washed once with autoclaved PBS, which was then discarded. After digestion with 0.25% trypsin for 2 min, digestion was stopped with serum-containing culture medium and the cells were blown into the medium. After centrifugation at 1000 rpm for 5 min, the supernatant was discarded, and serum-containing medium was added to resuspend the cells and inoculate them into a cell culture flask.

Cells were treated for 30 h at different concentrations. After five replicates were made for each treatment, cell viability was evaluated by MTT assays as previously described [24]. The cell viability is expressed as an OD percentage of cells with the indicated treatments relative to cells treated with the DMSO control.

2.7. Assessment of Cell Apoptosis by Flow Cytometry. The PC12 cell apoptosis assay was performed using the Annexin V-FITC/PI kit following the manufacturer's protocol. After Aβ25-35 had acted for 24 h, the medium of each group was transferred separately to centrifuge tubes. In the culture plate, 200 μL of trypsin (without EDTA) was added to each well to digest the cells. After digestion was complete, the original medium was added back to the corresponding culture well to terminate the digestion. The cells were transferred to a centrifuge tube and centrifuged at 1400 rpm for 4 minutes, and the medium was then aspirated.
The cells were washed twice by resuspending in PBS, centrifuging at 1400 rpm for 4 min, and discarding the supernatant. The cells were resuspended with 400 μL of 1× binding buffer, 5 μL of Annexin V-FITC was added and mixed well, and the mixture was incubated for 15 minutes at room temperature in the dark. 10 μL of PI staining solution was then added to each tube, mixed well, and incubated for 5 minutes on ice (2°C to 8°C) in the dark. The mixtures were incubated for 10 min, filtered through a nylon mesh into a tube, and analysed with the flow cytometer.

Assessment of ROS Level by Flow Cytometry. The total ROS was measured by flow cytometry according to the ROS kit manufacturer's protocol. The DCFH-DA in the kit was diluted 1:1000 with serum-free medium, giving a working-solution concentration of 10 μM. After administration according to groups and 24 h of Aβ25-35 treatment, the original medium was washed away and the cells were washed once with sterile PBS. 1 mL of working solution was added to each well of the culture plate and incubated in the cell incubator for 30 minutes, with shaking every 10 minutes. The cells were washed three times with serum-free medium, digested with 0.25% trypsin, and the digestion was stopped by adding medium. All the samples were centrifuged at 1400 rpm for 4 min, and the pellets were resuspended in PBS. The fluorescence intensities were measured using a flow cytometer.

Assessment of SOD Activity and GSH Level by ELISA Kit. The cells were digested with trypsin, and the original medium was added to terminate the digestion. The cells were centrifuged at 1400 rpm for 4 min, the supernatant was aspirated, and PBS was added to resuspend the cells; this wash was repeated three times. The samples were disrupted by ultrasound four times (6 s each time, with 1 min intervals). The samples were assayed using the SOD and GSH content kits following the manufacturer's protocols.

Assessment of MMP by JC-1 Fluorescent Staining. The original medium of the cells to be tested was aspirated and discarded, and sterile PBS was added for one wash.
The pre-prepared JC-1 working solution was added (1:1) to the serum-free medium and incubated for 30 min in the cell incubator. The medium in the plate was discarded, and the cells were washed twice with diluted JC-1 staining buffer (1×). Serum-free medium was added to the sample, which was then observed with an inverted fluorescence microscope.

Effect of DNLA on PC12 Cell Apoptosis Induced by Aβ25-35. In this experiment, the Annexin V/PI staining method was used to examine each group of cells by flow cytometry. The results showed (Figure 3(a), P < 0.05) that the apoptosis rate of the model group given Aβ25-35 was significantly higher than that of the control group, and that the level of apoptosis induced by Aβ25-35 could be reduced by giving DNLA in advance, in a concentration-dependent manner. The statistical results showed that the 0.35 mg/L and 3.5 mg/L DNLA dose groups could significantly inhibit Aβ25-35-induced apoptosis, with a statistically significant difference compared with the model group (Figure 3(b), P < 0.05). The MMP was significantly improved, which effectively inhibited the apoptosis rate of the cells. These results suggested that DNLA has a protective effect against the apoptosis and oxidative stress of Aβ25-35-induced cells. DNLA was given in advance to explore its possible mechanism of action. Since it is sometimes impossible to determine the impact of a drug at a single concentration, we used the MTT experiment to determine the cell survival rate and hence the effective range. In this study, we used three concentrations within the effective range to observe whether the drug effect is dose-dependent [22,24], and the MTT experiment was used to observe the cytotoxicity of Aβ25-35 to PC12 cells.
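The viability readout described above is a simple OD ratio; a minimal sketch (the blank-subtraction step is a common MTT convention and an assumption on our part, since the exact formula is not given in the text):

```python
def viability_percent(od_treated, od_blank, od_control):
    """Cell viability as an OD percentage of treated cells relative to
    the vehicle (DMSO) control, after subtracting a cell-free blank."""
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)
```

In this scheme the control wells read 100% by construction, and a treatment halving the corrected OD reads 50%.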
The results showed that the cell viability in the MTT experiment decreased significantly in the model group, suggesting that Aβ25-35 is cytotoxic to PC12 cells and that the experimental model was successfully established. However, the cell viability of the 0.35 mg/L and 3.5 mg/L DNLA dose groups was significantly higher than that of the model group in the MTT assays, which suggests that DNLA inhibited the cytotoxicity induced by Aβ25-35. The experimental results showed that the 0.035 mg/L DNLA dose group did not reduce the cytotoxicity, indicating that this dose did not reach the threshold dosage under the experimental conditions.

Abnormal deposition of Aβ can trigger neuronal apoptosis and participate in the formation of AD as the main cause of neuronal loss [25,26]. In this study, the Annexin V/PI double staining method was used to detect the apoptosis of PC12 cells by flow cytometry, in order to explore why Aβ25-35 induces cytotoxicity in PC12 cells. Annexin V binds with high affinity and specificity to the phosphatidylserine that appears on the cell membrane surface during apoptosis. At the same time, PI can stain cells that lose cell membrane integrity in the late stage of apoptosis [25]. The results showed that the apoptosis rate of PC12 cells damaged by Aβ25-35 increased significantly, suggesting that Aβ25-35 causes PC12 cell apoptosis, consistent with a previous study [26]. The apoptotic levels of the 0.35 mg/L and 3.5 mg/L DNLA dose groups were significantly lower than that of the model group, but the apoptosis level of the 0.035 mg/L DNLA dose group did not improve significantly, which indicates that an effective dose of DNLA can inhibit Aβ25-35-induced apoptosis under these experimental conditions.
Similarly, previous studies have also shown that DNLA can improve Aβ25-35-induced axonal degeneration of hippocampal neurons in rats and ameliorate ER dilation and swelling in hippocampal neurons [22,27,28]. Oxidative stress is the mediator of the apoptosis caused by Aβ [29,30]. Oxidative stress refers to the imbalance between oxidation and antioxidation in the body, which causes excessive ROS production and results in cell and tissue damage [31,32]. As the main indicator of the oxidative stress level, ROS can damage nucleic acids and proteins by attacking the fatty acids on cell membranes to produce peroxides, triggering inflammation and apoptosis and accelerating aging. It is one of the leading causes of chronic diseases such as heart disease and AD [33,34]. Studies have shown that the degree of oxidation of DNA and protein in the brain tissue of AD patients is significantly increased, indicating that oxidative stress may accelerate the formation of AD by accelerating cognitive dysfunction and pathological brain damage in AD patients [35,36]. Relevant animal and cell experiments have also shown that abnormally deposited Aβ can interfere with reduced coenzyme I (NADH) through Aβ-binding alcohol dehydrogenase (ABAD) in the mitochondria, affecting the respiratory chain and increasing ROS production, thereby aggravating Aβ deposition and mitochondrial dysfunction and accelerating the formation of AD [37-39]. Therefore, the DCFH-DA method was used to detect the intracellular ROS level [40]. DCFH-DA can enter the cell through the cell membrane and be hydrolyzed into DCFH by esterases; DCFH further reacts with intracellular ROS and is oxidized into fluorescent DCF [41]. The intracellular ROS level can be judged by detecting the intensity of the fluorescence.
Our experimental results found that the intracellular ROS level of the model group was significantly higher than that of the control group, which is consistent with reports in the literature [42]. However, the intracellular ROS levels in the 0.35 mg/L and 3.5 mg/L DNLA dose groups were significantly lower than in the model group. Furthermore, this experiment measured SOD activity and GSH content as the main indicators of the cellular oxidative stress level. The results showed that the intracellular SOD activity and GSH content of the Aβ 25-35-induced model group were significantly lower than those of the control group, but an effective dose of DNLA could significantly increase the SOD activity and GSH content after the intervention. The above results suggested that an effective dose of DNLA under the experimental conditions can attenuate the increase in intracellular ROS levels induced by Aβ 25-35, and it can also inhibit the oxidative stress level of the cells by upregulating SOD activity and GSH content. Mitochondria are not only the site of oxidative stress but also the organelle for energy production in the cell. Their function is of great significance for maintaining the stability of neuronal function and the integrity of synapses [43]. Moreover, the inner mitochondrial membrane is the processing site of mitochondrial enzymes and indirectly reflects mitochondrial function [44, 45]. The results of our previous in vitro experiments showed that DNLA could inhibit the reduction of MMP caused by oxidative stress and has a positive protective effect on mitochondrial function [46]. Therefore, JC-1 fluorescent staining was used to observe the protective effect of DNLA on the mitochondrial membrane of PC12 cells damaged by Aβ 25-35. 
JC-1 can quickly enter the mitochondria when the MMP is high and exist stably in the mitochondrial matrix in the form of a red fluorescent polymer. JC-1 escapes the mitochondria and exists as a green fluorescent monomer when the MMP decreases. JC-1 fluorescent staining showed that the red fluorescence of the model group was more diffusely distributed than that of the control group, and the green fluorescence was strongly expressed, which is consistent with a previous study [22]. However, early intervention with an effective dose of DNLA reduced the diffuse red fluorescence and the green fluorescence in a concentration-dependent manner. The results showed that Aβ 25-35 could induce a decrease of intracellular MMP and destroy the integrity of the mitochondrial membrane, which is consistent with the results of previous in vitro studies [21, 47]. Intervention with DNLA significantly inhibited the decrease of MMP and had a positive protective effect on mitochondrial function. This suggests that DNLA has a protective effect on MMP, which may result from improving the level of oxidative stress by inhibiting ROS production. The mitochondrial-mediated apoptosis pathway, which involves a reduction of MMP [48, 49], is an apoptotic mode triggered by Bcl-2 family proteins [50]. Bax mainly controls the integrity of the outer mitochondrial membrane and forms a homodimer with itself to control the permeability of the mitochondrial membrane when overexpressed [51, 52]. It releases cytochrome C into the cytoplasm to activate the core components of the cysteine-aspartic proteases, caspase-9 and caspase-3, to complete the mitochondrial-mediated apoptosis pathway, leading to cell apoptosis [53, 54]. As an antiapoptotic protein, Bcl-2 can form a heterodimer with Bax to prevent the release of cytochrome C and thereby inhibit apoptosis [55, 56]. 
Therefore, the composition ratio of Bcl-2 family members is a key factor in regulating apoptosis; in particular, the balance of Bax/Bcl-2 in cells plays an important role in the development of neuronal apoptosis and AD [57]. This experiment further explored the effect of DNLA on the expression of mitochondrial apoptosis pathway-related proteins in the PC12 cell model by western blotting. We found that the Bax/Bcl-2 protein ratio and the protein expression levels of cleaved-caspase-9 and cleaved-caspase-3 induced by Aβ 25-35 in the model group were significantly upregulated compared to the control group. However, the Bax/Bcl-2 protein ratio was reduced after the administration of DNLA, and the protein expression of downstream cleaved-caspase-9 and cleaved-caspase-3 was downregulated. The results indicated that early intervention with effective doses of DNLA might reduce the level of intracellular oxidative stress and inhibit the mitochondrial-mediated apoptosis pathway, thereby decreasing the apoptosis induced by Aβ 25-35 and protecting the cells. In conclusion, we found that administering an effective dose of DNLA in advance can significantly improve the apoptosis induced by Aβ 25-35. The mechanism may be related to reducing ROS production and the level of oxidative stress, which in turn inhibits the mitochondrial apoptosis pathway. Combined with the results of previous research, this experiment provides a basic pharmacological basis for DNLA to delay the development of AD. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors have no conflict of interest to report.
4,415.6
2021-08-16T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
SWIPT through eigen-decomposition of MIMO channels In this paper, we theoretically investigate a new technique for simultaneous wireless information and power transfer in a point-to-point multiple-input multiple-output (MIMO) system with radio-frequency energy-harvesting capabilities. The proposed technique exploits the spatial decomposition of the MIMO channel and uses the eigenchannels either to convey information or to transfer energy. An optimization problem that minimizes the total transmitted power subject to maximum power per eigenchannel, information, and energy constraints is formulated as a mixed-integer nonlinear program and solved to optimality using mixed-integer second-order cone programming. INTRODUCTION The integration of the energy-harvesting concept into communication networks is a hot research topic with high socioeconomic impact. Conventional approaches focus on natural energy-harvesting sources such as solar power, wind, and mechanical vibrations, and refer to new protocols and transmission techniques that efficiently handle the harvested energy [1]. However, harvesting from natural sources is mainly an unpredictable and unstable process, which could be critical for applications with strict quality-of-service (QoS) constraints. Recently, there has been a lot of interest in using electromagnetic radiation as a potential renewable energy resource [2]. The key idea of this concept is that electromagnetic waves convey energy, which can be converted to DC voltage by using specific rectenna circuits [3, 4]. Since radio signals carry both information and energy at the same time, a unified study of simultaneous wireless information and power transfer (SWIPT) is an emergent topic. Although information-theoretic studies assume that the same signal can be used for both information and power transfer, this simultaneous transfer is not possible without losses due to practical limitations. 
In [5], the authors introduce three main practical techniques for SWIPT: a) "time switching" (TS), where the receiver switches in time between decoding information and harvesting energy; b) "power splitting" (PS), where the receiver splits the received radio-frequency (RF) signal into two parts for decoding information and harvesting energy, respectively; and c) "antenna switching" (AS), where the receiver is equipped with multiple antennas and splits the antennas into two groups: one group is used for conventional information decoding (ID), and the second group for RF energy harvesting (EH). In contrast to the conventional SWIPT approaches, i.e., TS, PS, and AS, we propose a new technique for SWIPT in the spatial domain for a basic point-to-point multiple-input multiple-output (MIMO) channel. In this work, the spatial domain does not refer to the antenna elements [6, 7] but rather to the spatial degrees of freedom of the channel. Based on the singular value decomposition (SVD) of the MIMO channel, the communication link is transformed into parallel channels that can convey either information or energy; this binary allocation is in accordance with current practical limitations. We study the minimization of the transmitted power when the receiver is characterized by maximum power per eigenchannel, information rate, and power transfer constraints, which is a mixed-integer nonlinear optimization problem, and propose several solution methods. It is worth noting that the main purpose of this work is to introduce a new technique for SWIPT in MIMO systems; this technique is studied from a theoretical standpoint, and practical implementation is beyond the scope of this paper. The proposed technique is extended to MIMO channels with channel estimation error in [8]. SYSTEM MODEL & PROBLEM FORMULATION We assume a simple point-to-point MIMO model consisting of one source with N_t transmit antennas and one destination with N_r receive antennas. 
The source is connected to a constant power supply, while the destination has RF energy-harvesting capabilities and can harvest energy from the received electromagnetic radiation. We consider a flat-fading, spatially uncorrelated Rayleigh MIMO channel, where H ∈ ℂ^(N_r × N_t) denotes the channel matrix. The channel remains constant during one transmission time-slot and changes independently from one slot to the next. The entries of H are assumed to be independent, zero-mean circularly symmetric complex Gaussian (ZMCSCG) random variables with unit variance. The received signal is y = Hx + n, where x ∈ ℂ^(N_t × 1) denotes the transmitted signal with E[xx^H] = Q and n ∈ ℂ^(N_r × 1) represents the noise vector, having ZMCSCG entries of unit variance. By using the SVD of the channel H, the capacity of the MIMO channel is C = Σ_{i=1}^{D} log(1 + λ_i p_i), where λ_i, i ∈ 𝒟 = {1, ..., D}, is the i-th eigenvalue of HH^H, and p_i is the power allocated to the i-th eigenchannel. It is also assumed that λ_1 ≥ λ_2 ≥ ... ≥ λ_D. The MIMO channel is thus decomposed into D parallel SISO channels ỹ_i = √(λ_i) x_i + ñ_i, where ñ_i denotes the AWGN for the i-th parallel channel and has the same distribution as n_i (due to the unitary transformation) at the receiver. We assume that the destination is characterized by both information rate and RF EH requirements; this means that for the duration of one transmission the destination requires an information rate R and energy E. SWIPT optimization problem The proposed scheme exploits the SVD structure of the MIMO channel and achieves SWIPT in the spatial domain. More specifically, the transformation of the MIMO channel into parallel SISO channels allows the simultaneous transfer of data traffic and RF energy by using each eigenchannel to convey either information or energy. An eigenchannel cannot be used to convey both information and energy; this limitation reflects practical constraints and is in line with the other approaches proposed in the literature, e.g., power splitting. 
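The channel decomposition described above is easy to verify numerically. The following sketch (my own illustration, not from the paper) draws a random ZMCSCG channel, computes its SVD, and checks that the squared singular values are the eigenvalues of HH^H that serve as eigenchannel gains; the use of log base 2 is an assumption, since the paper does not state the logarithm base.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_r = 4, 4  # transmit / receive antennas

# ZMCSCG entries with unit variance: real and imaginary parts ~ N(0, 1/2)
H = (rng.standard_normal((n_r, n_t))
     + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)

# SVD: H = U diag(s) V^H; the squared singular values are the
# eigenvalues of H H^H, i.e., the eigenchannel gains (sorted descending)
U, s, Vh = np.linalg.svd(H)
lam = s ** 2

def capacity(lam, p):
    """Sum rate of the parallel eigenchannels for power allocation p."""
    return float(np.sum(np.log2(1.0 + lam * p)))

uniform_rate = capacity(lam, np.ones_like(lam))  # unit power per eigenchannel
```

Transmitting along the columns of V and combining with U^H at the receiver turns the matrix channel into these D independent scalar channels, which is what makes the per-eigenchannel ID/EH assignment possible.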
During each transmission an appropriate optimization problem is solved, which determines the usage of each antenna, and a switching mechanism selects the appropriate circuits for ID or EH. In this paper, we focus on the joint eigenchannel assignment (into ID or EH) and power allocation problem under both information and energy constraints. In particular, we consider the optimization problem (4), where P_max indicates the maximum power that can be used in each eigenchannel, while the binary variable a_i indicates whether the i-th eigenchannel is used for ID (a_i = 1) or EH (a_i = 0). Note that the terms a_i log(1 + λ_i p_i) are equivalent to the more intuitive representation log(1 + λ_i a_i p_i) when a_i ∈ {0, 1}, i ∈ 𝒟. This mathematical program involves binary and continuous variables, as well as nonlinear functions; hence it belongs to the class of mixed-integer nonlinear optimization problems, which are very hard to solve in general. OPTIMAL POWER ALLOCATION WITH FIXED EIGENCHANNEL ASSIGNMENT In this section, we develop a waterfilling-like procedure to solve the optimal power allocation problem for a given eigenchannel assignment, which is essential for the development of a low-complexity algorithm for problem (4). Let ℐ and ℰ denote the sets of eigenchannels assigned for information and EH, respectively; then, the optimal power allocation problem can be defined as in (5). Notice that this problem is decomposable into the information subproblem, with objective min Σ_{i∈ℐ} p_i and constraints (5b), (5d), and the EH subproblem, with objective min Σ_{i∈ℰ} p_i and constraints (5c), (5d). The EH subproblem can be optimally solved by allocating as much power as needed to the "better" eigenchannels, i.e., the ones with higher λ_i, until constraint (5c) is satisfied, or the problem is deemed infeasible. In mathematical terms, the solution of the EH subproblem for k = 1, ..., |ℰ| is given in closed form, where the index e_k denotes the eigenchannel with the k-th largest eigenvalue in ℰ. 
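The greedy EH allocation just described can be sketched as follows. This is my own reconstruction: the harvested energy is modeled as Σ λ_i p_i with perfect conversion efficiency, since the exact harvesting expression was lost in extraction.

```python
def eh_allocation(lam_eh, e_req, p_max):
    """Greedy EH subproblem (a sketch): fill the strongest eigenchannels
    first, up to p_max each, until the harvested energy sum(lam_i * p_i)
    reaches e_req.  Returns (powers, feasible)."""
    p = [0.0] * len(lam_eh)
    remaining = e_req
    # visit eigenchannels in order of decreasing gain
    for i in sorted(range(len(lam_eh)), key=lambda j: -lam_eh[j]):
        if remaining <= 0.0:
            break
        p[i] = min(p_max, remaining / lam_eh[i])
        remaining -= lam_eh[i] * p[i]
    return p, remaining <= 1e-12

# example: gains [4, 1], energy target 5, per-channel cap 1 --
# the stronger channel saturates first and the remainder spills over
powers, feasible = eh_allocation([4.0, 1.0], 5.0, 1.0)
```

The greedy order is optimal here because a unit of power on a stronger eigenchannel always harvests more energy, so no exchange of power between channels can reduce the total.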
The solution of the information subproblem is summarized in the following theorem, with the optimal powers given by (7), where the Lagrange multiplier is derived from (8). Sets ℐ_1 and ℐ_2 are defined in (9) and (10). Proof: The proof is similar to the proof of the waterfilling theorem with maximum power constraints and is omitted. JOINT EIGENCHANNEL ASSIGNMENT AND POWER ALLOCATION Problem (4) is nonlinear and combinatorial in nature, and hence very hard to solve. In this section a mixed-integer second-order cone programming (MISOCP) formulation is derived that allows its optimal solution with standard optimization solvers. In addition, we provide a polynomial algorithm that optimally solves the problem for the special case of P_max = ∞. Note that formulation (11) is an MISOCP and can be solved with standard optimization solvers. Special Case: P_max = ∞ When P_max = ∞, two important remarks can be made regarding the eigenchannel assignment. Remark 1. Exactly one EH eigenchannel has nonzero power. Remark 1 is true because there is no imposed upper bound on the transmitted power in each eigenchannel, so only the EH eigenchannel with the largest eigenvalue in ℰ will be allocated nonzero power. Remark 2. Eigenchannels with p_i = 0 are those with the overall smallest eigenvalues. This is true because "better" eigenchannels are beneficial for the satisfaction of both the information and EH constraints. These remarks imply that the optimal assignment is obtained by examining D different cases, each considering a potentially optimal assignment for the information eigenchannels and the EH eigenchannel, respectively, as outlined in Algorithm 2. The solution of each assignment can be easily obtained with computational complexity O(D) by setting P_max = ∞ in (7) for the information and EH subproblems. Note that for the derivation of the Lagrange multiplier in (8), it holds that ℐ_2 = ∅, so that its denominator is equal to 1. It should be emphasized that Algorithm 2 solves a nonlinear combinatorial optimization problem involving D binary variables in polynomial time. 
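A numerical sketch of the capped water-filling solution of the information subproblem, under my reconstruction of the theorem: each power has the form p_i = min(p_max, [μ − 1/λ_i]^+), and the water level μ is found here by bisection on the rate constraint rather than from the closed-form multiplier equation; log base 2 is again an assumption.

```python
import math

def info_waterfill(lam, r_req, p_max, iters=100):
    """Capped water-filling sketch for the ID subproblem: powers are
    p_i = clip(mu - 1/lam_i, 0, p_max), with the water level mu chosen
    by bisection so sum(log2(1 + lam_i * p_i)) meets r_req.
    Returns the power list, or None if infeasible under the caps."""
    def powers(mu):
        return [min(max(mu - 1.0 / l, 0.0), p_max) for l in lam]
    def rate(mu):
        return sum(math.log2(1.0 + l * p) for l, p in zip(lam, powers(mu)))
    hi = p_max + max(1.0 / l for l in lam)  # every channel is capped here
    if rate(hi) < r_req - 1e-9:
        return None                          # caps make the target rate unreachable
    lo = 0.0
    for _ in range(iters):                   # rate(mu) is nondecreasing in mu
        mid = 0.5 * (lo + hi)
        if rate(mid) < r_req:
            lo = mid
        else:
            hi = mid
    return powers(hi)
```

With a single eigenchannel of unit gain and a one-bit rate target, the routine recovers the obvious answer p = 1; with several channels, weaker ones stay dry until the water level reaches their inverse gain.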
In particular, a total of D assignments are examined, each of computational complexity O(D); hence, the total complexity of Algorithm 2 is O(D²). LOW-COMPLEXITY HEURISTIC In general, the MISOCP solution approach for problem (4) has exponential worst-case complexity due to the nonlinear and combinatorial nature of the problem, and hence is not suitable for real-time execution. In this section we propose the Group Eigenchannel Assignment (GEA) heuristic, which provides suboptimal results but is suitable for real-time execution. GEA is based on the observation that one constraint usually dominates over the other, acquiring the "best" eigenchannels, i.e., those with the largest eigenvalues. If the information constraint is dominant, the first k eigenchannels will be assigned for ID and the rest for EH, where k is a value to be determined, and vice versa. Hence, a total of 2D different assignment combinations are examined, resulting in computational complexity O(D²). The GEA heuristic is outlined in Algorithm 3. First, Algorithm 2 is examined as a way to optimally solve cases of low maximum transmitted power, i.e., cases in which the resulting optimal powers do not exceed P_max. If that is not the case, GEA implements the main idea of assigning ID and EH eigenchannels in groups. NUMERICAL RESULTS Computer simulations are carried out in order to evaluate the performance of the MISOCP and GEA algorithms. Since the scope of the paper is to highlight the theoretical idea of SWIPT in a point-to-point MIMO setup, only small-scale fading is considered for the wireless medium. We consider an 8 × 8 MIMO system with = 2 and perfect harvesting efficiency. For each experimental configuration, the results are averaged over 1000 randomly generated problem instances, in which the entries of the channel matrix are independent and identically distributed ZMCSCG random variables with unit variance. Mathematical modeling and solution of the MISOCP formulation is done using Gurobi [11]. Fig. 
1 demonstrates the effect of the ID and EH thresholds on the total transmitted power when the MISOCP solution is invoked. The figure indicates that increasing either the ID or the EH threshold increases the power as well. For the ID threshold, there is a significant nonlinear power increase due to the logarithmic relationship between power and information rate. Interestingly, the effect of the EH threshold is small, which indicates that for the specific configuration the ID threshold dominates the total power increase. Fig. 2 illustrates the relative percentage optimality deviation (RPOD) of the GEA solution compared to the optimal MISOCP solution. GEA achieves average performance within 10% of the optimal in all scenarios examined except for the case with ID threshold 18 and EH threshold 10. An interesting observation is that there is no clear conclusion regarding the effect of the ID and EH thresholds on the RPOD. For example, while there is a tendency for the RPOD to grow as a function of the EH threshold, for an EH threshold of 1 the RPOD is minimal (less than 1%) over a range of ID threshold values in (6, 14). Note that the absence of some bars indicates 0% deviation of GEA from optimality, i.e., optimal performance. CONCLUSIONS In this paper, we have theoretically investigated SWIPT in the spatial domain for a MIMO channel with RF EH capabilities. By using the SVD of the wireless channel under perfect channel knowledge, the proposed technique uses the eigenchannels to convey either information or energy, with the goal of minimizing the overall transmitted power subject to information and energy constraints. Although the examined problem is nonlinear and combinatorial in nature, an MISOCP formulation has been developed that provides optimal solutions. A polynomial-complexity optimal algorithm has been developed for the special case of having no restrictions on the maximum power allowed in each eigenchannel. We have also shown that for a known eigenchannel assignment the power allocation problem can be addressed in a waterfilling fashion. 
Finally, a low-complexity algorithm has been presented that produces near-optimal solutions for a wide range of parameter configurations.
2,989
2015-12-28T00:00:00.000
[ "Computer Science" ]
TOI-2084 b and TOI-4184 b: Two new sub-Neptunes around M dwarf stars We present the discovery and validation of two TESS exoplanets orbiting nearby M dwarfs: TOI-2084 b and TOI-4184 b. We characterized the host stars by combining spectra from Shane/Kast and Magellan/FIRE, spectral energy distribution analysis, and stellar evolutionary models. In addition, we used Gemini-South/Zorro and Gemini-North/'Alopeke high-resolution imaging, archival science images, and statistical validation packages to support the planetary interpretation. We performed a global analysis of multi-colour photometric data from TESS and ground-based facilities in order to derive the stellar and planetary physical parameters for each system. We find that TOI-2084 b and TOI-4184 b are sub-Neptune-sized planets with radii of Rp = 2.47 ± 0.13 R⊕ and Rp = 2.43 ± 0.21 R⊕, respectively. TOI-2084 b completes an orbit around its host star every 6.08 days, has an equilibrium temperature of Teq = 527 ± 8 K and an irradiation of Sp = 12.8 ± 0.8 S⊕. Its host star is a dwarf of spectral type M2.0 ± 0.5 at a distance of 114 pc with an effective temperature of Teff = 3550 ± 50 K, and has a wide, co-moving M8 companion at a projected separation of 1400 au. TOI-4184 b orbits an M5.0 ± 0.5 type dwarf star (Kmag = 11.87) every 4.9 days, and has an equilibrium temperature of Teq = 412 ± 8 K and an irradiation of Sp = 4.8 ± 0.4 S⊕. TOI-4184 is a metal-poor star ([Fe/H] = −0.27 ± 0.09 dex) at a distance of 69 pc with an effective temperature of Teff = 3225 ± 75 K. Both planets are located at the edge of the sub-Jovian desert in the radius-period plane. The combination of the small size and the large infrared brightness of their host stars makes these new planets promising targets for future atmospheric exploration with JWST. Introduction M dwarfs are the most common stars in our galaxy (Henry et al. 1994; Kirkpatrick et al. 
1999), and small planets occur around M dwarfs more frequently than around Sun-like stars (Nutzman & Charbonneau 2008; Kaltenegger & Traub 2009; Winters et al. 2014). M dwarfs are, therefore, attractive and exciting targets for searching for small and temperate exoplanets using the transit technique, thanks to their small sizes, low masses, and low luminosities. The transit signal is much deeper than that caused by similar planets orbiting Sun-like stars, which makes such planets easier to detect and characterize. Moreover, such planetary systems are suitable targets for atmospheric characterization through transmission spectroscopy, including with JWST (Kempton et al. 2018). In addition, the radial-velocity semi-amplitudes of the stellar hosts are higher, thanks to the low stellar masses, which makes them suitable targets for planetary mass measurements. M dwarf systems will allow a better understanding of the so-called radius valley between the super-Earth- and sub-Neptune-sized planets (see, e.g., Owen & Wu 2013; Fulton & Petigura 2018; Van Eylen et al. 2018). Moreover, the discovery of additional sub-Neptune desert planets (Mazeh et al. 2016) allows us to further explore and understand the physical properties of such exoplanetary systems. The Transiting Exoplanet Survey Satellite (TESS) mission (Ricker et al. 
2015) was launched by NASA in 2018 to search for planets around bright nearby dwarfs, including M-type stars. To date, TESS has discovered more than 330 exoplanets orbiting FGKM stars, including 66 planets orbiting M dwarfs (NASA Exoplanet Archive). In Section 2, we present the TESS photometry, high-precision photometric follow-up observations using ground-based facilities, and high-resolution imaging from Gemini. In Section 3, we present an analysis of the host star properties derived from their spectral energy distributions (SEDs) and spectra. In Section 4, we validate the planetary nature of the transit signals. In Section 5, we present our global analysis of the photometric data sets of the planetary systems, which allows us to determine the physical parameters of the stars and planets. In Section 6, we present planet searches and detection limits from the TESS photometry. Finally, we discuss our results and present our conclusions in Section 7. TESS photometry The host star TIC 394357918 (TOI-4184) was observed by the TESS mission (Ricker et al. 2015) in Sectors 1, 28, and 39 for 27 days each on TESS CCD 3 of Camera 3. The Sector 1 campaign started on UTC July 25 2018 and ended on UTC August 22 2018. The Sector 28 campaign started on UTC July 30 2020 and ended on UTC August 26 2020. The Sector 39 campaign started on UTC May 26 2021 and ended on UTC June 26 2021. The star TIC 441738827 (TOI-2084) was observed by TESS at 2-minute cadence during Sectors 16 (UTC September 11 to October 07 2019), 19-23 (UTC November 27 2019 to April 16 2020), 25-26 (UTC May 13 to July 04 2020), and 48-60 (UTC January 31 2021 to January 18 2023). TOI-4184 and TOI-2084 were selected by Stassun et al. (2018) to be observed using the 2-minute short-cadence mode. To perform the TESS data modeling, we retrieved the Pre-search Data Conditioning light curves (PDC-SAP; Stumpe et al. 2012; Smith et al. 2012; Stumpe et al. 
2014) constructed by the TESS Science Processing Operations Center (SPOC; Jenkins et al. 2016) at NASA Ames Research Center, from the Mikulski Archive for Space Telescopes. PDC-SAP light curves have been corrected for instrument systematics and crowding effects. Figure 1 shows the TESS field-of-view and the photometric apertures used for each target, with the locations of nearby Gaia DR3 sources around each target (Gaia Collaboration et al. 2021). TESS light curves for TOI-2084 and TOI-4184 are presented in Figure 2 and Figure 3. Ground-based photometry We used the TESS Transit Finder tool, which is a customized version of the Tapir software package (Jensen 2013), to schedule the photometric time-series follow-up observations. These are summarized in the following, and the resulting light curves are presented in Figure 4. SPECULOOS-South We used one of the SPECULOOS-South (Search for habitable Planets EClipsing ULtra-cOOl Stars; Jehin et al. 2018; Delrez et al. 2018; Sebastian et al. 2021) facilities to observe one full transit of TOI-4184.01 on UTC September 25 2021 in the Sloan z′ filter with an exposure time of 42 s. Each 1.0-m robotic telescope is equipped with a 2K×2K CCD camera with a pixel scale of 0.35″ and a field of view of 12′ × 12′. We performed aperture photometry in an uncontaminated target aperture of 3.9″ with a PSF full-width at half-maximum (FWHM) of 1.7″. Data reduction and photometric measurements were performed using the PROSE pipeline (Garcia et al. 2021). SPECULOOS-North We used SPECULOOS-North/Artemis to observe two transits of TOI-2084.01. Artemis is a 1.0-m Ritchey-Chretien telescope equipped with a thermoelectrically cooled 2K×2K Andor iKon-L BEX2-DD CCD camera with a pixel scale of 0.35″, resulting in a field-of-view of 12′ × 12′ (Burdanov et al. 
2022). It is a twin of the SPECULOOS-South (Section 2.2.1) and SAINT-EX (Section 2.2.3) telescopes. The first transit was observed on UTC August 13 2020, and the second was observed on UTC June 25 2021. Both transits were observed in the I + z filter with an exposure time of 33 s, and we performed aperture photometry in uncontaminated target apertures of 2.8″-3.2″ with a PSF FWHM of 1.4″-1.6″. Data reduction (bias, dark, and flat correction) and photometric measurements were performed using the PROSE pipeline (Garcia et al. 2021). SAINT-EX We used the SAINT-EX telescope to observe one full transit of TOI-2084.01 on UTC July 13 2021 in the r filter with an exposure time of 141 s. SAINT-EX (Search And characterIsatioN of Transiting EXoplanets; Demory et al. 2020) is a 1-m F/8 Ritchey-Chretien telescope located at the Sierra de San Pedro Mártir in Baja California, México. SAINT-EX is equipped with a thermoelectrically cooled 2K × 2K Andor iKon-L CCD camera. The detector gives a field-of-view of 12′ × 12′ with a pixel scale of 0.35″ per pixel. We performed aperture photometry in an uncontaminated target aperture of 3.2″ with a PSF FWHM of 1.4″. Data reduction and photometric measurements were performed using the PROSE pipeline (Garcia et al. 2021). TRAPPIST-North We used the 60-cm TRAPPIST-North telescope to observe one partial transit and one full transit of TOI-2084.01. TRAPPIST-North (TRAnsiting Planets and PlanetesImals Small Telescope) is a 60-cm robotic telescope installed at Oukaimeden Observatory in Morocco since 2016 (Barkaoui et al. 
2019, and references therein). It is equipped with a thermoelectrically cooled 2K×2K Andor iKon-L BEX2-DD CCD camera with a pixel scale of 0.6″ and a field-of-view of 20′ × 20′. The first transit was observed on UTC January 30 2021 in the I + z filter with an exposure time of 60 s. We took 154 science images and performed aperture photometry in an uncontaminated aperture of 7.6″ with a PSF FWHM of 3.1″. The second transit was observed on UTC June 25 2021 in the I + z filter with an exposure time of 65 s. We took 216 science images and performed aperture photometry in an uncontaminated aperture of 5.6″ with a PSF FWHM of 3.7″. During that second observation of TOI-2084, the telescope underwent a meridian flip at BJD 2459391.4829. Data reduction and photometric measurements were performed using the PROSE pipeline (Garcia et al. 2021). TRAPPIST-South Two full transits of TOI-4184.01 were observed with the TRAPPIST-South telescope. TRAPPIST-South is a 60-cm Ritchey-Chretien telescope located at the ESO La Silla Observatory in Chile, and is the twin of TRAPPIST-North (Section 2.2.4). It is equipped with a thermoelectrically cooled 2K×2K FLI ProLine CCD camera with a field of view of 22′ × 22′ and a pixel scale of 0.65″/pixel (Jehin et al. 2011; Gillon et al. 2011). The first transit was observed on UTC August 2 2021, and the second transit was observed on UTC September 25 2021. Both transits were observed in the I + z filter with an exposure time of 150 s, and we performed aperture photometry in uncontaminated target apertures of 3.5″-6.2″ with a PSF FWHM of 2.4″-2.7″. During the second transit of TOI-4184.01, the telescope underwent a meridian flip at BJD = 2459478.8226. Data reduction and photometric measurements were performed using the PROSE pipeline (Garcia et al. 2021). LCOGT-2.0m MuSCAT3 We used the Las Cumbres Observatory Global Telescope (LCOGT; Brown et al. 
2013) 2.0-m Faulkes Telescope North at Haleakala Observatory in Hawaii to observe two transits of TOI-2084.01 simultaneously in the Sloan g′, r′, i′ and Pan-STARRS z-short filters. The first (full) transit was observed on UTC May 19 2021, and the second (partial) transit was observed on UTC May 26 2021. We used uncontaminated 4″ target apertures to extract the stellar fluxes. The telescope is equipped with the MuSCAT3 multi-band imager (Narita et al. 2020). Danish 1.54-m telescope Additional transits were observed with the Danish 1.54-m telescope at the ESO La Silla observatory in Chile. The instrument used was the DFOSC imager, operated with a Bessell I filter for two transits and a Bessell R filter for the third. In this setup, the CCD covers a field of view of 13.7′ × 13.7′ with a pixel scale of 0.39″ pixel⁻¹. The images were unbinned and windowed for the first transit, resulting in a dead time between consecutive images of 10 s; however, in an effort to improve the SNR of the target PSF, the remaining transits used 2 × 2 binning and no windowing (to obtain a greater selection of comparison stars), resulting in a dead time between consecutive images of 13 s. The exposure times were 60 s for all images and transits. Because the target is quite faint (V ≈ 17 mag, I ≈ 14 mag) and has close nearby sources (both point-like and extended), the telescope was marginally defocused and autoguiding was maintained through all observations. The amount of defocus applied caused the resulting PSFs to have a diameter of ≈ 10 pixels for all nights. We reduced the Danish 1.54-m telescope data using the DEFOT pipeline (Southworth et al. 2009
, 2014). Aperture photometry was performed with an IDL implementation of DAOPHOT (Stetson 1987), with the addition of image-motion tracking by cross-correlation with a reference image, to produce a differential-magnitude light curve. The light curve was produced after simultaneously fitting a first-order polynomial to the out-of-transit data. The aperture sizes and the number of suitable comparison stars were adjusted to obtain the lowest baseline scatter; this method affects the scatter in the transit data but does not significantly impact the light curve shape. The timestamps from the FITS files were converted to the BJD TDB time-scale using routines from Eastman et al. (2010). ExTrA The ExTrA facility (Bonfils et al. 2015), located at La Silla observatory, consists of a near-infrared (0.85-1.55 µm; NIR) multi-object spectrograph fed by three 60-cm telescopes. Five fiber positioners at the focal plane of each telescope pick up light from the target and four comparison stars. We observed one full transit of TOI-4184 b on UTC September 15 2021 with two telescopes using the 8″ aperture fibers. We used the spectrograph's low-resolution mode (R ≈ 20) and 60-second exposures. We also observed 2MASS J02542961-7941578, 2MASS J03025970-7941390, 2MASS J03025068-7918174, and 2MASS J02581731-7913567, which have J magnitudes (Skrutskie et al. 2006) and effective temperatures (Gaia Collaboration et al. 2018) similar to TOI-4184, for use as comparison stars. The resulting ExTrA data were analyzed using custom data reduction software. 
Shane/Kast Optical Spectroscopy

We obtained an optical spectrum of TOI-2084 and its co-moving companion (see below) on UTC November 13 2021 using the Kast double spectrograph (Miller & Stone 1994) mounted on the 3-m Shane Telescope at Lick Observatory in clear conditions. Six exposures of 600 s each were obtained of both sources simultaneously using the 600/7500 grism and a 1.5″-wide slit, providing 6000-9000 Å wavelength coverage at an average resolution of λ/∆λ = 1900. We also observed the flux calibrator Feige 110 later that night (Hamuy et al. 1992, 1994). Data were reduced using the kastredux package.

Magellan/FIRE Spectroscopy

We obtained a spectrum of TOI-4184 with the FIRE spectrograph (Simcoe et al. 2008) on the 6.5-m Magellan Baade Telescope on UTC September 23 2021. We used the high-resolution echellette mode with the 0.60″ slit, providing a 0.82-2.51 µm spectrum with a resolving power of R ∼ 6000. We collected a single ABBA nod sequence (4 exposures) with integration times of 95.1 s per exposure, giving a total exposure time of 380.4 s. After the science exposures, we collected a pair of 15-s exposures of the A0 V star HD 45039 for flux and telluric calibrations, followed by a pair of 10-s arc-lamp exposures and a set of ten 1-s flat-field exposures. We reduced the data using the FIREHOSE pipeline. The final spectrum (Figure 8) has a median SNR of 77, with peaks in the J, H, and K bands of 120-140.

High-Resolution Imaging from Gemini

TOI-2084 was observed on UTC June 24 2021 using the 'Alopeke speckle instrument on the Gemini North 8-m telescope, and TOI-4184 was observed on UTC December 23 2021 using the Zorro speckle instrument on the Gemini South 8-m telescope (see Scott et al. 2021). 'Alopeke and Zorro provide simultaneous speckle imaging in two bands (562 nm and 832 nm), with output data products including a reconstructed image with robust contrast limits on companion detections (e.g., Howell et al.
2016). A total of 13/11 sets of 1000 × 0.06 s exposures were collected for TOI-2084/TOI-4184 and subjected to Fourier analysis in our standard reduction pipeline (see Howell et al. 2011). Figure 5 shows our final 5σ contrast curves and the 832 nm reconstructed speckle images. We find that TOI-2084 and TOI-4184 are both single stars with no companion brighter than about 4-6 magnitudes below that of the target star from the diffraction limit (20 mas) out to 1.2″. At the distances of TOI-2084/TOI-4184 (d = 114/69 pc), these angular limits correspond to spatial limits of 2.3 to 137 au (TOI-2084) and 1.4 to 83 au (TOI-4184).

SED analysis

To determine the basic stellar parameters, we performed an analysis of the broadband spectral energy distribution (SED) of TOI-2084 and TOI-4184 together with the Gaia EDR3 parallax (with no systematic offset applied; see, e.g., Stassun & Torres 2021), in order to determine an empirical measurement of the stellar radius, following the procedures described in Stassun & Torres (2016); Stassun et al. (2017); Stassun & Torres (2018). We pulled the JHK_S magnitudes from 2MASS, the W1-W3 magnitudes from WISE, the G_BP and G_RP magnitudes from Gaia, and the grizy magnitudes from Pan-STARRS. Together, the available photometry spans the full stellar SED over the wavelength range 0.4-10 µm (see Figure 6). We also estimated the stellar mass according to the empirical M_K-based relations of Mann et al. (2019). The deduced stellar parameters of TOI-2084 and TOI-4184 are presented in Table 2.

Spectroscopic analysis

In addition to the SED analysis, we compared the Shane/Kast optical spectrum of TOI-2084 to the SDSS M-dwarf templates of Bochanski et al. (2007) and found the best match to the M2 template (Figure 7). The spectral index classification relations of Lépine et al. (2003) confirm this classification. We see no evidence of Hα emission (equivalent width limit of <1.0 Å), indicating an age greater than ∼1.2 Gyr (West et al.
2008). We also measured the ζ index from TiO and CaH features (Lépine et al. 2007; Mann et al. 2013), finding ζ = 0.893 ± 0.005, consistent with a metallicity of [Fe/H] = −0.13 ± 0.20 based on the calibration of Mann et al. (2013).

For TOI-4184, we also analyzed its Magellan/FIRE spectrum using the SpeX Prism Library Analysis Toolkit (SPLAT; Burgasser & Splat Development Team 2017). By comparing the spectrum to the NIR spectral standards defined in Kirkpatrick et al. (2010), we find the closest match to the M5.0 standard, although the M6.0 standard provides only a marginally poorer match (Figure 8). Thus, we adopt a spectral type of M5.5 ± 0.5 for TOI-4184. We also estimated the metallicity of TOI-4184 from the Magellan/FIRE spectrum using the equivalent widths of the K-band Na I and Ca I doublets and the H2O-K2 index (Rojas-Ayala et al. 2012), and used the empirical relation between these observables and stellar metallicity (Mann et al. 2014) to estimate [Fe/H]. Following Delrez et al. (2022), we calculated the uncertainty of our estimate using a Monte Carlo approach. Adding in quadrature the systematic uncertainty of the relation (0.07), we obtained [Fe/H] = −0.27 ± 0.09 for TOI-4184.

The spectrum of TOI-2084B is shown in Figure 7, and is an excellent match to the M8 dwarf template from Bochanski et al. (2007). This classification is confirmed by the spectral index classification relations of Lépine et al. (2003). We see no evidence of Hα emission from this companion, although the noise is considerable in the 6563 Å region. Similarly, we are unable to reliably measure a ζ index from these data, although the close match to the dwarf template suggests a near-solar metallicity similar to that of TOI-2084. There are several known planetary systems orbiting stars in low-mass multiples, including the M4+M4.5 binary TOI-1452 and TOI-1760 (Cadieux et al. 2022) and the early-M triple system LTT 1445 (Winters et al.
2019). TOI-2084.01 has a period of 6.078 days and a depth of 2.760 ± 0.258 ppt at an S/N of 11.2, and TOI-2084.02 a period of 8.149 days and a depth of 3.313 ± 0.327 ppt at an S/N of 10.8. The report was reviewed by the TOI vetting team and the candidates were released on July 15 2020. A second DV report was issued on August 7 2020 by the SPOC pipeline, which included sectors up to 26 of 2-min-cadence data. The first candidate was found to have a period of 6.07830 ± 0.00010 days, a transit depth of 2.8 ± 0.2 ppt with an S/N of 12.7, and a planetary radius of 2.6 ± 0.7 R⊕. Similarly, the second candidate was found to have a period of 8.14903 ± 0.00018 days, a transit depth of 2.8 ± 0.2 ppt with an S/N of 11.8, and a planetary radius of 2.6 ± 0.6 R⊕. The odd/even phase-folded transits were compared and agreed to 1.45σ and 0.96σ for the .01 and .02 candidates, respectively. As for TOI-4184, one nearby star contaminates the aperture, but the event was constrained to be on target for the .01 candidate and likely on target for .02. In addition, the DV report includes a difference-image centroid test that locates the catalog position of the target star to within 2.
Archival imaging

We obtained archival images of TOI-2084 and TOI-4184 in order to discard the possibility of an unresolved background companion producing the transit signals. Whether an eclipsing binary, a planetary candidate orbiting a background star, or simply an unaccounted-for background star, any of these scenarios could mimic the observed signals. Given the pixel scale of 1-1.7″, it is impossible to rule out a background star from this diagnostic alone, though it is unlikely since we ruled out any close companion star at a minimum angular separation of 0.1″ (see Section 2.4). We also compared images centered on TOI-4184 from POSS II/DSS in the blue, red, and infrared, taken in 1977, 1989, and 1990, respectively. Because of its high proper motion of 183.87 mas yr⁻¹, TOI-4184 has moved by > 8″ in the 44 years spanning the observations. This allows us to confirm the absence of a background contaminant along the line of sight down to a limiting magnitude of ≥ 20.

Follow-up photometric validation

Photometric follow-up using ground-based facilities has two objectives: to identify the source of the transit event and to assess whether the transit depth is wavelength dependent. The presence of contaminating stars in the TESS aperture was noted for both TOI-4184.01 and TOI-2084.01 in the TESS data validation reports.
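The proper-motion argument above is simple arithmetic; the one-line check below (with the values quoted in the text) reproduces the quoted displacement. The function name is an illustrative assumption.

```python
def pm_displacement_arcsec(pm_mas_per_yr, baseline_yr):
    """Total sky displacement, in arcseconds, accumulated by a star with the
    given total proper motion over the given time baseline."""
    return pm_mas_per_yr * baseline_yr / 1000.0

# TOI-4184: 183.87 mas/yr over the 44 yr between the archival and new images
shift = pm_displacement_arcsec(183.87, 44.0)   # ~8.1 arcsec, i.e. > 8"
```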
The closest neighboring stars are, respectively, TIC 650071720 at 11.5″ with ∆Tmag = 4.45, and TIC 441738830 at 12.4″ with ∆Tmag = 6.15. We reached aperture sizes of a few arcseconds using ground-based facilities, which allowed us to confirm that the transit events are on the expected stars for TOI-4184.01 and TOI-2084.01. In the case of TOI-2084.02, twice at the expected transit times we detected a deep eclipse on the nearby star TIC 1271317080 (∆T = 4.98) at 12.9″ from the target, labeled T3 in Figure 12. Thus, we identify TOI-2084.02 as a false positive and do not consider it further. We collected photometric data for TOI-2084.01 in various bands (I+z, zs, i′, r′, g′), spanning the 400-1100 nm wavelength range. We measured a matching transit depth within 1σ in all bands. Similarly, we obtained data for TOI-4184.01 in the I+z, zs, Ic, J, and g′ bands, covering the range 400-1210 nm, where the transit depths also agree within 1σ. The transit depths measured at different wavelengths for TOI-2084 b and TOI-4184 b are presented in Figure 16, Table 4 and Table 5.

Statistical validation

To calculate the false-positive probability (FPP) for TOI-2084.01 and TOI-4184.01, we used the Tool for Rating Interesting Candidate Exoplanets and Reliability Analysis of Transits Originating from Proximate Stars (TRICERATOPS; Giacalone et al.
2021). This Bayesian tool incorporates prior knowledge of the target star, planet occurrence rates, and stellar multiplicity to calculate the probability that a given transit signal is due to a transiting planet or another astrophysical source. The criteria for statistical validation of a planetary candidate are FPP < 0.01, where the FPP is the sum of the probabilities of all false-positive scenarios, and NFPP < 0.001. We ran TRICERATOPS on the TESS light curves, including the contrast curves obtained with Gemini/'Alopeke and Gemini/Zorro, for both stars, TOI-2084 and TOI-4184. We found FPP = 0.0005 and FPP = 0.0001 for TOI-2084 b and TOI-4184 b, respectively. Because TRICERATOPS determines that no nearby stars are capable of being sources of astrophysical false positives, we find NFPP = 0 for both candidates (TOI-2084.01 and TOI-4184.01). Based on these results, we consider both planets to be validated. TOI-2084.02 was rejected as a nearby eclipsing binary (NEB) based on ground-based photometric follow-up (see Section 4.3).

Photometric data modelling

We performed a joint fit of all light curves observed by TESS and the ground-based telescopes described in Section 2, using the Metropolis-Hastings algorithm (Metropolis et al. 1953; Hastings 1970) implemented in the updated version of the MCMC (Markov chain Monte Carlo) code described in Gillon et al.
(2012). The transit light curves are modeled using the quadratic limb-darkening model of Mandel & Agol (2002), multiplied by a baseline model in order to correct for several external effects related to systematic variations (time, airmass, background, full-width at half-maximum, and position on the detector). The baseline model was selected by minimizing the Bayesian information criterion (BIC; Schwarz 1978). Table 3 shows, for each transit light curve, the selected baseline model based on the BIC and the correction factor CF = βw × βr used to rescale the photometric errors, where βw and βr are the white and red noise factors, respectively (see Gillon et al. 2012 for more details). The TRAPPIST-South and TRAPPIST-North telescopes are equipped with German equatorial mounts that have to rotate by 180° when the meridian is reached. This movement results in the stellar images falling at different positions on the detector before and after the flip. The corresponding normalization offset is included as a jump parameter in our global MCMC analysis. The transit light curve observed with TRAPPIST-South on UTC September 20 2021 contains a meridian flip at BJD = 2459478.8226 (see Table 1), which is accounted for in the global analysis.

The jump parameters sampled by the MCMC for each system were:
- T0: the transit timing;
- W: the transit duration (between contacts 1 and 4);
- Rp^2/R*^2: the transit depth, where Rp is the planet radius and R* is the stellar radius;
- P: the orbital period of the planet;
- b = a_p cos(i_p)/R*: the impact parameter in the case of a circular orbit, where i_p is the planetary orbital inclination and a_p is the semi-major axis of the orbit;
- √e cos(ω), where ω is the argument of periastron and e is the orbital eccentricity;
- the combinations q1 = (u1 + u2)^2 and q2 = 0.5 u1 (u1 + u2)^(-1) (Kipping 2013), where u1 and u2 are the quadratic limb-darkening coefficients, which are calculated from Claret et al.
(2012); and
- the stellar metallicity [Fe/H], the effective temperature Teff, the logarithm of the stellar density log(ρ*), and the logarithm of the stellar mass log(M*).

For each star, we applied a Gaussian prior distribution on the stellar parameters obtained from the SED and spectroscopic analyses (R*, M*, [Fe/H], log g and Teff). For each system, we performed two MCMC analyses, the first assuming a circular orbit and the second assuming an eccentric orbit. The results are compatible with a circular orbit based on the Bayes factor BC = exp(−∆BIC/2). The eccentric solutions give e ∼ 0.2 (+0.3/−0.2) for TOI-2084.01 and e ∼ 0.1 (+0.2/−0.1) for TOI-4184.01. For each transit light curve, a preliminary analysis consisting of one Markov chain of 10^5 steps was performed in order to calculate the correction factor CF. Then a global MCMC analysis of three Markov chains of 10^5 steps was performed to derive the stellar and planetary physical parameters. The convergence of each Markov chain was checked using the statistical test of Gelman & Rubin (1992). The derived parameters of TOI-2084 and TOI-4184 are presented in Tables 2, 4 and 5.

Planet searches using the TESS photometry

In this section, we search for additional planetary candidates that might have remained unnoticed by SPOC and the QLP due to their detection thresholds. To this end, we used our custom pipeline SHERLOCK, originally presented by Pozuelos et al. (2020) and Demory et al. (2020), and used in several studies (see, e.g., Wells et al. 2021; Van Grootel et al. 2021; Schanche et al. 2022). SHERLOCK allows the user to explore TESS data to recover known planets and alerted candidates, and to search for new periodic signals that may hint at the existence of additional transiting planets. In short, the pipeline has six modules to (1) download and prepare the light curves from MAST using lightkurve (Lightkurve Collaboration et al.
2018), (2) search for planetary candidates with tls (Hippke & Heller 2019), (3) perform a semi-automatic vetting of the interesting signals, (4) compute a statistical validation using TRICERATOPS (Giacalone et al. 2021), (5) model the signals to refine their ephemerides employing the allesfitter package (Günther & Daylan 2021), and (6) compute observational windows from ground-based observatories to trigger a follow-up campaign. We refer the reader to Delrez et al. (2022) and Pozuelos et al. (2023) for recent SHERLOCK applications and further details.

For TOI-4184, we searched for extra planets by analyzing the three available sectors (1, 28, and 39) together, exploring orbital periods from 0.3 to 30 d. For TOI-2084.01, we conducted two independent searches: (1) the nominal mission, that is, the 8 sectors from 16 to 26, and (2) the extended mission, that is, the 13 sectors from 48 to 60 (see Figure 3). In both searches, we explored orbital periods from 0.3 to 50 d. The motivation for this strategy is twofold. On the one hand, exploring all 21 sectors at once has a high computational cost, and while adding many sectors might allow signals with very long orbital periods (> 50 days) to emerge, the transit probabilities rapidly decrease for such scenarios. On the other hand, this strategy allows us to compare any finding in the nominal mission with the extended mission, providing an extra vetting step for the signals' credibility.

We successfully recovered the TOIs released by SPOC: TOI-4184.01 with an orbital period of 4.90 days and TOI-2084.01 with an orbital period of 6.08 days. In the subsequent runs performed by SHERLOCK, we did not find any other signal that hinted at the existence of additional transiting planets. In addition to TOI-2084.01, we also recovered a signal corresponding to TOI-2084.02, which was already classified as a false positive using the ground-based observations described in Section 4.3 and displayed in Figure 12.
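As a toy illustration of the search step (module 2), not the actual tls algorithm, a brute-force phase-fold search over trial periods might look like the sketch below; the period and phase grids, box width, and thresholds are illustrative assumptions.

```python
import numpy as np

def simple_period_search(time, flux, periods, dur):
    """Brute-force transit search: for each trial period, phase-fold and slide
    a box of width `dur` (days); score the in-transit flux deficit in units of
    its standard error, and return the best-scoring trial period."""
    best_p, best_snr = None, -np.inf
    for p in periods:
        phase = (time % p) / p
        for phi0 in np.arange(0.0, 1.0, dur / p):
            in_tr = np.abs(((phase - phi0 + 0.5) % 1.0) - 0.5) < 0.5 * dur / p
            if in_tr.sum() < 5:
                continue
            depth = flux[~in_tr].mean() - flux[in_tr].mean()
            snr = depth / (flux[~in_tr].std() / np.sqrt(in_tr.sum()))
            if snr > best_snr:
                best_p, best_snr = p, snr
    return best_p, best_snr

# toy sector: 27 d of data with 5 ppt box transits injected every 6.08 d
rng = np.random.default_rng(0)
t = np.arange(0.0, 27.0, 0.02)
flux = 1.0 + rng.normal(0.0, 1e-3, t.size)
p_true, t0, dur = 6.08, 1.0, 0.1
flux[np.abs(((t - t0 + 0.5 * p_true) % p_true) - 0.5 * p_true) < 0.5 * dur] -= 0.005
best_p, best_snr = simple_period_search(t, flux, np.arange(5.5, 6.5, 0.02), dur)
```

At the correct trial period the transits stack coherently and the deficit SNR peaks; at wrong periods the transits smear across phase and the score drops.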
Surprisingly, we did not recover the signal at the orbital period issued by TESS, 8.14 days, but at its first subharmonic, corresponding to an orbital period of 4.07 days. We then used the two modules implemented in SHERLOCK for the vetting and statistical validation of candidates on this signal. On the one hand, the vetting module found that even and odd transits yielded different transit depths: ∼2.3 and ∼1.1 ppt for even and odd transits, respectively. This indicated that our detection algorithm was confusing the secondary eclipse with the primary and yielding half of the real orbital period, which confirmed that the real orbital period is 8.14 days. On the other hand, the validation module found an FPP of ∼0.26 and an NFPP of ∼0.1. According to Giacalone et al. (2021), these values place this candidate in the false-positive region of the NFPP-FPP plane. Hence, these analyses agree with the eclipsing-binary nature of this signal.

Results and discussion

We presented the validation and discovery of TOI-2084 b and TOI-4184 b by the TESS mission (see phase-folded light curves in Figure 2 and individual transits in Figure 3), which were confirmed through follow-up photometric measurements collected by the SPECULOOS-South/North, SAINT-EX, TRAPPIST-South/North, MuSCAT3, LCOGT, Danish and ExTrA telescopes (see phase-folded light curves in Figure 4). The host stars are characterized by combining the optical spectra obtained by Shane/Kast and Magellan/FIRE, the SED, and stellar evolutionary models. We then performed a global analysis of the space-based TESS and ground-based photometric data to derive the stellar and planetary physical parameters of each system. Table 2 shows the astrometric, photometric, and spectroscopic stellar properties of TOI-2084 and TOI-4184. The stellar and planetary physical parameters derived from our global analysis are shown in Table 4 and Table 5.
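The even/odd depth comparison used in the vetting step above can be sketched as follows. This is an illustrative stand-in, not SHERLOCK's actual code; the toy light curve injects box-shaped eclipses alternating between 2.3 and 1.1 ppt at the 4.07-day alias, mimicking a primary/secondary sequence.

```python
import numpy as np

def even_odd_depths(time, flux, period, t0, dur):
    """Median in-transit depth (in ppt) of even- and odd-numbered transits
    when folding at a trial period. Very different even/odd depths at period
    P suggest the true period is 2P (alternating primary/secondary eclipses)."""
    epoch = np.round((time - t0) / period).astype(int)
    phase = time - t0 - epoch * period
    in_tr = np.abs(phase) < 0.5 * dur
    depths = []
    for parity in (0, 1):
        sel = in_tr & (epoch % 2 == parity)
        depths.append((1.0 - np.median(flux[sel])) * 1e3)  # ppt
    return depths[0], depths[1]

# toy light curve: alternating 2.3 / 1.1 ppt box eclipses every 4.07 d
t = np.arange(0.0, 50.0, 0.01)
flux = np.ones_like(t)
p_alias, dur = 4.07, 0.1
for n in range(int(50.0 / p_alias) + 1):
    flux[np.abs(t - n * p_alias) < 0.5 * dur] -= 0.0023 if n % 2 == 0 else 0.0011
even, odd = even_odd_depths(t, flux, p_alias, 0.0, dur)  # ~2.3 and ~1.1 ppt
```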
Figure 9 shows the periodogram for the system. Both planets are well detected in the TESS data. The metallicity of TOI-2084, [Fe/H] = −0.13 ± 0.20, was derived from the Shane/Kast spectrum. It has a wide (∼1400 au) M8 co-moving companion, with a likely mass of 0.1 M⊙. TOI-2084 b is a sub-Neptune-sized planet orbiting the primary star every 6.08 days; it has a radius of Rp = 2.47 ± 0.13 R⊕, an equilibrium temperature of Teq = 527 ± 8 K, and an incident flux of Sp = 12.8 ± 0.8 times that of Earth. We find that TOI-2084 b has a predicted mass of Mp = 6.74 (+5.31/−2.81) M⊕ using the Chen & Kipping (2017) relationship.

TOI-4184 is a Kmag = 11.86 M5.5 ± 0.5 metal-poor star with a metallicity of [Fe/H] = −0.27 ± 0.09 dex (from the Magellan/FIRE spectrum), an effective temperature of Teff = 3225 ± 75 K, a surface gravity of log g = 5.01 ± 0.04 dex, a mass of M* = 0.240 ± 0.012 M⊙ and a radius of R* = 0.242 ± 0.013 R⊙. TOI-4184 b is a sub-Neptune-sized planet that completes its orbit around its host star in 4.9 days; it has a radius of Rp = 2.43 ± 0.21 R⊕, an irradiation of Sp = 4.8 ± 0.4 times that of Earth, and an equilibrium temperature of Teq = 412 ± 8 K. We used the Chen & Kipping (2017) relationship to predict the plausible mass of TOI-4184 b, which is Mp = 6.60 (+5.20/−2.75) M⊕. Figure 10 shows the boundaries of the sub-Neptune desert region determined by Mazeh et al.
(2016).

Characterization prospects

Super-Earths and sub-Neptunes are amongst the most abundant types of exoplanets. Yet their formation, atmospheric composition, and interior structure are not well understood, as a variety of compositions can match the average density of these planets. TOI-2084 b and TOI-4184 b are part of this mysterious population. The small size and proximity of the host stars, as well as their brightness in the infrared, make them amenable to further observations with most JWST modes for studying atmospheric compositions.

Given the measured properties, we made a first exploratory guess of the planets' compositions by comparing their masses and radii with the models of Zeng et al. (2016). As a first approximation of the suitability of both planets for atmospheric investigations, we calculated the transmission spectroscopy metric (TSM) of Kempton et al. (2018), which was developed based on simulations with NIRISS. We estimate the TSMs of TOI-2084 b and TOI-4184 b to be 26.7 (+14.7/−10.3) and 57.7 (+25.7/−20.1), respectively. With 90 being the threshold for this category of planets, it is worth noting that this metric solely considers the predicted strength of an atmospheric detection when ranking planets. Having TSM values below the threshold does not necessarily mean that detailed atmospheric studies are impossible or challenging with current facilities. In other words, these metrics do not serve as the sole criterion for determining the best targets for atmospheric studies.

To further evaluate the feasibility of characterizing the atmospheres of both planets, we computed synthetic transit spectra from optical to infrared wavelengths (0.5-12 µm) at low spectral resolution for different atmospheric scenarios (cloud-free H2-rich, cloudy H2-rich, and water-rich). We used petitRADTRANS (Mollière et al. 2019) to compute the model transmission spectra, using the stellar parameters from Table 2 and the planetary parameters from Tables 4 and 5.
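The TSM of Kempton et al. (2018) can be evaluated directly from the tabulated parameters. In this sketch, the scale factor 1.26 is the Kempton et al. value for sub-Neptunes with 1.5 < Rp ≤ 2.75 R⊕, and the J magnitude of TOI-4184, which is not quoted in this section, is an assumed value (J ≈ 12.7), so the result is indicative only.

```python
def tsm(rp_re, teq_k, mp_me, rs_rsun, j_mag, scale=1.26):
    """Transmission spectroscopy metric (Kempton et al. 2018):
    TSM = scale * Rp^3 * Teq / (Mp * Rs^2) * 10^(-mJ/5),
    with Rp in Earth radii, Mp in Earth masses, and Rs in solar radii."""
    return scale * rp_re**3 * teq_k / (mp_me * rs_rsun**2) * 10.0 ** (-j_mag / 5.0)

# TOI-4184 b with parameters from the text; j_mag=12.7 is an ASSUMED magnitude
toi4184b = tsm(rp_re=2.43, teq_k=412, mp_me=6.60, rs_rsun=0.242, j_mag=12.7)
```

Since the metric scales with 10^(-mJ/5), a brighter host (smaller J) directly raises the TSM, which is why the fainter TOI-2084 scores lower despite similar planet parameters.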
Our test H2-rich models assume atmospheric chemical equilibrium computed using the FastChem code (Stock et al. 2018), with isothermal profiles at the equilibrium temperature, solar abundances, collision-induced absorption (CIA) by H2-H2 and H2-He, and Rayleigh scattering. We include as absorbers H2O, CO2, CO, CH4, NH3, C2H4, and C2H2. For the water-rich scenarios, we assume that the planets are enveloped in a clear, isothermal, water-dominated atmosphere composed of 95% H2O and 5% CO2. The model includes the H2O and CO2 Rayleigh-scattering cross-sections. We also compare to a pure-water planet (100% H2O) with H2O Rayleigh scattering. An example of the resulting spectra for TOI-4184 b is shown in Figure 14.

As predicted by earlier studies (e.g., Greene et al. 2016; Mollière 2017; Chouqar et al. 2020), the amplitude of the transmission spectra is highly dependent on the presence and altitude of the cloud layer and on the mean molecular weight of the atmosphere: the higher the mean molecular weight, the lower the scale height, and thus the lower the amplitude of the transit spectroscopy signal. The transmission spectra for the H-rich atmospheres show strong absorption features due to H2O, CH4, and NH3 over the wavelength range 0.5-12 µm (see Figure 14). The spectroscopic modulations of the cloud-free spectra are on the order of 50-350 ppm and 100-700 ppm for TOI-2084 b and TOI-4184 b, respectively. The cloudy models present smaller absorption features due to the suppression of contributions from deeper atmospheric layers. The features are essentially muted in the cases with a 10⁻⁴ bar cloud top for both planets (not shown here). For the scenarios discussed above, the mean molecular weight varies from µ = 2 g/mol for an atmosphere dominated by molecular hydrogen to µ = 18 g/mol for atmospheres dominated by heavier molecules like H2O, which explains the weak spectral features in the latter case.

Since 2018, NASA's TESS mission has
discovered several sub-Neptune-sized exoplanets around M dwarfs (e.g., TOI-1696 b and TOI-2136 b: Beard et al. 2022; TOI-1201 b: Kossakowski et al. 2021; TOI-2081 b and TOI-4479 b: Esparza-Borges et al. 2022; TOI-122 b and TOI-237 b: Waalkes et al. 2021; TOI-269 b: Cointepas et al. 2021; TOI-2406 b: Wells et al. 2021; TOI-620 b: Reefe et al. 2022; TOI-2136 b: Gan et al. 2022; TOI-2257 b: Schanche et al. 2022; and TOI-2096 c: Pozuelos et al. 2023). In this paper we present the discovery and validation of two new TESS exoplanets orbiting nearby M dwarfs, TOI-2084 b and TOI-4184 b.

Fig. 1: TESS target pixel file images of TOI-2084 observed in Sector 16 (left panel) and TOI-4184 observed in Sector 1 (right panel), made with tpfplotter (Aller et al. 2020). Red dots show the locations of Gaia DR3 sources, and the red shaded region shows the photometric aperture used to extract the photometric measurements.

Fig. 3: TESS photometric data of TOI-2084.01 and TOI-4184.01. The gray points show the PDCSAP fluxes obtained from the SPOC pipeline. The red and blue points correspond to the locations of the transits of the candidates TOI-2084.01 and TOI-4184.01, respectively.

Fig. 4: Ground-based photometric light curves of TOI-2084.01 (left) and TOI-4184.01 (right). The gray points are unbinned data and the black points are data binned to 10 minutes. The coloured lines are the best-fitting transit models. The light curves are shifted along the y-axis for visibility.

TOI-2084 and TOI-2084B have an unusually wide separation among low-mass planet hosts in binary systems, although there are examples of such systems among more massive stellar binaries (Correa-Otto & Gil-Hutton 2017).

Fig. 6: Spectral energy distribution (SED) fits of TOI-2084 (right) and TOI-4184 (left). The gray curves are the best-fitting NextGen atmosphere models, coloured symbols with error bars are the observed fluxes, and black symbols are the model fluxes.
Fig. 7: Shane/Kast red optical spectra (black lines) of TOI-2084 (left) and its wide stellar companion TOI-2084B (right) compared to the best-fit M2 and M8 SDSS spectral templates from Bochanski et al. (2007, magenta lines). The lower panels display the difference between these spectra (black line) compared to the ±1σ measurement uncertainty (grey band). Key features are labeled, including the strong telluric O2 band at 7600 Å (⊕). Inset boxes show close-ups of the region around the 6563 Å Hα and 6708 Å Li I lines.

Fig. 10: Period-radius diagram of known transiting exoplanets from the NASA Exoplanet Archive. The blue and orange data points correspond to the TESS FGK and M-dwarf systems, respectively. The green and red stars show TOI-2084 b and TOI-4184 b, respectively. The yellow region shows the boundaries of the sub-Neptune desert determined by Mazeh et al. (2016).

Fig. 11: Field images cropped to a 1′×1′ region around TOI-2084 (top row) and TOI-4184 (bottom row). The current position of the target stars is shown with the yellow circle. Top row, from left to right: 1953 red image from POSS I/DSS, 1953 infrared image from POSS II/DSS2, 2012 z′ image from PanSTARRS1, and 2021 I+z image from SPECULOOS-North. Bottom row, from left to right: 1977 blue image from POSS II/DSS2, 1989 red image from POSS II/DSS2, 1990 infrared image from POSS II/DSS2, and 2021 z′ image from SPECULOOS-South.

Fig.
12: TOI-2084 light curves obtained with ground-based facilities. Left panel: light curve obtained with SPECULOOS-North in the I+z filter on UTC August 17 2020. Middle panel: TOI-2084 field of view with nearby stars. The wide co-moving companion TOI-2084B is directly south. Right panel: light curve obtained with LCO-McD in the Sloan-i′ filter on UTC August 26 2020. Red and blue data points show the target (T1) and nearby star (T3) light curves, respectively. During the expected transit of TOI-2084.02, we twice detected a deep eclipse (≈ 400 ppt) on the nearby star TIC 1271317080 (∆T = 4.98) at 12.9″ from the target, labeled T3.

Figure A.1 and Figure A.2 show the posterior distributions of the parameters for each system.

7.1. TOI-2084 b and TOI-4184 b

TOI-2084 is a Kmag = 11.15 M2-type star with an effective temperature of Teff = 3553 ± 50 K, a surface gravity of log g = 4.75 ± 0.05 dex, a mass of M* = 0.49 ± 0.03 M⊙, a radius of R* = 0.475 ± 0.016 R⊙ (derived from the SED analysis including the Gaia EDR3 parallax), and a metallicity of [Fe/H] = −0.13 ± 0.20.

Fig. 13: Mass-radius diagram of exoplanets with mass and radius measurements better than 25% from TEPCat, and of our candidates, color-coded by their equilibrium temperature. Two-layer models from Zeng et al. (2016) are displayed with different lines and colors. "Earth-like" here means a composition of 30% Fe and 70% MgSiO3. The 2% H2 line represents a composition consisting of a 98% Earth-like rocky core and a 2% H2 envelope by mass, while the 49% H2O + 2% H2 line corresponds to a composition comprising a 49% Earth-like rocky core, a 49% H2O layer, and a 2% H2 envelope by mass. Earth and Venus are identified in this plot as pale blue and orange circles, respectively.
The models of Zeng et al. (2016), shown in Figure 13, predict that TOI-2084 b and TOI-4184 b may hold low-density volatiles, such as water, an H/He atmosphere, or a combination of both. Below, we further explore these plausible atmospheres and assess the potential for atmospheric characterization of both TOI-2084 b and TOI-4184 b.

Fig. A.2: Posterior probability distributions of the stellar and planetary physical parameters of the TOI-4184 system, fitted using our MCMC code as described in the Methods. The vertical dashed lines indicate the median value of each derived parameter.

The kastredux reduction included image reduction and boxcar extraction of the one-dimensional spectra.

Table 1: Observational parameters: date of observation, filter used, telescope, exposure time(s), photometric aperture size, and FWHM of the point-spread function (example row: TOI-4184.01, Oct 10 2021, Sloan-g′, i′, LCO-SAAO-1.0m, 400, 240, 3.9, 2.3, Full).

Table 3: MCMC analysis parameters. For each transit light curve: the selected baseline function (based on the BIC) and the deduced values of βw, βr, and the correction factor CF.
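The red-noise factor βr entering CF = βw × βr (Table 3; Gillon et al. 2012) can be estimated by comparing the scatter of time-binned residuals to the white-noise expectation. The sketch below uses illustrative bin sizes and a simplified estimator, not the paper's exact prescription.

```python
import numpy as np

def beta_r(residuals, bin_sizes=(5, 10, 20)):
    """Red-noise scaling factor: ratio of the observed scatter of time-binned
    residuals to the scatter expected for pure white noise, maximized over a
    set of bin sizes and floored at 1."""
    n = len(residuals)
    sigma1 = np.std(residuals)
    betas = []
    for m in bin_sizes:
        nbins = n // m
        binned = residuals[: nbins * m].reshape(nbins, m).mean(axis=1)
        # expected binned scatter for white noise, with small-sample correction
        expected = sigma1 / np.sqrt(m) * np.sqrt(nbins / max(nbins - 1, 1))
        betas.append(np.std(binned) / expected)
    return max(1.0, max(betas))

rng = np.random.default_rng(1)
white = rng.normal(0.0, 1e-3, 1200)               # uncorrelated residuals
red = np.repeat(rng.normal(0.0, 1e-3, 120), 10)   # strongly correlated residuals
bw, br = beta_r(white), beta_r(red)               # bw ~ 1, br >> 1
```

For pure white noise, binning reduces the scatter as 1/√m and βr stays near 1; correlated residuals do not bin down, inflating βr and hence the rescaled error bars.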
The Carbon-Based Evolutionary Theory (CBET)

It is desirable to upgrade previous evolutionary theories, which have remained incomplete and controversial for decades. Here we employ the concept of carbon-based entities (CBEs), which include methane, amino acids, proteins, organisms, and other entities containing relatively many carbon atoms. We deduce the driving force, mechanisms, steps, modes, and tempos of CBE evolution through an integration of biology, physics, and chemistry, using logic suited to complex issues, and hence establish the Carbon-Based Evolutionary Theory (CBET). The CBET holds that evolution is the increase in the hierarchy, diversity, and fitness of CBEs under natural selection, driven by thermodynamics through the chemical effect of the Earth's thermodynamic features on CBEs. It provides better explanations for the origin of life, macroevolutionary events, natural selection, sympatric speciation, and evolutionary tempos than previous evolutionary theories. It reveals the evolutionary basis of several important social notions, including diversity, collaboration, altruism, obeying rules, and a proper increase in freedom. It refutes some mistaken notions in thermodynamics, including negative entropy (negentropy) and the idea that biological order is equal to thermodynamic order, which have misled many people. The CBET is supported by its deduction and its applications. It could serve as a rare bridge linking the laws of thermodynamics, the evolution of life, and the development of human society, and could have great significance across the sciences.

Introduction

Many evolutionary theories have been proposed. The mainstream ones are Darwin's theory, which emerged in the 19th century, and the Modern Synthesis, which emerged in the 20th century [1][2][3]. Darwin's theory elucidated the importance of natural selection, and the Modern Synthesis established the genetic basis of natural selection.
The definition of natural selection in Darwin's theory, survival of the fittest, is literally confusing, because many individuals that are not the fittest can survive and replicate [1-3]. The Modern Synthesis reinterpreted natural selection as gradual changes in the gene frequencies of populations, because individuals carrying adaptive mutations are more reproductively successful [1-3]. Evolution is a process of thermodynamics, but evolution has not been well explained with the laws of thermodynamics [1-3,20,21] (see Section 5). Moreover, evolution is a progressive process, yet previous evolutionary theories generated some prejudiced notions harmful to the development of human society (see Section 5). Together, it is desirable to upgrade previous evolutionary theories with a more scientific and comprehensive one, which can integrate advances in biology, the laws of thermodynamics, and notions useful for the development of human society. To achieve this goal, we deduce from multi-disciplinary integration the Carbon-Based Evolutionary Theory (CBET) with the concept of carbon-based entities (CBEs). CBEs include methane, amino acids, proteins, nucleic acids, lipids, organisms, and other entities chemically containing relatively many carbon atoms. CBEs have hierarchies: large organic molecules are higher-hierarchy CBEs (HHCBEs) compared with middle organic molecules, but lower-hierarchy CBEs (LHCBEs) compared with organisms. The CBET can achieve the above goal because it employs the following five factors, which are all important for evolution and were neglected by previous theories: the leading actor throughout life origin and evolution (CBEs); chemical reactions of CBEs; the temperate climate and abundant water on the Earth; integration of biology, physics, and chemistry; and logic suited to complex issues [2,3]. 
The infant version of the CBET was published in 2000-2001 as an article and a book using mathematical methods, targeting the evolution of the universe without the concept of CBEs [1,2]. Afterward, we spent around 20 years applying it to the evolution of the surface of the Earth and making it easily understandable. Deduction of the driving force of evolution The Earth's surface has widespread temperate heat streams flowing from the Sun, geotherm, and other energy sources. The Earth, a rare habitable planet in astronomy, has received temperate sunlight for billions of years [14]. Meanwhile, many sites on the Earth, particularly at hydrothermal vents, have emitted geothermal energy for long periods [16,17]. The Earth's abundant water and its atmosphere make these heat streams more temperate, more widespread, and longer-lasting through winds, rains, and evaporation. Widespread temperate heat streams on the Earth trigger many reactions, as per the second law of thermodynamics (heat can spontaneously flow from a hotter body to a colder body, and cannot spontaneously flow from a colder body to a hotter body; see Supplementary File) [20,21]. Therefore, stones can spontaneously absorb heat as much as possible from these heat streams and increase their temperatures via physical reactions; CBEs can spontaneously absorb heat as much as possible from these heat streams to form HHCBEs via chemical reactions, partly because carbon atoms are prone to form covalent and other chemical bonds after absorbing heat [22]. Numerous CBEs brought to the Earth by meteorites could also absorb heat to form HHCBEs [3,7]. Although all HHCBEs eventually degrade, some HHCBEs are relatively stable. Hence HHCBEs can accumulate, and accumulated HHCBEs can continue to absorb heat to form still higher-hierarchy CBEs. These reactions, which have occurred at a myriad of places for billions of years on the Earth, lead to an increase in the hierarchy of CBEs, including life origin (Figure 1). 
Therefore, the tendency of CBEs to absorb heat as much as possible from the widespread temperate heat streams on the Earth to form HHCBEs is the driving force of evolution, which can be expressed with the following formula. The macroevolution events in which some unicellular organisms evolved into multicellular organisms and some animal individuals (e.g. ants) evolved into eusocial societies are consistent with the above driving force, as these events represent an increase in the hierarchy of HHCBEs driven by thermodynamics. The macroevolution event in which some ectotherm animals evolved into warm-blooded animals is also consistent with the above driving force, as warm-blooded animals can absorb more heat than ectotherm animals. Initially, the driving force of evolution came from sunlight and geotherm. Later, with the increase of organisms on the Earth, biological energy became a source of the driving force of evolution. This is important for animals, which actively absorb heat and obtain CBEs from other organisms. Energy from coal, petrol, water flows, and atomic nuclei has been utilized by humans for the development of human society. During the whole history of the Earth, the amount and the diversity of HHCBEs, including organisms, have generally been increasing [23]. However, meteorite impacts, huge volcano eruptions, long glacial periods, and other catastrophes can destroy the temperate heat streams on the Earth and the structures of many organisms [24-26]. Consequently, the amount and the diversity of organisms could decline greatly owing to these catastrophes, sometimes leading to mass extinctions [24-26]. Deduction of the major steps of evolution The driving force of evolution from thermodynamics leads to hierarchy-wise formation and accumulation of HHCBEs. For example, amino acids, nucleotides, and other middle organic molecules form before large organic molecules. Accordingly, there should be the following seven major steps of evolution on the Earth (Figure 1). 
Step 7, many animal individuals of the same species collaborate with each other and form animal societies, which include societies of bees, ants, humans, and some other animals. Animal societies have novel functions which cannot be fulfilled by animal individuals. For example, some ant societies plant fungi for food [27]. Some animal societies are eusocial societies, where some individuals reduce their own lifetime reproductive potential to raise the offspring of others. Human societies are also based on individual collaboration, but they are different from animal societies in various respects. Many animals are presocial as they do not form solid societies, but families where parents take care of their own progenies [28]. Although presocial species are more common than eusocial species, eusocial species usually have large populations [28]. This is consistent with the advantages of animal societies in protecting themselves, avoiding intraspecies competition, and obtaining heat and CBEs for reproduction. Step 6, many cells interact with each other and form multicellular organisms, which include fungi, plants, and animals. Step 5, many complexes of large organic molecule aggregates interact with each other and form the first batch of unicellular organisms, which are CBEs having the complicated functions of self-reproduction via catalysis (for efficiently generating HHCBEs) and self-protection (for efficiently maintaining HHCBEs). Step 4, many large organic molecule aggregates interact with each other and form complexes of large organic molecule aggregates, which, like organelles in the unicellular organisms, have some complicated functions (e.g. synthesis of proteins). Step 3, many large organic molecules interact with each other and form large organic molecule aggregates (e.g. lipid bilayer membranes and channels allowing ions to pass lipid bilayer membranes) [29]. Step 2, many middle organic molecules (e.g. 
amino acids, nucleotides, glucose) interact with each other and form proteins, nucleic acids, polysaccharides, and other large organic molecules. Step 1, many small molecules (e.g. CO2, CH4, H2O, H2S, NH3) interact with each other and form middle organic molecules (e.g. amino acids, nucleotides, glucose). This step also occurred on other planets, and many CBEs were delivered to the Earth by meteorites [30]. Steps 1-5 constitute chemical evolution, which is also termed abiogenesis or life origin. Regarding life origin, previous evolutionary theories emphasize the special role of RNA (e.g. the RNA world hypothesis) and of some organic molecules with the function of autocatalysis [31], while the CBET highlights collaborative interaction, i.e. collaboration, of many organic molecules and other CBEs, which is supported by synthetic bacteria created by humans [32]. Deduction of three mechanisms of evolution The first is termed the driving force mechanism, whereby the driving force of evolution directly leads to an increase in the amount of HHCBEs, which is equal to an increase in the hierarchy and structural complexity of CBEs. Because few mechanisms exist for the generation of identical HHCBEs, an increase in the amount of HHCBEs means an increase in the diversity of HHCBEs. Therefore, the driving force mechanism leads to an increase in the amount and diversity of HHCBEs, which is equal to an increase in the hierarchy, structural complexity, and diversity of CBEs. The second mechanism, termed the structure-function mechanism, represents the fact that CBEs with increased hierarchy and structural complexity spontaneously obtain complicated functions, due to interaction inside HHCBEs. For example, although no amino acids emit fluorescence, when green fluorescent protein is formed from amino acids, it spontaneously obtains the function of emitting green fluorescence, due to the interaction of amino acids. 
The third mechanism, termed natural selection, represents the natural phenomenon that an HHCBE increases or decreases its numbers over time as per its overall fitness, and fitter HHCBEs increase their numbers relatively more rapidly. The structure-function mechanism leads to numerous complicated functions under natural selection, including self-reproduction, sexual reproduction, non-random mutation, predation by animals, infection by pathogens, immunity of hosts, animal feelings, and human accumulation of knowledge [2,3]. These functions add fitness to the relevant HHCBEs. For example, non-random mutations, as evidenced in many microbial genomes and mammalian immunoglobulin genes [9,15], can be fulfilled through the complicated structures of organisms, and they are useful for generating advantageous mutations and avoiding disadvantageous mutations. Sexual reproduction can be fulfilled through the complicated structures of organisms and generates numerous mutants, useful for fitting different environments, through the recombination of genomic sequences. This mutation strategy is less risky than nucleotide substitution, because the recombined genomic sequences have passed long-term natural selection [2,3]. Natural selection is a tautology, namely that those fit survive and those surviving are fit, and those having greater numbers are the fitter, and the fitter have greater numbers. Natural selection was previously criticized for this tautology [33]. We think this tautology cannot refute natural selection, just as the champion is the one who ran fastest, and the one who ran fastest is the champion, and a champion must emerge if there is a race. Similarly, natural selection must exist naturally, because no mechanism makes all HHCBEs form and persist at equal rates. Therefore, the driving force of evolution, which leads to the long-term repeated formation of HHCBEs, is the prerequisite of natural selection. 
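The differential-replication reading of natural selection sketched above can be illustrated with a toy simulation (illustrative only; the two growth rates are hypothetical numbers, not part of the CBET):

```python
# Toy model of natural selection as differential replication:
# two variants replicate at different (hypothetical) per-step rates,
# and the fitter variant's share of the population rises over time.

def selection_sketch(n_a=100.0, n_b=100.0, rate_a=1.10, rate_b=1.05, steps=50):
    """Return the final frequency of variant A after `steps` generations."""
    for _ in range(steps):
        n_a *= rate_a  # variant A is fitter: it replicates faster
        n_b *= rate_b
    return n_a / (n_a + n_b)

# A starts at 50% of the population; a small, consistent fitness edge
# compounds across generations until A dominates.
freq_a = selection_sketch()
```

With equal rates the frequency stays at one half, which mirrors the point made in the text: without differential formation and maintenance rates there is nothing for selection to act on.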
Accordingly, the first leading role in evolution belongs not to natural selection, but to the driving force of evolution from thermodynamics. Natural selection, mutation, genetic drift, and competition have each been claimed to be the driving force of evolution [3-5,21,34-36]. These actions are not based on energy, and they are largely mechanisms or processes of evolution, so they are not the driving force of evolution. The role of energy in biological evolution was highlighted previously [37,38], but energy had not been linked to the driving force of evolution. Expression of the F-CBET The driving force of evolution, the three mechanisms, and the major steps of evolution, as shown in Figure 1, constitute the F-CBET. Because the driving force mechanism and the structure-function mechanism derive directly from the driving force of evolution, the F-CBET can be expressed as follows: evolution is the increase in hierarchy, diversity, and fitness of CBEs under natural selection, driven by thermodynamics due to the chemical effect of the thermodynamic features of the Earth on CBEs. Some social notions from the F-CBET Collaboration and altruism (altruism being a special type of collaboration supporting the production and functions of other entities) are important throughout CBE evolution. For example, many small molecules spontaneously collaborate with each other and "sacrifice" themselves to form large organic molecules; many molecules inside cells spontaneously collaborate with each other and "sacrifice" themselves to support the replication and functions of nucleic acids; and many immune cells in multicellular organisms spontaneously collaborate with each other and "sacrifice" themselves to support the production and functions of other cells. Many individuals in animal societies spontaneously collaborate with each other and "sacrifice" themselves to support the existence of other individuals. Obeying rules and restricting freedom constitute collaboration and altruism inside HHCBEs throughout CBE evolution. 
For example, many molecules obey rules and restrict their freedom inside cells. Reliability of the F-CBET The F-CBET is reliable because it is not built on novel laws, novel observations, or novel experiments, but is deduced mainly from the classical laws of thermodynamics together with facts that are well known to be important for evolution. The growth of all known organisms is a process in which CBEs absorb heat from temperate heat streams to form HHCBEs. The production of numerous organic molecules, various viruses, and some bacteria in factories or laboratories through chemical synthesis or genetic engineering [39-41] is also a process in which CBEs absorb heat from temperate heat streams to form HHCBEs in a hierarchy-wise way. Moreover, all known biological reactions comply with the classical laws of thermodynamics, so biological evolution complies with the classical laws of thermodynamics. These facts all support the CBET. The F-CBET provides better explanations for life origin, macroevolution events, non-random mutations, and altruism. These better explanations support the F-CBET. We used about 20 minutes to explain the F-CBET to 26 undergraduate students, and they all understood and accepted it. This also supports the F-CBET. Different explanations of natural selection in the CBET Natural selection in the CBET differs from natural selection in Darwin's theory and the Modern Synthesis in the following respects, although they all represent the same natural process or mechanism leading to an increase in fitness. First, natural selection in the CBET applies to nonliving HHCBEs and organisms, while natural selection in previous theories is largely restricted to organisms. 
Second, natural selection was expressed as "survival of the fittest" in Darwin's theory, and as "gradual replacement of populations by those carrying advantageous mutations" (which we summarize as "survival of the fitter") in the Modern Synthesis [1-3], while natural selection is expressed as "survival of the fit" in the CBET, as per its tautology (those fit survive and those surviving are fit). Whether an HHCBE is fit is determined by the HHCBE and its environment. This is also consistent with research advances suggesting that many genomic changes are neutral without an increase in fitness, and that many organisms carry disadvantageous traits and harmful mutations [3-5,10,12,34]. Third, natural selection in previous theories usually emphasizes fitness in a single aspect, while natural selection in the CBET highlights overall fitness. For example, antelopes are less strong than buffaloes in fighting against carnivores, but they run fast and have other advantages, making their overall fitness adequate. This suggests a novel mechanism of sympatric speciation: organisms with different combinations of traits can speciate in the same ecological niche of the same area because they all have adequate overall fitness. Previously, only a mechanism for sympatric speciation targeting different ecological niches of the same area had been proposed, as different ecological niches exert different selection pressures, which push organisms to evolve in different directions [3]. Fourth, the targets of natural selection in previous theories were claimed to be individuals, populations, or genes [35,42], while all genomic sites, all traits, and all hierarchies are claimed to be under natural selection in the CBET. This is because natural selection "selects" organisms as per their overall fitness, which is influenced by all genomic sites, all traits, and all hierarchies. Therefore, natural selection functions extensively in evolution. 
Moreover, a conserved trait or genomic site that remains unchanged during a long geological period is not thereby exempt from natural selection; it is likely under strong negative selection [42]. Fifth, as per previous theories, a biological trait is usually assumed to be advantageous in natural selection, while in the CBET, a biological trait (e.g. the long necks of giraffes) may in general be neutral, advantageous, or disadvantageous in natural selection. Moreover, a biological trait may be advantageous in some aspects and disadvantageous in others (e.g. the long necks of giraffes are useful for spotting predators, but harmful to bones). Significance of the CBET For biology, the CBET is more scientific and comprehensive than previous theories, because the CBET is deduced from the laws of thermodynamics and addresses the factors important for evolution listed in Section 1. Previous thermodynamic accounts did not employ the simple expression of the second law of thermodynamics to explain evolution, and did not reveal the driving force or mechanism in a direct and comprehensible way [44-50]. Although some notions or theories in thermodynamics, such as negative entropy (negentropy) and the dissipative structure theory, have been employed to explain evolution [44-50], as detailed in the Supplementary File, these notions or theories are elusive, controversial, or even wrong, mainly because scientists were misled by the wrong notion that biological order is equal to thermodynamic order [44-50]. Biological order is accumulated slowly through long-term natural selection and requires movements of microscopic particles, while thermodynamic order can increase rapidly by releasing heat to the surroundings and requires microscopic particles to be static (e.g. cold perfect crystals have low entropy and high thermodynamic order). 
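The contrast drawn here between biological and thermodynamic order can be anchored in the standard statistical-mechanics definition of entropy (a textbook relation, not a formula from the CBET):

```latex
% Boltzmann's entropy: S grows with the number W of accessible microstates.
S = k_{\mathrm{B}} \ln W
% A perfect crystal near absolute zero has essentially one accessible
% microstate, so by the third law its entropy vanishes, i.e. maximal
% thermodynamic order corresponds to static microscopic particles:
\lim_{T \to 0} S_{\text{crystal}} = k_{\mathrm{B}} \ln 1 = 0
```

On this definition, high thermodynamic order simply means few accessible microstates, which is independent of the functional organization that the text calls biological order.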
When a seal is dying in ice and becoming cold, its entropy is declining with increase in its thermodynamic order and decrease in its biological order. Biological order supports high entropy of an organism because biological order supports relatively rapid movement of microscopic particles in the organism, like the fact that traffic order supports relatively rapid running of cars in a metropolis. Therefore, the notion that biological order is equal to thermodynamic order is wrong, and the notion of negentropy is wrong because negentropy was built on the wrong notion that biological order is equal to thermodynamic order. Conclusions This article deduces a novel evolutionary theory termed the CBET, which is quite different from previous theories (Figure 1 and
4,450.6
2021-06-17T00:00:00.000
[ "Chemistry", "Physics", "Biology", "Philosophy", "Computer Science" ]
Ascending aortic aneurysm in a patient with bicuspid aortic valve, positive history of systemic autoimmune diseases and common genetic factors: a case report The bicuspid aortic valve (BAV) and specific systemic autoimmune diseases are associated with cardiovascular manifestations, including aortic aneurysm. We report a case of a 64-year-old patient with BAV and a history of ankylosing spondylitis (AS) and systemic lupus erythematosus (SLE), who developed an ascending thoracic aortic aneurysm. The patient also presented homozygosity for genetic variants of the MMP9, ACE, MTHFR and PAI-1 genes. Gene-environment interactions may represent an additional pathogenetic dimension in the still challenging management of abnormalities of the aortic wall, including dilatation, aneurysm and dissection. Introduction The bicuspid aortic valve (BAV) affects 1 to 2% of the population and may be complicated by abnormalities of the aortic wall, including dilatation, aneurysm and dissection [1]. We report a case of a 64-year-old male with BAV and a history of ankylosing spondylitis (AS) and systemic lupus erythematosus (SLE), who developed an ascending thoracic aortic aneurysm. In addition, we performed genetic screening of 5 gene polymorphisms (ACE I/D, MTHFR 677C>T, MMP9 -1562C>T, PAI-1 4G/5G, MMP12 A82G) reported in multiple case-control studies to be genetic risk factors for the development of abdominal aortic aneurysm (AAA) [4]. Case presentation The patient was a 64-year-old man with a 25-year history of hypertension and an 8-year history of hypothyroidism. In 2000, the patient underwent a corrective osteotomy of the spine for ankylosing spondylitis and spondylarthropathy. He had also been diagnosed with a dilatation of the ascending thoracic aorta associated with BAV. The patient was treated with methylprednisolone along with colchicine, and was followed regularly for progressive aortic dilation. 
He was admitted to our hospital for further evaluation in December 2008. Transthoracic echocardiography revealed aortic root and ascending aortic dilatation (Figures 1, 2). A computed tomography scan confirmed the presence of the ascending aortic aneurysm (53 mm) and ectasia of the aortic root and arch (45 mm and 42 mm, respectively). The patient was referred to our cardiothoracic surgery service in order to evaluate the need for aortic root replacement. Preoperative coronary angiography and aortography confirmed the presence of the ascending aortic aneurysm without evidence of hemodynamically significant stenosis or regurgitation, so the cardiac surgeons decided to postpone surgical treatment until the time when the valve requires replacement. Genetic testing was performed by polymerase chain reaction/restriction fragment length polymorphism analysis in order to identify the patient's genotypes. The genetic results for the studied polymorphisms are shown in Table 1. The patient was TT, DD, 5G5G and TT homozygous for the MMP9 -1562C>T, ACE I/D, PAI-1 4G/5G and MTHFR C677T genetic polymorphisms, respectively. Discussion Patients with AS and SLE may develop cardiovascular manifestations ranging from asymptomatic forms to life-threatening conditions, including common cardiovascular manifestations and valvular problems [2,3]. The mechanism responsible for the occurrence and progression of aortic dilatation has not yet been elucidated in detail. Aortic aneurysm may be the result of medial degeneration, induced by chronic inflammation and accelerated by prolonged corticosteroid therapy. Aneurysm formation may be more common in patients with coexisting BAV. In fact, the bicuspid aortic valve is considered to be a cause of intrinsic changes in the aortic wall resulting in aneurysms of the ascending aorta, independent of the degree of valvular dysfunction [1]. 
In addition to the presence of BAV and the inflammatory involvement of the aortic wall by immune diseases and systemic hypertension, genetic factors may have contributed significantly to the development of aortic dilation in our patient. Indeed, our observations are in agreement with such a hypothesis, as we report the presence of homozygosity for genetic variants of the MMP9, ACE, MTHFR and PAI-1 genes that have previously been associated with a significant risk of abdominal aortic aneurysm disease [4]. Expression of MMP-9 is elevated in vascular disease, and in particular within aneurysm tissues. A meta-analysis of two large case-control studies that looked at the ACE I/D polymorphism in AAA patients showed a strong overall association between the ACE D allele (RR 1.33 [1.20-1.48]) and disease [4]. An association between the presence of AAA and the MTHFR 677C>T variant has been indicated, and meta-analysis of these studies reveals a significantly increased risk of AAA disease for the T allele variant [4]. Recently, a study suggested a significant association between aneurysm growth and the plasminogen activator inhibitor (PAI) 1 -675 4G/5G polymorphism in AAA [5]. In the present case report, in addition to the inflammatory involvement of the aortic wall by systemic autoimmune diseases, these specific genetic variants may have promoted the development of aortic dilation in this patient. BAV is responsible for a large proportion of patients coming to aortic valve replacement. The mechanism responsible for the associated vascular complications remains controversial. Some patients with BAV have rapidly progressive valve and aortic dysfunction, while others remain without complications. Several advances in the molecular genetics of aortic valve disease related to BAV have recently been made, especially through the use of linkage analysis. 
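As a sketch of how a relative risk such as the quoted RR 1.33 [1.20-1.48] is computed from a 2x2 table, the standard log-RR confidence interval can be coded directly (the counts below are hypothetical, chosen only so the point estimate lands near 1.33; they are not the meta-analysis data):

```python
import math

def relative_risk_ci(a, b, c, d, z=1.96):
    """Relative risk and 95% CI for a 2x2 table:
    a/b = events/non-events among exposed (e.g. risk-allele carriers),
    c/d = events/non-events among unexposed.
    Standard Katz log interval: log RR +/- z * SE(log RR)."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts for illustration only: 120/280 events among
# carriers versus 90/310 among non-carriers gives RR = 0.30/0.225 = 1.33.
rr, lo, hi = relative_risk_ci(120, 280, 90, 310)
```

An interval whose lower bound stays above 1, as in the quoted meta-analysis, is what justifies calling the association significant.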
These analyses resulted in the discovery of mutations in the NOTCH1 gene, a signaling and transcriptional regulator gene on chromosome 9, and of several loci linked to BAV on chromosomes 18, 5, and 13 [6]. However, the genetic basis of BAV remains unclear. Our data suggest, for the first time, that some genetic biomarkers (functional polymorphisms) may predispose BAV patients to an increased risk of aortic dilatation, aneurysm formation, and dissection. Our findings are hypothesis-generating and need to be confirmed by further clinical studies. Elucidating the genetic basis of BAV may have substantial implications in clinical practice. Identification of specific genetic markers may be helpful for early clinical detection in relatives; genetic markers might also be used to predict the natural progression of the condition and to identify those cases that might have potentially life-threatening complications from BAV. Therefore, future studies focusing on the identification of additional disease-causing and susceptibility genes are needed in order to improve understanding of the pathophysiological processes as well as to identify new therapeutic strategies.
1,391.6
2009-07-06T00:00:00.000
[ "Medicine", "Biology" ]
Problems in fuelling spark ignition engines with dimethyl ether This paper briefly discusses the production technology of dimethyl ether, taking into account plant raw materials and the physical and chemical properties of DME as compared to diesel fuel. The benefits and disadvantages of DME as a fuel are presented, and changes in the emission of harmful substances are characterised as compared to the combustion of diesel fuel. Also, basic usage problems are addressed, e.g. the wear of engine elements and cavitation and leakages in the fuel system. Introduction Extensive development of vehicle transport over the past decades has resulted in greater consumption of engine fuels and rapid degradation of the environment. The need to ensure a fuel supply matching the growing demand on world markets, combined with depleting crude oil resources, has brought greater interest in alternative fuels. The selection of alternative fuels is determined by four criteria: availability of the raw material; the degree to which the production technology is available; impact on the degradation of the environment throughout the entire life cycle (well-to-wheel, WtW); and feasibility in terms of powering modern-day engines. One of the alternative fuels that, according to the aforesaid criteria, appears more advantageous than a number of biofuels is dimethyl ether (DME) [2,5,11]. Its production has been developed chiefly in Asian countries; the major worldwide producer of DME is China, where coal is the basic raw material. In Europe there are several pilot installations, among others in Sweden and Denmark. Raw materials used for DME production may include: coal, natural gas, solid biomass, liquid biomass, biogas, and waste, e.g. waste plastics. The high diversity of usable raw materials stems from the method of producing this fuel. Raw materials containing carbon and hydrogen in their structure, regardless of how these atoms are bonded in the molecules, are transformed into syngas, i.e. 
a combination of CO and H2 in adequate proportions. In the next phase, syngas is used for the synthesis of methanol, from which DME is ultimately produced. The chemical reactions occurring in the respective phases of DME synthesis are the standard ones: methanol synthesis (CO + 2H2 → CH3OH), followed by methanol dehydration (2CH3OH → CH3OCH3 + H2O). Hydrogen essential for adjusting the composition of syngas is usually the product of the water-gas shift conversion of carbon monoxide with steam (CO + H2O → CO2 + H2). The aggregate process of DME synthesis from syngas is then: 3CO + 3H2 → CH3OCH3 + CO2. The form of the DME synthesis reaction is unrelated to the raw material. In technological practice, the raw material has considerable impact on the chemical composition of the gasification products, which must be purified of sulphur compounds (mainly SO3), partly of CO2, of non-combusted hydrocarbons, and of a number of other chemical pollutants. In the case of certain raw materials this process is very problematic in terms of effective production of syngas of a particular purity and a specified CO/H2 proportion. Methane or bio-methane is another raw material attractive for DME/bioDME synthesis. If DME is synthesized from natural gas or biogas, the process chart is similar to the one presented above. The only difference occurs in the initial phase, where methane (biogas) is directly processed into syngas. Those technologies have been developed by BP Chemicals and Haldor Topsoe, and also by several Japanese companies in collaboration with the Total oil and gas company. Another important aspect as regards the implementation and use of bioDME is the reduction of CO2 emissions in the fuel consumption chain. In accordance with Directive 2009/28/EC [6], bioDME is considered to reduce CO2 emissions in the life cycle at a level of about 92%, depending on the biomass material. Due to its properties, DME may be stored and transported like LPG. Usually DME is stored in above-ground or underground containers with a horizontal axis and transported by road or railway tankers. 
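The synthesis phases described above correspond to the standard indirect DME route; a small sketch can verify that each step, and the commonly cited aggregate reaction 3CO + 3H2 → CH3OCH3 + CO2 (an assumption here, since the paper's own equations are not reproduced in this excerpt), conserves atoms:

```python
# Atom-balance check for the standard indirect DME synthesis route:
# methanol synthesis, methanol dehydration, water-gas shift, and the
# aggregate reaction combining them.

# Element counts (C, H, O) per molecule of each species.
SPECIES = {
    "CO": (1, 0, 1),
    "H2": (0, 2, 0),
    "H2O": (0, 2, 1),
    "CO2": (1, 0, 2),
    "CH3OH": (1, 4, 1),
    "CH3OCH3": (2, 6, 1),  # dimethyl ether
}

def balanced(reactants, products):
    """True if C, H and O are conserved between the two sides.
    Each side is a list of (species, stoichiometric coefficient)."""
    def totals(side):
        return tuple(sum(n * SPECIES[s][i] for s, n in side) for i in range(3))
    return totals(reactants) == totals(products)

REACTIONS = {
    "methanol synthesis":   ([("CO", 1), ("H2", 2)], [("CH3OH", 1)]),
    "methanol dehydration": ([("CH3OH", 2)], [("CH3OCH3", 1), ("H2O", 1)]),
    "water-gas shift":      ([("CO", 1), ("H2O", 1)], [("CO2", 1), ("H2", 1)]),
    "aggregate":            ([("CO", 3), ("H2", 3)], [("CH3OCH3", 1), ("CO2", 1)]),
}
```

The aggregate form also makes visible why the reaction scheme is independent of the raw material: any feedstock reduced to CO and H2 in the right proportion enters the same stoichiometry.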
DME physical and chemical parameters Dimethyl ether (DME), with the structure described by the formula CH3-O-CH3, is in standard conditions (0.1 MPa pressure, temperature of 273.15 K) a colourless gas with a characteristic odour. Compressed to a pressure higher than 0.6 MPa, it condenses. In this form it is considered as a fuel for SI combustion engines [35]. Basic physical and chemical parameters of liquid DME compared to diesel fuel are presented in Table 1. There are references in the literature to other values of the parameters corresponding to DME properties. Benefits of DME as fuel for SI engines Low boiling temperature. The boiling temperature of DME is much lower than that of diesel fuel. Because of this, liquid fuel injected into the engine's cylinder vaporises promptly, and the self-ignition delay is therefore shorter. With quick evaporation it is possible to use a low injection pressure of (20-30) MPa [3], even at high engine speeds. High cetane number. The cetane number, which characterises the fuel's readiness to self-ignite, is higher for DME than for diesel fuel. Thus, compared to diesel fuel, DME has a much shorter self-ignition delay, and as a result the emission of nitrogen oxides is reduced [10]. Mastered methods of storage and distribution. Because of the similar physical and chemical properties of DME and the condensed mixture of propane and butane (LPG), the matter of their storage and distribution is considered well studied. DME can be stored in liquid form under moderate pressure, above 0.6 MPa [10]. According to the results of studies conducted by the High Pressure Gas Safety of Japan [15], due to chemical stability during storage, the diffusion ratio and the risk of tank explosion in the course of heating are similar for DME and LPG. Very low toxicity. DME is a volatile organic compound with no carcinogenic or mutagenic properties. Its toxicity is considered very low or insignificant [22]. 
Also, DME is considered to have no hazardous impact on human health [18].
Non-corrosive to metals. DME has no corrosive effect on the metals used in the fuel systems of combustion engines [ref. not found].
Disadvantages of DME as fuel for SI engines
Low calorific value. Because the DME molecule contains oxygen, its calorific value is only approx. 65% of that of diesel fuel [14]. Moreover, since the density of liquid DME is about 80% of the density of diesel fuel, roughly twice the volume of DME must be injected into the engine's cylinder to achieve a calorific effect similar to that of diesel fuel [21].
Low viscosity. Compared to diesel fuel, DME has very low viscosity (at least twenty times lower [31]) and, in consequence, poorer lubricity. These unfavourable properties cause leakages from the fuel system and worsen the operation of its movable elements, making them more prone to wear due to greater friction [10].
Low bulk modulus. Liquid DME is two to four times more compressible than diesel fuel [30]. For this reason, in the fuel systems of SI engines not adapted to liquid DME, the fuel pressure may drop in the high-pressure areas of the injection system.
Aggressive towards certain plastics. DME has a corrosive action on most elastomers, damaging sealing elements and other components of fuel systems made of these materials [7,10,34].
Emission of harmful products of DME combustion
DME, produced on an industrial scale from methane, can also be derived from renewable resources, e.g. wood. Exhaust gases from DME-fuelled engines contain, compared to those from diesel-fuelled engines, fewer harmful substances, including particulate matter, sulphur oxides and hydrocarbons.
The emission of nitrogen oxides and carbon monoxide may be smaller or greater, depending on the engine's operating conditions [ref. not found, 20]. Because DME is more volatile than diesel fuel, in case of a leakage it rapidly escapes to the atmosphere and thus does not contaminate the soil. Engines fuelled with DME [8,13,14] prove to emit less noise and less particulate matter, THC and nitrogen oxides, while fuel consumption relative to the generated energy is comparable with diesel oil. Studies on a one-cylinder AVL engine with a displacement of 2000 cm3, a turbocharger and a charge-air cooler showed the differences in emission, compared to diesel fuel combustion, presented in Table 2 [24]. Due to the high oxygen content and the lack of carbon-carbon (C-C) bonds in the DME molecule, the use of this fuel in SI engines reduces PM emission significantly compared to diesel-fuelled engines [8,17]. The particulate matter present in the exhaust gases of DME-fuelled engines is mainly the product of combustion of lubricating grease and of the additives in DME that improve its viscosity [1,23]. For this reason, the exhaust cleaning systems of SI engines fuelled with DME usually do not require any PM filtration systems. Findings on the emission of nitrogen oxides from the combustion of DME are not conclusive. It is assumed that, because of the shorter self-ignition delay and the smaller share of fuel combusted before it is fully mixed, the emission of nitrogen oxides is lower [14]; these factors reduce the maximum temperature in the combustion chamber and therefore reduce NOx emission. Nevertheless, an increased NOx emission compared to a diesel-fuelled engine is possible, because the period of the highest temperature throughout the combustion process is extended [3]. The emission of hydrocarbons [4,16,19,23,26,28] manifests itself in the case of a rich mixture, whether local or global.
In the case of DME, whose molecule contains oxygen, the local occurrence of a rich mixture due to incomplete fuel-air mixing is limited. As regards carbon monoxide, an increase in emission compared to diesel fuel is recorded occasionally, which may be attributed to the longer injection at a lower pressure and the greater diameter of the openings in the fuel injection system. However, reducing HC and CO emissions is relatively easy owing to common oxidising catalytic converters. Because of the shorter self-ignition delay and the resulting slower pressure rise inside the combustion chamber, a reduction of the noise generated by the combustion engine is observed [8,15]. Several compounds that are not regulated should nevertheless be considered when fuelling SI engines with DME: the emission of formaldehydes [32,33] is greater, and moderate emissions of sulphur dioxide, polycyclic aromatic hydrocarbons, benzene, toluene and xylene [32,33] may also be expected. Laboratory research has been conducted to evaluate the reactivity and ozone-forming potential [12,17,24] of engines fuelled with DME. In a typical urban atmosphere the reactivity of DME is equal to or lower than that of conventional fuel. It is therefore reasonable to assume that the use of DME, compared to conventional fuel, may have a positive impact on ozone levels in urban agglomerations. The lifetime of DME in the troposphere has been estimated at 5-6 days.
Usage problems relating to SI engines fuelled with DME
Pure DME cannot be considered a direct substitute for diesel fuel. Its use for fuelling combustion engines requires a modification of the fuel system and the use of additives improving certain physical and chemical properties of DME. The critical problems concerning the use of SI engines fuelled with DME are discussed below.
Excessive wear of fuel system elements due to greater friction.
The relatively low viscosity of DME implies poor lubricating properties of the fuel. This results in greater friction between the movable parts of certain elements of the fuel system, such as injection pumps, common-rail pumps and injectors [ref. not found], thus accelerating their wear. The low viscosity of DME may be addressed by additives improving lubricity, a change in the materials used in the fuel system, and treatment of the surfaces exposed to greater frictional wear [31]. Among these solutions, additives enriching DME may be considered the most effective. Positive results have been demonstrated with commercial additives for DME [9], mainly those improving its lubricating properties; such additives can be dosed in amounts ranging from 100 to 1000 mg/kg. Moreover, substances such as diesel fuel, castor oil, vegetable oils and related esters [7,15,27,29] may be added as fuel constituents improving lubricity. Fig. 1 presents example results of a lubricity study on DME enriched with various additives, obtained with the Medium Frequency Pressurised Reciprocating Rig [27], a modification of the standard HFRR method (High Frequency Reciprocating Rig), in which the lubricating properties of a fuel are expressed as the wear scar diameter measured with a precision of 1 μm. The DME additives included methyl esters of colza oil, castor oil and Lubrizol 539. For comparison, Fig. 1 also shows the lubricity criteria for diesel fuel.
Fig. 1. Results of the DME lubricity study: DME enriched with methyl esters of colza oil, castor oil and Lubrizol 539; WSD denotes wear scar diameter [27]
As illustrated above, even a small amount of a lubricity-improving substance changes the properties of DME considerably. According to the cited studies [2,14], the commercial additive proved the most effective: a dose of 800 ppm is sufficient to achieve lubricity comparable with that of diesel fuel.
It should be noted that methyl esters of colza oil and castor oil, which are cheaper than commercial additives, also improve the lubricity of DME.
Leaks from the fuel system. The factory fuel systems of diesel-fuelled SI engines are not suited for DME because of the great likelihood of leakage. The literature points to two main causes of such leakage: the low viscosity of DME and its aggressive action on sealing elements. Even at atmospheric pressure, DME leakage may be considerable where tightness is lost between mutually moving elements, e.g. the piston and cylinder in injection pumps, reaching as much as (40-50)% of the fuel quantity [11]. Greater leakages are observed in engines intended for trucks and machinery than in light-duty vehicles [9]. Leakage from DME fuel systems may also be prevented by increasing the fuel's viscosity with suitable additives, as discussed earlier, and by replacing corrosion-prone sealing elements with elements coated with or made of PTFE (Teflon) [7,10,34].
Cavitation in the injection system. Due to the high vapour pressure of DME, cavitation may develop in the injection systems of DME-fuelled engines. Cavitation hinders the flow of fuel and causes erosion of the system's elements. Its intensity increases with fuel temperature, and it occurs more frequently in areas of undefined, dynamic fuel flow. An effective method of preventing cavitation in DME injection systems is to maintain the fuel pressure in the system above (1.2-3) MPa [11,31].
Abbreviations: DME - dimethyl ether; bioDME - dimethyl ether of plant origin; SI - self-ignition engine; WSD - wear scar diameter.
Bibliography
Low-Rate DoS Attacks Detection Based on MAF-ADM Low-rate denial of service (LDoS) attacks reduce the quality of network service by sending periodical packet bursts to the bottleneck routers. They are difficult to detect by counter-DoS mechanisms due to their stealthy and low average attack traffic behavior. In this paper, we propose an anomaly detection method based on adaptive fusion of multiple features (MAF-ADM) for LDoS attacks. This study is based on the fact that the time-frequency joint distribution of the legitimate transmission control protocol (TCP) traffic would be changed under LDoS attacks. Several statistical metrics of the time-frequency joint distribution are chosen to generate isolation trees, which can simultaneously reflect the anomalies in time domain and frequency domain. Then we calculate an anomaly score by fusing the results of all isolation trees according to their ability to isolate samples containing LDoS attacks. Finally, the anomaly score is smoothed by the weighted moving average algorithm to avoid errors caused by noise in the network. Experimental results on Network Simulator 2 (NS2), a testbed, and public datasets (WIDE2018 and LBNL) demonstrate that this method detects LDoS attacks effectively with a lower false negative rate. Introduction Denial of service (DoS) attacks have always been among the main threats to network security [1]. In February 2019, the website of the Philippine National Association of Journalists suffered a DoS attack and was closed for 12 h. Facebook was also attacked by DoS in March 2019, and users could not log in to their accounts. Nowadays cloud computing [2], software defined networks [3,4], and wireless sensor networks [5,6] are widely applied. The development of these technologies makes the current network structure, with its higher node density, larger scale and limited resources, more vulnerable to DoS attacks [7-9]. This situation is even worse as more and more variants of DoS attacks arise [10,11].
Low-rate denial of service (LDoS) is a smart attack; unlike flooding attacks, its attack traffic is stealthy and low-rate. It sends periodical packet bursts to attack legitimate flows by exploiting the vulnerability of the transmission control protocol (TCP) adaptive mechanism [12]. Therefore, it is fairly simple for an LDoS attacker to elude the existing counter-DoS mechanisms [13]. Existing research [14] indicates that network traffic is actually a non-stationary signal, due to the unpredictable change of the network at all times. The anomalies of network traffic caused by LDoS attack flows may manifest in the time domain, such as the traffic reduced by fake congestion. They may also be expressed in the frequency domain, such as periodicity, abnormal frequency distribution of the traffic, and so on. However, the existing LDoS detection algorithms are based only on characteristics in the time domain [15-18] or the frequency domain [2,19,20]. Another limitation of the emerging literature is that not enough attention has been paid to the ability of detection schemes to adapt to different scenarios and to the filtering of noise in the environment. In response to the above limitations, we propose an anomaly detection method based on adaptive fusion of multiple features (MAF-ADM) for LDoS attacks. Time-frequency joint analysis, a powerful tool for analyzing non-stationary signals, is used to analyze the anomalies of network traffic caused by LDoS attack streams. Several statistical metrics of the time-frequency joint distribution are chosen to generate isolation trees. Then an anomaly score is calculated as the basis of LDoS attack detection. The major contributions of our work are threefold. Firstly, we analyze network traffic in the time-frequency domain and introduce a series of novel features for detecting LDoS attacks. These attributes can simultaneously reflect the anomalies in time domain and frequency domain.
By evaluation on Network Simulator 2 (NS2) simulations, these attributes show good sensitivity in identifying LDoS attacks with different parameters. Secondly, we establish isolation trees for the detection metrics and then fuse them together in a linearly weighted way to describe the network state. The weight of each isolation tree is dynamically adjusted according to its ability to isolate LDoS attacks; in this way, the adaptability and accuracy of the method are further improved. Thirdly, we apply the weighted moving average algorithm to filter noise, so that the method has a lower false positive rate. The rest of the paper is organized as follows: Section 2 introduces related research on characteristics and detection methods of LDoS attacks in recent years. Section 3 describes the detection metrics based on time-frequency analysis. A new detection algorithm based on MAF-ADM is proposed in Section 4. The performance of MAF-ADM is tested on NS2 simulations, a testbed, and public datasets in Section 5. In Section 6, we summarize this paper and introduce future work. Characteristics of LDoS Attacks An LDoS attack flow has a lower average rate than a traditional DoS attack flow, which makes it more insidious and difficult to detect [21]. LDoS attacks send periodical packet bursts with the model shown in Figure 1 [22]. P represents the attack period, R is the attack rate, and L denotes the duration of a single attack pulse. The attack repeatedly evokes adaptive adjustment of TCP congestion control, so that the network is in a fake congestion state when the attack strength (R * L) is large enough. Depending on the adaptive mechanism evoked by the attack, LDoS attacks can be divided into retransmission timeout (RTO)-based attacks and additive increase and multiplicative decrease (AIMD)-based attacks. • RTO-based LDoS attacks: A TCP sender normally sets a retransmission timeout (RTO) for each packet.
As shown in Figure 2a [23], when the network link is in the normal state, we can assume that the RTO of the sender is at its minimum value (usually set to 1 s in order to achieve optimal throughput of the network). When an attack pulse arrives, TCP enters the timeout retransmission state. During the attack interval the sender enters slow start and successfully retransmits, and for some data packets the RTO can return to the minimum value according to Formula (1) [24]:

RTO = max(RTO_min, SRTT + max(G, 4·VRTT)) (1)

where G is the clock granularity, and SRTT and VRTT represent the smoothed round-trip time and the variation of the round-trip time, respectively. The above process repeats, so that the quality of network service is reduced. • AIMD-based LDoS attacks: The additive increase and multiplicative decrease (AIMD) mechanism resends a packet immediately after the sender receives three duplicate acknowledgements (ACKs), reducing its congestion window (CWND) through the multiplicative decrease (MD) algorithm and then increasing the CWND according to the additive increase (AI) algorithm. Under AIMD-based LDoS attacks, as Figure 2b [25] shows, the link is always in the AIMD state and never enters timeout retransmission and slow start, but its CWND keeps decreasing, so the system performance is gradually reduced. Finally the CWND is reduced to a limit at which the system performance is the worst and cannot recover by itself. Both of the above LDoS attacks exploit the TCP adaptive mechanism. The LDoS attacker usually chooses a user datagram protocol (UDP) stream to launch the attack: even if the network sends a congestion indication (such as packet loss or repeated ACKs), UDP does not reduce the number of packets sent to the network, but TCP does. Under LDoS attacks, the attack pulse stream preempts more and more network resources, while the victim believes that the network is "congested" and rapidly reduces its transmission rate. The quality of service in the network is seriously reduced, as Figure 3 [26] shows.
Therefore, how to detect LDoS attacks is a very important issue for network security. Detection of LDoS Attacks Various LDoS detection algorithms have been proposed in recent years. Most of them can be roughly classified into time-domain and frequency-domain approaches according to their detection characteristics. • Time domain based detection algorithms Meng et al. [27] established a feedback control model to describe the process of random early detection (RED) congestion control, by which the congestion window and router queue behaviours were analyzed together. A queue distribution model consisting of the instantaneous queue and the average queue was then proposed to extract the attack feature. After that, a simple distance-based approach and an adaptive threshold algorithm were combined to detect every LDoS attack burst. The experimental results on NS2 and a testbed proved that LDoS attack bursts can almost be detected completely and that this method is especially robust to legitimate short bursts. Wu et al. [28] proposed a detection algorithm based on the multifractal characteristics of network traffic. Using the MF-DFA algorithm, it was proved that the multifractal characteristics of network traffic differ between the normal state and the LDoS attack state. The wavelet point-by-point estimation algorithm was then used to calculate the Hölder exponent to determine when the attack begins and ends. The NS2 results showed that the approach could achieve a detection probability of 92% and a false positive rate of 9%. Guo et al. [29] presented a situation-aware method based on multi-feature adaptive fusion to detect LDoS attacks in the border gateway protocol (BGP) routing system. Statistical characteristics of BGP routing information, such as the frequency of announce messages, the frequency of withdraw messages and the average autonomous system (AS) path length, were selected as representatives of the security state of the system.
Each of the above features was modeled by the reverse cloud generation algorithm, and dynamic weights were then used to fuse the submodels. Experimental results showed that this method can effectively detect not only control-plane attacks but also data-plane attacks (BGP-LDoS). Tang et al. [30] applied two-step clustering to analyze network traffic on a large time scale. Based on the characteristic that TCP traffic is abnormal when an LDoS attack occurs, the abnormal cluster was further examined using the concept of data slices on a small time scale. Experimental results on NS2 and the public datasets Lawrence Berkeley National Laboratory (LBNL) and Measurement and Analysis on the WIDE Internet (WIDE2018) showed that LDoS attacks could be effectively detected. • Frequency domain based detection algorithms Neha et al. [2] proposed an algorithm for detecting and filtering LDoS attack streams in the frequency domain. This method, based on power spectral density, monitors the aggregated flow in the cloud network in real time, and could significantly reduce the possibility of attack in a real cloud environment based on OpenStack. Chen et al. [23] combined power spectral density to propose two new information features for detecting LDoS attacks, named Fourier power spectrum entropy and wavelet power spectrum entropy. Based on these two information features, a Robust-RED queue management algorithm based on power spectral density was proposed to filter LDoS attack streams. The algorithm was verified on the NS3 simulation platform and could indeed resist different LDoS attacks. Wu et al. [20] also proposed a method based on frequency spectral analysis for detecting and filtering LDoS attack streams. The TCP streams and LDoS attack streams were transformed from the time domain to the frequency domain, and the round-trip time was obtained with a frequency domain search algorithm.
It was found that the energy of a TCP stream is mainly concentrated at the points n/RTT. Based on this feature, an infinite impulse response filtering algorithm was proposed, which can filter LDoS attack flows with as little impact as possible on legitimate TCP flows. Wu et al. [31] applied a Kalman filter to detect LDoS attacks. By analyzing the characteristics of the victim's network traffic at the beginning of LDoS attacks, the error between the one-step prediction and the optimal estimation was used as the basis for detection. These existing detection methods still have some deficiencies, such as (1) a high false negative rate caused by using only the characteristics of the time domain or the frequency domain; and (2) a lack of noise processing and adaptability: for example, key parameters such as the detection threshold depend on experience and cannot be adjusted to changes in the network environment. To address the above limitations, a new algorithm for detecting LDoS attacks is proposed in this paper. This study is based on the fact that the time-frequency joint distribution characteristics of legitimate TCP traffic will be changed by LDoS attack flows. The detection features are more robust for detecting different LDoS attacks, since the time-frequency joint distribution can simultaneously reflect the anomalies in the time domain and the frequency domain. The anomaly score is then calculated by MAF-ADM to measure that change, which is the basis for detecting LDoS attacks. Time-Frequency Joint Analysis Based Detection Metrics In this section, we first describe how to obtain the time-frequency joint distribution by performing a short-time Fourier transform on the network traffic of the bottleneck link. The reason is that when an LDoS attack occurs, the network enters a state of fake congestion and the traffic on the bottleneck link is the first to bear the brunt.
Some statistical features of the time-frequency joint distribution are extracted as detection features, which accurately represent the anomalies caused by LDoS attacks both in the frequency domain and in the time domain [32]. STFT Analysis of Network Traffic In this paper, the detection window is used as the basic unit for detecting LDoS attacks. A detection window is defined as a sample sequence consisting of network traffic samples x(τ) that are continuously acquired over a given length of time. Given a window function w(t) of fixed time width that slides along the time axis of x(τ), the short-time Fourier transform (STFT) of the signal is defined as Formula (2) [33]:

STFT_x(t, f) = ∫ x(τ) w(τ − t) e^{−j2πfτ} dτ (2)

Considering that the time series x(τ) of sampled network traffic is in discrete form, the transform must be discretized. We set Δt and Δf as the sampling intervals of the time and frequency variables respectively, and N as the total number of samples of the time series x(τ), with m, n = 1, 2, ..., N. The discrete form of the sequence's STFT is defined as Formula (3) [33]:

STFT_x(m, n) = Σ_{k=1}^{N} x(kΔt) w((k − m)Δt) e^{−j2π(nΔf)(kΔt)} (3)

The result STFT_x(m, n) of the transformation is a two-dimensional complex matrix. The rows m and columns n of the matrix correspond to the sampling points of time and frequency respectively, and the elements of the matrix correspond to the spectral amplitude. The magnitude matrix can be expressed as Formula (4):

A(m, n) = |STFT_x(m, n)| (4)

Time-Frequency Joint Distribution Based Detection Metrics The matrix A(m, n) is essentially the energy distribution of the signal at different frequencies and different times. In this subsection, using NS2, we built a dumbbell network topology identical to that of Section 5.1.1 and selected two kinds of samples (normal samples and samples containing LDoS attacks) to analyze the anomalous characteristics of the time-frequency joint distribution of TCP traffic caused by LDoS attack flows.
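As a minimal sketch of the transform described above, the magnitude matrix of Formula (4) can be computed with an off-the-shelf STFT routine. The traffic series, window length and sampling rate below are illustrative assumptions, not the paper's settings; note that scipy returns the matrix as frequency × time, i.e. transposed relative to the m = time, n = frequency convention used in the text.

```python
import numpy as np
from scipy.signal import stft

# Hypothetical traffic series: TCP packet counts sampled every 0.1 s
# over one 10 s detection window (N = 100 samples).
fs = 10.0                       # sampling rate in Hz (0.1 s period)
rng = np.random.default_rng(0)
x = rng.poisson(lam=50, size=100).astype(float)

# Discrete STFT with a sliding Hann window; f and t index the
# frequency and time bins of the two-dimensional complex matrix.
f, t, Z = stft(x, fs=fs, window="hann", nperseg=20)

# Magnitude (spectral amplitude) matrix, cf. Formula (4);
# shape is (frequency bins, time bins) in scipy's layout.
A = np.abs(Z)
print(A.shape)
```

With `nperseg=20` and a one-sided spectrum, the matrix has 11 frequency rows; the time axis depends on the overlap (scipy defaults to 50%).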
Total Signal Energy The total signal energy (TSE), denoted T, is the sum of the spectral amplitudes of all elements of the time-frequency joint distribution matrix, as in Formula (5):

T = Σ_m Σ_n A(m, n) (5)

In Figure 4, the total signal energy values of 150 detection windows acquired in the normal state and in the LDoS attack state are compared. Due to the constant preemption of resources by the LDoS attack stream, the service quality of TCP connections in the network is affected; therefore the average TCP traffic is lower, and by Formulas (4) and (5) the value of TSE is also reduced. Segmentation Frequency Ratio The segmentation frequency ratio (SFR), expressed as S = (S_Low, S_MidLow, S_MidHigh, S_High), mainly reflects the frequency distribution of the original signal. We divide the time-frequency joint distribution matrix from the highest frequency down to the DC part into four parts in the ratio 1/2, 1/4, 1/8, 1/8, namely high frequency, medium-high frequency, medium-low frequency, and low frequency. This division is based on the fact that the anomalies in the low frequency part are more obvious and require a finer subdivision. Taking S_Low as an example, the calculation is given by Formula (6):

S_Low = ( Σ_{n ∈ Low} Σ_m A(m, n) ) / T (6)

Figure 5 shows the comparison of the instantaneous frequency at a certain moment in the normal state and in the LDoS attack state. In the normal state the network traffic is stable, fluctuates little, and is concentrated in the low frequency part. But the pulsed attack flow makes the network link switch consecutively between overload and underload; the congestion control mechanism is triggered repeatedly, so the TCP traffic goes up and down repeatedly and dramatically. Therefore the SFR is more even in the LDoS attack state.
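The two metrics can be sketched in a few lines of Python. The band boundaries below (rows assumed ordered from DC upward, matching scipy's frequency × time layout rather than the paper's A(m, n)) and the helper names are assumptions, not the authors' code.

```python
import numpy as np

def tse(A):
    """Total signal energy: sum of all spectral amplitudes, cf. Formula (5)."""
    return float(A.sum())

def segmentation_frequency_ratio(A):
    """Share of energy in four frequency bands, cf. Formula (6).

    Rows of A are assumed ordered from DC (row 0) up to the highest
    frequency; the split follows the 1/2, 1/4, 1/8, 1/8 ratios from
    the highest frequency down to DC described in the text.
    """
    n_rows = A.shape[0]
    lo = n_rows // 8            # lowest-frequency eighth
    mid_lo = n_rows // 8
    mid_hi = n_rows // 4
    total = tse(A)
    s_low = A[:lo].sum() / total
    s_midlow = A[lo:lo + mid_lo].sum() / total
    s_midhigh = A[lo + mid_lo:lo + mid_lo + mid_hi].sum() / total
    s_high = A[lo + mid_lo + mid_hi:].sum() / total
    return s_low, s_midlow, s_midhigh, s_high

A = np.ones((16, 10))           # toy magnitude matrix
print(tse(A))
print(segmentation_frequency_ratio(A))
```

On a uniform matrix the four shares simply reproduce the band-size ratios 1/8, 1/8, 1/4, 1/2 and sum to one.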
Normalized Variance of Segmentation Frequency The normalized variance of segmentation frequency (NVSF), denoted N = (N_Low, N_MidLow, N_MidHigh, N_High), mainly reflects the fluctuation of energy within each frequency part. The normalized variance is the variance obtained after dividing each element of the entire time-frequency joint distribution matrix by the mean of all its elements. For the same reason that the signal in the low frequency part is more concentrated, so that changes there are more obvious, the division into frequency parts is the same as for SFR. N_Low is then calculated as Formula (7):

N_Low = Var_{(m, n), n ∈ Low} ( A(m, n) / Ā ) (7)

where Ā is the mean of all elements of A. Figure 6 compares the normalized variance of each frequency part between detection windows in the normal state and in the LDoS attack state. The segmentation frequency distribution in the normal state is more concentrated, while in the LDoS attack state the distribution over the frequency parts is more even. LDoS Attacks Detection Method In this section we present MAF-ADM for detecting LDoS attacks, shown in Figure 7, which maps features of network traffic to an anomaly score of the network state. This study is based on isolation forest, an excellent anomaly detection method built purely on the concept of isolation, without employing any distance or density measure. We first utilize the features of the time-frequency joint distribution to generate isolation trees for the normal state (traffic data collected from bottleneck links in the network under normal conditions), and then fuse all isolation trees into an isolation forest in a linearly weighted manner. With the isolation forest, we can evaluate the anomaly score, smoothed by the weighted moving average algorithm, to judge whether LDoS attacks occur. Generate Isolation Trees As analyzed above, we can utilize the features of the time-frequency joint distribution to describe the possibility of the network suffering from LDoS attacks.
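A companion sketch for the NVSF metric: each element is first normalized by the mean of the whole matrix, then the variance is taken per band. The band indexing (rows ordered from DC upward, same 1/8, 1/8, 1/4, 1/2 split) is an assumption, as above.

```python
import numpy as np

def nvsf(A):
    """Normalized variance of each frequency band, cf. Formula (7).

    Each element is divided by the mean of the whole time-frequency
    matrix, then the variance is computed per band. Band boundaries
    reuse the 1/8, 1/8, 1/4, 1/2 split from DC upward (assumed layout).
    """
    A_norm = A / A.mean()
    n = A.shape[0]
    bounds = [0, n // 8, n // 4, n // 2, n]   # low, mid-low, mid-high, high
    return tuple(float(A_norm[a:b].var()) for a, b in zip(bounds, bounds[1:]))
```

On a uniform matrix every band's normalized variance is zero; under an LDoS attack the text predicts the variances spread out across bands.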
Simply combining these features to construct a multi-dimensional description model would be costly. Therefore, we build isolation trees for these features, which have low linear time complexity and a small memory requirement. Suppose Y = {y_i}, y_i = (T, S, N), i = 1, 2, ..., n, is the set of detection metrics of the training data with d characteristic dimensions; the binary tree structure called an isolation tree is used to separate samples containing LDoS attacks from normal samples. Since samples containing LDoS attacks usually have the characteristics of being sparsely distributed and distant from the dense normal samples, they are closer to the root node in the isolation tree structure and therefore more easily isolated. The construction procedure [34] of an isolation tree is to randomly select a feature q and a split value p and recursively split the training data Y until one of the following three conditions is met: • The isolation tree reaches a defined height; • There is only one sample on the node; • The features of all samples on the node are the same. Linear Weighted Fusion In [34], the path length h_j(y_i) is defined as the number of edges traversed by the sample y_i ∈ Y from the root node to the external node in the j-th isolation tree, which describes its deviation from the normal state. However, the strategy of randomly selecting features and split values may leave some isolation trees unable to distinguish between normal samples and samples containing LDoS attacks. To analyze the ability of isolation trees to isolate samples containing LDoS attacks, we again used two kinds of samples (normal samples and samples containing LDoS attacks) to calculate the path length in each isolation tree. Figure 8 shows that the isolation ability differs from tree to tree.
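The construction steps and the path length h_j(y_i) can be sketched as follows; this is a minimal illustrative implementation of a single isolation tree over plain feature vectors, not the authors' code (the standard external-node depth adjustment of isolation forest is omitted for brevity).

```python
import random

def build_itree(Y, height, max_height):
    """Grow one isolation tree over samples Y (lists of feature values).

    Splitting stops when the defined height is reached, a node holds a
    single sample, or all remaining samples are identical -- the three
    conditions listed in the text.
    """
    if height >= max_height or len(Y) <= 1 or all(y == Y[0] for y in Y):
        return {"size": len(Y)}                    # external node
    q = random.randrange(len(Y[0]))                # random feature q
    lo, hi = min(y[q] for y in Y), max(y[q] for y in Y)
    if lo == hi:                                   # feature is constant here
        return {"size": len(Y)}
    p = random.uniform(lo, hi)                     # random split value p
    left = [y for y in Y if y[q] < p]
    right = [y for y in Y if y[q] >= p]
    return {"q": q, "p": p,
            "left": build_itree(left, height + 1, max_height),
            "right": build_itree(right, height + 1, max_height)}

def path_length(y, node, depth=0):
    """Edges traversed from the root to the external node holding y."""
    if "size" in node:
        return depth
    branch = node["left"] if y[node["q"]] < node["p"] else node["right"]
    return path_length(y, branch, depth + 1)
```

Sparse, distant samples tend to be split off near the root, so their path length is short, which is exactly what the anomaly score exploits.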
For example, in the 22nd and 45th isolation trees the two samples are widely separated, while in the 63rd and 87th isolation trees they are too close to be distinguishable. Path lengths in different tree structures are not directly comparable, so the anomaly score S is introduced to fuse the normalized results of all isolation trees. Using the mean of the path lengths to calculate the anomaly score ignores the differences in isolation ability between trees; in order to synthesize the result of each isolation tree more rationally, we use a weighted path length instead of the mean value. The weight of the j-th isolation tree is obtained by Formula (8), where w_j^cur is the current weight of the j-th isolation tree, w_j^pre is the previous weight, and λ ∈ [0, 1] is used to control the speed of weight updating so that the method can adapt to scene changes. d_j is the current isolation ability of the j-th isolation tree, which is calculated as Formula (9). There are t isolation trees in total. The anomaly score can then be calculated by Formula (10):

S(y_i, n) = 2^{ −( Σ_{j=1}^{t} w_j^cur h_j(y_i) ) / c(n) } (10)

where c(n) is the average depth of isolation trees, used to normalize the result; its calculation, Formula (11) [35], is as follows:

c(n) = 2H(n − 1) − 2(n − 1)/n (11)

H(i) is the harmonic number and can be estimated via Euler's constant as Formula (12):

H(i) ≈ ln(i) + 0.5772156649 (12)

Discrimination of LDoS Attacks Network traffic is random, which means that many accidental factors, such as data stream bursts, data stream silences and occasional noise, may easily cause false positives. To solve this problem, we adopt the weighted moving average algorithm to smooth the anomaly score as Formula (13). The anomaly scores before the current detection window are used to represent the abnormality degree of the current detection window. As Figure 9 shows, the curve of the anomaly score smoothed by the weighted moving average algorithm is smoother, so that false alarms can be effectively reduced.
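The normalization constant c(n), the harmonic-number estimate, and the weighted score of Formulas (10)-(12) can be sketched as follows. The weighted path length is taken as an input here, since the weight update of Formulas (8) and (9) is not reproduced in the text.

```python
import math

EULER_GAMMA = 0.5772156649

def harmonic(i):
    """Harmonic number H(i), estimated via Euler's constant, cf. Formula (12)."""
    return math.log(i) + EULER_GAMMA

def c(n):
    """Average isolation tree depth used to normalize path lengths,
    cf. Formula (11): c(n) = 2H(n-1) - 2(n-1)/n."""
    if n <= 1:
        return 0.0
    return 2.0 * harmonic(n - 1) - 2.0 * (n - 1) / n

def anomaly_score(weighted_path_length, n):
    """Weighted anomaly score, cf. Formula (10): 2 ** (-E(h) / c(n))."""
    return 2.0 ** (-weighted_path_length / c(n))
```

The three limit cases quoted later in the text fall out directly: a weighted path length equal to c(n) gives a score of 0.5, a path length near 0 pushes the score toward 1 (LDoS attack), and a long path length pushes it toward 0 (normal).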
α_k is the weight of detection window k, as in Formula (14). Considering that the values of adjacent windows are similar, a larger weight is given to the nearer detection window so that the smoothed value is closer to the real value. The criterion for determining whether the sample y_i includes LDoS attacks is then as follows: • When ∑ t k=1 w^cur_k h_k(y_i) → c(n), S(y_i, n) → 0.5, which means none of the samples in the data set contain obvious LDoS attacks; • When ∑ t k=1 w^cur_k h_k(y_i) → 0, S(y_i, n) → 1, which means the sample includes LDoS attacks; • When ∑ t k=1 w^cur_k h_k(y_i) → n − 1, S(y_i, n) → 0, which means the sample is normal. The anomaly score calculated by the above algorithm is a continuous value between 0 and 1, so we need a threshold to decide whether an LDoS attack has occurred. By the Central Limit Theorem, the anomaly scores are approximately normally distributed when the number of samples is sufficient. Therefore, the threshold can be calculated as in Formula (15). The constant z in the confidence interval is related to the detection accuracy and is set to 2.58 in this paper. The sample y_i is then judged to contain an LDoS attack when its anomaly score is larger than the threshold. Experiments and Results Analysis In this section, we verify the detection performance of this method on NS2 [36], a testbed, and public datasets [37,38]. The experiments on NS2 and the testbed were used to verify the stability and accuracy of the method for detecting LDoS attacks. The experiments on the public datasets were used to evaluate the false positive rate of the method in complex network environments. The indexes used to evaluate the detection performance are detection accuracy, false negative rate, and false positive rate. The Experimental Environment We built a dumbbell-type network topology in NS2, as shown in Figure 10.
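The smoothing and thresholding steps can be sketched as below. The exact shapes of Formulas (13)-(15) are not reproduced in the text, so this is a plausible reading, assuming a convex weighted moving average over preceding windows and a mean-plus-z-sigma threshold on approximately normal scores (z = 2.58 as stated).

```python
import statistics

def smooth_scores(scores, weights):
    """Assumed form of Formula (13): weighted moving average of the anomaly
    scores of the detection windows before the current one, with larger
    weights alpha_k on nearer windows (Formula (14)); weights sum to 1."""
    assert len(scores) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(a * s for a, s in zip(weights, scores))

def dynamic_threshold(train_scores, z=2.58):
    """Assumed form of Formula (15): with scores approximately normal (CLT),
    flag a window whose smoothed score exceeds mean + z * std."""
    mu = statistics.mean(train_scores)
    sigma = statistics.stdev(train_scores)
    return mu + z * sigma
```

A window is then judged as under LDoS attack when its smoothed anomaly score exceeds the threshold computed from attack-free training scores.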
There were a total of 25 legal flows in the network, which included 15 TCP flows, five TCP flows, and five UDP flows for generating background traffic. Router two and Router three were connected by a bottleneck link with a bandwidth of 10 Mbps and a delay of 30 ms. Except for the bottleneck link, all other links had a bandwidth of 100 Mbps and a delay of 15 ms. All TCP flows used the New Reno congestion control protocol with RTO set to 1.0 s. All routers used RED as the queue management algorithm. Other parameters were the defaults of the NS2 platform. In this topology, legitimate users communicated with each other using TCP and UDP connections, while the LDoS attacker used the UDP protocol to send periodic pulse streams. At Router three, we extracted and sampled the number of arriving TCP packets at a period of 0.1 s to obtain the time series data. The duration of the detection window was set to 10 s. Performance of LDoS Attacks Detection Based on the analysis in Section 2.1, we conducted multiple groups of simulations to evaluate our method for detecting LDoS attacks with different parameters. The specific settings of the attack parameters are shown in Table 1. The anomaly score of normal network traffic was used to determine an appropriate threshold for detecting LDoS attacks. A detection window was identified as containing an LDoS attack when its anomaly score was larger than 0.5264. From G2 to G4, we set up controlled experiments for the LDoS parameters respectively. The variation range of anomaly scores under different attack parameters is shown in Figure 11. Figure 11a,b show that the anomaly score distribution of network traffic under LDoS attacks did not vary much when P and T changed. From Figure 11c, we can observe that the anomaly score distribution of normal network traffic was closest to that of network traffic under LDoS attacks with R = 2 Mbps.
The reason is the low ratio of the attack rate of the LDoS attack stream to the bottleneck link bandwidth. When the LDoS attack stream has only a weak advantage in competing with the legitimate TCP streams for resources, it can hardly cause link congestion severe enough to reduce the quality of service. We further compared the TCP traffic and anomaly scores of the normal state, LDoS attacks with (R = 2 Mbps, L = 0.1 s, P = 1 s), and LDoS attacks with (R = 30 Mbps, L = 0.1 s, P = 1 s), as shown in Figure 12. These results are consistent with our analysis. For example, Figure 12a,b show that the distributions of TCP traffic and anomaly scores in the normal state and under the LDoS attack with (R = 2 Mbps, L = 0.1 s, P = 1 s) are very similar, which means the attack effect was so weak that it could hardly reduce the quality of network service. In addition, the first detection window of the normal state was misjudged as being under LDoS attack; the reason is that the network at that time was just establishing TCP connections, and since the traffic distribution was similar to the state under LDoS attack, a false alarm occurred. Figure 12c corresponds to the strongest attack, under which the quality of service was severely reduced and the anomaly score was therefore the highest. Testbed Experimental Environment To verify the detection performance of this method for LDoS attacks in a real network environment, we established the network platform presented in Figure 13. The testbed consisted of six legal users, one LDoS attacker, two routers, and one server. There were six legal flows in the network, which included five TCP flows and one UDP flow. The bottleneck link bandwidth between Router one and Router two was 10 Mbps, and the remaining links had a bandwidth of 100 Mbps. In this topology, six computers (PC1-PC6) used socket programs to establish connections with the server. PC1-PC5 used the TCP protocol and PC6 used the UDP protocol. All six computers sent packets to the server continuously.
The LDoS attacker reduced the quality of network service by periodically sending high-speed pulse streams. We set the attack period to 1 s and adjusted the attack intensity by changing the attack duration and rate. The attack rate was controlled by changing the number of threads: the larger the number of threads, the higher the attack rate. Performance of LDoS Attacks Detection We also conducted multiple sets of experiments on the testbed to evaluate the performance of our algorithm. The specific experimental parameters are shown in Table 2. The sampling time and the duration of the detection window were set the same as in NS2. The time-frequency joint distribution of the network traffic in a detection window was then obtained by STFT, and the feature matrix was calculated. We used the matrix extracted from the training data to construct the isolation forest, which was then used to calculate the anomaly scores of the four groups of test data. Among the four groups of test data, the method proposed in this paper identified most of the attack data, as presented in Figure 14. Misjudgments mainly occurred at the moments when an attack had just begun or ended. We analyzed the network traffic of G4 in Figure 15: the network traffic was in the transition stage between the LDoS attack state and the normal state, so it was wrongly judged. Experiments on Public Datasets LBNL and WIDE2018 In this subsection, we performed experiments on the Measurement and Analysis on the WIDE Internet (WIDE2018) dataset and the Lawrence Berkeley National Laboratory (LBNL) dataset. These datasets cover links of various speeds, from 2 Mbps to 10 Gbps. Neither dataset contains LDoS attacks, so we used only the false positive rate to evaluate the detection performance. From WIDE2018, we selected 16 days of data (from 20180101 to 20180216) and used 35 days of data (from 20180217 to 20180528) for training.
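Obtaining the time-frequency joint distribution of a detection window via STFT can be sketched as below. The window length, hop size, and Hann taper are illustrative assumptions (the paper does not state its STFT parameters); the sampled packet-count series stands in for the real traffic data.

```python
import cmath
import math
import random

def stft_magnitude(x, win=32, hop=16):
    """STFT magnitude of a packet-count series: rows are time frames, columns
    are one-sided frequency bins, giving a time-frequency joint distribution
    for one detection window."""
    def dft_mag(frame):
        n = len(frame)
        return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n)))
                for k in range(n // 2 + 1)]          # one-sided spectrum
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * t / (win - 1))
            for t in range(win)]                     # taper to reduce leakage
    return [dft_mag([a * w for a, w in zip(x[i:i + win], hann)])
            for i in range(0, len(x) - win + 1, hop)]

# A 10 s detection window sampled every 0.1 s gives 100 points per window.
random.seed(1)
window = [50 + random.gauss(0, 5) for _ in range(100)]
features = stft_magnitude(window)
```

The resulting matrix (here 5 frames by 17 frequency bins) is the kind of per-window feature matrix that would feed the isolation forest.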
In LBNL, 21 days of data were selected, and the first 600 s of each day were used for training. We applied the method to classify the network traffic, and the classification results are shown in Figure 16. When classifying this real normal TCP traffic, our method generated 46 false alarms on WIDE2018, and the false positive rate on LBNL was only 0.71%. Comparison with Other LDoS Methods The detection method in this paper is compared with existing algorithms for detecting LDoS attacks from recent years in terms of experimental platform, detection accuracy, false negative rate, and false positive rate, as shown in Table 3. It shows that our method can effectively detect LDoS attacks of different intensities on NS2 and the testbed. Moreover, the detection performance on LBNL and WIDE also proves that our method can overcome the influence of noise and has a low false alarm rate in real, complex network environments. Compared with the other methods, the proposed method achieved better detection performance, with higher detection accuracy and a lower false positive rate. Conclusion and Future Work In this paper, we analyzed how the statistical attributes of TCP traffic in the time-frequency joint domain change under LDoS attacks. Based on that, we developed MAF-ADM for detecting LDoS attacks. On the one hand, the weighted fusion algorithm was applied to build the isolation forest according to the ability of the isolation trees to isolate samples containing LDoS attacks. On the other hand, we adopted the weighted moving average algorithm and the dynamic threshold algorithm to calculate the anomaly score and threshold according to different network environments. The proposed method detected 100% of LDoS attacks on the NS2 simulation platform, which shows good sensitivity to LDoS attacks with different parameters.
The results of the experiments on the testbed and the public datasets also demonstrate that this method has good adaptability in complex real network environments and is immune to normal fluctuations of network traffic. In conclusion, the proposed method can effectively distinguish LDoS attacks from legitimate traffic, with better adaptability, higher accuracy, and a lower false positive rate. In future work, we will continue our research in two directions. First, we will study variations of LDoS attacks and how they work, such as aggregated or synchronous low-rate distributed DoS attacks. Another promising direction is the development of MAF-ADM to defend against variants of LDoS attacks and its deep integration with other network security appliances against LDoS attacks, such as intrusion detection in wireless sensor networks and prevention appliances in cloud computing.
Photocatalytic Water Splitting—The Untamed Dream: A Review of Recent Advances Photocatalytic water splitting using sunlight is a promising technology capable of providing high energy yield without pollutant byproducts. Herein, we review various aspects of this technology including chemical reactions, physiochemical conditions and photocatalyst types such as metal oxides, sulfides, nitrides, nanocomposites, and doped materials followed by recent advances in computational modeling of photoactive materials. As the best-known catalyst for photocatalytic hydrogen and oxygen evolution, TiO2 is discussed in a separate section, along with its challenges such as the wide band gap, large overpotential for hydrogen evolution, and rapid recombination of produced electron-hole pairs. Various approaches are addressed to overcome these shortcomings, such as doping with different elements, heterojunction catalysts, noble metal deposition, and surface modification. Development of a photocatalytic corrosion resistant, visible light absorbing, defect-tuned material with small particle size is the key to complete the sunlight to hydrogen cycle efficiently. Computational studies have opened new avenues to understand and predict the electronic density of states and band structure of advanced materials and could pave the way for the rational design of efficient photocatalysts for water splitting. Future directions are focused on developing innovative junction architectures, novel synthesis methods and optimizing the existing active materials to enhance charge transfer, visible light absorption, reducing the gas evolution overpotential and maintaining chemical and physical stability. Introduction The continual increase in world population and lifestyle standards has led to substantial growth in global energy consumption [1].
Amounting to about 90% of global energy, fossil fuels supply the transportation and industrial sectors, leading to high emissions of greenhouse gases including carbon dioxide [2,3] and resulting in a substantial depletion of carbon-based resources that could otherwise be used for the production of valuable chemicals. In 2013, worldwide energy consumption was 17 TW and is expected to at least double by 2050 [4]. Development of a clean and renewable source of energy is crucial to mitigate the consequences of fossil fuel consumption, including climate change, eventual depletion of energy supplies, market uncertainty, and foreign oil dependency [5][6][7]. There are several alternative energy sources including wind, geothermal, hydropower, and solar, which are relatively clean and sustainable in comparison with fossil fuels; however, each of them has some limitations which make this substitution challenging. Electricity generated by wind turbines is [...] Hydrogen and Related Concerns Although hydrogen, with its unique properties of high energy efficiency, easy storage, and freedom from pollution, has been considered a promising alternative to conventional sources of energy, H2 has some drawbacks which need to be addressed for it to be practically used as fuel. Storing hydrogen as a compressed gas or liquid requires energy and additional costs [16]. The limited infrastructure for hydrogen fueling is another factor limiting its practical use. The most important drawback of current hydrogen production methods is the reliance on fossil fuels (natural gas reforming) for its production. Extensive research has therefore been conducted to explore techniques for producing hydrogen from renewable sources. Hydrogen Evolution by Solar Energy Steam methane reforming is a widely used technique to produce hydrogen from natural gas at high temperatures (up to 900 °C) and pressures [17,18].
Coal gasification is also employed to generate hydrogen through partial oxidation at high temperatures and pressures (up to 5 MPa) [17]. Under thermochemical routes, biomass materials such as crops, animal wastes, and plants generate hydrogen through pyrolysis and gasification, which produce byproducts of CO, CO2, and methane [19]. Biological processes for hydrogen production from biomass materials are other promising techniques [20], but they are not economically feasible yet. Consequently, current hydrogen generation techniques suffer from a reliance on fossil fuel sources, harsh process conditions, and significant costs. Alternative methods that utilize renewable sources of energy for hydrogen production, such as hydropower, wind power, and sunlight, must be explored. Among these sustainable energies, solar energy has been considered the most promising source due to its lesser location dependence in comparison to wind and hydropower. The combination of solar energy with plentiful water resources provides a reasonable platform for hydrogen generation, which is called solar water splitting [21][22][23]. There are three approaches to splitting water using solar energy [24]: (1) thermochemical water splitting; (2) photobiological water splitting; and (3) photocatalytic water splitting. Although the thermochemical approach is the simplest, the requirement for large solar concentrators makes this method highly expensive and less favorable [25]. Biophotolysis can be divided into water biophotolysis and organic biophotolysis depending on the microorganism type, product, and reaction mechanisms [26]. Although water biophotolysis is cleaner than organic biophotolysis (regarding CO2 emissions) [27], low yields of hydrogen production, toxic effects of enzymes, and limitations on scaling up the process remain [28].
Photocatalytic water splitting possesses several advantages over thermochemical and photobiological water splitting, including: (1) low cost (capable of reducing the photovoltaic arrays) [29]; (2) relatively high solar-to-H2 efficiency; (3) the capability of separating the H2 and O2 streams; and (4) a flexible reactor size appropriate for small-scale usage [30]. The US Department of Energy (DOE) has established an ultimate target of 26% for the solar-to-hydrogen energy conversion ratio, which requires aggressive research to improve the current status [31]. Photocatalytic Water Splitting To achieve overall water splitting and investigate structure-property relationships of photocatalysts, the two half reactions of water splitting have been studied extensively. These half reactions, the hydrogen and oxygen evolution reactions, usually involve the use of sacrificial reagents to improve the hydrogen and oxygen yield. Even though a catalyst can catalyze both reactions with the aid of sacrificial electron donors and acceptors, this may not work for overall water splitting. To clarify, water splitting discussed in this review refers to the direct splitting of water into hydrogen and oxygen in a 2:1 ratio by the use of a proper photocatalyst. Several research and review articles have proposed mechanisms of photocatalytic water splitting [32][33][34]. The reaction is first initiated by photon absorption, which generates numerous electron-hole pairs with sufficient potentials. Those charge carriers then migrate to the surface of the catalysts and react with surface active sites. Finally, the photo-generated electrons reduce water to form hydrogen, and the holes oxidize water molecules to give oxygen. Fujishima and Honda first reported overall photocatalytic water splitting by a titanium dioxide (TiO2) electrode [35].
Since this pioneering work, numerous research studies of water splitting have been conducted on semiconductor materials, especially via heterogeneous catalysis. Semiconductors have non-overlapping valence bands and conduction bands, with a band gap between those of insulators and conductors. When sufficient photochemical energy is applied, electrons are excited into the conduction band, leaving electron holes in the valence band and excess electrons in the conduction band. These electron-hole pairs play key roles in the redox reactions of water splitting. The electrons are responsible for reducing protons to hydrogen molecules, and oxygen anions are oxidized by the holes. In order to initiate the redox reaction, the highest level of the valence band should be more positive than the water oxidation level (E(O2/H2O) = 1.23 V vs. the normal hydrogen electrode, NHE), while the lowest level of the conduction band should be more negative than the hydrogen evolution potential (E(H2/H2O) = 0 V vs. NHE). Therefore, the minimum band gap for a suitable water splitting photocatalyst is 1.23 eV. Accordingly, TiO2, ZrO2, KTaO3, SrTiO3, and BiVO4 are good candidates for photocatalytic water splitting [36][37][38]. However, some typical semiconductors such as SiC, ZnO, and CdS, even though their band gaps fit well into the water splitting redox potential, are not active for water splitting due to photocorrosion. Photocorrosion happens when an anion from the catalyst itself is oxidized by the photogenerated holes instead of water. Another challenge is that most semiconductor catalysts only operate under ultraviolet (UV) light, which accounts for only ca. 4% of the total solar energy [32][33][34]. To improve the solar energy efficiency, photocatalysts able to work under visible light are highly desirable, since visible light contributes almost half of the incoming solar energy.
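The band-edge criterion just described can be illustrated numerically. The redox potentials follow the text; the TiO2 band-edge values in the comment are approximate, assumed figures for illustration, not taken from this review.

```python
# Potentials vs. NHE at pH 0: H+/H2 at 0 V, O2/H2O at 1.23 V (as in the text).
E_H2 = 0.0
E_O2 = 1.23

def can_split_water(cb_edge_v, vb_edge_v):
    """Thermodynamic suitability: the conduction band edge must lie above
    (more negative than) the hydrogen evolution potential, and the valence
    band edge below (more positive than) the water oxidation potential."""
    return cb_edge_v < E_H2 and vb_edge_v > E_O2

def min_band_gap(cb_edge_v, vb_edge_v):
    """Band gap implied by the two edges, in eV; must be at least 1.23 eV."""
    return vb_edge_v - cb_edge_v

def absorption_edge_nm(band_gap_ev):
    """lambda = hc / E; with E in eV, lambda[nm] ~= 1240 / E[eV]."""
    return 1240.0 / band_gap_ev

# Assumed approximate edges for anatase TiO2: CB ~ -0.3 V, VB ~ +2.9 V vs NHE,
# giving a ~3.2 eV gap whose absorption edge (~388 nm) lies in the UV,
# consistent with the UV-only activity discussed above.
```

The same helper also shows why visible-light absorbers need gaps below about 3 eV: a 3 eV gap absorbs only below ~413 nm, at the violet edge of the visible range.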
The band gap of a semiconductor material should be less than 3 eV to have a visible light response. Recently, semiconductor catalysts coupled with carbon materials or precious metal particles have been shown to have a better visible light response [35,36]. In addition, metal sulfides, metal nitrides, and metal-free catalysts are also promising catalytic systems for photocatalytic water splitting by visible light [37][38][39][40]. Traditional water-splitting photocatalysts are based on transition metal oxides, which form stable compounds due to the high electronegativity of oxygen atoms [41]. Transition metal oxides can be classified into two groups according to their d orbital structure. Early transition metals like Ti, V, Nb, and W have empty d orbitals and thus a low valence band energy. Also, their valence bands are strongly influenced by the oxygen 2p orbitals. As a result, these materials have large band gaps, which make them less efficient for photocatalytic reactions. Several strategies, including doping and creating defects, have been employed to increase their light absorption efficiency. For example, Zhao et al. successfully designed defect-enriched black TiO2 through high-temperature hydrogenation, and the synthesized material exhibited excellent photocatalytic hydrogen evolution reactivity [42]. On the other hand, late transition metals such as Mn, Fe, Co, and Ni have occupied d orbitals. Their oxides usually have small band gaps, and strong d-d transitions play significant roles [41]. Fe2O3 is a typical example in this group due to its abundant and inexpensive nature, and its attractive photocatalytic activities have been reported [43,44]. Low polaron conductivity is a disadvantage of the late transition metal oxides [45]. To overcome those limitations, multicomponent metal oxides have been developed. Moreover, metal nitrides and metal sulfides have been synthesized and have shown better photocatalytic activities.
Wei et al. have reported that combining cations with s2 and d0 configurations can lower the band gap: the coupling between the s band from the s2 cation and the p band from oxygen can raise the valence band level, while the coupling between the d band from the d0 cation and the p band can lower the conduction band level [46]. A typical example of this type of ternary oxide is BiVO4; its photocatalytic properties have been intensively studied over the years [47][48][49][50]. Further doping BiVO4 with other cations such as Ag+, V5+, and W6+ can increase its electronic conductivity, resulting in better catalytic performance [51][52][53]. Other examples of band gap tuning by ternary oxides include CuWO4, ZnFe2O4, CaFe2O4, CuBi2O4, and CuNb3O8 [54][55][56][57][58]. Besides metal doping techniques, nitrogen substitution can also decrease the band gap due to its higher-lying 2p orbital levels [59,60]. Like nitrogen, sulfur and selenium also possess higher-lying p bands than those of oxygen, so they can likewise be used to create materials with smaller band gaps than their oxide counterparts [61][62][63]. Moreover, modification of the catalysts with silicon, group III-V semiconductors, and carbon-based materials has been reported and proved to be an efficient method for developing photoactive materials [64][65][66]. A summary of very recent photocatalysts is presented in Table 1. It is crucial to note that the amount of active photocatalyst material, the light source, the turnover frequency, and the catalytic stability differ in each entry of the table. Therefore, hydrogen production should not be deemed the sole measure of performance in every system. Once the electron-hole pairs are generated, these charge carriers need to move to the surface of the catalysts and catalyze water splitting at the interfaces between the electrode and the electrolyte. The major challenge in this step is the recombination of electrons and holes [32,34].
The photogenerated electron-hole pairs can recombine in a short period of time before they catalyze the redox reactions, releasing heat or photon energy. In general, fewer defects and a small particle size are believed to inhibit the recombination of electrons and holes. Surface defects usually serve as adsorption sites for electrons and holes and facilitate their recombination before the redox reaction, thus decreasing photocatalytic activity. Highly crystalline and stoichiometric materials have fewer defects on the surface; therefore, they are beneficial for the overall water splitting reaction. On the other hand, nanosized materials can provide short diffusion distances for electrons and holes to reach the surface active sites, thus limiting the recombination probability. Moreover, materials with small particle size usually have a high surface area, which contributes to effective interaction between charge carriers and surface active sites. Lastly, the migrated electrons and holes interact with surface active sites and go through a series of redox reactions to produce hydrogen and oxygen. At this point, the intrinsic activity and the number of surface active sites become crucial. Even if the photogenerated electrons and holes are well separated and reach the material surface, the reaction cannot happen without proper active sites. The bottom level of the conduction band of many transition metal oxides is not negative enough to start the hydrogen evolution reaction, so co-catalysts such as precious metals and NiO are needed to provide assistance for water reduction [32]. However, the top level of the valence band of metal oxides is usually positive enough to oxidize water to oxygen without the aid of co-catalysts.
High surface areas can provide more accessible active sites and are reported to be beneficial for the water splitting reaction, but this factor is less influential than other structural parameters such as crystallinity and particle size [67][68][69][70]. This is because the adsorption of reactant water molecules is not as dominant in water splitting as in other reactions like dye degradation. Moreover, splitting water into hydrogen and oxygen is an energy-demanding reaction, which is thermodynamically unfavorable; the backward reaction is more likely to occur. Therefore, the separation and removal of the produced oxygen and hydrogen play a major role in this reaction. Heterogeneous photochemical water splitting consists of three components: a catalyst, a visible light absorber, and a sacrificial electron donor. Although the basic principles of photochemical and photoelectrochemical systems are identical, they differ in their setup. In photochemical reactions, there is a semiconductor-electrolyte junction at which the water splitting reaction takes place (Figure 1). The required potential for water splitting is generated at the semiconductor-liquid interface. The semiconductor should be stable in the electrolyte to prevent any corrosion. Depending on the band edge position of the semiconductor, as discussed previously, it can be active in hydrogen production, oxygen production, or overall water splitting [89]. Photo-Electrochemical Reactions In photoelectrochemical (PEC) water splitting, a photocatalyst, which is a semiconductor, is irradiated by UV-visible light with energy greater than or equivalent to the band gap of the semiconductor (Figure 2). The light energy is absorbed by the photocatalyst and results in charge separation into the valence band and the conduction band: the holes are produced at the valence band, and the photoexcited electrons are located in the conduction band.
The holes at the valence band trigger the oxidation of water at the semiconductor surface, while the photo-excited electrons at the conduction band reduce the adsorbed H+ to H2. In photoelectrochemical water splitting, semiconductors are mainly applied as a photocathode or photo-anode depending on which reaction is favored. The semiconductor electrode should be in contact with an electrolyte which contains a redox couple. In PEC water splitting, the overall reaction takes place at two different electrodes, and the potential needed for water splitting is provided by illuminating the cathode or anode [91].
Reaction Setup The most common experimental setup used by researchers consists of a reaction cell, a gas circulation pump, vacuum pumps, and a gas chromatograph detector. The oxygen and hydrogen produced can also be detected using oxygen and hydrogen sensors, or by volumetric methods. The reaction solution should be purged with inert gases before testing, and the whole setup should be air-free to measure the amount of evolved oxygen accurately. Several light sources can be used to initiate the reaction. For photocatalysts with a UV light response, high-pressure mercury lamps are employed and the reaction cell should be quartz. For catalysts with band gaps smaller than 3 eV, a 300 W xenon lamp and a filter are used to generate visible light. A solar simulator is also used as the incident light when evaluating solar hydrogen evolution.
Different types of reaction cells have been reported in the literature. Cells with two simultaneous semiconductors were employed in the 1970s and 1980s [39,69]. Single-junction cells have been reported to drive the hydrogen evolution reaction; however, they are not sufficient for overall water splitting due to insufficient photovoltage [67,68]. Multi-junction devices coupled with electrocatalysts can provide a large enough photovoltage to drive water splitting [70]. A monolithic three-junction amorphous silicon photovoltaic cell coupled to a cobalt phosphate and Ni-Zn-Mo tri-metal catalyst has been reported and exhibited an efficiency of 4.7% [92]. Photocatalytic Condition Various parameters affect the photocatalytic activity of an inorganic photoconductor, including surface chemistry, surface and junction defects [93], crystallinity, doping and deep traps, band edge positions, particle size, and morphology [94]. A variety of methods have been tried for the controlled synthesis of photocatalysts to tune these variables, including hydrothermal [95], microwave-assisted [96], surfactant-assisted [97], and sonochemical [98] synthesis. The synthesis parameters modify the activity of the catalyst, especially when dealing with morphologically active nanoparticles and high-surface-area structures. Catalyst synthesis conditions including temperature, surfactants, concentration, and pH impact structural features including the crystal size, shape, and structure of the material [99]. The concentrations of building blocks in the solution affect nucleation and growth of the crystal structure, ultimately determining the activity. This is particularly tunable for co-catalytic systems, where one-dimensional and two-dimensional structured materials have a geometric dependency.
Copper oxide and zinc oxide core-shell nanowires [100], a Cu 2 O/CuO heterojunction decorated with nickel [101], three-dimensional branched cobalt-doped α-Fe 2 O 3 nanorod/MgFe 2 O 4 heterojunctions [102], and CuO nanoplates coupled with anatase TiO 2 [103] are examples of geometrically active co-catalytic systems. For the synthesis of inorganic photocatalysts, pH along with hydrothermal temperature, treatment time, and solvent ratio controls the morphology [104][105][106][107][108][109]. For BiVO 4 , solvent volume ratios of ethylene glycol over water (EG/H 2 O) ranging from 10/50 to 60/0 completely change the morphology of the nanostructures. At 10/50 and 20/40, the FE-SEM images show lamellar shapes, with 20/40 making the sheets thicker. Then at 30/30, 40/20, 50/10, and 60/0, leaf-like, bowknot-like, candy-like, and olive-like shapes were obtained. This is explained by the higher viscosity of EG and its inhibitory effect on crystal growth, which allows the nanocrystals to rotate and find a 3D structure to stabilize [110][111][112]. The morphology of BiVO 4 is controllable with pH, ranging from irregular microparticles at high pH to more uniformly sized hollow microspheres at low pH [106]. Surface and Band Structure Various studies indicate that the surface chemical functionality, local structure, and morphological characteristics of catalysts affect the photocatalytic activity of the water splitting reaction. Surface chemical functionality is modified to protect against corrosion; deactivate destructive surface states; tailor band-edge positions; or selectively extract carriers to improve catalytic activity [41,113]. Beyond the increased catalytic activity via surface area, there is a trade-off between light absorption and carrier diffusion lengths, which the surface structure influences. Increased surface area may lead to decreased photovoltage and increased surface recombination.
Therefore, a careful understanding of loss mechanisms is required before surface modifications are employed [41]. Band gap engineering is one of the main ways to increase the process efficiency, where adjusting the layer thickness and sequence leads to the development of new electronic states [131]. Taking the junction materials to be A and B, junction architectures come in three types. Type I is when the CB of material A is higher than that of material B while the VB of A is lower than that of B. Since electrons/holes tend to move downward/upward, respectively, toward lower energy, both carriers accumulate in the narrower-gap material B. In type II, the VB and CB of material A are both lower than those of material B, which leads to charge carrier separation. Type III is similar to type II but with larger offsets between the CBs and VBs of the junction materials [132]. The type II junction architecture is one widely used method to fabricate photocatalyst heterojunctions [133]. Enhanced activity due to charge carrier separation is reported for CdS/TiO 2 [134,135] and CdS/ZnO [136,137] heterostructures. Nanostructuring of the junction films is important, since excessive thickness would hinder efficient charge transfer. It is suggested that the film thickness be comparable with the charge carrier diffusion length while being thick enough to absorb light significantly [41]. Nanostructuring of BiVO 4 /WO 3 heterojunctions has led to near theoretical maximum photocurrent generation of the BiVO 4 material [138]. A recent study showed that band gap engineering techniques could allow photocathodes to carry out the water reduction step of a PEC cell by using molecular beam epitaxy. A wide band gap oxide, strontium titanate (SrTiO 3 ), grown as a 4 nm thick layer acts as a protection layer for silicon as well as a tunneling junction for charge transport. The p-type silicon substrate is lattice-matched with SrTiO 3 , so a nearly perfect interface with a very low density of defects can be fabricated.
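The type I-III classification above is just a comparison of band-edge energies. A hedged sketch (the helper and the example numbers are our own, on an electron-energy scale in eV where a higher value means higher electron energy):

```python
def junction_type(cb_a: float, vb_a: float, cb_b: float, vb_b: float) -> str:
    """Classify a heterojunction of materials A and B from band-edge energies."""
    if cb_a > cb_b and vb_a < vb_b:
        # B's gap is nested inside A's: both carriers relax into B
        return "Type I (straddling): both carriers accumulate in the narrow-gap material"
    if cb_a < cb_b and vb_a < vb_b:
        if cb_a > vb_b:
            return "Type II (staggered): electrons collect in A, holes in B"
        return "Type III (broken gap): band offsets exceed the gaps"
    return "other alignment"

# Illustrative (made-up) band edges for a staggered, type II pair:
print(junction_type(cb_a=-4.2, vb_a=-7.4, cb_b=-3.9, vb_b=-6.3))
```

The type II branch is the charge-separating alignment exploited by the CdS/TiO 2 and CdS/ZnO heterostructures cited in the text.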
A maximum photocurrent density of 35 mA cm −2 was attained under broad-spectrum illumination at 100 mW cm −2 , as well as an open circuit potential of 450 mV [139]. Liao et al. synthesized cobalt(II) oxide (CoO) nanoparticles with a shifted band edge position that could achieve a solar-to-hydrogen efficiency of around 5% [140]. UV-active semiconductors can be turned into visible-light-active materials by the addition of cations and anions. Coupling wide and narrow band gap materials to obtain the right spectrum for full-spectrum harvesting is utilized in co-catalysts such as CuO/ZnO [141]. Doping could favorably affect the band gaps of photocatalysts through successful band gap reduction of the photo-anode. Design and Description As previously mentioned, a suitable photocatalyst for overall water splitting should have a band gap of at least 1.23 eV with no photocorrosion. In terms of water splitting, high crystallinity and small particle size are desired to minimize the recombination of photo-generated electrons and holes. Metal oxides, sulfides, nitrides, and phosphates with d 0 and d 10 metal cations have been employed as water splitting catalysts. Group I and Group II metals, along with some lanthanides, form perovskite materials that can also be used to catalyze photochemical water splitting. The band structures of different types of semiconductors with respect to the redox potentials of water splitting are summarized in Figure 3. To improve solar energy efficiency, modification of photocatalysts by doping with some transition metal cations such as Ni 2+ , Cr 3+ , and V 5+ can help to increase the visible light response. To suppress the backward reaction of water splitting, which decreases the energy yield, and to increase the hydrogen production yield, suitable co-catalysts including RuO 2 , NiO, Au, and Pt can be used. In this section, we will focus on heterogeneous photocatalysts including TiO 2 , metal oxides, metal sulfides, and metal nitrides.
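The thermodynamic criterion stated above (a band gap of at least 1.23 eV, with band edges straddling both water redox potentials) can be written as a short check. Potentials are in V vs. NHE at pH 0; the TiO 2 edge positions used below are approximate literature values, given here only as an illustration:

```python
E_H2 = 0.0   # V vs NHE, H+/H2 reduction potential
E_O2 = 1.23  # V vs NHE, O2/H2O oxidation potential

def can_split_water(cb_v_nhe: float, vb_v_nhe: float) -> bool:
    """True if the band edges straddle both water redox potentials.

    On the NHE scale the conduction band must be more negative than E(H+/H2)
    and the valence band more positive than E(O2/H2O), which automatically
    implies a band gap (VB - CB) of at least 1.23 eV.
    """
    return cb_v_nhe < E_H2 and vb_v_nhe > E_O2

tio2_cb, tio2_vb = -0.1, 3.1  # approximate anatase TiO2 positions (assumption)
print(can_split_water(tio2_cb, tio2_vb), "band gap =", round(tio2_vb - tio2_cb, 2), "eV")
```

A material whose edges straddle only one of the two potentials can still drive a half-reaction with a sacrificial agent, but not overall water splitting.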
TiO 2 Since Fujishima and Honda first demonstrated that TiO 2 was a promising photo-anode for UV light-driven photocatalytic water splitting [34], it has been widely studied in many photocatalytic reactions due to its chemical stability, low cost, environmentally friendly nature, and tunable electronic energy band gap [143][144][145][146][147]. Figure 4 shows a band gap illustration of TiO 2 . The valence band of TiO 2 is more positive than E 0 ox of O 2 /H 2 O (1.23 V vs. NHE, pH = 0), while the conduction band is more negative than E 0 red of H + /H 2 (0 V vs. NHE, pH = 0) [148]. However, TiO 2 materials suffer from two major drawbacks. One is the fast charge carrier recombination, which results in the release of unproductive energy. The other is the inability to harvest visible light [149], since TiO 2 can only be excited by UV light due to its wide band gap of 3.0-3.2 eV, which only covers 5% of the solar spectrum [14]. To enable visible light harvesting and prevent the recombination of photogenerated electron-hole pairs, proper modification should be performed. In this section, suitable modification methods will be introduced, including doping, making heterojunctions with other semiconductors or metals, and structural changes. Doping TiO 2 with other elements can change the optical properties and adequately suppress charge recombination [150].
A variety of metals and non-metals have been doped into TiO 2 materials. Anionic doping of TiO 2 has been extensively reported with various dopant elements such as B, C, N, F, S, and Cl [151,152]. Luo and co-workers [153] have synthesized Br and Cl co-doped TiO 2 via a hydrothermal method, in which titanium chloride is used as the titanium source and bromide is incorporated via hydrobromic acid. The unique Br and Cl co-doped TiO 2 exhibits extended light absorption into the visible light region, in which the non-metal dopants were proven to be the key factor in narrowing the band gap. The resulting material showed enhanced solar light-induced water splitting activity. The other alternative method to extend the photocatalytic activity of TiO 2 to the visible light region is to dope this material with carbon. For instance, Faria et al. have reported doping of TiO 2 with carbon nanotubes (CNTs) [154]. Although different mechanisms have been proposed to explain this enhancement, the mechanism of the synergic effect of carbon on TiO 2 remains unclear.
Three mechanisms have been explored to describe the synergetic effect of carbon on TiO 2 . The first possible mechanism is that carbon plays the role of an electron sink, which can effectively prevent the recombination process [155]. Another mechanism proposes carbon as a photosensitizer, which can pump electrons into the TiO 2 conduction band [156]. Besides the proposed mechanisms, carbon can also act as a template to disperse the TiO 2 particles and hinder the agglomeration of TiO 2 nanoparticles [157]. Unlike non-metal ion doping, metallic dopants usually introduce additional energy levels in the band gap, which reduce the energy barrier and induce a new optical absorption edge [158,159]. Piskunov et al. [152] have reported an enhancement in the photocatalytic water splitting activity of Fe-doped TiO 2 , where Fe 2+ /Fe 3+ acts as electron-trap centers and Fe 3+ /Fe 4+ acts as hole-trap centers. Luo et al. [97] have shown that vanadium doping shifts the absorption band to the visible region and that the V 4+ /V 5+ pair efficiently traps the electrons and holes, which suppresses their recombination. Figure 5 represents the schematic band alignment of doped TiO 2 semiconductors.
The formation of a semiconductor-semiconductor heterojunction can decrease the charge recombination rate by yielding long-lived electron-hole pairs [160]. Proper band alignments allow charge transfer from one semiconductor to another [4]. Resasco et al. [161] have reported a TiO 2 /BiVO 4 host-guest photo-anode system, in which TiO 2 acts as an electron acceptor, and BiVO 4 serves as a visible light capturer.
Due to the good electron affinity of TiO 2 and the small optical band gap of BiVO 4 (2.5 eV), the resulting heterojunction as a photo-anode performed better than bare TiO 2 or BiVO 4 ( Figure 6). Using a sacrificial agent helps TiO 2 in performing either water oxidation or reduction. The sacrificial agent reacts with one of the charge carriers while the other carrier is responsible for either oxygen or hydrogen production. Typically, sacrificial agents such as methanol, ethanol, and ethylene glycol, which have lower oxidation potentials than water, are used to inhibit electron-hole pair recombination in TiO 2 [162]. In another scenario, the valence band energy level of one semiconductor is higher than that of the other while its conduction band energy level is lower than that of the other semiconductor.
As a result of this band gap alignment of the two semiconductors, charge separation occurs and the recombination process decreases [163]. In a metal-semiconductor heterojunction structure, noble metals such as Au, Pt, Pd, and Ru have been reported to trap photogenerated electrons due to their significant role as electron sinks. Among noble metals, Au has been studied as the preferred co-catalyst for photocatalytic hydrogen production due to its high affinity towards photo-generated electrons, high resistance to oxidation, low activity towards the side reactions of hydrogen production, and the existence of surface plasmon resonance [164][165][166]. Wu et al. [167] have investigated the anisotropic growth of TiO 2 onto Au nanorods, which achieved an enhanced visible-light-induced hydrogen production. The close contact between TiO 2 and Au facilitated the generation of surface plasmon resonance induced electrons. By engineering the structure, the performance of hydrogen evolution under visible light irradiation could be improved.
Figure 7 shows the electron transfer pathways between Au nanoparticles and TiO 2 semiconductors. Another method to facilitate the photocatalytic water splitting process in the TiO 2 system is structural modification. The structure of TiO 2 has a significant effect on the photocatalysis performance. Li et al. [169] have reported that amorphous TiO 2 with more defects suffers from a faster charge recombination rate than highly crystalline TiO 2 . Other than crystallinity, the mesoporous structure of TiO 2 also plays a key role in the study of photo-electrodes. Zheng et al. [170] have found that the existence of a mesoporous structure favors the rapid diffusion of products and suppresses electron/hole recombination. Also, the morphology of the photocatalyst has a major effect on the photocatalytic activity [4]. 1-D TiO 2 forms such as nanotubes [152], nanowires [171], and nanofibers [172] have been studied and show improved photocatalytic activity. Tuning of morphology has attracted considerable attention because changes in material morphology can alter charge carrier diffusion pathways. Therefore, to improve the photocatalytic hydrogen evolution efficiency of TiO 2 , modification of its structure is highly relevant.
Among various strategies to overcome the fast charge recombination that leads to low photocatalytic efficiency [173,174], plasmonic photocatalysis is the most promising approach to promote charge separation and visible light absorption [175,176]. Nanoparticles of Au [177][178][179][180], Ag [181], and Pd [182] have been applied to improve visible and near-infrared (NIR) absorption and generate surface plasmon resonance (SPR) hot electrons [176]. The plasmonic properties of metallic nanoparticles (normally Au and Ag) are very attractive due to their ability to promote catalytic reactions.
Oscillation of the conduction electrons in the plasmonic structures results in localized surface plasmon resonance (LSPR), which finally leads to hot electron generation through non-radiative decay of plasmons. These electrons assist catalytic reactions [183]. Wang et al. have discussed two mechanisms by which surface plasmons can enhance electron-hole formation and separation: (1) local electromagnetic field enhancement (LEMF), which enhances the interband transition rate through a strong local field; and (2) resonant energy transfer (RET) from the plasmonic dipoles to the e-h pairs in the semiconductor through a near-field electromagnetic interaction [184]. The composition, shape, and environment of noble metal nanoparticles significantly influence their surface plasmon resonance (SPR) property [185]. Among the various Au nanoparticle shapes [186][187][188][189], nanorods have been used for H 2 evolution. To make the most of the charge separation of hot electrons, Au nanorods are usually interfaced with efficient electron acceptors (e.g., TiO 2 ) [167].
Moreover, Au triangular nanoprisms (TNPs) have shown different modes of SPR [185]. These anisotropic structures are particularly important for hot electron transfer [190]. Metal Oxides Other than TiO 2 , a number of other representative metal oxides (such as Fe 2 O 3 , WO 3 , ZnO, Cu 2 O, Al 2 O 3 , Ga 2 O 3 , Ta 2 O 5 , CoO, and ZrO 2 ) have also been widely studied due to their stability in aqueous solution and their low cost. However, most metal oxides suffer from large band gaps, limiting their ability to absorb visible light. In a typical metal oxide, the valence band and conduction band have O 2p and metal s character, respectively, and relatively ionically bonded materials therefore have a large band gap [46]: ZnO (3.4 eV) [191], Ga 2 O 3 (4.5 eV) [192], Al 2 O 3 (8.8 eV) [193]. Using transition metal cations with d n electronic configurations may help overcome this issue: Fe 2 O 3 (~2.0 eV) [46,194] and Co 3 O 4 (~1.3 eV) [195] have increased light absorption but lack efficient charge carrier transfer due to small-polaron-dominated conductivity and the associated high resistivity [196,197]. Using post-transition metals like PbO (2.1 eV) [191,196], SnO (2.4 eV) [197,198], and Bi 2 O 3 (2.5 eV) [191,199] with an occupied s band leads to better photogeneration of charge carriers; however, they are indirect semiconductors, whose optical absorption band edges vary with the square root of photon energy, leading to a less efficient carrier extraction process [46]. Therefore, ternary metal oxide compounds such as Bi 20 TiO 32 [200], SnNb 2 O 6 [201], and BiVO 4 [46,50] have been investigated to overcome these limitations. BiVO 4 has been investigated for having both a low band gap (2.4-2.5 eV) and reasonable band edge alignment with the water redox potentials [202]. Both n- and p-type semiconducting properties have been recorded for BiVO 4 , as well as high photon-to-current conversion efficiencies (>40%) [203]. Mishra et al.
[204] have reported that Fe 2 O 3 as a photocatalytic material has a proper band gap of 2.2 eV, which allows photon absorption under visible light irradiation. However, severe bulk recombination has limited the usage of Fe 2 O 3 . Some mechanistic studies have also been conducted for the water oxidation and reduction reactions. Haghighat et al. [205] have studied the mechanism of water oxidation on iron oxide photocatalysts by evaluating the electron transfer while changing the pH and potential space during the process. Morales-Guio et al. have designed an oxidatively electrodeposited, optically transparent photocatalyst of amorphous iron-nickel oxide (FeNiO x ) for the oxygen evolution reaction [206]. It was demonstrated that a low loading of FeNiO x and its high activity at low overpotential were achieved in unassisted water splitting, with solar-to-hydrogen conversion efficiencies of more than 1.9% and ~100% Faradaic efficiency. Similar to Fe 2 O 3 , WO 3 has been considered as a potential photo-anode material for its suitable valence band position, which favors a high onset potential for water oxidation. Elsewhere, Amer et al. [207] have reported ZrO 2 modification by deposition of thin layers of ZrN on ZrO 2 nanotubes, to prepare core-shell structures for photo-anodes activated under visible light. However, Moniz et al. [4] found that the main drawback of WO 3 is its instability toward anodic photocorrosion. These low-E g materials (e.g., Fe 2 O 3 and WO 3 ) can be modified by doping with metal cations or by forming heterojunction structures with other semiconductors [158]. Sivula et al. [208] have demonstrated a WO 3 /Fe 2 O 3 photo-anode for water oxidation by using WO 3 as a host scaffold to support a thin Fe 2 O 3 layer, the suitable band gap alignment between WO 3 and Fe 2 O 3 allowing fast electron transfer at the host/guest interfaces.
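The metal-oxide band gaps quoted above translate directly into absorption edges via λ = 1239.84 eV·nm / E g: oxides with an edge below ~400 nm absorb only UV light. A short sketch using the values cited in the text (BiVO 4 taken as 2.45 eV, the midpoint of the quoted range):

```python
# Band gaps (eV) as cited in the text for representative metal oxides.
oxides = {"ZnO": 3.4, "Ga2O3": 4.5, "Al2O3": 8.8, "Fe2O3": 2.0,
          "Co3O4": 1.3, "PbO": 2.1, "SnO": 2.4, "Bi2O3": 2.5, "BiVO4": 2.45}

for name, eg in sorted(oxides.items(), key=lambda kv: kv[1]):
    edge = 1239.84 / eg  # absorption-edge wavelength in nm
    region = "visible" if edge > 400 else "UV only"
    print(f"{name:6s} Eg = {eg:4.2f} eV -> edge ~ {edge:4.0f} nm ({region})")
```

This makes the trade-off explicit: ZnO, Ga 2 O 3 , and Al 2 O 3 absorb only in the UV, while Fe 2 O 3 , Co 3 O 4 , and BiVO 4 reach well into the visible, at the cost of the carrier-transport problems discussed above.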
They found that an increase in the photon absorption efficiency and a higher surface area of Fe 2 O 3 resulted in a higher activity for water oxidation. Cobalt oxide (CoO) also shows photocatalytic activity toward H 2 evolution [140,209]. Liao et al. have reported CoO nanocrystals as photocatalysts for water splitting under visible light [140]. However, the short lifetime and fast deactivation of CoO nanoparticles limit their usage as photocatalysts for hydrogen evolution. As the morphology of nanostructures can influence the band-edge positions of a material [210], designing CoO with different morphologies such as nanotubes or nanowires could provide more efficient photocatalyst materials. Zhan et al. developed CoO nanowires on carbon fiber papers with a hydrogen generation rate of 81.3 µmol g −1 h −1 , which indicates higher chemical stability in comparison with CoO nanoparticles [209]. SrTiO 3 (STO) has also been widely used for hydrogen production as a solid-state photocatalyst with a band gap of 3.2 eV and has been explored for overall water splitting under UV light irradiation. Since STO is active toward water splitting only in the UV region, its solar-to-hydrogen conversion (STH) is low. Doping methods enhance the quantum efficiency of SrTiO 3 in the visible light region [211][212][213]. Domen et al. have recently reported that the photocatalytic behavior of SrTiO 3 in the overall water splitting reaction can be significantly improved by flux-mediated Al doping. Doping Al in SrTiO 3 has improved the photocatalytic activity by 30% at 360 nm. In another study, SrTiO 3 was doped with a small amount of Rh to provide a donor level in the band gap region of SrTiO 3 :Rh. This prevents charge recombination and subsequently facilitates hydrogen production (2223 µmol h −1 g −1 ) in comparison to pure STO [214]. Tantalum oxide (Ta 2 O 5 ) has been an attractive semiconductor for photocatalytic water splitting [215][216][217][218].
Due to the wide band gap of Ta 2 O 5 (about 4 eV), it is necessary to narrow the band gap with techniques such as doping with foreign ions. Lu et al. have described Ta 2 O 5 nanowires as active photocatalysts, which generated hydrogen at a rate of 214 mmol g −1 h −1 under Xe lamp irradiation without any cocatalyst [215]. They have discussed that Ta 2 O 5 nanowires with low-dimensional structures provide a higher surface area with favorable carrier transport to harvest light for H 2 production. Very recently, Zhu et al. have reported gray Ta 2 O 5 nanowires which were modified by aluminum reduction to improve the electron density and photoelectrochemical water splitting properties of the material [218]. Metal Sulfides CdS and ZnS have been the most studied metal sulfide photocatalysts in the past decades [148]. Compared to metal oxide semiconductors, CdS with its narrower band gap (~2.4 eV) is considered promising as a visible-light-driven photocatalyst for water splitting [99]. However, as a result of the rapid recombination of photogenerated electrons and holes, bare CdS semiconductors usually show low hydrogen production rates [164]. Moreover, the high activity of CdS under light irradiation leads to corrosion of the semiconductor [163]. To circumvent this problem, CdS materials can be coupled with noble metals as co-catalysts or can form a heterojunction structure with other semiconductors [219]. In such a case, the photogenerated electrons in the conduction band of CdS can be transferred to electronic levels of the noble metals or be delocalized and transferred between the conduction bands of the semiconductors. Huang et al. [220] have established a hollow bimetallic sulfide material with a very narrow band gap. The bimetallic metal sulfides exhibit a hydrogen production rate comparable to Pt when sensitized by Eosin Y dye or coupled with TiO 2 and C 3 N 4 semiconductors.
Nickel sulfide in particular has proven to be tremendously useful in raising the activity of semiconductors when used as a co-catalyst along with TiO 2 , CdS, and g-C 3 N 4 [221]. Unlike CdS, wide band gap ZnS (3.6 eV) responds weakly to visible light [222,223]. Efforts have been made to improve the photoactivity of ZnS for hydrogen evolution. Li et al. [224] have reported highly active Zn 1-x Cd x S solid solution systems for visible-light-induced hydrogen production. The band gaps of the solid solution photocatalysts can easily be tuned by varying the Zn/Cd molar ratios. Compared to bare ZnS with its large band gap, the narrower band gap of the resulting solid solution photocatalyst favors the absorption of photons under visible light irradiation. Metal oxides with large band gaps provide higher stability for the composite material [225]. Some of the metal oxides that have been proposed for combination with CdS are TaON, TiO 2 , and ZnO [226,227]. Different nanostructures of carbon can also be combined with CdS to promote catalytic behavior towards water splitting by preventing the charge recombination process. Due to the high conductivity of carbon nanostructures, any contact between CdS and carbon can substantially improve the charge separation, and subsequently the catalytic behavior of the nanocomposites will be enhanced. Different strategies have been taken to synthesize carbon-based CdS, from simple mixing of carbon and CdS to in-situ growth of CdS on the surface of graphene oxide using oxygen moieties as the template [228]. WS 2 -Au-CuInS 2 has also been developed for photocatalytic H 2 production by inserting gold nanoparticles between WS 2 nanotubes and CuInS 2 (CIS) nanoparticles [229]. Introducing Au nanoparticles led to a significant enhancement of light absorption.
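The Zn 1-x Cd x S band-gap tuning described above can be illustrated with a simple Vegard-like linear interpolation between the ZnS (3.6 eV) and CdS (~2.4 eV) end members quoted in the text. Real alloys show band-gap bowing, so this is only an upper-bound sketch, not measured data:

```python
EG_ZNS, EG_CDS = 3.6, 2.4  # eV, end-member band gaps quoted in the text

def eg_zn1x_cdx_s(x: float) -> float:
    """Estimated band gap for Cd fraction x (0 = pure ZnS, 1 = pure CdS).

    Linear (Vegard-like) interpolation; ignores the bowing seen in real alloys.
    """
    return (1 - x) * EG_ZNS + x * EG_CDS

for x in (0.0, 0.25, 0.5, 1.0):
    print(f"x = {x:.2f}: Eg ~ {eg_zn1x_cdx_s(x):.2f} eV")
```

Raising the Cd fraction x thus pushes the estimated absorption edge from the UV (pure ZnS) into the visible, which is the tuning knob exploited in ref. [224].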
Moreover, the H2 evolution efficiency of WS2-Au-CIS has been reported as the highest, owing to the more rapid separation of photogenerated carriers enabled by the type II band structure and the localized surface plasmon resonance (LSPR) effect of the Au nanoparticles.
Nitrides
To harvest solar light efficiently, nitrides and oxynitrides can be applied as photocatalysts for water splitting [230]. The nitrogen 2p orbitals in nitrides have higher energy than the analogous oxygen orbitals in metal oxides. Consequently, less energy is needed in nitrides to excite electrons to the conduction band. A solid solution of GaN and ZnO has been considered a promising photocatalyst for water oxidation [231]. Although GaN and ZnO are poor visible-light absorbers due to their large band gaps, after mixing, (Ga1−xZnx)(N1−xOx) acquires new electronic states that considerably reduce the band gap [232]. Another well-known catalyst for water splitting is the perovskite-like mixed oxide of NaTaO3 and SrTiO3, which has a 50% quantum yield for 280 nm light [233]. Replacing one of the oxygens in these oxides with nitrogen shifts the absorption edge toward longer wavelengths (600 nm), which improves their photocatalytic activity under visible light. Tantalum nitride (Ta3N5) has also been identified as an active photocatalyst for water splitting. In 2013, Zhen et al. reported a template-free synthesis of Ta3N5 nanorods modified with Co(OH)x for use as the anode of a photoelectrochemical water-splitting cell [234]. Elsewhere, Ta3N5 has been modified by partial substitution with Mg2+ and Zr4+, which led to an apparent decrease in the onset potential for PEC water oxidation [235]. Such compositional modification could be applied to other semiconductors to enhance their photocatalytic activity. 
Other than the oxynitrides, graphitic carbon nitride (g-C3N4) has also been utilized as a photocatalyst to produce hydrogen, owing to its narrow band gap of 2.7 eV, with a conduction band shallower than the hydrogen evolution potential and a valence band deeper than the reversible oxygen evolution potential. g-C3N4 can produce hydrogen from water under visible light (<540 nm) in the presence of a sacrificial agent (oxidizing agent) without the aid of any noble metal. However, pristine g-C3N4 shows a low affinity toward photocatalytic reactions. Wang et al. reported for the first time a graphitic semiconductor, synthesized from cyanamide [236], which showed an absorption edge in the visible-light region and steady hydrogen production over 75 h. Band gap engineering of g-C3N4 through non-metal (i.e., S, F, B, P) [237] and metal doping (i.e., Pt, Pd, Fe, Zn, Cu) [238] strategies has been reported to enhance its photocatalytic properties. Moreover, charge separation in g-C3N4 can also be enhanced by applying conductive graphene, carbon nanotubes, and reduced graphene oxide at the interface with g-C3N4 [239]. Martin et al. have demonstrated nature-inspired semiconductor systems, among which the most efficient consists of g-C3N4 and WO3 for hydrogen and oxygen evolution (21.2 and 11.0 µmol·h⁻¹·g⁻¹, respectively) under visible light for 24 h [240]. They concluded that g-C3N4 can be considered a multifunctional photocatalyst applicable in PEC cells or coupled solar systems. Recently, Suib et al. have reported a metal-free carbon-based nanocomposite for hydrogen evolution under visible light, prepared from various precursors that resulted in different morphologies, band gaps, and consequently different photocatalytic activities. Hybridizing g-C3N4 with nitrogen-doped graphene quantum dots yielded the higher photocatalytic activity for the nanocomposite [241]. 
Theoretical Modeling of Photocatalytic Water Splitting
Theoretical studies address various aspects [242] of photocatalytic reactions, such as light absorption [243], electron/hole transport [244,245], band edge alignment of semiconductors [246,247], and surface photoredox chemistry [248]. Density functional theory (DFT) [249,250] is extensively used to predict and understand the electronic structure of materials owing to its high accuracy, predictive power, modest computational cost [251], and reproducibility [252]. However, one of the major shortcomings of DFT has been the inaccurate prediction of band gaps, because the DFT formulation lacks a proper description of the self-interaction and correlation terms. Pragmatic approaches involve hybrid functionals or the addition of electron repulsion to selected localized orbitals [242]. Hybrid functionals are more accurate for band gap prediction and the positions of excited states, but are computationally demanding compared to standard exchange-correlation functional forms [253]. A more rigorous way to tackle the band gap problem is through many-body perturbation theory (MBPT), which has a long-standing record of success [254,255]. Although computationally expensive, this approach provides a standard for comparative studies when developing new methods [253]. A more recent approach, known as TB09, uses a modified version of the Becke-Johnson exchange potential [256] combined with an LDA correlation [257]. This method and its variations have been shown to be among the most accurate approaches in the literature to date relative to their computational cost [253,258]. Computational methods are especially helpful for predicting the impurity states induced by dopants when tuning band gaps in photocatalytic systems such as TiO2 [242]. Time-dependent density functional theory (TD-DFT) is not widely used, and the few studies implementing it rely on cluster-based models [4]. 
Besides predictive power, theoretical and computational tools can advance our understanding of various aspects of these materials. For example, comprehensive studies of BiVO4 have investigated its band structure and density of states [46], electron/hole generation and migration, and the energy profiles of surface reactions [259]. In BiVO4, photoexcited electrons and holes are driven to different crystal facets [260]. These findings were obtained from comprehensive computational studies showing that, compared to the (011) facets, the (010) facets have lower absorption beyond 420 nm, better electron/hole transport, more favorable water absorption, and lower potential energy surfaces for the OER [259]. Theoretical studies like these can lead to rational improvement of the band structure and morphological design of photocatalytic materials. With advances in accuracy and an eventual decrease in computational cost, high-throughput computational screening is going to be an emerging field. It will help with choosing optimal components, slashing the time needed to discover new materials to a fraction of what it is now. Experimental rapid screening has been reported using scanning electron microscopy [52] and a multiplexing counter electrode [261] for photocatalytic material discovery. However, computational screening studies related to photoactive materials are currently rare and very recent, which highlights the potential for impactful research that could soon emerge in this area [262][263][264].
Conclusions
Hydrogen production from solar energy using photocatalytically active materials has been considered one of the most promising routes toward clean and renewable alternatives to fossil fuels. To use solar energy more efficiently, different approaches have been employed to shift photocatalyst activity toward the visible range while retaining stability and efficiency. 
TiO2, the pioneering photocatalyst, also has limitations, such as a wide band gap, a high hydrogen overpotential, and rapid recombination of the produced electron-hole pairs, which have been addressed by various methods including doping, coupling with carbon, noble metal deposition, dye sensitization, and surface modification. Other metal oxides such as iron oxide, zinc oxide, and copper oxide have also been discussed, as well as metal sulfides including cadmium sulfide and zinc sulfide. In addition, nitrides and nanocomposite materials used as photocatalysts for water splitting have been reviewed. The current outlook for efficient water splitting relies on innovative design of photocatalytic materials. Recent studies on heterojunction photocatalysts have shed light on the nature of charge transfer. Heterojunctions involving carbon-based materials are believed to be one of the feasible future routes to efficient photocatalyst design [4]. The architecture of the heterojunction directly influences the activity and could potentially lead to great improvements [265,266]. The future direction of photocatalytic water splitting is focused on developing an efficient photoanode with band edges that match the redox potentials of water and with rapid charge-transfer activity under visible light, while maintaining chemical and physical stability [267]. Theoretical and computational models can help us understand the electronic density of states and band structure and thereby point toward rational design of photocatalysts [46]. Computational high-throughput screening is an emerging field that will be utilized in material selection and junction design to yield optimized band structures.
A heat transfer model for liquid film boiling on micro-structured surfaces
ABSTRACT High heat transfer coefficient (HTC) and critical heat flux (CHF) are achieved in liquid film boiling by coupling vibrant vapor bubbles with a capillary liquid film, which has thus received increased interest for the thermal management of high-power electronics. Although some experimental progress has been made, a high-fidelity heat transfer model for liquid film boiling is lacking. This work develops a thermal-hydrodynamic model that considers both evaporation atop the wick and nucleate boiling inside the wick to simultaneously predict the HTC and CHF. Nucleate boiling is modeled with microlayer evaporation theory, where a unified scaling factor is defined to characterize the change of microlayer area with heat flux. The scaling factor η is found to be independent of the wicking structure and can be determined from a few measurements. This makes our model universally applicable for predicting liquid film boiling heat transfer on various micro-structured surfaces, including micropillar, micropowder, and micromesh. This work not only sheds light on the fundamental mechanisms of phase-change heat transfer, but also provides a tool for designing micro-structured surfaces for thermal management. 
INTRODUCTION
Thermal management is becoming increasingly important for advanced electronic and energy systems, such as radars, microprocessors, power inverters, and space systems, where a large amount of heat needs to be dissipated from a limited space and area and, most importantly, with a small temperature difference [1-5]. Among various thermal management techniques, liquid-vapor phase-change-based cooling strategies, such as capillary evaporation [6-9] and immersion cooling with pool boiling [10-13], have attracted great attention due to their excellent heat transfer performance. However, conventional cooling methods utilizing liquid-vapor phase-change processes encounter challenges in simultaneously improving the heat transfer coefficient (HTC) and critical heat flux (CHF) on the same wicking structure [4,9,14,15]. Recently, capillary-driven liquid film boiling, where vapor bubbles are generated within a wicked liquid film, has shown promise in simultaneously enhancing HTC and CHF [16,17]. Despite the experimental progress on enhancing the heat transfer performance of liquid film boiling on novel micro-structured surfaces [18-22], there still lacks a high-fidelity model that can capture the detailed physical phenomena, predict the CHF and HTC, and guide the design of wicking surfaces. During liquid film boiling, heat first passes from the heated surface to the solid-fluid matrix of the wick, and then dissipates either by nucleate boiling through vapor bubbles inside the wick or by evaporation through the thin-film region atop the wick [16]. As the heat flux increases, nucleate boiling becomes dominant, changing the ratio of heat transferred by nucleate boiling versus evaporation atop the wick, as well as the dynamic characteristics of the vapor bubbles and liquid film meniscus. Such coupling phenomena are further complicated by the variety of wicking structures, which vary both the two-phase 
capillary delivery and the thermal transport in the solid-fluid matrix [16,23,24]. It is thus challenging but highly desirable to develop a high-fidelity heat transfer model that can account for the thermal-hydraulics and interfacial processes on micro-structured wicking surfaces. Recently, empirical correlations for the CHF of liquid film boiling on surfaces of micropillar arrays [25] or copper inverse opals [26] have been proposed. It is not known whether these specific fitting formulas, based on their own experiments, can be used for other structures. Determining the HTC of liquid film boiling under varying heat flux can be much more challenging, as it dynamically varies with the expanding liquid-vapor interfacial area of the vapor bubbles coupled with the evaporative meniscus [16,18,20]. Recently, a model was developed for liquid film boiling [27] by assuming a constant liquid-vapor interfacial area, i.e., one that does not vary with heat flux, and by modeling nucleate boiling inside the wick through pore-scale evaporation; evaporation atop the wick is neglected. Under these assumptions, the model shows good agreement with experimental data after an empirical parameter is adjusted for the specific wicking structure [27]. 
In this work, we develop a model to predict both the HTC and CHF of liquid film boiling for various micro-structured wicking surfaces. Both evaporation atop the wick and nucleate boiling inside the wick are taken into consideration. Evaporation atop the wick is determined by analyzing the variation of the liquid meniscus curvature and the thermal resistance network of the thin-film region close to the tri-phase line [6,28]. Nucleate boiling inside the wick is analyzed at the pore scale through microlayer evaporation theory borrowed from pool boiling [29-32], with a heat-flux-dependent scaling factor being introduced. Liquid film boiling experiments on micromesh surfaces were conducted, along with wick wettability, wicking capability, and structural characterizations, to determine the scaling factor η. According to our experiments, the scaling factor η is found to be independent of structural parameters, which makes our model universally applicable for predicting liquid film boiling heat transfer on various micro-structured surfaces. Our model predictions are in good agreement with the reported experimental data on various types of micro-structured surfaces, including micromesh, micropowder, and micropillar.
MODELING AND ANALYSIS
This work aims to develop a unified model for the prediction of the HTC and CHF of liquid film boiling on various micro-structured surfaces with different liquid supply methods. Typical microstructures include micromesh, micropowder, and micropillar (Fig. 1A-C), while the liquid can be delivered with one-side [16], two-side [33], and all-around [21] supply methods (Fig. 1D-F). Liquid film boiling with one-side liquid supply on a generalized micro-structured surface is illustrated in Fig. 
1G. A set of key structural, thermophysical, and wicking parameters for the three typical microstructures (micromesh, micropowder, and micropillar), including porosity ε_w, permeability K_w, effective pore radius r_eff, effective thermal conductivity k_eff, and characteristic radius of the wick skeleton r_c, is listed in Note S1. Similarly, the different liquid supply methods alter the liquid delivery boundary conditions, as seen in Note S2. Figure 1G illustrates transport in the x and z directions: heat is transferred mainly in the thickness direction (z-direction), while the liquid flows along the wicking direction (x-direction). On the one hand, heat is absorbed from the heated surface into the liquid film, where it is transferred by heat conduction or activates nucleate boiling. On the other hand, the flow of the thin liquid film inside the wicking structures depends on the wicking resistance, usually characterized by the liquid permeability and capillary pressure, in addition to boiling and vaporization. Here, the liquid permeability depends on the vapor bubble distribution during the boiling process. The following assumptions are made to model the physical processes: (1) Nucleate boiling occurs uniformly under a constant heat flux across the surface, since most wicks are planar-isotropic perpendicular to the thickness direction. (2) Heat conduction through the wicked liquid film is one-dimensional along the z-direction, since the wick thickness along the z-direction is generally much smaller than its length along the x-direction. (3) The temperature of the vapor within the micropore is uniform along the z-direction and equal to the saturated vapor temperature. This is reasonable since the liquid film thickness is typically less than hundreds of micrometers, leading to negligible pressure variation that would drive a temperature change [34]. 
(4) Liquid can be well wicked to maintain a liquid meniscus on top of the wick; the equilibrium liquid meniscus is used to calculate the evaporation atop the wick. (5) For nucleate boiling inside the wick, the microlayer evaporation model can be adopted, as has been widely done in pool boiling [29,32,35]. (6) The evaporation atop the wick and the nucleate boiling inside the wick can be characterized by heat transfer coefficients h_e and h_bv, respectively. In the following, we first find the temperature distribution across the wicking structure as a function of the heat transfer coefficients based on the heat balance equation.
Figure 1. Capillary-driven liquid film boiling on the micro-structured surface. Typical wicking structures include (A) micromesh [16], (B) micropowder [21], and (C) micropillar [25] studied in this work. Common liquid supply methods for liquid film boiling include (D) one-side [16], (E) two-side [33], and (F) all-around [21]. (G) Schematic of liquid film boiling heat transfer on a micro-structured surface, where nucleate boiling occurs inside the wicking structure and evaporation dissipates heat atop the wick. (H) Control volume for heat transfer modeling in the thickness direction (z-direction) with a unit width. (I) Control volume for liquid transport along the wicking direction (x-direction). Panel (A) is reproduced with permission from ref. [16], Copyright 2018 Elsevier. Panel (B) is reproduced with permission from ref. [21], Copyright 2019 Elsevier. Panel (C) is reproduced with permission from ref. [25], Copyright 2014 Elsevier.
The heat transfer coefficient h_e of 
evaporation atop the wick is derived by analyzing the thermal resistance network associated with the curvature of the liquid meniscus. The heat transfer coefficient h_bv of nucleate boiling inside the wick is determined by a pore-scale analysis using microlayer evaporation theory. The transport of the liquid film inside the wicking structure is modeled using the Brinkman equation. The CHF of boiling can be due to many mechanisms; here we consider the CHF of liquid film boiling to be determined by liquid wicking failure, assuming the liquid on the heated surface is completely supplied by surface wicking [26]. By solving the coupled momentum and energy equations, we obtain theoretical expressions for the HTC h_t and the CHF q_CHF of liquid film boiling on various micro-structured surfaces.
Temperature distribution across the wicking liquid film
Considering a control volume with a unit width inside the wick (Fig. 1H), the heat balance equation can be expressed as q_c(z) = q_c(z + dz) + dq_b, where q_c = q_c''(dx · 1) is the heat conduction rate in the z-direction, q_c'' is the conduction heat flux, and q_b is the nucleate boiling heat transfer rate. Given that the conduction heat flux at z + dz can be expressed as q_c(z + dz) = q_c(z) + (dq_c(z)/dz) dz, the heat balance equation can then be written as (dq_c(z)/dz) dz + dq_b = 0 (Eq. (1)). We note that an equivalent homogeneous wick is adopted in Fig. 1H for generality, and this control volume is arbitrary, independent of the wicking structure. The conduction heat flux in the z-direction is given by Fourier's law, q_c'' = −k_eff (dT_w/dz) (Eq. (2)), where T_w is the temperature of the wicking structure. Similar to Ref. [27], we assume that heat is dissipated by nucleate boiling with a uniform volumetric heat transfer coefficient h_bv, obtained by averaging the size of the bubbles along the wick. The heat transfer rate by nucleate boiling can then be obtained as dq_b = h_bv (T_w − T_v)(dx · dz · 1) (Eq. (3)), where T_v is the vapor temperature. Substituting Eqs. 
(2) and (3) into Eq. (1), the governing equation for T_w across the wick can be expressed as k_eff (d²T_w/dz²) = h_bv (T_w − T_v) (Eq. (4)). Equation (4) is essentially a fin equation, a second-order ordinary differential equation (ODE). Two boundary conditions, at the bottom (z = 0) and top (z = δ_l) of the liquid film, must be imposed to solve Eq. (4). The heat flux at the heated surface (z = 0) is equal to the total heat flux, and this boundary condition can be expressed as −k_eff (dT_w/dz)|_{z=0} = q_t (Eq. (5)) [27], where q_t is the total heat flux. Considering that evaporation happens at the top surface of the wick (z = δ_l) [16], the boundary condition at the top surface should be written as −k_eff (dT_w/dz)|_{z=δ_l} = q_e = h_e (T_w|_{z=δ_l} − T_v) (Eq. (6)), where q_e and h_e are the heat flux and heat transfer coefficient of the evaporation atop the wick, respectively, and δ_l is the thickness of the wicked liquid film. We note that the evaporation atop the wick was neglected in Ref. [27] by writing the boundary condition k_eff (dT_w/dz)|_{z=δ_l} = 0. Integrating Eq. (4) with the boundary conditions of Eqs. (5) and (6), the temperature distribution of the wicking structure can be obtained (Eq. (7)), where m = √(h_bv/k_eff). In the forthcoming sections, δ_l and h_e are determined through analysis of the transport and evaporation of the wicking liquid film, while h_bv is determined by a pore-scale analysis of microlayer evaporation. T_w(z) can be determined according to Eq. (7) with known δ_l, h_e, and h_bv, and the HTC of liquid film boiling can then be obtained.
Transport and evaporation of the wicking liquid film
The liquid film transport inside the wicking structure can be described by the Brinkman equation [36] (Eq. (8)), where u_l is the liquid flow velocity, ε_l is the effective porosity, μ_l is the liquid dynamic viscosity, K_w is the wick permeability, K_rl is the relative liquid permeability [37,38], P_l is the local liquid pressure, ρ_l is the liquid density, and g is the gravitational acceleration. The boundary conditions for Eq. 
(8) can be obtained from the no-slip velocity at the substrate and the shear-free stress atop the wick (Eq. (9)). Equation (8) is a second-order linear ODE, which can be solved with the boundary conditions of Eq. (9) to obtain the liquid wicking velocity u_l(x, z). Then, the average liquid wicking velocity for an arbitrary cross-section at x can be expressed as ū_l(x) = (1/δ_l)∫₀^{δ_l} u_l(x, z) dz [6], and thus the liquid pressure gradient dP_l/dx driving the liquid film can be obtained (Eq. (10)), where λ = √(ε_l/(K_rl K_w)). The relative liquid permeability K_rl in Eq. (10) is computed using the empirical relation K_rl = s³ [38], where s is the liquid saturation. This liquid saturation s is related to nucleate boiling and can be determined by analyzing the vapor flow based on the Ergun equation [27] (see Note S3). A control volume, shown in Fig. 1I, is adopted to depict the liquid flow along the x-direction due to capillary wicking. The energy balance equation due to phase change can be written as q_t (dx · 1) = h_fg dṁ_v, where h_fg is the latent heat of vaporization and ṁ_v is the mass flow rate of vapor. Combining this with mass conservation, dṁ_l + dṁ_v = 0, where ṁ_l(x) = ρ_l ū_l(x) δ_l(x) · 1 is the mass flow rate of the liquid, we obtain Eq. (11). Since all the liquid evaporated from the surface must enter at x = 0, the boundary condition can be set as ṁ_l|_{x=0} = q_t L/h_fg (Eq. (12)), where L is the wicking length as shown in Fig. 1G. Integrating Eq. (11) with the boundary condition of Eq. (12), we obtain the average liquid flow velocity (Eq. (13)). Substituting Eq. 
(13) into Eq. (10), the pressure gradient dP_l/dx can then be obtained (Eq. (14)). Here, δ_l is related to the curvature of the liquid meniscus H, which depends on the liquid pressure P_l and can be determined from the Young-Laplace equation H = (P_v − P_l)/2σ [39], with P_v being the vapor pressure and σ the surface tension. Equation (14) is a non-linear first-order ODE for P_l, which can be solved by the fourth-order Runge-Kutta method with the boundary condition P_l|_{x=0} = P_sat. The liquid pressure P_l and liquid film thickness δ_l can be obtained accordingly (see Note S4). With the curvature of the liquid meniscus determined, the heat transfer coefficient h_e of evaporation atop the wick can then be calculated from the series thermal resistances (Eq. (15)). Here, A_unit is the cross-sectional area of the one-unit cell of the wick, R_tf is the thermal conduction resistance through the thin-film region, and R_i,e is the thermal resistance for interfacial evaporation [28]. The heat transfer coefficient h_e of evaporation atop the wick is developed for a saturated vapor environment, which is common in liquid film boiling experiments [16,21,23-25]. The detailed calculations of A_unit, R_tf, and R_i,e for different wicking structures can be found in Note S5.
Heat transfer coefficient of nucleate boiling inside the wick
For capillary-driven liquid film boiling, there are two typical types of microstructure: porous structures with interconnected micropores, and micro-pillared surfaces. For the former, we use a representative micropore unit within the wick to analyze the nucleate boiling inside the wick. This micropore unit can be conceptualized as an annular pore with an effective pore radius r_eff, as shown in Fig. 2A. 
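The marching scheme mentioned above for Eq. (14) is the classical fourth-order Runge-Kutta method. A generic sketch follows (the actual right-hand side of Eq. (14) depends on the wick properties given in the Notes, so a placeholder f is used; the sanity check below uses a test ODE with a known solution):

```python
def rk4_march(f, x0, y0, x_end, n):
    """Integrate dy/dx = f(x, y) from x0 to x_end in n classical RK4 steps.
    For Eq. (14), y would be the liquid pressure P_l, with x0 = 0 and
    y0 = P_sat as the boundary condition."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# Sanity check on dy/dx = y with y(0) = 1: y(1) should approach e ≈ 2.71828.
print(rk4_march(lambda x, y: y, 0.0, 1.0, 1.0, 100))
```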
This assumption is reasonable when effective structural characteristics, including the effective pore radius and porosity, are used to ensure the same capillary force and permeability for liquid wicking and bubble escape. For the micro-pillared surface, we use a micropillar unit to analyze the nucleate boiling inside the wick. Since the analysis of both types of structure is similar, we present here only the model derivation for the porous structures with interconnected micropores; the derivation for the micropillar can be found in Note S6. The micropore unit approximation for porous structures was also used in earlier studies [27] to analyze nucleate boiling, where the nucleate boiling was assumed to be thin-film evaporation with a uniform liquid film thickness δ_tl independent of heat flux (Fig. 2B). In Ref. [27], a constant liquid-vapor evaporation area is assumed, resulting in a constant heat transfer coefficient h_bv (Eq. (16)). Indeed, under higher heat flux, the effective evaporation area due to nucleate boiling should change with the heat flux owing to the different fractions of activated bubbles. Following the microlayer evaporation analysis widely used for pool boiling in the literature [29,32,35], we adopt microlayer evaporation to model nucleate boiling inside the wicking structures at various heat fluxes. As shown in Fig. 2C, the heat within the micropore mainly transfers through a microlayer region characterized by a thickness δ_ml. For saturated water, the microlayer thickness δ_ml typically ranges between 1 and 10 μm [29], which is similar to the thin-film region for evaporation atop the wick with a thickness < 0.15 r_c [7], where r_c is the characteristic radius of the solid skeleton of the wick (see Note S1). An average thickness of the microlayer region is used here to estimate h_bv by setting δ_ml = 0.15 r_c/2 in our calculations. 
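The coefficients h_e and h_bv ultimately feed back into the fin-type temperature balance of Eqs. (4)-(6). As a numerical cross-check, the sketch below solves that balance in closed form; it is our own reconstruction under the stated boundary conditions (heated-wall flux at z = 0, evaporative coefficient h_e at z = δ_l), not the paper's Eq. (24) itself, and it reproduces the two limiting HTC forms quoted later in the text:

```python
import math

def htc_from_fin_solution(q_t, k_eff, h_bv, h_e, delta_l):
    """Solve k_eff*T'' = h_bv*(T - T_v) with -k_eff*T'(0) = q_t at the
    heated wall and -k_eff*T'(delta_l) = h_e*(T(delta_l) - T_v) at the
    evaporating top, then return the HTC q_t / (T_w(0) - T_v).
    General solution: T - T_v = A*cosh(m*z) + B*sinh(m*z), m = sqrt(h_bv/k_eff)."""
    m = math.sqrt(h_bv / k_eff)
    B = -q_t / (m * k_eff)                        # bottom heat-flux condition
    s, c = math.sinh(m * delta_l), math.cosh(m * delta_l)
    A = -B * (k_eff * m * c + h_e * s) / (k_eff * m * s + h_e * c)
    return q_t / A                                # wall superheat is A

# Limit h_e -> 0 recovers h_t = m*k_eff*tanh(m*delta_l), as in Ref. [27];
# limit h_bv -> 0 recovers h_t = 1/(1/h_e + delta_l/k_eff).
```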
The nucleate boiling heat transfer can be treated as heat conducting through a microlayer conduction resistance R_l from the wall to the liquid-vapor interface and then evaporating with an interfacial resistance R_i,b (Eq. (17)), where φ_ml is the center angle corresponding to the microlayer region in the annular micropore, A_ml is the liquid-vapor interfacial area of the microlayer region (the microlayer area), k_l is the thermal conductivity of the liquid, and h_i is the heat transfer coefficient of the liquid-vapor interface [40]. Here, h_i can be obtained from the Schrage equation [41], with an accommodation coefficient of 0.04 chosen for water [28,40]. The total heat dissipated by microlayer evaporation within the micropore can be expressed as q_b = (T_w − T_v)/(R_l + R_i,b) (Eq. (18)). Assuming the microlayer evaporation dissipates heat with an equivalent volume-averaged heat transfer coefficient h_bv, q_b can also be expressed in terms of h_bv, where V_up = V_pore/ε_w is the total volume of the unit, V_pore = π r_eff² · 1 = r_eff A_total/2 is the volume of the micropore unit, and A_total is the surface area. The volumetric heat transfer coefficient of nucleate boiling h_bv can then be obtained (Eq. (19)). Combining this with the analysis for the micro-pillared surface (see Note S6), a general form of h_bv can be obtained (Eq. (20)), where χ is a structural factor: χ = −1 is used for porous media such as micropowder and micromesh, while χ = ε_w/(1 − ε_w) is used for the micro-pillared surface. The term A_ml/A_total represents the area fraction of the microlayer region relative to the total heated surface. As shown in pool boiling, the microlayer area increases with the heat flux [29,31,32,42], and an approximately linear relation between A_ml/A_total and heat flux was recently proposed [32], i.e., A_ml/A_total ∝ q_t. Since the microlayer evaporation in pool boiling and liquid film boiling are similar, this linear relation between the microlayer area fraction (A_ml/A_total) and the total heat flux is also adopted in this 
work, i.e., A_ml/A_total = η q_t, introducing η as a scaling factor. Equation (20) can then be reformulated as Eq. (21), which shows that a larger heat flux q_t, higher porosity ε_w, and smaller effective pore radius r_eff lead to a higher nucleate boiling heat transfer coefficient h_bv. In this work, the empirical parameter η is obtained by fitting the model predictions to the experimental results of liquid film boiling, with the details provided in Section 3.
HTC and CHF of liquid film boiling
With the obtained h_e and h_bv, the temperature distribution of the wicking liquid film along the z-direction can be calculated from Eq. (7). Specifically, the wall temperature T_w|_{z=0} can be expressed as Eq. (22). Since h_e varies with the wicking distance x, an average wall temperature is adopted (Eq. (23)). The HTC of liquid film boiling can then be calculated from Eq. (24). We note that the previous model [27] simplified the liquid film thickness δ_l to the wick thickness δ_w (i.e., δ_l ≈ δ_w) and ignored evaporation atop the wick, i.e., h_e = 0. Applying these simplifications to Eq. (24) gives h_t = m k_eff tanh(m δ_w), the same result as derived in Ref. [27]. When no bubbles nucleate (i.e., h_bv = 0), Eq. (24) reduces to h_t = 1/(1/h_e + δ_w/k_eff), which is consistent with the HTC correlation developed for capillary evaporation [6]. This limit analysis shows that our model is more general and reduces to both the model of Ref. [27] and that for capillary evaporation. The CHF of liquid film boiling occurs when the liquid pressure drop equals the maximum capillary pressure P_c,max in the wick [6], i.e., −∫₀^L (dP_l/dx) dx = P_c,max. Combining this equation with Eq. (14), the relation for the CHF of liquid film boiling can be obtained (Eq. (25)). From Eq. 
(25), it is evident that increasing the capillary force P_c,max, increasing the wick permeability K_w, and augmenting the relative liquid permeability K_rl can enhance the CHF. Since K_rl is an exponential function of the liquid saturation s [27,37], increasing the liquid saturation by promoting vapor escape can also enhance the CHF. Neglecting substrate friction (i.e., neglecting the no-slip boundary condition and assuming a uniform velocity along the z-direction at each x, with ∂u_l/∂z = 0) and assuming a uniform vapor distribution across the liquid film (i.e., a constant relative liquid permeability K_rl), Eq. (25) reduces to q_CHF = 2 P_c,max ρ_l h_fg δ_w K_rl K_w / (μ_l L²), which is similar to the CHF correlation for liquid film boiling proposed by Zhang et al. [26]. When simplifying the liquid film thickness to the wick thickness (δ_l ≈ δ_w), neglecting the gravitational effect (i.e., g = 0), and assuming K_rl = 1 (indicating that no bubbles exist inside the wick), Eq. (25) reduces to q_CHF = 2 P_c,max ρ_l h_fg δ_w K_w [1 − tanh(λδ_w)/(λδ_w)] / (μ_l L²), which is consistent with the CHF correlation for predicting the dry-out heat flux of capillary evaporation [43]. In Note S7, we also compare the heat transfer performance of liquid film boiling and capillary evaporation using our model.

EXPERIMENTAL DETERMINATION OF THE SCALING FACTOR η

The scaling factor η in Eq.
(21) characterizes the relationship between the microlayer area fraction and heat flux for liquid film boiling, which is difficult to determine theoretically due to a limited understanding of the bubble dynamics of liquid film boiling. Instead, we theoretically analyze the η value for pool boiling in Note S8. Considering the differences in the transport processes between pool boiling and liquid film boiling, we here obtain the η value by fitting our model predictions to our experimental data on sintered multi-layer micromesh surfaces. In this work, the physical properties of the micromesh, such as porosity, permeability, and wettability, which are not fully reported in the existing literature, are characterized (see Note S9). Liquid film boiling experiments are conducted using a custom-made experimental system [16,28], which provides a saturated vapor environment for liquid film boiling (Fig. 3A). The tested sample is mounted vertically on the heating block with a 10 × 10 mm² heating area and immersed in degassed deionized water with one-side liquid supply (see Note S10), consistent with our previous works [16,20,28]. The heat flux q_t and surface temperature T_w|z=0 are obtained from the one-dimensional temperature distribution in the heating block. The measured HTC is calculated as: The experimental data on micromesh samples s1 and s2, which have similar wire diameters (~50 μm) and thicknesses (~265 μm) but different spacing widths (77 and 160 μm, respectively), are used to obtain the η value. The model prediction of HTC is obtained from Eq. (24) based on the structural properties shown in Table S5. To maintain consistency with the experiment, we include an extra heat conduction resistance of the substrate in the model-predicted HTC calculation, as 1/(1/h_t + δ_sub/k_sub), where h_t is calculated by Eq.
(24), δ_sub is the thickness of the substrate in the experiment, and k_sub is the thermal conductivity of the substrate. This calculation method is also used in the following sections when comparing model-predicted HTC with experimental data. The factor η in Eq. (21) is determined to be η = 2.15 × 10⁻³ cm² W⁻¹ by fitting the model prediction of HTC to our experimental data on such uniform micromesh samples, with a mean absolute percentage error (MAPE) of 5.19% (Fig. 3B). Such a unified value of η can then be used to estimate the microlayer area fraction of a uniform wicking surface according to A_ml/A_total = η q_t. Figure 3C shows the model-predicted CHF for different uniform micromesh samples, with thickness ranging from 260 to 377 μm, porosity ranging from 0.57 to 0.76, and spacing width ranging from 77 to 205 μm. With the same value of η = 2.15 × 10⁻³ cm² W⁻¹, our CHF model agrees well with the experimental data, within ±15% accuracy (Fig. 3C). Figure 3D-F show the effects of thickness δ_w, porosity ε_w, and spacing width s_w on the CHF and the maximum HTC (HTC at CHF). The CHF is enhanced with increased micromesh thickness (Fig. 3D), due to the enhanced wick permeability and decreased resistance to liquid flow [6]. However, increasing the wick thickness also increases the vapor escape resistance and can slightly decrease the CHF. Although increasing the porosity of the micromesh also increases the liquid wicking velocity, there exists an upper limit of the wick porosity for a micromesh with the same wire diameter and the same spacing width (Fig. 3E). Increasing the spacing width enhances the wick permeability; however, the maximum capillary force is reduced due to the enlarged effective pore radius [44], thereby yielding an optimal spacing width s_w for CHF (Fig.
3F). The maximum HTC is mainly controlled by the maximum microlayer area A_ml,max for nucleate boiling, calculated as A_ml,max = 2.15 × 10⁻³ q_t A_total, where the total micropore area A_total is obtained from the number of micropores N and the total wick volume V_total. As shown in Fig. 3D and E, the maximum HTC increases with larger thickness and higher porosity, since the maximum microlayer area for evaporation is expanded by the increased CHF q_CHF and the enlarged total micropore area A_total. The effect of spacing width s_w on the maximum HTC is multifaceted. Although increasing s_w augments the porosity and thereby expands the total micropore area A_total, it simultaneously increases the effective pore radius r_eff and may reduce the total micropore area A_total. Besides, q_CHF initially increases with spacing width and subsequently decreases, which accordingly changes A_ml,max. Consequently, this competition yields an optimal s_w that maximizes the effective h_bv for nucleate boiling and the HTC (Fig. 3F). From Fig. 3D-F, we can clearly see that our model, with the same value of η = 2.15 × 10⁻³ cm² W⁻¹, predicts both the maximum HTC and the CHF with high fidelity for micromeshes with different structural parameters. Although changing the wick geometry changes both the total surface area A_total and the effective microlayer area A_ml, the area fraction A_ml/A_total is effectively estimated by the scaling factor η in our model.

Model validation

Figure 4A compares the experimental data on a micropowder surface (d_w = 250 μm, δ_w = 900 μm, and ε_w = 0.64) from Weibel et al. [24], our experimental data on the micromesh of sample s2 (d_w = 50 μm, s_w = 160 μm, δ_w = 264 μm, and ε_w = 0.71), and our model predictions for HTC using Eq. (24) on the same sample. The model-predicted HTC from the Sudhakar et al. model [27] for our micromesh sample s2 is also presented in Fig. 4A.
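The fitted linear relation A_ml/A_total = η q_t (η = 2.15 × 10⁻³ cm² W⁻¹) can be evaluated directly; a minimal sketch follows. Because an area fraction cannot exceed unity, the relation also carries a formal upper bound q_t ≤ 1/η, which is our own observation for illustration, not a claim from the text; the heat flux values below are likewise illustrative.

```python
ETA = 2.15e-3  # cm^2 W^-1, fitted scaling factor from the text

def microlayer_area_fraction(q_t):
    """A_ml/A_total for a total heat flux q_t in W cm^-2."""
    frac = ETA * q_t
    if frac > 1.0:
        # area fraction cannot exceed 1, so the linear relation
        # formally stops applying beyond q_t = 1/ETA
        raise ValueError("relation leaves its range of validity")
    return frac

for q in (50, 100, 200, 400):  # W cm^-2, illustrative values
    print(q, f"{microlayer_area_fraction(q):.3f}")

print("formal upper bound:", f"{1 / ETA:.0f} W cm^-2")
```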
It is shown that the model by Sudhakar et al. [27] predicts a constant HTC, due to its underlying assumptions of a constant evaporative area inside the micropore for nucleate boiling and an adiabatic condition atop the wick (i.e., h_e = 0, neglecting evaporation atop the wick). In contrast, our model predicts the increasing trend of HTC with increasing heat flux, which agrees well with results from previous liquid film boiling experiments [16,18,20-24]. This is because our model considers both the evaporation atop the wick and the variation of the nucleate boiling inside the wick under varying heat flux. To validate our model and understand the coupling between evaporation atop the wick and nucleate boiling inside it, we develop two reduced models (RM) based on Eq. (24). RM1 is a simplification adopting the approximations of Sudhakar et al. [27], with h_e = 0, δ_l = δ_w, and η q_t = 0.9; RM2 is a reduced model obtained by setting h_e = 0 (neglecting evaporation atop the wick). The value η q_t = 0.9 in RM1 is derived from the assumptions of Ref. [27], where evaporation occurs uniformly over the entire liquid-vapor interface (i.e., φ_ml = 2π) and the microlayer thickness is assumed to be 0.1 times the effective pore radius (i.e., δ_ml = 0.1 r_eff), giving η q_t = A_ml/A_total = [(r_eff − δ_ml) φ_ml]/(2π r_eff) = 0.9. As shown in Fig. 4A, RM1 predicts nearly identical results to Ref. [27], and a constant HTC is obtained. At low heat flux, the HTC predictions from RM2 deviate from our original model. This is because at low heat flux the heat transfer fraction of evaporation atop the wick is relatively high, and RM2 neglects this contribution. In our model, the relationship between evaporation atop the wick and nucleate boiling inside it is self-consistent. The heat dissipation fraction of evaporation atop the wick (q_e/q_t) from our model decreases as the heat flux fraction (q_t/q_CHF) increases (Fig.
4B). The evaporation heat dissipation fraction exceeds 20% when the heat flux fraction (q_t/q_CHF) is below 30%, which indicates that evaporation atop the wick should be considered in liquid film boiling heat transfer modeling; neglecting it may cause a large deviation, especially at low heat fluxes. As shown in Fig. 4C, our model also exhibits good agreement with the experimental data of sample s2 when predicting CHF. To further validate our model prediction, we develop RM3 based on our CHF relation of Eq. (25) by neglecting the friction of the substrate [24]
(i.e., the liquid wicking velocity is uniform along the z-direction at each x, with ∂u_l/∂z = 0) and neglecting the gravitational effect (i.e., g = 0), to be consistent with the case in Ref. [27]. Notably, the CHF prediction from RM3 is very close to that of the model by Sudhakar et al. [27]. Besides, a simplified model (RM4) is developed with ∂u_l/∂z = 0, g = 0, and K_rl = 0.15, according to Zhang et al. [26], and, as expected, the prediction is consistent. Here, we also compare our model for liquid film boiling with the models developed for pool boiling. Since most pool boiling models are developed based on micropillars, we select micropillars as the micro-structured surface for comparing the predictions. The micropillar used in the modeling has the following geometric parameters: a pillar diameter of 60 μm, a porosity of 0.6, a thickness of 320 μm, and a wicking length of 2 mm, the same as those used in the experiment conducted by Cai and Bhunia [25]. As shown in Fig. 4D, our model prediction for HTC shows better agreement with the experiment than the HTC models developed for pool boiling in the literature [32,45]. This is because natural convection contributes significantly to pool boiling [25,48], micropowder [21,24], and micromesh [16,23] heat transfer, but is not as important in liquid film boiling due to the thinness of the capillary film. Although there are many CHF models for pool boiling that consider surface properties and liquid wicking [10,46,47], they are not suitable for predicting CHF in liquid film boiling (Fig. 4E), due to the different mechanisms mentioned above. For pool boiling, the relatively large bubbles easily coalesce to form a vapor blanket, leading to a thermal-hydraulic CHF. In liquid film boiling, however, CHF occurs because of surface dry-out, which is caused by capillary wicking failure.
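The two simplified CHF relations quoted earlier, the uniform-velocity form with constant K_rl (the simplification underlying RM3/RM4 and the Zhang et al. correlation) and the capillary-evaporation dry-out limit, can be evaluated with a short sketch. All property and structural values below are illustrative assumptions (saturated water, a generic wick), not data from this work.

```python
import math

# Illustrative saturated-water properties and wick parameters (assumed)
P_c_max = 5.0e3   # maximum capillary pressure, Pa
rho_l = 958.0     # liquid density, kg m^-3
h_fg = 2.257e6    # latent heat of vaporization, J kg^-1
delta_w = 265e-6  # wick thickness, m
K_w = 1.0e-11     # wick permeability, m^2
mu_l = 2.8e-4     # liquid viscosity, Pa s
L = 10e-3         # wicking length, m

def q_chf_film_boiling(K_rl):
    """Simplified CHF with a uniform velocity profile and constant
    relative liquid permeability K_rl; returns W m^-2."""
    return 2 * P_c_max * rho_l * h_fg * delta_w * K_rl * K_w / (mu_l * L**2)

def q_chf_capillary_evap(lam):
    """Simplified CHF with delta_l ≈ delta_w, g = 0, K_rl = 1 (the
    capillary-evaporation dry-out limit); lam is the parameter
    lambda in m^-1 (assumed value); returns W m^-2."""
    x = lam * delta_w
    return (2 * P_c_max * rho_l * h_fg * delta_w * K_w
            * (1 - math.tanh(x) / x) / (mu_l * L**2))

print(q_chf_film_boiling(K_rl=0.15) / 1e4, "W cm^-2")
print(q_chf_capillary_evap(lam=3.0e3) / 1e4, "W cm^-2")
```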
Model prediction

In this section, we use our model to predict both HTC and CHF with the unified microlayer area factor η = 2.15 × 10⁻³ cm² W⁻¹ for liquid film boiling on various types of uniform micro-structured surfaces. The wicking structures include silicon micropillar arrays [25,48], packed copper micropowders [21,24], and sintered copper micromeshes [16,23] (Table 1). The liquid supply methods in these experiments include one-side, two-side, and all-around, with heated areas ranging from 4 to 100 mm², wick thicknesses ranging from 200 to 1200 μm, and wick porosities ranging from 0.51 to 0.76. Figure 5A shows that the HTC predictions from our model are in good agreement with the experimental results for various uniform wicking structures, achieving a MAPE of 13.7%. For HTC ranging from 30 to 300 kW m⁻² K⁻¹, the model predicts the experimental data well, within ±30% accuracy, owing to its ability to account for the heat dissipated by both nucleate boiling inside the wick and evaporation atop the wick. The experimental CHF, ranging from 40 to 1200 W cm⁻², is also well predicted by our model, within a 20% error (Fig. 5B). The MAPE for the CHFs of all samples is calculated to be 11.9%. We note that HTC and CHF are affected by many aspects, including surface geometry, thermal conductivity, liquid supply method, surface wettability, and total heat flux input. Our model takes these aspects into account and experimentally determines a factor η to model the variation of the microlayer area fraction during liquid film boiling. Therefore, our model can predict both the HTC and the CHF for various wicking structures within a spread of ±30%.
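The accuracy metrics quoted above (MAPE and the fraction of points inside a ±30% spread band) are straightforward to compute; a minimal sketch with synthetic numbers, not the study's data:

```python
# Mean absolute percentage error and spread-band check, as used to
# summarize model-vs-experiment agreement. Data below are synthetic.

def mape(pred, meas):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(p - m) / m for p, m in zip(pred, meas)) / len(meas)

def within_spread(pred, meas, spread=0.30):
    """Fraction of predictions within +/-spread of the measurement."""
    hits = sum(1 for p, m in zip(pred, meas) if abs(p - m) / m <= spread)
    return hits / len(meas)

# synthetic HTC pairs, kW m^-2 K^-1
pred = [32.0, 110.0, 180.0, 260.0]
meas = [30.0, 100.0, 200.0, 300.0]

print(f"MAPE = {mape(pred, meas):.1f}%")
print(f"within +/-30%: {100 * within_spread(pred, meas):.0f}%")
```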
CONCLUSION

In this work, a heat transfer model for predicting both the CHF and the HTC is developed for liquid film boiling on various micro-structured surfaces. The model accounts for both evaporation atop the wick and nucleate boiling inside the wick. Evaporation atop the wick is calculated by analyzing the thermal resistance network in the thin-film region. Nucleate boiling is modeled using microlayer evaporation theory, with an empirical factor η describing the relationship between the microlayer area fraction and the heat flux as A_ml/A_total = η q_t. The scaling factor η is found to be independent of the structural parameters, and its value is determined to be η = 2.15 × 10⁻³ cm² W⁻¹ from our liquid film boiling experiments. Our model can be reduced to previous models through simplifications. With the same value of η, the model predictions of both HTC and CHF are in good agreement with the reported experimental data for various uniform micro-structured surfaces, including micropillars, micropowders, and micromeshes, within a spread of ±30%. This work provides a tool for designing micro-structured surfaces for advanced thermal management applications.

Figure 2. Pore-scale analysis of nucleate boiling inside the wicking structure. (A) Nucleate boiling inside the wick micropores, which can be represented as annular interconnected micropores [27]. (B) Nucleate boiling in a micropore unit modeled as evaporation through a thin liquid film of uniform thickness in the previous model [27]. (C) Nucleate boiling in a micropore unit modeled as microlayer evaporation at the microlayer region with an average thickness δ_ml in this work.

Figure 3.
Determination of the effective microlayer evaporation factor η from experimental results. (A) Schematic of the custom-made experimental setup for liquid film boiling measurements. (B) Determination of η = 2.15 × 10⁻³ cm² W⁻¹ with our experimental data on copper micromesh samples s1 and s2. (C) Comparison of experimental CHF and model-predicted CHF with η = 2.15 × 10⁻³ cm² W⁻¹. The CHF and the maximum HTC as functions of (D) thickness δ_w, (E) porosity ε_w, and (F) spacing width s_w. The red circles in (D-F) represent the experimental data, and the black solid lines are the modeling results with η = 2.15 × 10⁻³ cm² W⁻¹.

Figure 4. Model validation with experimental data and simplifications. (A) Comparison of model-predicted HTC with the experimental data and predictions from other models [27]. The experimental data on micromesh sample s2 (d_w = 50 μm, s_w = 160 μm, δ_w = 264 μm, and ε_w = 0.71) were collected in this work, and the experimental data on micropowder (d_w = 250 μm, δ_w = 900 μm, and ε_w = 0.64) are taken from Weibel et al. [24]. The model prediction based on Sudhakar et al. [27] for our sample s2 measurement is also presented. A simplified model, RM1 (with h_e = 0 and η q_t = 0.9), is developed from our model by setting conditions consistent with the assumptions of Sudhakar et al. [27]. Another simplified model, RM2, is developed from our model with the assumption h_e = 0. The heat flux fraction q_t/q_CHF is adopted since the CHF varies for different microstructures in the literature. (B) The fraction of the evaporation heat flux atop the wick to the total heat flux, q_e/q_t, as a function of the heat flux fraction q_t/q_CHF. (C) Comparison of the model-predicted CHF with the experimental data and predictions from other models. The red solid line, purple triangle, and blue diamond represent our experimental result for sample s2, the model prediction from Sudhakar et al. [27], and the model prediction from Zhang et al. [26], respectively. (D)
Comparison of measured HTC on the micro-pillared surface with the predictions of this work and of pool boiling models [32,45]. (E) Comparison of measured CHF on the micro-pillared surface with the predictions of this work and of pool boiling models [10,46,47]. The red circles in (D) and the red solid line in (E) are the experimental data on the micropillar (d_w = 60 μm, δ_w = 320 μm, and ε_w = 0.6) from Cai and Bhunia [25].

Table 1. Summary of the experimental conditions and sample characteristics in the literature.

Figure 5. Comparison between model predictions and experimental data on various types of wicking structures: micropillar, micropowder, and micromesh. (A) h_t,model vs. h_t,exp. (B) q_CHF,model vs. q_CHF,exp.
Aging imparts cell-autonomous dysfunction to regulatory T cells during recovery from influenza pneumonia

Regulatory T (Treg) cells orchestrate resolution and repair of acute lung inflammation and injury after viral pneumonia. Compared with younger patients, older individuals experience impaired recovery and worse clinical outcomes after severe viral infections, including influenza and SARS coronavirus 2 (SARS-CoV-2). Whether age is a key determinant of Treg cell prorepair function after lung injury remains unknown. Here, we showed that aging results in a cell-autonomous impairment of reparative Treg cell function after experimental influenza pneumonia. Transcriptional and DNA methylation profiling of sorted Treg cells provided insight into the mechanisms underlying their age-related dysfunction, with Treg cells from aged mice demonstrating both loss of reparative programs and gain of maladaptive programs. Strategies to restore youthful Treg cell functional programs could be leveraged as therapies to improve outcomes among older individuals with severe viral pneumonia.

Introduction

Age is the most important risk factor determining mortality and disease severity in patients infected with influenza virus or SARS coronavirus 2 (SARS-CoV-2; refs. 1-3). Global estimates of seasonal influenza-associated mortality range from 300,000 to 650,000 deaths per year, with the highest at-risk group consisting of individuals over age 75 (4). In the United States, influenza-associated morbidity and mortality have steadily increased, an observation linked to the expansion of the aging population. Pneumonia related to severe influenza A virus and SARS-CoV-2 infection results in an initial acute exudative phase characterized by the release of proinflammatory mediators that damage the alveolar epithelial and capillary barrier, causing refractory hypoxemia and acute respiratory distress syndrome (ARDS) (5).
If a patient survives this first stage, activation of resolution and repair programs during the ensuing recovery phase is crucial for restoration of lung architecture and function, which promotes liberation from mechanical ventilation, decreases intensive care unit length of stay, and extends survival. Immunomodulatory regulatory T (Treg) cells expressing the lineage-specifying transcription factor Foxp3 dampen inflammatory responses to endogenous and exogenous antigens. Aside from their role in maintaining immune homeostasis through their capacity to suppress overexuberant immune system activation, Treg cells reside in healthy tissues and accumulate in the lung in response to viral injury to promote tissue repair (6-8). Our group and others have shown that in murine models of lung injury, Treg cells are master orchestrators of recovery (9-12). Treg cells are capable of promoting tissue regeneration and repair, at least in part through the release of reparative mediators such as the EGF receptor ligand amphiregulin (Areg), which induces cell proliferation and differentiation in the injured tissue (13). Epigenetic phenomena, including DNA methylation, modify the architecture of the genome to control gene expression and regulate cellular identity and function throughout the lifespan (14). Aside from being one of the best predictive biomarkers of chronological aging and age-related disease onset, DNA methylation regulates Treg cell identity through tight epigenetic control of Foxp3 and Foxp3-dependent programs (15). Biological aging is associated with a progressive loss of the molecular and cellular homeostatic mechanisms that maintain normal organ function, rendering individuals susceptible to disease (16,17). Because of their tissue-reparative functions, Treg cells are important modulators of the immune response that promotes tissue regeneration after injury (18). Whether age plays a key role in determining the prorepair function of Treg cells in the injured lung during recovery from viral pneumonia remains unknown. If aging indeed affects Treg cell-mediated recovery, is it a Treg cell-autonomous phenomenon, or is it because the aging lung microenvironment is resistant to Treg cell-mediated repair? Using heterochronic (age-mismatched) adoptive Treg cell transfer experiments and molecular profiling in mice, we sought to determine whether the age-related impairment in repair after influenza-induced lung injury is intrinsic to Treg cells. Our data support a paradigm in which aged Treg cells activate maladaptive responses, fail to upregulate youthful reparative programs, and consequently exhibit a cell-autonomous impairment in prorecovery function, which delays resolution of virus-induced lung injury in aged hosts.

JCI Insight. 2021;6(6):e141690. https://doi.org/10.1172/jci.insight.141690

Results

Aging results in increased susceptibility to influenza-induced lung injury because of impaired recovery.
To evaluate the age-related susceptibility to influenza-induced lung injury, we administered influenza A/WSN/33 (H1N1) virus via the intratracheal route to young (2-4 months) and aged (18-22 months) WT mice. Aged mice exhibited greater than 50% mortality when compared with young animals (Figure 1A), impaired recovery of total body weight following a similar nadir (Figure 1B), and more severe lung injury by histopathology at a late recovery time point, day 60 after infection (Figure 1C). At this same time point, aged mice also displayed an increase in the total number of cells per lung (Figure 1D), which were mainly composed of immune cells identified by the pan-hematopoietic marker CD45 (Figure 1E), suggesting nonresolving tissue inflammation during recovery in older mice. Similar to prior reports (19,20), we confirmed that the increased susceptibility to influenza-induced lung injury in aged mice was observed despite their having cleared the virus by day 14 after infection, the time of the greatest degree of weight loss in both groups (Figure 1F). We next wanted to determine whether the age-related susceptibility to influenza-induced lung injury was due to a differential inflammatory response during the initial acute injury phase. Accordingly, we examined a different group of young and aged mice at a time point when viral clearance was complete and the weight nadir was observed in both groups, 14 days after infection. Aged mice demonstrated increased mortality when compared with young animals at this time point (Supplemental Figure 1A; supplemental material available online with this article; https://doi.org/10.1172/jci.insight.141690DS1), but other markers of acute inflammation, including weight loss (Supplemental Figure 1B), total lung cells (Supplemental Figure 1C), and total lung CD45+ cells in surviving animals (Supplemental Figure 1D), were not significantly different between groups.
Collectively, these results suggest that aging results in similar early injury but persistent lung inflammatory pathology during the recovery phase of influenza-induced lung injury. Aging results in deficient repair after influenza-induced lung injury. Having established that aging results in an increased susceptibility to persistent lung inflammation and injury after influenza infection, we explored whether the impaired recovery in aged mice was linked to a persistent failure to repopulate the structural components of the alveolar-capillary barrier (i.e., failure to repair). Flow cytometry analysis (Supplemental Figure 2) of lung single-cell suspensions on day 60 after influenza infection revealed an increased percentage of alveolar epithelial type 2 (AT2) cells (CD45−T1α−CD31−EpCAM+/CD326+MHCII+) and endothelial cells (CD45−T1α−EpCAM−/CD326−CD31+) (Figure 2, A and B, and Figure 2, D and E, respectively) compared with the naive state. Compared with young animals, aged mice after influenza displayed significantly lower total numbers of AT2 and endothelial cells (Figure 2C and Figure 2F, respectively). In previous studies, investigators demonstrated that after influenza-induced lung injury, a population of cytokeratin 5+ (Krt5+) basal-like cells expands and migrates to the distal airspaces in an attempt to repair the injured epithelial barrier (21). These cells lack the capacity to transdifferentiate into functional AT2 cells, resulting in a dysplastic response that contributes to a dysregulated and incomplete repair phenotype after injury (22). Using a quantitative flow cytometry approach, we found that at 60 days after infection, aged mice exhibited a significant increase in Krt5+ cells compared with young animals (Figure 2, G and H). In summary, older mice failed to repair the injured lung during the recovery phase of influenza-induced lung injury. Aging determines the prorecovery function of Treg cells after influenza-induced lung injury.
We and others have identified an essential role for Treg cells in orchestrating resolution and repair of acute lung injury (9-13). Having established that aged mice failed to repair the injured lung, we next sought to determine whether this finding is due to age-related features altering the lung microenvironment or is driven by cell-autonomous, age-associated Treg cell factors. Thus, we performed heterochronic (age-mismatched) adoptive transfer of 1 × 10⁶ splenic young or aged Treg cells (~90% CD4+CD25hiFoxp3+, Supplemental Figure 3A) via retro-orbital injection into aged or young mice 24 hours after influenza infection (Figure 3A). Notably, adoptive transfer of young Treg cells into aged hosts resulted in improved survival compared with aged mice that received isochronic (age-aligned) adoptive transfer of aged Treg cells. Conversely, adoptive transfer of aged Treg cells into young hosts worsened their survival compared with isochronic adoptive transfer of young Treg cells (Figure 3B). To further characterize the effect of age on the splenic Treg cells used for adoptive transfer, we performed flow cytometry characterization of their Treg cell phenotype. In the spleen and other tissues, Treg cells can be phenotypically subdivided into resting or central Treg (cTreg) cells, which comprise the majority of the Treg cell pool in lymphoid organs, and activated or effector Treg (eTreg) cells, which can migrate to nonlymphoid organs upon stimulation. Aged splenic Treg cells exhibited a significantly decreased percentage of cTreg cells and an increased percentage of eTreg cells in both the naive and post-influenza conditions (Supplemental Figure 3B). Additionally, to determine the rate of lung engraftment of splenic Treg cells after heterochronic adoptive transfer, we quantified the number of transferred splenic Treg cells in the lungs of recipient mice after influenza infection.
Interestingly, we found a small but significant increase in the number of young Treg cells that engrafted into the lungs of aged mice compared with aged Treg cells recovered from the lungs of young recipients (Supplemental Figure 3C). We next turned to an inducible Treg cell depletion system using Foxp3DTR mice in order to eliminate Treg cells from recipients and specifically determine the age-related effect of donor Treg cells on the susceptibility to influenza-induced lung injury (Figure 3C). Heterochronic adoptive transfer of aged Treg cells into Treg cell-depleted Foxp3DTR mice 5 days after infection resulted in increased mortality compared with isochronic adoptive transfer of young Treg cells (Figure 3D). Combined, our findings demonstrated that the loss of Treg cell prorepair function in aged hosts was dominated by intrinsic, age-related changes in Treg cells and not conferred extrinsically by the aging lung microenvironment. Aging results in the loss of prorepair transcriptional programs in Treg cells during recovery from influenza-induced lung injury. To further explore the mechanisms underpinning the age-related loss of Treg cell prorepair function after influenza infection, we performed gene expression profiling using RNA-Seq on flow cytometry-sorted lung CD3ε+CD4+CD25hiFR4+ Treg cells during the naive state or the late recovery phase from influenza (day 60 after infection; Supplemental Figure 3D, Figure 4A, and ref. 23). We confirmed that sorted lung CD3ε+CD4+CD25hiFR4+ cells from young and aged hosts expressed high levels of canonical Treg cell signature genes (Foxp3, Il2ra, Il2rb, Nrp1, Ikzf2, and Ctla4) (Supplemental Figure 3E). Principal component analysis (PCA) of 3,132 differentially expressed genes after multiple-group testing with an FDR q value less than 0.05 demonstrated tight clustering by group assignment; PC1 reflected the transcriptional response to influenza infection and PC2 reflected age (Figure 4B).
K-means clustering of these differentially expressed genes demonstrated that cluster 2 was both the largest cluster and the one that defined the differential response to influenza infection between naive and influenza-treated mice (Figure 4C). Notably, genes from this cluster were significantly upregulated among young Treg cells compared with aged Treg cells after influenza infection (Figure 4D). Functional enrichment analysis revealed that this cluster was enriched for processes related to tissue and vasculature development and extracellular matrix formation (see Figure 4C, right), suggesting an enhanced reparative phenotype of young compared with aged Treg cells. We then performed an unsupervised analysis of the response to influenza infection from the naive state to recovery phase between young and aged Treg cells. This analysis revealed upregulation of 1,174 genes (log2[fold change] > 0.5, FDR q value < 0.05), mostly linked to lung development (including epithelial and endothelial cell differentiation), extracellular matrix organization, and wound healing (Foxp2, Hhip, Klf2, Tns3, Hoxa5, Epcam, Erg, Bmper, Ereg, Lox, Tnc, Lama3, and Spp1) in young hosts (Figure 5, A and B). We also found increased expression of genes associated with specialized Treg cell function in the maintenance of nonlymphoid tissue homeostasis and regenerative function (Il1rl1 [encodes ST2, also known as IL33R], Il18r1, Il10, and Areg). Aged Treg cells demonstrated increased expression of genes associated with the cell cycle (Kif15 and Cdk1), neutrophil chemotaxis (Cxcr1, Cxcl1, and S100a9), and cytotoxic effector function (Gzmk). Gene set enrichment analysis (GSEA) of the pairwise comparison between young and aged Treg cells during the recovery phase after influenza infection revealed that aged Treg cells downregulated repair-associated processes such as epithelial-mesenchymal transition, myogenesis, and angiogenesis compared with young hosts (Figure 6, A and B).
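The differential-expression cutoffs quoted here (log2 fold change > 0.5, FDR q value < 0.05) amount to a simple two-condition filter. The sketch below is purely illustrative: the gene values, column names, and helper function are ours, not part of the study's analysis pipeline.

```python
def filter_upregulated(rows, lfc_cutoff=0.5, q_cutoff=0.05):
    """Keep genes with log2 fold change > 0.5 and FDR q value < 0.05,
    the thresholds quoted in the text (hypothetical helper)."""
    return [r["gene"] for r in rows
            if r["log2_fc"] > lfc_cutoff and r["fdr_q"] < q_cutoff]

# Toy differential-expression results; values are made up for illustration.
toy = [
    {"gene": "Foxp2", "log2_fc": 1.8, "fdr_q": 0.001},
    {"gene": "Hhip",  "log2_fc": 1.2, "fdr_q": 0.010},
    {"gene": "Klf2",  "log2_fc": 0.6, "fdr_q": 0.040},
    {"gene": "Gapdh", "log2_fc": 0.1, "fdr_q": 0.700},
]
print(filter_upregulated(toy))  # ['Foxp2', 'Hhip', 'Klf2']
```

Note that both conditions must hold: Klf2 passes only because its fold change (0.6) clears the 0.5 cutoff and its q value (0.04) falls below 0.05.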
This pairwise comparison demonstrated that although young Treg cells exhibited significantly increased expression of genes associated with naive (resting) state and lymphoid tissue markers akin to central Treg (cTreg) cell phenotype (Lef1, Sell, Satb1, Bcl2, S1pr1, Gpr83, and Igfbp4), aged Treg cells upregulated genes implicated in effector Th1, Th17, and T follicular regulatory differentiation (Tbx21/Cxcr3, Hif-1a, and Sostdc1, respectively); cell cycle (Ccna2, Mmc3 and Msi2); T cell anergy (Rnf128); and DNA damage response (Xrcc5 and Rm1) ( Figure 6A). Collectively, these results revealed that aged Treg cells display a less robust reparative phenotype compared with young hosts and exhibit features of age-related maladaptive T cell responses, including effector T cell differentiation, cell cycle arrest, and DNA damage responses (24). Lung Treg cells from aged mice exhibit a less robust reparative response than lung Treg cells from young animals during recovery from influenza pneumonia. To further investigate the age-related reparative function of Treg cells during the recovery phase of influenza infection, we performed pairwise comparisons of young and aged Treg cells from the naive state and recovery phase (day 60 after infection; Figure 7, A and D). We found that during the Treg cell response to influenza, there were 1,678 upregulated genes in young mice and only 445 upregulated genes in aged mice compared with their respective naive state (FDR q value < 0.05). GSEA revealed upregulation of prorepair hallmark processes in young and aged Treg cells during recovery from influenza infection (Figure 7, B, C, E, and F). We next compared the age-related transcriptional response to influenza infection and found 342 shared genes between both age groups that were associated with prorepair processes (Figure 8). 
The remaining 1,336 uniquely upregulated genes in young Treg cells were linked to reparative processes, in contrast with the 103 uniquely upregulated genes in aged Treg cells. Altogether, these results suggest that although aged Treg cells demonstrated a nominal reparative response during recovery from influenza infection, it was not as robust as the response exhibited by young Treg cells. Previous studies have shown that Treg cells exhibiting a prorepair phenotype express high levels of the alarmin IL-33 receptor (ST2), the receptor for the proinflammatory cytokine IL-18 (IL-18Rα or CD218α), and the antiinflammatory cytokine IL-10 (13, 25). We measured the protein expression of these molecules by flow cytometry and found similar expression between young and aged Treg cells 60 days after influenza infection (Supplemental Figure 4A). We reasoned that measurement of only 3 markers linked to repair incompletely characterizes the complexity of the Treg reparative landscape during influenza-induced lung injury and recovery. Accordingly, we compared our young and aged Treg cell RNA-Seq dataset with previously published datasets describing reparative IL-10+ and IL-18Rα+ Treg cells (13). We found that reparative IL-10+ and IL-18Rα+ Treg cells were most similar to young Treg cells from our dataset when compared with aged Treg cells (Supplemental Figure 4B). Indeed, the number of genes within the unique intersections of young and IL-10+ and IL-18Rα+ Treg cells was significantly higher than the number of genes within the unique intersections of aged and IL-10+ and IL-18Rα+ Treg cells. Genes within the unique intersections of young and IL-10+ and IL-18Rα+ Treg cells were enriched for gene ontology processes linked to reparative processes, including extracellular matrix remodeling, vascular remodeling, and tube morphogenesis (Supplemental Figure 4B).
These results underscore our finding that the youthful Treg cell response during recovery from influenza pneumonia was defined by a more robust reparative phenotype compared with aged hosts. Treg cells from aged mice demonstrate a proinflammatory effector phenotype during recovery from influenza pneumonia. Upon stimulation, Treg cells can exhibit phenotypic and functional adaptability through upregulation of transcription factors and chemokine receptors akin to effector T helper cell subsets (e.g., Th1, Th2, Th17, and T follicular regulatory; ref. 26). The resulting eTreg cells with Th-like phenotypes migrate to inflammatory nonlymphoid tissues and acquire transcriptional and functional programs that mirror the T effector responses they intend to suppress (27, 28). Having observed transcriptional upregulation of effector-associated factors in the aged Treg cell response during recovery from influenza infection, we next performed flow cytometry analysis to measure canonical effector-associated transcription factors and cytokines at the protein level. During the recovery phase after influenza infection, aged mice exhibited a higher percentage of eTreg cells in their lungs compared with young hosts (Figure 9A). In the lung, compared with young CD4+Foxp3+ cells, aged cells exhibited significantly higher expression of the transcription factors Tbet and Ror-γt, canonical master regulators of Th1 and Th17 responses, respectively (Figure 9B). Surprisingly, intracellular cytokine profiling of these cells also demonstrated a significant increase in the percentage of IFN-γ- and IL-17-producing CD4+Foxp3+ cells in the lungs of aged mice compared with young mice (Figure 9C). Profiling of multiple well-established Treg cell suppressive markers, including Foxp3, showed no significant age-related difference during the recovery phase of influenza infection (Supplemental Figure 5A).
In addition, aged and young Treg cells exhibited similar suppressive function in vitro, although there was a trend toward increased suppressive function in aged cells, as has been suggested by a previous report (Supplemental Figure 5B and ref. 29). These data suggest that Treg cells in the lungs of aged animals upregulate inflammatory effector programs during recovery from influenza. Although these effector programs are ostensibly protective during influenza-induced acute lung injury, they may represent an age-related maladaptive Treg cell response, leading to unremitting lung inflammation and injury during recovery. DNA methylation regulates the transcriptional prorepair response to influenza infection. In addition to representing one of the hallmarks of aging, epigenetic phenomena such as DNA methylation regulate the development, differentiation, and functional specialization of T cell lineages, including Treg cells (14-16, 30). Therefore, we reasoned that age-related changes to the Treg cell DNA methylome could inform the divergent prorepair transcriptional response observed between young and aged Treg cells after influenza infection. We performed genome-wide 5′-cytosine-phosphate-guanine-3′ (CpG) methylation profiling with modified reduced representation bisulfite sequencing of sorted lung Treg cells during the naive state or recovery phase after influenza infection (day 60, Figure 10A). PCA of approximately 70,000 differentially methylated cytosines (FDR q value < 0.05) revealed tight clustering according to group assignment; the main variance across the dataset (PC1) reflected methylation changes due to age (Figure 10B), consistent with prior studies (17, 31). We next identified genes that were both differentially expressed and contained differentially methylated cytosines within their gene promoters (ANOVA, FDR q value < 0.05), finding 1,319 genes meeting this strict parameter threshold (Figure 10C).
K-means clustering of these genes' expression levels revealed similarity to the overall heatmap of differentially expressed genes in Figure 4C, suggesting a strong correlation between differential DNA methylation and transcription. GSEA of these genes demonstrated that this methylation-regulated gene expression program was associated with prorecovery processes and was significantly skewed toward young Treg cells ( Figure 10C). Combined, these results support the notion that age-related DNA methylation regulates the reparative transcriptional regulatory network during recovery from influenza-induced lung injury. Discussion We sought to unambiguously address the paradigm of how aging affects Treg cell function during recovery from influenza pneumonia. We used heterochronic (age-mismatched) Treg cell adoptive transfer after influenza infection to establish that the age-related prorepair function of these cells is determined by cell-autonomous mechanisms. While adoptive transfer of young Treg cells into aged or Treg cell-depleted hosts demonstrated a salutary effect, the transfer of aged cells into young or Treg cell-depleted hosts had a detrimental impact on mortality. Comprehensive transcriptional and DNA methylation profiling revealed age-related epigenetic repression of the youthful reparative gene expression profile and activation of maladaptive responses in lung Treg cells among aged hosts. The ongoing coronavirus disease 2019 (COVID-19) pandemic caused by SARS-CoV-2 represents an unprecedented challenge for the scientific community to identify novel pharmacotherapies and strategies for effective disease management. Most studies have addressed the early mechanistic events leading to viral pneumonia-induced lung injury, yet failed to develop efficacious therapies to improve outcomes among patients with viral ARDS. 
Thus, we focused our experimental design on the late stages of recovery from influenza infection with the goal of investigating potentially novel reparative pathways that could be leveraged to enhance lung resilience to viral pneumonia in older hosts. Both influenza A virus- and SARS-CoV-2-induced lung injury disproportionately affect the elderly, who comprise the greatest proportion of infection-related deaths (1, 3, 32, 33). Here, we found that, similar to human epidemiological data and previous preclinical murine studies, aged mice exhibited increased susceptibility and impaired recovery after influenza infection (4, 19). Injury to type 1 and type 2 alveolar epithelial cells and to endothelial cells disrupts the tight gas-exchange barrier, causing accumulation of fluid and proinflammatory mediators in the alveolar space, a hallmark of ARDS pathophysiology (5). Notably, we found that during late recovery from influenza infection, aged hosts demonstrated a decreased number of AT2 cells and endothelial cells compared with young animals, suggesting that failure to repopulate the alveolar lining contributes to the observed age-related impairment in recovery. Severe influenza infection leads to a robust expansion of Krt5+ cells, which migrate distally to form cyst-like structures or pods intended to cover the damaged alveolar wall (22). These pods persist long after the initial infection, lack the capacity to generate a functional alveolar epithelium, and therefore constitute an insufficient reparative response to injury (22). Here, we found that aged animals displayed an increased percentage of Krt5+ cells during the recovery phase of influenza-induced lung injury, which reflects the dysregulated repair response among aged hosts. Over the past decade, Tregs have emerged as key mediators of wound healing and tissue regeneration (7, 18, 34).
This group of specialized cells has been primarily known for the ability to suppress effector immune cell subsets, leading to resolution of inflammation, but these cells are also capable of directly affecting tissue regeneration through production of prorepair mediators, such as amphiregulin and keratinocyte growth factor (13, 25, 35). Investigators have demonstrated that aging can negatively affect the composition and function of the Treg cell pool throughout the lifespan, rendering them inefficient as mediators of tissue repair (36). This decline might occur through cell-autonomous mechanisms that result in altered trafficking to the lung and T cell maladaptive responses that lead to increased susceptibility to disease. For instance, loss of stemness accompanied by differentiation into proinflammatory Th1/Th17 phenotypes, activation of DNA damage responses, and the senescence secretome are among some of the T cell maladaptations that result from the mounting challenges to which the T cell repertoire is exposed over a lifetime (24). These T cell maladaptive changes could also result from an age-related decline in stromal signals and circulating factors from the tissue microenvironment that either affect T cell function directly or render the microenvironment resistant to T cell responses. Our heterochronic adoptive Treg cell transfer experiments definitively address this paradigm, ascertaining that the observed age-related Treg cell dysfunction is due to cell-autonomous mechanisms and dominant over the aged pulmonary microenvironment. Our data demonstrated that aging not only imparted a loss of prorecovery Treg cell function, but also a gain of some of these maladaptive features compared with young hosts.
Future studies should aim to address the respective contributions of the loss of reparative function and the gain of maladaptive function to the overall phenotype and function of aged Treg cells during virus-induced lung injury and recovery. What are the molecular mechanisms underpinning the age-associated Treg cell loss of reparative function in the lung after influenza infection? Gene expression profiling of lung Treg cells during the recovery phase of influenza infection revealed that young Treg cells significantly upregulated genes (compared with aged Treg cells) linked to biological processes associated with a robust prorepair signature, including extracellular matrix organization, alveologenesis, and vasculogenesis. Here, we demonstrated that the young Treg cell prorepair program was dominated by Areg expression and other genes related to the above-mentioned reparative processes. Interestingly, we found no difference when comparing the suppressive phenotype and function of young versus aged Treg cells after influenza infection and during steady state. These results suggest that the reparative program of Treg cells is separable and distinct from their suppressive program, an important observation that informs the development of Treg cell-based immunotherapies to target molecular pathways regulating their reparative function. In regard to aged Treg cells, we found that although they were capable of upregulating a prorepair program after influenza infection, the response was far less robust than the youthful reparative response. Moreover, aged Treg cells displayed increased expression of genes associated with an effector phenotype, including canonical Th1 and Th17 lineage markers (Tbet/IFN-γ and Ror-γt/IL-17, respectively). Whether this finding represents an age-related functional adaptability of Treg cells after influenza infection or is the result of Treg cell lineage instability leading to effector differentiation remains unknown.
For our studies, because of the practical limitations of aging Foxp3 genetic reporter animals, we identified Treg cells as CD3ε+CD4+CD25hiFR4+, a strategy validated by our transcriptomics data (Supplemental Figure 3E) and previous work (23). Nevertheless, bulk RNA-Seq analysis averages gene expression profiles across a heterotypic Treg cell pool and presents them as a monotypic population, constituting a limitation to our data interpretation. Future studies should implement single-cell technologies to accurately describe the heterogeneity of the Treg cell landscape during virus-induced lung injury and recovery, which will allow investigators to identify Treg subpopulations that can be leveraged for cell-based therapies. Establishment of a Treg cell-specific DNA methylation pattern at key genomic loci is necessary to maintain the lineage stability and immunosuppressive function of Treg cells (15, 30). Epigenomic profiling has revealed that Treg cell-specific alterations in methylation patterning modulate Treg cell transcriptional programs and increase susceptibility to human autoimmune diseases (37). Whether epigenetic phenomena have a similar regulatory role in modulating the Treg cell reparative gene expression program remains unknown. Here, we used an unsupervised bioinformatics analysis to uncover a Treg cell-specific methylation-regulated transcriptional program enriched for reparative processes during recovery from influenza infection. Our computational integrative approach provides inferential evidence that age-related DNA methylation can modify the expression of genes linked to prorepair processes in Treg cells but does not prove causality and therefore represents a limitation of our study. Future research could focus on leveraging epigenome editing technologies to establish the causality of age-related, Treg cell-specific DNA methylation patterns in controlling their regenerative function (38).
In conclusion, our study establishes that aging imparts cell-autonomous dysfunction to the reparative capacity of Treg cells after influenza pneumonia. The youthful reparative transcriptional response of Treg cells is dominated by processes linked to epithelial and endothelial cell repair and extracellular matrix remodeling and demonstrates regulation by DNA methylation. Aged Treg cells exhibited a less robust reparative program and displayed features of maladaptive T cell responses. These findings carry important implications for the development of small molecule- and Treg cell-based therapeutics that promote restoration of lung architecture and function after viral pneumonia in our increasingly older population. Methods Mice. Young (2-4 months) and aged (18-22 months) C57BL/6 mice were obtained from The National Institute on Aging Aged Rodent Colony. Foxp3^DTR mice were purchased from The Jackson Laboratory (Jax 016958). Animals received water ad libitum, were housed at a temperature range of 20°C to 23°C under 14-hour light/10-hour dark cycles, and received standard rodent chow. Administration of influenza A virus and lung histopathology. WT C57BL/6 mice were anesthetized with isoflurane and intubated using a 20-gauge angiocatheter cut to a length that placed the tip of the catheter above the carina. Mice were instilled with a mouse-adapted influenza A virus (A/WSN/33 [H1N1]; 3 PFU/mouse, or 2 PFU/mouse for Foxp3^DTR mice, in 50 μL of sterile PBS), as previously described (39). To prepare organ tissues for histopathology, the inferior vena cava was cut and the right ventricle was perfused in situ with 10 mL of sterile PBS. A 20-gauge angiocatheter was then sutured into the trachea via a tracheostomy. The lungs were removed en bloc and inflated to 15 cmH2O with 4% paraformaldehyde.
Next, 5 μm sections from paraffin-embedded lungs were stained with H&E and examined using light microscopy with a high-throughput, automated slide imaging system, TissueGnostics (TissueGnostics GmbH). Tissue preparation, flow cytometry analysis, and sorting. Single-cell suspensions from harvested mouse lungs were prepared and stained for flow cytometry analysis and flow cytometry sorting as previously described using the reagents listed in Supplemental Table 1 (see also refs. 15, 40, 41). The CD4+ T Cell Isolation Kit, mouse (Miltenyi Biotec) was used to enrich CD4+ T cells in single-cell suspensions prior to flow cytometry sorting. Cell counts of single-cell suspensions were obtained using a Cellometer with AO/PI staining (Nexcelom Bioscience) before preparation for flow cytometry. Data acquisition for analysis was performed using a BD Symphony A5 instrument with FACSDiva software (BD Biosciences). Cell sorting was performed using the 4-way purity setting on BD Biosciences FACSAria SORP instruments with FACSDiva software. Analysis was performed with FlowJo v10.6.1 software. Cytokine measurements. Lungs were harvested from young and aged mice and a single-cell suspension was obtained. Red blood cells were removed with ACK Lysis Buffer (Thermo Fisher Scientific) following the manufacturer's instructions. Single-cell suspensions were plated on 12-well cell culture plates (Thermo Fisher Scientific) at a concentration of 1 × 10^6 cells/mL in RPMI plus 2 μL/mL Leukocyte Activation Cocktail with GolgiPlug (BD Biosciences) and incubated for 4 hours at 37°C. After incubation, cells were resuspended in PBS and stained with a viability dye and subsequently with fluorochrome-conjugated antibodies directed at IFN-γ (clone XMG1.2), IL-17 (clone TC11-18H1), IL-10 (clone JES5-16E3), and IL-4 (clone 11B11). Data acquisition and analysis were performed as described above. Treg cell isolation and adoptive transfer.
Splenic CD4+CD25+ Treg cells were isolated from euthanized young (2-4 months) and aged (18-22 months) C57BL/6 mice by use of magnetic separation with the EasySep Mouse CD4+CD25+ Regulatory T Cell Isolation Kit II (STEMCELL Technologies) according to the manufacturer's instructions. A separate group of young and aged C57BL/6 mice were challenged with 3 PFU/mouse of influenza A virus as previously described (39). Twenty-four hours later, a single-cell suspension of 1 × 10^6 isolated splenic Treg cells in 100 μL of sterile PBS was obtained and transferred via retro-orbital injection into the influenza-treated mice. Foxp3^DTR mice were challenged with 2 PFU/mouse of influenza A virus. Diphtheria toxin (List Biologicals) was administered via i.p. injection in 100 μL of sterile PBS at the following doses and days relative to influenza A virus infection (day 0): 50 μg/kg on day -2 and 10 μg/kg every 48 hours starting on day 0 and ending on day 28 after infection. Five days later, 1 × 10^6 young or aged splenic Treg cells in 100 μL of sterile PBS were transferred via retro-orbital injection into the influenza-treated Foxp3^DTR mice. RNA-Seq. Flow cytometry-sorted lung Treg cells were pelleted in RLT Plus buffer with 1% 2-mercaptoethanol and stored at -80°C until RNA extraction was performed. The Qiagen AllPrep DNA/RNA Micro Kit was used for simultaneous isolation of RNA and DNA (15, 42). RNA quality was assessed using a 4200 TapeStation System (Agilent Technologies). mRNA was isolated from 1 ng of purified total RNA using oligo-dT beads (New England Biolabs, Inc.). The NEBNext Ultra RNA kit was used for full-length cDNA synthesis and library preparation. Libraries were pooled, denatured, and diluted, resulting in a 2.0 pM DNA solution. PhiX control was spiked in at 1%. Libraries were sequenced on an Illumina NextSeq 500 instrument using the NextSeq 500 High Output reagent kit (1 × 75 cycles).
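The diphtheria toxin regimen described above (50 μg/kg on day -2, then 10 μg/kg every 48 hours from day 0 through day 28) can be laid out programmatically to sanity-check dose counts and cumulative exposure. This is an illustrative sketch; the function name and output format are ours, not part of the study's methods.

```python
def diphtheria_toxin_schedule(end_day=28):
    """Return (day, dose in μg/kg) pairs for the regimen quoted in the text:
    a 50 μg/kg dose on day -2, then 10 μg/kg every 48 hours (every 2 days)
    from day 0 through day 28 relative to infection on day 0."""
    schedule = [(-2, 50.0)]                      # loading dose before infection
    schedule += [(day, 10.0) for day in range(0, end_day + 1, 2)]
    return schedule

doses = diphtheria_toxin_schedule()
print(len(doses))                  # 16 injections in total (1 + 15)
print(sum(d for _, d in doses))    # 200.0 μg/kg cumulative dose
```

Laying the schedule out this way makes it easy to see that the maintenance phase alone comprises 15 injections (days 0, 2, ..., 28).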
For RNA-Seq analysis, FASTQ reads were demultiplexed with bcl2fastq v2.17.1.14, trimmed with Trimmomatic v0.38 (to remove low-quality basecalls), and aligned to the Mus musculus or mm10 (GRCm38) reference genome using TopHat v.2.1.0. Resultant bam files were imported into SeqMonk v1.45.4 to generate raw count tables with
Evaluation of Wound-Healing and Antioxidant Effects of Marantodes pumilum (Blume) Kuntze in an Excision Wound Model Marantodes pumilum (MP) is a great source of herbal medicine used traditionally by both men and women for various purposes. MP may have potential wound-healing effects due to its diverse biological properties. An extensive study was conducted in a normal male rat model for determining the effects of MP var. pumila (MPvp) and var. alata (MPva) on the wound healing process. Here, 126 male Sprague-Dawley rats were divided randomly into seven groups as follows: sham-operated (SH), vehicle dressing (VD), flavine dressing (FD), MPvp leaves (PL), MPvp roots (PR), MPva leaves (AL), and MPva roots (AR). The parameters studied were the percentage of wound contraction, histomorphology study by hematoxylin and eosin (H&E), Masson–Goldner trichrome (MGT), and immunohistochemistry (IHC) staining. In addition, the levels of enzymatic antioxidants and malondialdehyde were also measured in the wound tissue homogenates. Wounds treated with extracts (PL, PR, AL, and AR) showed significantly faster healing (p < 0.05) compared to untreated and control groups (SH, VD, and FD). Histological analysis among MP-treated groups revealed better re-epithelialization, higher collagen deposition, enhanced fibronectin content and fibroblast cells, and higher fiber transformation from collagen-III to collagen-I, accompanied with a significant surge in enzymatic antioxidant activities and a decline in lipid peroxidation. MP has antioxidant effects that may enhance wound healing in the rat model. Introduction Wounds are physical injuries associated with surgical procedures, falling, heat, infectious disease, or underlying pathological conditions of tissue that interrupt normal tissue functions [1]. Wound healing is a normal physiological process where a set of biomolecular events is involved. 
These biological processes are fundamental to restore the functional integrity of injured tissues and are associated with four sequential yet overlapping wound-healing phases: hemostasis, inflammation, proliferation, and remodeling [2]. At the start of the healing process, hemostasis, platelets are activated at the wound site and form a clot by aggregating with fibrin protein to prevent further bleeding [3]. Immune cells release many inflammatory mediators that help to engulf pathogens and debris from the wound. Preparation of Plant Extracts and Phytochemical Profiling. Plant extractions and phytochemical profiling of the plant extracts were performed based on the standardized aqueous extraction method [22,28]. In this study, both varieties of MP, var. pumila (MPvp) and var. alata (MPva), were collected from the Bujang Melaka Forest Reserve in Malaysia. A botanist, Mr. Sani Miran from the Faculty of Science, Universiti Kebangsaan Malaysia, authenticated the plant. The voucher specimens of MPva (voucher number: UKMB 30006/SM 2622) and MPvp (UKMB 30007/SM s.n) were deposited at the Herbarium of Universiti Kebangsaan Malaysia. The plants were separated into two varieties and each variety was further divided into two parts: leaves and roots (consisting of stems and roots). Each part was thoroughly washed and air-dried under shade. The dried materials were ground and weighed before use. The ground materials were extracted in distilled water at a 1:13 ratio of material to solvent for leaves and a 1:10 ratio for roots at 60 °C for 2 h using the reflux method. Each mixture was cooled and filtered. Then, the filtrate was freeze-dried overnight to sublimate the water in the frozen extract and to obtain the dried extract. The dried crude extracts were stored at −20 °C until further use. Phytochemical profiling was performed on the crude extracts using liquid chromatography-tandem mass spectrometry (LC-MS/MS) based on the method described by Darmani, M. et al. [28].
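The material-to-solvent ratios stated above (1:13 for leaves, 1:10 for roots) determine how much distilled water is needed for a given mass of ground material. The helper below is an illustrative sketch of that arithmetic, assuming the ratios are by weight and that 1 g of water occupies about 1 mL; it is not taken from the paper.

```python
def solvent_volume_ml(material_g, part):
    """Distilled-water volume for the reflux extraction, using the
    material:solvent ratios quoted in the text (1:13 leaves, 1:10 roots).
    Assumes weight ratios and ~1 g/mL water density (our assumption)."""
    ratios = {"leaves": 13, "roots": 10}
    return material_g * ratios[part]

print(solvent_volume_ml(100, "leaves"))  # 1300 mL of water for 100 g of leaves
print(solvent_volume_ml(100, "roots"))   # 1000 mL for 100 g of roots
```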
Contents of selected compounds in the crude extracts were quantified from the calibration curves of six standard marker compounds, namely gallic acid, ellagic acid, caffeic acid, myricetin, apigenin, and quercetin, using LC-MS/MS. All standard calibration curves achieved good regression (>0.99). Topical Ointment Preparation. Extracts were made into fine powder using a mortar and pestle. Fine particles have faster absorption rates and achieve better uniformity for ointment preparation. Cetomacrogol emulsifying ointment (Hovid Berhad, Malaysia) is a kind of paraffin used as a vehicle in these topical preparations. This paraffin is chemically inactive on the skin and is also inert and, therefore, would not react with the extracts. To optimize the concentration of plant extracts, a pilot study was conducted in which six different concentrations of 0.1%, 0.5%, 1.0%, 2.0%, 3.0%, and 4.0% from each extract were applied. The dose of 1.0% concentration for both leaf and root extracts of MPvp and 2.0% concentration for both leaf and root extracts of MPva exhibited the best effects in expediting open-wound healing compared to the control group and other concentrations in the rat model. Based on the pilot study, 1.0% of MPvp leaf and root extracts and 2.0% of MPva leaf and root extracts were mixed with the vehicle to prepare the topical ointments [29]. Briefly, weighed paraffin and extract powder were put together on a clean glass plate and mixed with a spatula. Each mixing process of ointment was conducted three times to ensure uniformity. Then, the ointment was collected into a labeled jar and covered properly. Experimental Animals. The Universiti Kebangsaan Malaysia Animal Ethics Committee approved all protocols of this animal experiment (project approval number: FP/FAR/2014/ISA/26-NOV./637-JAN.-2015-DEC.-2016).
One hundred and twenty-six healthy male Sprague-Dawley rats were supplied by the Animal Research Center, Universiti Kebangsaan Malaysia, Titiwangsa, Kuala Lumpur, Malaysia. All rats weighed between 200 and 250 g and were aged between 3 and 5 months. The rats were housed in plastic cages and acclimatized for one week to the laboratory environment at 22 ± 5 °C temperature, 80 ± 10% humidity, and 12-h day/night cycles. They were fed with food pellets (Gold Coin, Malaysia) and water, provided ad libitum. Rats were anesthetized intraperitoneally (IP) by injecting a cocktail preparation of ketamine hydrochloride (100 mg/mL) and xylazine hydrochloride (20 mg/mL) in a 1:1 ratio prior to all surgical procedures [30]. Excision Wound Model. The excision wound model, as described by Latif, M.A. et al. [31], was used in this study to determine the wound-healing effect. The rats were weighed before the anesthetic was administered, and the weight readings were used to determine the volume of the anesthetic injection. In this study, the volume administered to each rat was 0.1 mL/100 g body weight, injected intraperitoneally (IP). Once the rats were anesthetized, hair on the dorsal surface was removed with an electric trimmer, and the skin was softened and disinfected using 70% alcohol and povidone-iodine solutions. Four wounds were made on the dorsal surface, at a distance of 1.0 to 1.5 cm from each other. Each wound was punched to 6 mm in diameter and 2 mm in skin thickness. Wounds were punched on the dorsal surface to prevent scratching and biting by the rat itself. Treatment was started on the same day. The 126 male rats (Sprague-Dawley rats, 200-250 g body weight) were randomly and equally divided into 7 groups, namely sham-operated (SH), vehicle dressing (VD), flavine dressing (FD), MPvp leaves (PL), MPvp roots (PR), MPva leaves (AL), and MPva roots (AR) groups.
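The injection-volume rule of 0.1 mL per 100 g body weight reduces to a one-line calculation. The function below is an illustrative sketch (the name and interface are ours, not the authors'):

```python
def anesthetic_volume_ml(body_weight_g):
    """IP injection volume at 0.1 mL per 100 g body weight,
    i.e. body weight in grams divided by 1000."""
    return body_weight_g / 1000.0

# Rats in this study weighed 200-250 g, so:
print(anesthetic_volume_ml(200))  # 0.2 mL
print(anesthetic_volume_ml(250))  # 0.25 mL
```

For the 200-250 g animals used here, injection volumes therefore range from 0.2 to 0.25 mL of the ketamine/xylazine cocktail.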
The SH group was left to heal normally, while the VD and FD groups were treated with emulsifying ointment and flavine, respectively, as controls. A 1.0% aqueous extract of MPvp leaves and roots was applied to the wounds of the PL- and PR-treated groups, and a 2.0% aqueous extract of MPva leaves and roots was applied to the wounds of the AL- and AR-treated groups. Wounds were dressed daily, and 0.01 to 0.1 g of the ointment was applied to each wound once a day until complete healing. The percentage of wound contraction was measured on days 0, 2, 5, 8, and 9, and daily thereafter until complete wound healing. Photographs of the wounds were taken for macroscopic observation after each measurement of wound contraction. Six rats were taken from each group on days 2 and 8 for histomorphological analysis using hematoxylin and eosin (H&E) staining, Masson-Goldner trichrome (MGT) staining, and immunohistochemistry (IHC) staining; antioxidant enzyme activities were assessed by measuring superoxide dismutase (SOD) and glutathione peroxidase (GPx) levels, and lipid peroxidation was assessed by determining malondialdehyde (MDA) levels. Wound Healing Measurement Macroscopic appearance and wound contraction were the two factors examined for excision wound healing observation. To examine the macroscopic view and wound contraction, the wound area was calculated and photographs of the wound area on the skin were taken on day zero and on days 2, 5, 8, 9, 10, 11, 12, and 13 after injury. A digital camera (Sony Cybershot, Japan) was used to take photographs, and a digital caliper (General, China) was used to measure wound size according to the clock method [32]. The changes in wound size were used to determine the day the wound healed, based on the equation by Ahmad, S.U. et al. [24]. Histological Analysis On day 2 and day 8 after surgery, a full-thickness skin biopsy was taken from the center of the wounds with surrounding tissue.
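The wound contraction measurement referenced above follows the equation in [24]; contraction is conventionally expressed as the percentage reduction of the wound area relative to day 0. A minimal sketch of that standard form (the day-8 area below is a hypothetical value for illustration):

```python
import math

def wound_contraction_pct(area_day0, area_day_t):
    """Percentage wound contraction: (A0 - At) / A0 * 100."""
    return (area_day0 - area_day_t) / area_day0 * 100.0

# A 6-mm punch wound has an initial area of pi * (3 mm)^2.
a0 = math.pi * 3.0 ** 2   # ~28.27 mm^2 on day 0
a8 = math.pi * 0.5 ** 2   # hypothetical residual wound on day 8
print(f"{wound_contraction_pct(a0, a8):.1f}% contraction")
```

A value approaching 100% indicates a nearly closed wound; complete healing corresponds to a residual area of zero.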
Tissue samples were fixed in 10% neutral buffered formalin for histological analysis. After processing through a series of graded alcohols, the samples were embedded in paraffin wax. The tissues were sectioned at 5 µm for hematoxylin and eosin (H&E) and Masson-Goldner trichrome (MGT) staining. H&E staining was used to assess the skin microstructure, including epithelialization, fibroblast proliferation, inflammatory cell infiltration, vascularization, and granulation tissue formation [33]. MGT staining was performed according to the MGT kit protocol (Merck, Germany) to measure collagen density. Photomicrographs were analyzed using a microscopic image analyzer (Leica Microsystems, Germany). Stained sections were scored by histology experts in a blind fashion using a modified 0 to 3 numerical scale [34]. Immunohistochemistry Analysis Immunohistochemistry (IHC) staining was performed using the mouse-specific Dako ARK™ (Animal Research Kit) Peroxidase IHC staining kit (K3954, Agilent, California, USA) for mouse monoclonal antibodies to fibronectin (ab6328, Abcam, Massachusetts, USA) and collagen-III (ab6310, Abcam, Massachusetts, USA), and the rabbit-specific HRP/DAB detection IHC staining kit (ab80437, Abcam, UK) for rabbit polyclonal antibodies to fibroblast (ab28244, Abcam, USA) and collagen-I (ab34710, Abcam, Massachusetts, USA), according to the vendor's protocol. The tissues for IHC were sectioned to 3-µm thickness. After being dewaxed in xylene and dehydrated through a series of alcohol concentrations, tissues were treated with antigen retrieval solution (Agilent Dako, California, USA) in a microwave oven at maximum temperature to reduce non-specific antibody binding. Endogenous peroxidase activity and unwanted proteins were quenched using the peroxidase and protein blocks.
Tissue slides were then incubated with the different primary antibodies, including anti-fibronectin and anti-fibroblast, followed by incubation with an HRP conjugate as the secondary antibody. Finally, sections were incubated with the DAB substrate chromogen and counterstained with hematoxylin. Images of the IHC staining were taken using a microscopic image analyzer (Leica Microsystems, Germany) at 20× magnification, and the intensity of specific protein activity was scored from 0 (negative cells/tissues) to 3 (deeply stained cells/tissues) by the blind method [34]. Biochemical Analysis Wound tissue samples of 1.0 cm × 1.0 cm were excised on day 2 and day 8 post-wounding and analyzed biochemically to estimate endogenous antioxidant enzyme and lipid peroxidation activities. Each tissue sample was dried and weighed prior to analysis. Tissues were transferred to a bead tube with Tris-buffer and homogenized using a microtube homogenizer (Benchmark Scientific, USA). The homogenized tissues were then centrifuged at 6000 rpm for 20 min, and the supernatant was collected for the biochemical estimations. Activity levels of endogenous antioxidant enzymes such as superoxide dismutase (SOD) (Item No 706002; Cayman) and glutathione peroxidase (GPx) (Item No 703102; Cayman), and lipid peroxidation, measured as malondialdehyde (MDA) content (Item No 10009055; Cayman), were measured following the vendor's protocol. Statistical Analysis All quantitative data are presented as mean ± standard error of the mean (SEM) and were analyzed using the Statistical Package for the Social Sciences (SPSS, version 23.0). Data were evaluated by one-way ANOVA followed by Tukey's HSD post-hoc test for statistical differences; p values < 0.05 were considered significant. Phytochemical Analysis LC-MS/MS chromatographic profiles of the extracts are shown in Figure 1.
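The statistical pipeline described above (one-way ANOVA followed by Tukey's HSD post-hoc test) was run in SPSS; the same comparison can be sketched in Python with SciPy. The group readings below are hypothetical illustrative values, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical SOD readings (U/mL) for three groups of six rats;
# values are illustrative only, not the study's measurements.
sh = np.array([0.030, 0.032, 0.029, 0.031, 0.033, 0.031])
vd = np.array([0.025, 0.027, 0.026, 0.028, 0.025, 0.026])
al = np.array([0.040, 0.042, 0.041, 0.039, 0.043, 0.041])

# One-way ANOVA across the groups
f_stat, p_val = stats.f_oneway(sh, vd, al)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3g}")

# Tukey's HSD post-hoc test for pairwise differences (SciPy >= 1.8)
res = stats.tukey_hsd(sh, vd, al)
print(res)
```

The ANOVA answers whether any group mean differs; Tukey's HSD then identifies which pairs differ while controlling the family-wise error rate, matching the p < 0.05 criterion stated above.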
Chromatographic profiling is a useful tool that provided preliminary comparative information on the chemical composition and complexity of the four extracts from different plant varieties and plant parts. Based on the MS database, several different compounds were characterized from the extracts, indicating that they have varied chemical compositions (Table 1). Only gallic acid and ellagic acid were quantifiable in all extracts, with MPvp leaf extract having the highest amount of gallic acid (6.81 mg/g) and MPvp root extract having the highest amount of ellagic acid (0.335 mg/g) (Table 2). However, only MPvp root extract had a quantifiable amount of caffeic acid (0.00052 mg/g). The contents of the other flavonoids were too low to be quantified by the LC-MS/MS method. Determination of Wound Contraction The results for the wound healing parameters, including macroscopic observation and wound healing measurement, are presented in Figures 2 and 3 and Table 3. In the macroscopic observations, the wound changes and healing patterns of the treatment groups were more advanced than those of the sham and control groups throughout the treatment period (Figure 2). Wounds treated with the four different extracts (PL, PR, AL, and AR) healed on days 8 to 9, whereas the wounds of the positive controls treated with flavine (FD) healed around day 13. A similar wound healing pattern was seen in the sham and negative control groups, with both healing at approximately day 11. Fluid secretion around the wound area was observed on days 1 to 2 for all groups. However, a scab started to form in the treated groups on day 3 but was delayed in the sham and control groups. The scab began to detach from the wound area on day 5 in the treated groups and was replaced by whitish fibrous tissue by days 8 to 9. An opposite trend was seen in the control groups with respect to granulation tissue proliferation.
In these groups, granulation tissue proliferated between days 7 and 10, followed by fibrous tissue growth on days 11 to 13. Figure 3 shows the mean day of complete wound healing for all experimental groups. The healing rate of the groups treated with PL and PR (p < 0.05) and AL and AR extracts (p < 0.001) was significantly higher than that of the untreated and control groups (SH, VD, and FD). Among the four treated groups (PL, PR, AL, and AR), there were no significant differences in the day of complete wound healing, and the same was true among the untreated and control groups. Histological Evaluation Histopathological examination of the wound tissues was carried out using hematoxylin and eosin (H&E) staining for evaluation of skin morphology and Masson-Goldner trichrome (MGT) staining for estimation of collagen deposition. In Table 4, the histological findings are similar for each group in terms of regeneration of epithelial cells, granulation tissue formation, proliferation of fibroblast cells, and neovascularization, except for inflammatory cell infiltration, which showed a different trend. At the early stage (day 2), histological parameters such as the proliferation of epithelial cells, vascular cells, and granulation tissue in the wounded skin were immature, with scores of around 1. By day 8, most of the parameters had matured in the treated groups, with scores of 2 to 3, whereas the control groups were less mature than the treated groups on the corresponding days. An opposite trend was seen in inflammatory cell infiltration: inflammatory cell scores were higher at the early stage and lower at the later stage (day 8) in both the control and the treated groups on corresponding days. The scores of three treated groups (PR, AL, and AR) were significantly different from those of the FD group. Figure 4 shows the histomorphology of the wounded skin by H&E staining in the SH, VD, FD, PL, PR, AL, and AR groups on days 2 and 8 post-wounding.
The skin histology showed that on day 2 there were many inflammatory cells on the necrotic cell layer. The skin microstructure at the wound site, such as the epidermal, dermal, and hypodermal layers, was not prominent at this early stage, as indicated by the lack of epithelial cell proliferation, fibroblasts, and granulation tissue formation. On day 8, inflammatory cells still appeared in the wound area and the epidermal layer was not yet mature in the control groups, whereas these features were significantly improved in the treated groups. In the treated groups, the wound healing process proceeded through epithelialization with simultaneous formation of keratinocytes. Newly formed capillaries, fibroblasts, collagen, and connective tissue were seen in the treated groups on day 8, whereas in the control groups the formation of fibroblast cells and granulation tissue, and eventually the re-epithelialization of the wounded skin, was delayed until days 11 to 13. Overall, the skin structure was more mature in the treated groups than in the control groups at the later stage. Collagen deposition assessed by MGT staining was scant in all experimental groups at the early stage (Table 3). However, on day 8, the score was significantly higher in the treated groups than in the untreated and control groups (p < 0.05 versus SH, VD, and FD). The score for the treated groups was approximately 2.58 ± 0.20, while that of the FD control group was approximately 1.42 ± 0.20. Figure 5 shows the collagen deposition in the wounded skin tissue evaluated by MGT staining on days 2 and 8 post-wounding. At the early stage (day 2), collagen was poorly developed in all groups; collagen deposition increased with time. It was mature and significantly increased in all the treated groups on day 8, whereas it was mature but less pronounced in the control groups. Table 4.
Mean scores from histological evaluation by hematoxylin and eosin (H&E) and estimation of collagen deposition by Masson-Goldner trichrome (MGT) staining of the seven male rat experimental groups: SH, VD, FD, PL, PR, AL, and AR. All data are given as mean ± S.E. for six animals in each group. Statistically significant results are indicated as (*) p < 0.05 versus the untreated and control groups (SH, VD, and FD) and (a) p < 0.05 versus the FD group. SH: sham control group (without treatment); VD: treated with vehicle (Cetomacrogol ointment); FD: treated with flavine (Acriflavine 0.1%); PL: treated with 1.0% concentration of MPvp leaf extract; PR: treated with 1.0% concentration of MPvp stem-root extract; AL: treated with 2.0% concentration of MPva leaf extract; AR: treated with 2.0% concentration of MPva stem-root extract. Table 5 shows the scores for fibronectin, fibroblast, collagen-III, and collagen-I antibodies obtained by IHC staining of all groups on day 2 and day 8 after wound induction. The scores indicate the intensity of antibody expression in the wounded skin tissue. The fibronectin score ranged from 2.58 ± 0.20 to 3.00 ± 0.0 in the treated groups on day 2, whereas the reactivity decreased with time, and the score was less than one for all groups on day 8. The intensity of fibronectin and collagen-III expression was significantly higher at the later stage (day 8) in the treated groups (p < 0.05) compared to the untreated and control groups. Figure 6 indicates that the intensity of fibronectin was extensive in almost all of the groups and decreased with time. Collagen-III was less abundant in the control groups on day 8, whereas there was minimal appearance in the treated groups (Figure 7). In contrast, an opposite trend was seen in both the score and intensity of fibroblast and collagen-I: at the early stage (day 2), the intensity of fibroblast and collagen-I expression was low for all groups (Figures 6 and 7).
The reactivity was enhanced with time and varied between groups; however, there was no significant difference in the fibroblast and collagen-I intensity scores on day 8 between the control and treated groups. Table 5. Mean scores of protein expression obtained by immunohistochemistry (IHC) staining of the seven male rat experimental groups: SH, VD, FD, PL, PR, AL, and AR. All data are given as mean ± S.E. for six animals in each group. Statistically significant results are indicated as (*) p < 0.05 versus the untreated and control groups (SH, VD, and FD) on the same day. SH: sham control group (without treatment); VD: treated with vehicle (Cetomacrogol ointment); FD: treated with flavine (Acriflavine 0.1%); PL: treated with 1.0% concentration of MPvp leaf extract; PR: treated with 1.0% concentration of MPvp stem-root extract; AL: treated with 2.0% concentration of MPva leaf extract; AR: treated with 2.0% concentration of MPva stem-root extract. Figure 8 shows the mean SOD levels in skin tissue on post-wounding days 2 and 8 for the seven rat experimental groups: SH, VD, FD, PL, PR, AL, and AR. At the early stage of wound treatment (day 2), the SOD levels ranged from 0.019 ± 0.003 to 0.024 ± 0.004 U/mL, with no significant differences between the groups. The SOD levels increased gradually after day 2 for all groups. On day 8, the SOD levels in three treated groups (PR = 0.039 ± 0.002 U/mL; AL = 0.041 ± 0.002 U/mL; and AR = 0.038 ± 0.002 U/mL) were significantly higher than those of the SH (0.031 ± 0.002 U/mL), VD (0.026 ± 0.002 U/mL), FD (0.027 ± 0.002 U/mL), and PL (0.031 ± 0.002 U/mL) groups. Figure 9 shows the mean GPx levels in skin tissue on post-wounding days 2 and 8 for the seven rat experimental groups: SH, VD, FD, PL, PR, AL, and AR.
Similar to the trend in SOD levels, during the early stage of wound treatment (day 2) GPx levels ranged from 4.58 ± 3.30 to 14.50 ± 5.63 U/mL, with no significant differences among the groups. After day 2, GPx levels increased gradually for all groups. On day 8, GPx levels in the AL-treated group (25.78 ± 8.89 U/mL) were significantly higher than in the SH (12.45 ± 2.34 U/mL), VD (14.18 ± 5.54 U/mL), and FD (15.96 ± 6.37 U/mL) groups. However, there were no significant differences between the other treated groups (PL = 20.35 ± 7.85 U/mL; PR = 18.92 ± 6.57 U/mL; and AR = 19.89 ± 6.89 U/mL) and the control groups, or among the treated groups. Discussion Cutaneous wound healing is of major public health interest because wounds are among the most common morbidities in daily life. Natural medicinal plants have become a reliable source of therapeutic agents; however, scientific evidence on topical agents and their therapeutic effects on skin wound healing is limited [35]. Groups treated with the aqueous extracts of MP showed faster healing and earlier wound contraction than the untreated and control groups, reflecting MP's various biological effects on the different phases of the wound healing process. Visual observation indicated that wounds treated with MP formed scabs earlier than those in the untreated and control groups, with the fluid secreted by inflammatory cells resolving rapidly. These results suggest that MP has anti-inflammatory activities that can resolve the inflammatory process in wounds and thereby accelerate healing. Persistent inflammation delays the healing process and induces a pathological condition [36]. Scab formation in the MP-treated groups started from day 3, and the scabs were covered with new epithelial cells shortly after. Scabs in the untreated and control groups were retained for a longer time, which prevented the proliferation of new epithelial cells and eventually delayed the healing process [37].
The proliferation phase starts with the formation of a scab in the skin tissue, and many cellular events are involved in the contraction of the wound area [6]. Although the wounds observed on day 9 and day 13 were found to be healed macroscopically in this study, the proliferation phase of wound healing can take up to three months, and the remodeling phase continues for several months after the wound is formed [38]. The histopathological findings of re-epithelialization activity, inflammatory cell infiltration, fibroblast proliferation, angiogenesis, fibronectin fiber formation, collagen deposition, and granulation tissue formation are good indicators of the wound contraction process promoted by MP. In the current study, inflammatory cells were significantly more numerous at the early stage and diminished at the later stage of wound healing. These results correlate with a previous study reporting inflammatory cell infiltration at the injury site after the formation of a stable clot in the dermal tissue [39]. Neutrophils are the first inflammatory cells to arrive at the wound site, where they eliminate microorganisms and initiate the inflammatory process [40]. Neutrophils prepare the wound bed for healing by removing necrotic tissue, debris, and bacterial contaminants, as well as by releasing growth factors and activating fibroblasts. Angiogenesis occurs concurrently with fibroblast proliferation as endothelial cells migrate to the wound area [41]. As fibroblast and epithelial cell activities require oxygen and nutrients, angiogenesis is imperative for other stages of wound healing such as epidermal and fibroblast migration. Our H&E findings showed that, even though fibroblasts and vascular endothelial cells appeared from the early stage, their numbers increased significantly at the later stage of wound healing in the treated groups compared to the untreated and control groups. Our histological analysis of fibroblast intensity also showed a similar trend.
One study suggested that fibroblasts begin entering the wound site during the late inflammatory phase [42]. Usually, fibroblasts are derived from the uninjured cutaneous tissue adjacent to the wound site; they can also be derived from blood-borne circulating adult stem cells/precursors [43]. At the late stage of the inflammatory phase, fibroblasts migrate to the wound site, adhering to fibrin through fibronectin [44]. This study showed that the intensity of fibronectin in the treated groups was higher than in the control groups at the early stage, whereas it decreased significantly in the treated groups at the later stage of wound healing. These results are supported by a previous study demonstrating that fibronectin has profound effects on the wound healing process [45]; it can induce the growth and migration of the extracellular matrix during the development and organization of granulation tissue [46]. During the early stage of wound healing, fibroblasts are deposited into the wound bed and produce collagen as they migrate toward the wound site [47]. Collagen deposition is important because it increases the strength of the wound. Most of the collagen found in skin is type-I and type-III. Collagen-III appears at the beginning of the wound healing process and is replaced by the stronger type-I collagen during the proliferation and maturation phases [48]. Collagen controls many cellular functions, including cell shape and differentiation [49,50], migration [51], and the synthesis of a number of proteins [52]. The current study, in both the MGT and IHC analyses, showed similar results: collagen became denser at the later stage of wound healing. Granulation tissue consists of many fibers, such as collagen and extracellular matrix, along with blood vessels and various cells, including inflammatory cells, fibroblasts, myofibroblasts, and endothelial cells.
These components are aligned and rearranged in the maturation and remodeling phase [53]. At the end of the granulation phase, fibroblasts begin to undergo apoptosis, converting granulation tissue from an environment rich in cells to one that consists mainly of collagen [2]. Epithelialization involves the proliferation and migration of epithelial cells across the wound bed [54]. Therefore, the higher re-epithelialization scores in the MP extract groups might be due to facilitated proliferation and/or increased viability of epithelial cells [55]. Reactive oxygen species (ROS), or free radicals, are also major contributors to oxidative stress, which delays wound healing [56]. Leukocytes and many inflammatory cells, including neutrophils and macrophages, release ROS at wound sites [57,58]. Overproduction of ROS causes the breakdown of collagen fibers and extracellular matrix, which leads to chronic wounds [9,59]. Cellular enzymatic antioxidants such as SOD and GPx hasten wound healing by destroying free radicals [60]. The SOD and GPx levels of the treated groups were significantly higher than those of the untreated and control groups at the late stage of wound healing, suggesting that the antioxidant activity of MP may enhance its wound healing activity. Excessive ROS production also induces lipid peroxidation, which affects various cellular functions such as granulation tissue formation, collagen and fibroblast metabolism, angiogenesis, and epithelialization [61]. The significant reduction in MDA levels, an index of lipid peroxidation, in the granulation tissue may accelerate the wound healing process. Several studies have shown that antioxidant enzymes can reduce free radicals in the body and limit lipid peroxidation [62]. MP has antioxidative scavenging activities that are effective in reducing lipid peroxidation during wound healing.
The antioxidative properties of MP may be attributable to its phytochemical constituents, such as gallic acid, ellagic acid, and caffeic acid. Gallic acid has valuable antioxidant properties, exhibiting free-radical-neutralizing capability [63]. Ellagic acid has also been demonstrated to possess a strong ability to scavenge free radicals both in vivo and in vitro [64,65]. The antioxidant activity of phenolic agents is mostly due to their redox attributes, which allow them to act as hydrogen donors, singlet oxygen quenchers, and reducing agents [66]. Phytochemicals may enhance the wound healing process by neutralizing oxygen anions and inhibiting peroxyl free radicals [67]. The presence of these phytochemicals thus supports a role for MP in the wound healing process. Conclusions MP facilitated the wound healing process in the male rat model. Histological analysis of the MP-treated groups revealed better re-epithelialization, enhanced fibronectin content and fibroblast numbers, and greater fiber transformation from collagen-III to collagen-I, accompanied by an abatement of inflammatory cells in the granulation tissue. Moreover, MP administration caused a significant increase in enzymatic antioxidant activities and a decline in lipid peroxidation. The high content of phenolic compounds in the MP extracts may be responsible for their antioxidative properties, and this high antioxidant activity suggests that the plant can be used as an effective wound-healing agent. Further studies with purified constituents are needed to elucidate the complete mechanism of the wound healing activity of MP extracts.
Modelling Deaths Associated with Road Traffic Accidents and other Factors on Great North Road in Zambia between the Years 2010 and 2016 Using Poisson Models Background: According to the World Health Organization (WHO), 1.24 million people die annually on the world's roads, with 20-50 million sustaining non-fatal injuries. More than 85% (1.05 million) of the global deaths due to injuries occur in the developing world. Road traffic deaths and injuries are a major but neglected public health challenge that requires concerted efforts for effective and sustainable prevention. The objectives of the study were to estimate the incidence rate of death from RTAs, to determine factors associated with serious and fatal Road Traffic Accidents (RTAs), and to determine which of the Poisson models fits the count data better. INTRODUCTION The World Health Organization (WHO) defines a Road Traffic Accident (RTA) as a collision on a road involving at least one vehicle; RTAs are among the major causes of deaths, injuries, and disabilities globally. According to the WHO, the epidemic of road traffic injuries is increasing in most regions of the world [2]. It has a great impact on Disability-Adjusted Life Years (DALYs) and has consequently become a public health problem, particularly in developing countries. A DALY is a measure of overall disease burden, expressed as the number of years lost due to ill-health, disability, or early death [2].
The WHO reports that about 1.24 million people die annually on the world's roads, with 20-50 million sustaining non-fatal injuries [2]. Globally, road traffic injuries are reported as the leading cause of death among young people aged 15-29 years and are among the top three causes of mortality among people aged 15-44 years [3]. More than 85% (1.02 million out of 1.24 million deaths) of the global deaths due to injuries occur in the developing world, consuming substantial health sector resources [4]. Further, road traffic deaths and injuries are a major but neglected public health challenge that requires concerted efforts for effective and sustainable prevention, as the number of people dying on the roads keeps increasing worldwide. The increased burden from road traffic injuries and deaths is partly due to economic development, which has led to an increased number of vehicles on the road [4]. The dynamic nature of this multi-causal phenomenon affects victims to different degrees depending on the type of accident (run-over pedestrian, motorbike accident, or another type of accident involving a vehicle or motorbike) and demographic characteristics (sex, age, skin colour, marital status, and level of education) [5,6]. Several factors contribute to the occurrence of an RTA, among them socio-demographic factors such as the age and sex of the driver. The vehicle's condition also contributes, because a vehicle that is not roadworthy is prone to accidents. Driving under the influence of alcohol is another documented factor. There is also an appreciation that inclement weather is associated with more hazardous driving conditions; various studies show that precipitation in the form of rainfall and snowfall generally results in more accidents [7,8]. Another study [9] supports these findings and adds that causes of RTAs include, among others, human or driver errors, vehicle characteristics,
and traffic infrastructure, including engineering design, road maintenance, and traffic regulation. Driver attitude, including road courtesy and behaviour, driving under the influence of drugs (especially alcohol), male sex, use of seat belts, and driver age (teenage and elderly drivers) are among the recognised human factors [10]. Another study [11] examined urban RTA risks in the city of Zagreb, Croatia, from 1999 through 2000. The accidents were analysed with the aim of reducing the increasing injury incidence, and the results indicated that more fatal accidents occurred during night hours, on urban road links, and when the speed limit was exceeded. In estimating incidence rates, one study [12] examined the effect of age on driver performance and safety in professional heavy vehicle drivers. They modelled incidence rate ratios for male drivers of rigid trucks aged 45-54 compared with those older than 65; the results showed that drivers 65 years of age and older were significantly less likely to have a crash. For drivers in the 55-64 age group, there was no difference between their crash rate and that of their younger peers. Globally, Zambia is ranked 29th in the world for RTAs and has a death rate of 26.51 per 100,000 people [2]. Statistics from the Zambia Police [13] indicate that road deaths in Zambia increased by 85% between 2012 and 2014, from 1,000 to 1,858 respectively. Despite the growing body of literature on factors (for example, socio-economic and demographic factors, roadway geometric and environmental characteristics, and human behaviours) associated with RTAs on highways in other countries, to our knowledge, no appropriate modelling techniques have been employed to estimate the incidence rate of death from RTAs and to identify factors associated with RTAs on the Great North Road (GNR) highway from Lusaka to Kapiri-Mposhi in Zambia.
The statistical techniques commonly applied to model these scenarios are limited to basic methods such as linear and Poisson regression, which do not account for overdispersion. Further, road traffic accident data violate most of the assumptions on which the standard Poisson regression model is based. Appropriate extensions of this model, even though available, are rarely used by most applied statisticians. Regarding modelling approaches for count data, several studies have used different models in different scenarios. In this regard, [14] studied the relationship between highway geometric factors and truck accidents in Virginia using both linear and Poisson regression models. In comparing these regression models, they concluded that the linear regression techniques used in their research did not adequately describe the relationship between truck accidents and the independent variables, but that the Poisson models did. In addition, recent research has shown that the NB model can be significantly affected by datasets characterized by a heavy tail [15].
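The overdispersion mentioned above (variance exceeding the mean, violating the equidispersion assumption of the Poisson model) can be diagnosed with a simple dispersion statistic. The counts below are simulated for illustration only, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated daily death counts with extra-Poisson variation
# (negative binomial with mean 3 and variance 7.5). Illustrative only.
counts = rng.negative_binomial(n=2, p=0.4, size=500)

mean, var = counts.mean(), counts.var(ddof=1)
dispersion = var / mean
print(f"mean = {mean:.2f}, variance = {var:.2f}, "
      f"dispersion = {dispersion:.2f}")

# A ratio well above 1 signals overdispersion: a negative binomial
# model is then preferable to plain Poisson regression.
assert dispersion > 1.0
```

Under a true Poisson process the ratio would hover near 1; values well above 1 are the signal that motivates the NB and zero-truncated extensions explored in this study.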
The main objective of this study was to apply Poisson models to estimate the incidence rate of death from RTAs and to identify factors associated with death from RTAs. In modelling the number of deaths associated with RTAs and other factors on the GNR between the years 2010 and 2016 using Poisson models, the study also determined which of the models fitted the RTA data better. This work therefore utilizes a series of Poisson models, including the Poisson model, the Negative Binomial (NB), the Zero-Truncated Poisson (ZTP), and the Zero-Truncated Negative Binomial (ZTNB), to analyse the impacts of various explanatory variables on daily serious and fatal crash frequencies on the GNR over a seven-year period (2010-2016) in Zambia. The findings of this study may assist policy makers in understanding the areas they need to focus on in order to enhance the planning and evaluation of policies in the transport sector, to prevent deaths from RTAs, and to improve the transport system in Zambia. The study will also help other researchers dealing with count outcomes to know when best to apply these models. Study Design The study design was a cross-sectional study in which secondary data were used to model the number of deaths associated with road traffic accidents on the Great North Road between the years 2010 and 2016. Study Site The study used secondary data on RTAs that had occurred on the Lusaka to Kapiri-Mposhi highway. The data were obtained from the Zambia Police traffic sections at police stations along the GNR, namely Emmasdale, Matero, Kabangwe, Chisamba, Prospect, Kasanda, and Kapiri-Mposhi. The total distance from Lusaka to Kapiri-Mposhi is approximately 200.8 kilometres. This study site was selected due to the high number of deaths from accidents recorded there. The stretch of the GNR under study is a single carriageway of approximately 204 kilometres and was divided into five stretches (Fig.
1); this was done so as to determine which stretch of the road had a higher or lower incidence rate of death compared to the other stretches. The five stretches are Lusaka to Katuba, Katuba to Landless Corner, Landless Corner to ZNS, ZNS to Mulungushi and Mulungushi to Kapiri-Mposhi. The five stretches are not of equal distance, as these mainly depended on the coverage areas of the different police stations where the data were collected. Study Variables The outcome variable in this study was the number of deaths. It is a count because the number of deaths is non-negative and takes whole numbers only. The explanatory variables in the study included socio-demographic factors, such as the age and gender of the driver. Other variables included the time of the accident, quarter of the year, vehicle type, cause of the accident and stretch of the road where the accident occurred. For the purposes of this study, the cause of an accident refers to the fault that gave rise to that accident. Sampling and Sample Size The minimum sample size was calculated using the prevalence formula n = z^2 * P * (1 - P) / d^2, where n is the sample size, z = 1.96, d is the degree of error (0.05) and P is the proportion (0.5). This gave a sample size of 384.16 deaths, which is approximately 385 deaths. In this study, all fatal and serious RTAs that had occurred between the years 2010 and 2016 along the GNR (Lusaka to Kapiri-Mposhi) were considered; as a result, we had a sample size of 1,023. This large sample size in turn increases the power of the study to detect the effect size.
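The prevalence-based minimum sample size calculation described above can be sketched as follows; the parameter values (z = 1.96, P = 0.5, d = 0.05) are those stated in the text.

```python
import math

def sample_size(z: float = 1.96, p: float = 0.5, d: float = 0.05) -> float:
    """Cochran's prevalence formula: n = z^2 * P * (1 - P) / d^2."""
    return (z ** 2) * p * (1 - p) / (d ** 2)

n = sample_size()
print(round(n, 2))   # 384.16
print(math.ceil(n))  # 385, the minimum number of deaths required
```

The study's actual sample of 1,023 accidents comfortably exceeds this minimum, which is why the authors note the increased power to detect the effect size.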
Statistical Methods The outcome variable, the number of deaths, is a count variable. The aim of regression analysis in such instances is to model the dependent variable (deaths) as a function of some or all of the explanatory variables. In this instance, the Poisson distribution (rather than the Normal) is more appropriate, since the Poisson mean is always greater than or equal to zero, whereas the Normal mean can be less than zero. One of the main assumptions of the Poisson model is that the mean should be equal to the variance. However, other Poisson-type models that do not carry this assumption, such as the NB, ZTP and ZTNB, were also explored so as to select the best-fit model. The zero-truncated models were explored in this case because there were very few accidents with no deaths, hence the data-generating process naturally truncated zero counts. Several methods for count data have been advanced, and these include the Poisson model, the Zero-Truncated Poisson (ZTP), the Negative Binomial (NB) and the Zero-Truncated Negative Binomial (ZTNB). We explored all these models, and the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) statistics were used in selecting the best-fit model. The four models, the Poisson, ZTP, NB and ZTNB, were compared to see which one fits the data well. All p-values reported were two-tailed, and values below 0.05 were considered statistically significant. All analyses were performed using STATA software, version 14.0 SE (Stata Corporation, College Station, TX, USA).
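Although the fitting was done in Stata, the AIC/BIC model-selection step can be sketched generically. The log-likelihoods and parameter counts below are illustrative placeholders (the paper reports only the resulting AIC/BIC values, not the fitted log-likelihoods), chosen so the comparison mirrors the reported ordering.

```python
import math

def aic(loglik: float, k: int) -> float:
    # Akaike Information Criterion: 2k - 2*lnL
    return 2 * k - 2 * loglik

def bic(loglik: float, k: int, n: int) -> float:
    # Bayesian Information Criterion: k*ln(n) - 2*lnL
    return k * math.log(n) - 2 * loglik

# Hypothetical (log-likelihood, number of parameters) for each model:
fits = {
    "Poisson": (-700.0, 16),
    "NB":      (-660.0, 17),
    "ZTP":     (-636.3, 16),
    "ZTNB":    (-354.1, 17),
}
n_obs = 1023
table = {m: (aic(ll, k), bic(ll, k, n_obs)) for m, (ll, k) in fits.items()}
best = min(table, key=lambda m: table[m][0])  # lowest AIC wins
print(best)  # ZTNB
```

With these values the ZTNB wins decisively, matching the paper's conclusion that the zero-truncated negative binomial fits the data best.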
RESULTS Due to the presence of overdispersion in the data (variance greater than the mean), the NB model was favoured over the Poisson model. Equi-dispersion of crash data is unlikely to be truly observed, as crash-frequency data are typically overdispersed. Unobserved dispersion arises when the covariates are not fully capable of capturing the heterogeneity across the cities in the country [16]. In addition to the overdispersion of the outcome variable, the number of zeroes in the data was very small, as most of the accidents were fatal; this is another problem often faced with count data, and a more robust model such as the ZTNB model was therefore employed. Table 1 gives the characteristics of the accidents that were analysed. Descriptive Statistics A total of 1,023 RTAs were analysed, in which 1,212 people died: 7% (82/1,212) juveniles and 93% (1,130/1,212) adults. Accidents that happened as a result of pedestrians crossing the road accounted for 30% (310/1,023), and 29% (295/1,023) of the RTAs were the result of a driver's excessive speed. The mean age of the drivers was 37 years with a standard deviation of 9.7, a minimum age of 15 years and a maximum of 76 years. The mean number of deaths was 1.2 and the variance was 4.6 (variance > mean, indicating overdispersion, so the plain Poisson model cannot be used). The distribution of deaths over the years is given in the bar chart (Fig. 2). Model Explorations In order to model these traffic deaths, there is need for a careful selection of one or more models that may provide a good description of the traffic type, estimation of parameters such as the mean and variance for the selected models, and statistical testing for the selection of one of the considered models and analysis of its suitability to describe the traffic type under analysis.
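The overdispersion check above can be made explicit using the reported summary statistics (mean 1.2, variance 4.6): the dispersion ratio far exceeds 1, which is what rules out the plain Poisson model.

```python
# Quick equi-dispersion check on the reported summary statistics.
def dispersion_ratio(mean: float, variance: float) -> float:
    return variance / mean

ratio = dispersion_ratio(1.2, 4.6)
print(round(ratio, 2))  # 3.83
assert ratio > 1  # overdispersed, so prefer NB/ZTNB over plain Poisson
```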
Since RTAs are non-negative integer counts of random events, the distribution of such events follows a Poisson distribution, and the methodologies to model accident counts are well developed. Since in this study the variance was greater than the mean, resulting in overdispersion, we applied a Negative Binomial (NB) regression model, which is a Poisson-gamma mixture [17-19]. In our data, the number of zeros was very small, as most of the accidents had at least one person dying. Because of this, we also applied the Zero-Truncated Poisson and the Zero-Truncated Negative Binomial models. The results from these models were compared to select the best-fit model for these data using the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). The ZTNB was the best-fit model, giving the lowest values of AIC and BIC. The two competing models were the ZTP and the ZTNB: the ZTP had AIC = 1304.55 and BIC = 1336.55, whereas the ZTNB had AIC = 742.25 and BIC = 819.69. This indicated that the ZTNB, with the lower AIC and BIC, was the best-fit model for the data. Table 2 shows the results from the best-fit model, the ZTNB. Socio-demographic and Socio-economic Factors The socio-demographic factors that were considered in this study included the age of the driver, the sex of the driver and a dummy variable for adults (1 = adult, 0 = juvenile), whereas the accident-related factors included the quarter of the year in which the accident happened, the time of the accident, the vehicle type, the cause of the accident and the stretch of the road. The description of each of these factors is given in the next sub-section.
Age of Driver Due to the missing-data problem encountered as a result of hit-and-run cases in this study, the complete-case analysis that was used had information on 800 drivers. The mean age of the drivers was 37 years, with a standard deviation of 9.7, a minimum age of 15 years and a maximum age of 76 years. For the age variable, we found no evidence of an association between the age of a driver and having a serious or fatal crash leading to injury or death (IRR = 1.01, 95% CI = 0.99-1.03, p-value = 0.358). Quarter of the Year The accidents in the study were fairly evenly distributed throughout the year: 27.1 percent (277) of the accidents happened in the third quarter (July to September), 25.8 percent happened in the fourth quarter (October to December), and 251 (24.5%) of the 1,023 accidents happened in the first quarter of the year (January to March). However, in the inferential statistics, the quarter of the year did not contribute to the final model. Sex of Driver A total of 830 drivers whose information on sex was available were abstracted, of which 803 (96.7%) were male drivers and 27 (3.30%) were female drivers. Because these are deaths on the spot, there were instances where the sex of the driver was recorded for a particular RTA but the age of the driver for the same RTA was not. This is because it is easy to identify the sex of the deceased (especially in cases where the driver dies), unlike knowing the age of the deceased. As a result, we had more observations with the sex of the driver (830) than with the age of the driver (800). The sex of the driver is a crucial variable in the analysis of road traffic crashes. Results from this study indicated an increased rate of death for a male driver compared to a female driver (IRR = 9.57, 95% CI = 0.96-95.46) with borderline evidence, p-value = 0.054.
Time of Accident The time of the accident in this study was categorised as early morning, morning, afternoon and night (between 1AM and 6AM, between 7AM and 12PM, between 1PM and 6PM, and between 7PM and 12AM, respectively). For this variable, 403 out of 1,021 (39.50%) of the accidents happened at night between 7PM and 12AM, 29.4% (300/1,021) happened in the afternoon between 1PM and 6PM, 15.8% (161/1,021) happened in the early morning (1AM-6AM) and 15.4% (157/1,021) happened in the morning (7AM-12PM). Results further showed that driving in the early hours of the day (between 1AM and 6AM) was significantly associated with a high incidence rate of death (IRR = 2.1, 95% CI = 1.01-4.41), adjusting for all other variables in the model. Cause of Accident The cause of the accident in this paper refers to the fault that gave rise to a particular accident. The causes of accidents included pedestrians crossing the road inappropriately, excessive speed or overtaking inappropriately by the driver, cutting in or Failure To Keep Near Side (FTKNS), an unlicensed or inexperienced driver, and unknown causes (not traced due to hit-and-run cases). The majority of these accidents were the result of pedestrians crossing the road inappropriately, at 30.30% (310/1,023). RTAs that were the result of excessive speed and overtaking inappropriately accounted for 28.84% (295/1,023). Results indicated a statistically significant reduction in the incidence rate of death from RTAs for pedestrians crossing the road compared to excessive speed (IRR = 0.04, CI = 0.01-0.12, p-value < 0.0001). There was also a reduced incidence of death from an RTA for cutting in or FTKNS compared to excessive speed (IRR = 0.17, CI = 0.07-0.42); this finding was statistically significant, p-value < 0.0001. Stretch of the Great North Road (GNR) The stretch of the GNR under study is 200.8 kilometres and was divided into five stretches (Fig.
1), so as to determine which stretch of the road had a higher or lower incidence rate compared to the others. The greatest number of the accidents, 39.20% (401/1,023), occurred on the stretch between Mulungushi University and Kapiri-Mposhi, with the fewest, 8.50% (87/1,023), between Katuba and Landless Corner (Table 1). The study further found an increased incidence of death when driving between Katuba and Landless Corner compared to driving between Lusaka and Katuba (IRR = 4.41, CI = 1.39-14.01), and this was statistically significant, p-value = 0.012. The results also revealed an increased incidence of death between Landless Corner and ZNS compared to driving between Lusaka and Katuba (IRR = 9.06, CI = 3.29-24.62), p-value < 0.0001. There was also an over five-fold increase in the incidence of death when driving between Mulungushi University and Kapiri-Mposhi compared to driving between Lusaka and Katuba (IRR = 5.73, CI = 2.23-14.73), p-value < 0.0001. Mode of Transport The common modes of transport on this road were private transport, trucks and public transport. Of all the accidents that happened in this period, the majority, 466 out of 858 (54.31%), involved private transport, whereas trucks accounted for 30.30% (310/858). Results from the best-fit model (ZTNB) revealed that public transport had an increased incidence of death from RTAs compared to private transport (IRR = 5.65, 95% CI = 2.97-10.73), p-value < 0.0001.
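The incidence rate ratios quoted throughout these results are exponentiated count-model coefficients with Wald confidence intervals. A minimal sketch, using values back-solved from the reported public-versus-private transport estimate (the actual fitted coefficient and standard error are not given in the text, so these are reconstructions for illustration):

```python
import math

def irr_with_ci(beta: float, se: float, z: float = 1.96):
    """Convert a count-model coefficient (log scale) to an incidence
    rate ratio with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Back-solved from IRR = 5.65, 95% CI 2.97-10.73 (public vs private transport):
beta = (math.log(2.97) + math.log(10.73)) / 2          # ~1.73
se = (math.log(10.73) - math.log(2.97)) / (2 * 1.96)   # ~0.33
irr, lo, hi = irr_with_ci(beta, se)
print(round(irr, 2), round(lo, 2), round(hi, 2))  # 5.65 2.97 10.73
```

Because the CI is symmetric on the log scale, the point estimate is the geometric mean of the interval endpoints, which is why the reconstruction recovers the reported values.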
DISCUSSION The incidence rate of death from RTAs was estimated in this study, and the factors associated with an increased incidence rate of death were male sex of the driver, driving in the early hours of the day, and using public transport and trucks. Other factors associated with an increase in the incidence rate of death from RTAs included the stretches from Katuba to Landless Corner, Landless Corner to ZNS, ZNS to Mulungushi University and Mulungushi University to Kapiri-Mposhi, as compared to Lusaka to Katuba. It has been shown in this study that the number of deaths on this stretch of the GNR increased over time from 2010 to 2016. One study [20] documented a high likelihood of younger drivers being involved in RTAs, finding that young people were at high risk of road traffic injuries among car and motorcycle users, while among bicycle and public transport users the risk was greater in older people. In this study, however, there was no evidence of an association between the driver's age and the incidence of death from RTAs. The sex of the driver is a crucial variable in the analysis of road traffic accidents. Results from this study indicated an increased rate of death for a male driver compared to a female driver. This finding has only borderline statistical evidence, so we cannot rule out a chance finding; further, the wide confidence interval is an indication that this estimate is not very precise. On the other hand, the finding established here regarding the greater risk of serious and fatal injuries in males is consistent with other studies that used travel time [21] and found that males were more likely than females to be involved in road traffic accidents.
Driving in the early hours of the day (between 1AM and 6AM), as compared to driving at night (7PM-12AM), showed a significant increase in the incidence rate of death from RTAs, adjusting for all other variables in the model. This increase at these hours could be due to driver fatigue or excessive speed, as there is less traffic in the early hours of the day. Contrary to what was found in this study, a study by [22], in which time was grouped as daylight and night, found a greater risk of injury for drivers traveling during daylight hours than for those driving at night. The study found an increased incidence rate of death when driving between Katuba and Landless Corner compared to driving between Lusaka and Katuba. The results also revealed an increased incidence of death between Landless Corner and ZNS as compared to driving between Lusaka and Katuba. This increase in the incidence of death on this stretch could be a result of the curvature, a blind spot, as the road is not straight on this particular stretch. Several other studies have analysed such accident-prone road sections [23], considered one of the basic steps to reduce road accident rates. In this direction, several methods to identify blackspots have been proposed, including the accident frequency method, accident rate methods, the quality control method, the empirical Bayesian method and many more [24-28]. Further, Geographical Information Systems (GIS) have been incorporated into the analysis of blackspots [23], the effectiveness of blackspot programs has been evaluated in different countries [29,30], and this remains an active field of research. Further research is needed in Zambia on these blackspots, especially on highway roads.
Results also revealed that public transport, compared to private transport, had an increased incidence of death from RTAs; this finding was statistically significant. Similar studies done in developing countries also show that public (that is, bus/minibus) transport has serious safety concerns as a result of frequent involvement in severe accidents [31]. In these countries, bus/minibus accidents are rampant, with alarming consequences [32,33]. This could be because private vehicle owners tend to be more careful on the roads than bus drivers, as the latter drive long distances and are fatigued, and hence are prone to crashes. This finding is important for influencing government policy to limit the number of kilometres, as well as the number of hours, a public vehicle driver can drive in a day, in order to reduce the number of RTAs. The study further showed that the ZTNB fits the fatal and serious accident data well as compared to the ZTP, NB and Poisson models. This finding is in agreement with findings from [34], where, when applied to the total number of vehicles involved in an accident and the number of casualties in a particular accident, the ZTNB was found to be the best-fit model compared to the other models. CONCLUSION The study showed an increased incidence of death when the driver is male, when driving in the early hours of the day (1AM-6AM) and when using public transport. There is also an increased incidence of death when driving between Landless Corner and ZNS compared to driving between Lusaka and Katuba. The study further revealed that the ZTNB is the best-fit model for data in which there are few zeros, as is the case with serious and fatal RTAs. The majority of the accidents on this particular stretch happen as a result of human error, including excessive speed and FTKNS, among others.
RECOMMENDATIONS From the findings of this study, expansion of highways like the GNR is highly recommended, as the number of vehicles in the country has been increasing while the roads have remained the same. As a result, there is heavy traffic on this stretch, especially at peak hours, and impatient drivers tend to overtake unnecessarily, especially on blind spots, and this leads to RTAs. The high number of RTAs could also be due to a lack of sufficient pedestrian crossings on this highway, especially in built-up areas such as Chibombo and Kabangwe. There is a need for massive sensitization of citizens, especially pedestrians, as most of these deaths are the result of pedestrians crossing the road inappropriately. There is a need for the Road Development Agency to put in speed humps, especially in built-up areas such as Chibombo and Kabangwe. The government needs to limit the number of kilometres a driver can drive per day, as driving more kilometres results in RTAs. LIMITATIONS OF THE STUDY The study had limitations, as the data used were collected for different purposes and not specifically to answer our research questions. As a result, problems encountered included inaccurate or missing data on some observations and on some vital variables, such as drinking alcohol or using drugs while driving and the use of hand-held mobile phones whilst driving. Administrative data, which are not originally collected for research, were not available in the usual research formats, and in this case the variables to consider were limited to those that were found and recorded by the ZP traffic division. Despite these limitations, the study sample was large enough to make inferences, and, considering the years covered, a good number of accidents for different periods were captured in this study. Statistical theory suggests that the larger the sample, the more reliable the estimates will be with respect to the population from which the data arise. Hence we used
this principle in analysing a larger sample than the minimum sample size calculation suggested. The methodologies employed in this study are robust and appropriate for modelling count data. AUTHORS' CONTRIBUTIONS RF came up with the idea for the research area (RTAs); RF and PM designed the research problem. RF acquired the data, performed the analysis, and drafted the manuscript with the help of PM. CN and CM helped extensively in editing the work. All authors discussed the results and implications and commented on the manuscript at all stages. All authors contributed extensively to the work presented in this paper. All authors read and approved the final manuscript. Descriptive statistical analysis was done to estimate the counts, giving frequencies and percentages. Fig. (1). Snapshot of the GNR stretch under study between Lusaka and Kapiri-Mposhi in Zambia (source: Google Maps). Table 1. Characteristics of the accidents. Note: *Missing values encountered; FTKNS: Failure To Keep Near Side; ZNS: Zambia National Service; b: total number of deaths. Table 2. Multivariable analysis with the ZTNB model: the best predictors of the number of deaths from RTAs. Significant variables at the 0.05 level of significance (adjusted estimates); Ref: reference/comparison group; IRR: incidence rate ratio; 95% CI: 95% confidence interval; FTKNS: Failure To Keep Near Side; ZNS: Zambia National Service.
Radiation-driven rotational motion of nanoparticles The observation of the effects of the radiation pressure of a Gaussian beam, obtained by tracking the rotational motion of single-crystal nanoparticles, is presented. Introduction Measurement of the rotational motion of nanometer-sized tracer particles is a useful way to characterize viscous and viscoelastic media (Cheng & Mason, 2003; Andablo-Reyes et al., 2005). Consider a small single crystal suspended in a fluid exposed to an X-ray beam of intense synchrotron radiation. When it happens to be oriented at its Bragg angle, there will be measurable diffraction intensity on an area detector aligned at the proper 2θ scattering angle. Should the crystal rotate about an axis parallel to the incident beam, the Bragg condition is maintained, and the resulting change in position of the diffraction peak can be monitored with extremely high sensitivity. Interpreting the resulting motion of the diffraction signal on the detector as a function of time is the mechanism behind the technique of Rotational X-ray Tracking (RXT) (Liang et al., 2014). The rotational mean-squared displacement (MSD) of the particle can reveal important microscopic details about the system: a purely viscous medium would have a linear MSD with a slope set by the viscosity, the presence of a driving force on the particle would introduce a quadratic dependence, and a more complex functional dependence would indicate a viscoelastic medium. RXT can have a resolution of tens of microradians, three orders of magnitude higher than that of optical particle-tracking techniques (Anker & Kopelman, 2003; Fakhri et al., 2010). The time and angular resolution of RXT depends on both the pixel size and readout rate of the detector and the intensity of the incident X-ray beam. There must be sufficient photons scattered and detected within the time span of interest to identify the position of the Bragg peak. This suggests that higher flux is required for better time resolution.
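The MSD signatures described above (linear for pure diffusion, an added quadratic term for a driven particle) can be sketched directly; the diffusion coefficient and drift rate below are illustrative values, not measurements from the paper.

```python
# Rotational MSD model: a purely diffusive particle gives a linear MSD,
# 2*D_r*t, while a constant driving torque adds a ballistic (quadratic)
# term, (omega*t)^2.

def rotational_msd(t: float, d_r: float, omega: float = 0.0) -> float:
    return 2.0 * d_r * t + (omega * t) ** 2

d_r = 2e-8  # rad^2/s, illustrative rotational diffusion coefficient
# Doubling the lag exactly doubles a diffusive MSD...
assert abs(rotational_msd(2.0, d_r) / rotational_msd(1.0, d_r) - 2.0) < 1e-12
# ...but more than doubles it when a drift (driven) term is present.
assert rotational_msd(2.0, d_r, omega=1e-4) / rotational_msd(1.0, d_r, omega=1e-4) > 2.0
```

This super-linear growth of the MSD ratio is precisely the diagnostic used later in the paper to detect radiation-pressure-induced drift.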
Higher flux, however, means increased radiation effects in terms of heating, radiation pressure and damage. In order to explore these effects in more detail, we performed systematic studies to understand how beam parameters affect the rotational motion of particles in suspension. The rotational motion of particles can be affected by an X-ray beam in three major ways: (1) heating of the entire system by the beam, which causes a local change in the thermal energy of the system and can also change the viscoelastic properties of the surrounding medium; (2) radiation pressure due to a non-planar X-ray wavefront, which induces a torque on the particle; or (3) damage to the system, medium or particle, which is typically irreversible. The topic of radiation damage has been an area of much interest and effort, experimentally and theoretically, especially in the context of protein crystallography, so we refer to the established literature (Warkentin et al., 2011; Holton, 2009). In the context of Brownian motion, the rotational motion of a probe particle has a third-power size dependence, as compared with a linear dependence for translational motion (Perrin, 1928). Such dependence means that, for a small particle, the rotational motion is especially sensitive to any changes in viscosity due to heating, or to small torques due to radiation pressure. Less than an attonewton of radiation pressure force has been observed on gold nanocrystals attached to substrates with actin filaments using diffractive X-ray tracking (Sasaki et al., 2000), and radiation pressure was observed to act on Pd nanocubes and a Ni nanowire (Kim et al., 2016). The rotational motion seen by RXT at a third-generation source with a standard detector is sensitive to torques of order 10⁻²⁴ N m, as computed in a later section of this article, providing the necessary sensitivity to observe radiation pressure force gradients.
Here we study and quantify the effects of synchrotron X-rays on the rotational motion of particles with respect to heating and radiation pressure, by studying the motion at different X-ray fluxes for both temperature-sensitive and temperature-independent systems to decouple the effects of temperature and radiation pressure. Radiation heating To investigate radiation heating, we chose glycerol, which has a strongly temperature-dependent viscosity, and performed X-ray flux studies of the rotational motion of 42 ± 3.5 µm alumina crystals in a planar X-ray beam profile. The sample was measured in a sealed capillary in transmission geometry at the X11 beamline at Doris III with an unfocused 15.2 keV beam of 10¹³ photons s⁻¹ in a spot size of 1 mm × 1 mm. Attenuators were employed to reduce the incident photons in a controlled way. Diffraction of the (104) Bragg reflection of alumina was measured with a Mythen strip detector with time steps of 30 ms and 50 ms at a sample-to-detector distance of 0.7 m. The angular MSD versus time, ⟨θ²(t)⟩, was measured for different X-ray flux levels using RXT data analysis methods, as shown in Fig. 1. The differences in the viscosities are calculated from the MSD, assuming rotational diffusion for spherical particles, ⟨θ²(t)⟩ = 2D_r t with D_r = k_B T/(8πηR³), where η is the dynamic viscosity and R is the radius. The resulting values are 7.9 × 10⁻¹ Pa s for the case of 1 mm aluminium (Al) attenuation in the X-ray beam (14% transmission) and 4.2 × 10⁻¹ Pa s for 0.5 mm Al attenuation of the X-ray beam (35% transmission). For the 99% pure glycerol used, the viscosity value measured with the greater X-ray attenuation (1 mm Al) corresponds to a temperature of 27 °C and the more intense beam (lower attenuation) corresponds to 33.5 °C, based on known data for the change in viscosity as a function of temperature and the difference in thermal energy (k_B T).
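Inverting the Stokes-Einstein-Debye relation above turns the measured MSD slope into a viscosity. A minimal sketch, with a round-trip check using values close to those reported (η ≈ 0.79 Pa s, ~21 µm particle radius; the temperature of 300 K is assumed for illustration):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def viscosity_from_msd_slope(slope: float, temp_k: float, radius_m: float) -> float:
    """Invert <theta^2(t)> = 2*D_r*t with D_r = k_B*T/(8*pi*eta*R^3),
    so slope = k_B*T/(4*pi*eta*R^3)."""
    return K_B * temp_k / (4.0 * math.pi * radius_m ** 3 * slope)

# Round-trip sanity check with assumed values:
eta_true, temp, radius = 0.79, 300.0, 21e-6
d_r = K_B * temp / (8 * math.pi * eta_true * radius ** 3)
assert abs(viscosity_from_msd_slope(2 * d_r, temp, radius) - eta_true) < 1e-9
```

The cubic radius dependence in the denominator is what makes the rotational measurement so sensitive to heating-induced viscosity changes, as noted in the introduction.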
Such temperature increases are consistent with previous studies on heating with synchrotron beams (Snell et al., 2007). Heating can also lead to convection or flow effects, which cause deviations from linear behavior in the MSD-versus-time graph through the addition of a quadratic component. At a time lag of 0.3 s, some deviation is observed; however, the largely linear trend of MSD versus time at shorter times allows us to conclude that the motion at these shorter time lags is primarily diffusive. Radiation pressure Radiation pressure was observed over a century ago with optical light (Nichols & Hull, 1903; Frisch, 1933) and has been utilized in laser cooling for atomic traps (Phillips, 1998), but the relatively low power of X-ray beams compared with optical lasers has limited the study of X-ray radiation pressure mainly to astrophysics (Nichols & Hull, 1903). Radiation pressure due to a planar beam would result in a force that would drive a particle in translation but should not cause rotation. However, a beam with a non-uniform intensity profile could induce a torque that would drive rotational motion by exerting a greater force on one side of an illuminated crystal. Due to the angular sensitivity of RXT, it is possible to investigate radiation pressure effects due to a non-homogeneous beam profile. It is important to decouple the temperature effects, which would change the rotational motion through a change in the thermal energy and viscosity, as shown for glycerol above, from the pressure effects. Single-crystal 340 nm α-alumina crystals (sample details in Liang et al., 2014) were suspended in decanoic acid at a 40-50% volume fraction, forming a colloidal gel (Liang et al., 2014; Bell et al., 2005). The colloidal gel is viscoelastic and has a viscous modulus nine orders of magnitude higher than pure decanoic acid (Liang et al., 2014). The sample was measured with a five-circle diffractometer at sector 34ID-C of the Advanced Photon Source (APS), USA.
The monochromatic X-ray beam was focused with a Kirkpatrick-Baez mirror pair to an approximately Gaussian profile with a FWHM of approximately 1.0 µm × 1.6 µm. Both 8.9 keV and 11 keV beams, with 10⁹ photons s⁻¹, were incident on the droplet of suspended particles in transmission geometry. Photon flux was controlled by tapering the undulator and with attenuators. Diffraction of the (104) Bragg reflection of the alumina was measured with a Princeton Instruments charge-coupled device (CCD) camera. We also utilized a temperature-controlled stage and performed experiments at constant X-ray flux for several temperatures. The MSD-versus-time results (Fig. 2) showed no systematic dependence on temperature. Thus, we conclude that temperature is not the main cause of the observed difference in MSD between the attenuated and un-attenuated X-ray beam measurements. Even with a temperature-controlled stage, there can be local heating, so the MSD studies are more conclusive in eliminating temperature dependence as the main effect on the rotational motion. The lack of temperature dependence of the viscoelasticity in this study is consistent with studies finding that heating of silica gels irreversibly increases the elastic modulus due to restructuring to a more tightly packed structure (Wu et al., 2012). These systems are distinct from the better-known thermoreversible nanoparticle gels, such as silica colloidal gels, where the temperature dependence is accurately modelled by the Naïve Mode Coupling Theory (NMCT) model (Ramakrishnan & Zukoski, 2006). Our sample preparation included heating to 50 °C for more than 4 h to ensure proper dispersion of the alumina, which as a powder is highly aggregated, sufficient for the restructuring that would lead to our observed lack of temperature dependence. Given that temperature appears to have little or no effect on the viscosity of the alumina/decanoic acid system, we turn our attention to the effects of radiation pressure.
The MSD of a single particle was studied with an X-ray beam energy of 8.9 keV at two different attenuations (Fig. 3a). A 200 µm Al attenuator (~15% transmission) was inserted into the beam to control the X-ray flux while keeping the particle in diffraction. Averaged data taken from multiple particles were also obtained for better statistics. The average MSD shown in Fig. 3(b) was computed from data sets measured at an X-ray energy of 11 keV both with and without a 25 µm Mo attenuator (~20% transmission). Unlike Fig. 3(a), these are not guaranteed to be the same particles with and without the attenuator, but the measurements are taken from a single sample. In both the single-particle study and the averaged measurements (Fig. 3), it is clear that at higher X-ray flux the rotational motions of the particles have a larger MSD versus time. We observe that a particle subject to a higher X-ray flux, resulting in greater radiation pressure, has a larger rotational drift component, seen as a quadratic component in the MSD. Since heating was not the main contributor, the difference is due to radiation pressure. Following Hasnain & Donald (2006), we treat the Brownian and drift components as separate and linear. Thus we describe the MSD of the particle motion by ⟨θ²(t)⟩ = 2D_r t + (ωt)², where ω is the angular velocity and D_r is the rotational diffusion coefficient. This angular velocity is the rotational drift of a particle due to radiation pressure, which can be determined by measuring the MSD versus time at the two X-ray flux levels. From Perrin (1928), we know that the rotational drift in a viscous medium is directly proportional to the torque τ. The torque on the particle with the attenuated beam, τ₂, is related to the torque on the particle at full flux, τ₁, by τ₂ = 0.2τ₁.
Assuming that the Brownian components are equal, the rotational drift of the low-flux particle is ω₂ = 0.2 ω₁. Using the MSD measured with the attenuated and non-attenuated X-ray beams, and taking the median value of ω₂, we obtain a rotational drift of 1.75 × 10⁻⁶ rad s⁻¹ for the particle in the high-flux case and 3.5 × 10⁻⁷ rad s⁻¹ in the low-flux case. By subtracting the drift value from both the high- and low-flux MSD curves, we can obtain an estimate of the pure diffusive component, i.e. the MSD that would be measured without beam effects. As can be seen in Fig. 4, the two trajectories collapse onto each other with the drift component removed. This represents a refinement of the RXT technique whereby we can normalize for radiation-pressure effects that contribute to rotational drift.

Torque field in a Gaussian beam

To illustrate how radiation pressure can cause a torque, consider the torque from a surface perpendicular to a linear gradient. In reflection, the momentum transfer is in the direction of the incident beam, and more incident photons on one side of a particle would induce a net torque. In our situation, with Bragg reflection of a particle in a Gaussian gradient (Fig. 5), the momentum transfer is elastic and in the direction of the scattering vector k_f − k_i, where k_i is the incident wavevector and k_f is the outgoing wavevector. The same argument for more photons on one side of the particle remains, but the direction of the net torque is no longer normal to the plane defined by the gradient direction and the incident beam direction, but rather normal to the plane defined by the gradient direction and the scattering vector. Fig. 5 shows the magnitude of the torque on a 340 nm particle at various positions along a Gaussian beam central axis, with a maximum torque of 2 × 10⁻²⁴ N m as computed later in this section. An incident beam of electromagnetic radiation exerts a force on a particle through both absorption and reflection of the beam.
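For the geometry just described (incident beam along z, Bragg reflection in the x-z plane), the direction of the radiation force on the crystal, anti-parallel to k_f − k_i, can be sketched as follows; the Bragg angle in the check is an arbitrary illustrative value:

```python
import numpy as np

def force_direction(theta_bragg):
    """Unit direction of the radiation force on the crystal for Bragg
    reflection in the x-z plane: anti-parallel to k_f - k_i, with the
    incident wavevector k_i taken along +z."""
    two_theta = 2.0 * theta_bragg
    k_i = np.array([0.0, 0.0, 1.0])
    k_f = np.array([np.sin(two_theta), 0.0, np.cos(two_theta)])
    d = -(k_f - k_i)                 # recoil direction of the crystal
    return d / np.linalg.norm(d)

n_dir = force_direction(0.277)       # illustrative Bragg angle, rad
```

The returned vector stays in the x-z scattering plane, tilted away from the incident-beam axis, which is why the net torque plane rotates with the scattering vector as described above.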
In our case, the absorption will be minimal as the crystals are a small fraction of the extinction length of the X-rays at the wavelengths used. For a crystal oriented at the Bragg condition, the Bragg-reflected beam can be quite strong. We have estimated that the (104) Bragg reflection of a 340 nm-diameter alumina particle will contain 0.15% of the incident intensity, using the reflectivity of 1300 layers of the (104) Bragg planes. This value agrees within an order of magnitude with the observed intensity of the Bragg peak on the detector, taking into consideration transmission through air, droplet and attenuators. Other sources of scattering, such as diffuse scattering and small-angle scattering about the direct beam, will exert a much smaller net force and be relatively isotropic compared with the highly directional nature of the Bragg beam. We do not expect contributions from the excitation of other Bragg peaks, given the α-Al₂O₃ crystal size and lattice parameters. An analysis showed that less than 2% of crystals oriented at one of the (104) reflections would have any contribution from another Bragg peak. The percentage is further reduced when considering that we only observe crystals that are well centered on the rocking curve, resulting in strong diffraction.

[Fig. 5 caption: Torque on a spherical particle as a function of radial distance (nm) from the center of a Gaussian beam, along a central axis of the beam.]

If the particle is off-center in the beam profile and undergoing Bragg diffraction in a specific direction, an asymmetric intensity gradient will exist across the particle that gives rise to a gradient in applied force, as illustrated in Fig. 6(a), where the sphere is sitting at the lower waist of the incident Gaussian-profile beam. The top of the sphere will experience a greater net force than the bottom due to the cumulative momentum transfer of the greater number of Bragg-reflected photons.
As a result, a net torque will be applied about the center of mass of the particle. We estimate the torque on a 340 nm spherical particle oriented at the Bragg angle within a Gaussian beam approximating the parameters of the beam used in our RXT experiment (FWHM = 1.0 μm containing 5 × 10⁹ photons s⁻¹). Fig. 6 illustrates the model used, where the Bragg reflection from the crystal lies in the x-z plane, similar to the experimental geometry. The incident Gaussian beam (with direction shown as k_i in Fig. 6a) propagates along the z-axis and the Bragg-reflected beam is in the direction given by k_f. The force on a scattering element, which we define as a 1 nm³ volume element of the crystal, oriented at the Bragg angle θ_Bragg is given by

F = n_p (2h/λ) sin θ_Bragg,

where n_p is the number of photons per second, h is Planck's constant and λ is the wavelength. The force is applied in a direction perpendicular to the lattice plane and anti-parallel to the momentum transfer from the crystal to the photons. We scale the reflected intensity of each scattering element by the computed reflectivity of the (104) Bragg reflection. The force on a volume element, shown in Fig. 6(a) as a red arrow, will be anti-parallel to the momentum transfer given by k_f − k_i. The torque about the center of mass of the crystal (r × F) is illustrated as the green arrow in Fig. 6(a). Fig. 6(b) illustrates the torque vector field on each volume element of a particle at the lower waist of the incident beam profile. The color of the vectors in Fig. 6(b) describes the z-component of the torque exerted at that location. The net torque on the particle at this location, shown as a yellow arrow in Fig. 6(b), is obtained by integrating the torque vector field throughout the volume of the sphere.
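As a rough numerical check of this force expression, the sketch below evaluates F under assumed inputs (λ ≈ 1.393 Å for 8.9 keV, a (104) d-spacing of ≈2.55 Å for corundum, and the 0.15% reflected fraction of 5 × 10⁹ photons s⁻¹ estimated in the text); the specific numbers are illustrative assumptions, not the authors' exact inputs:

```python
import math

H = 6.62607015e-34  # Planck constant, J s

def bragg_force(n_photons_per_s, wavelength_m, theta_bragg_rad):
    """F = n_p * (2h/lambda) * sin(theta_B): each Bragg-reflected photon
    transfers momentum 2*(h/lambda)*sin(theta_B) to the crystal."""
    return n_photons_per_s * 2.0 * H / wavelength_m * math.sin(theta_bragg_rad)

# Illustrative numbers: lambda for 8.9 keV and d(104) ~ 2.55 A for alumina
wavelength = 1.393e-10
theta_b = math.asin(wavelength / (2.0 * 2.55e-10))
force = bragg_force(0.0015 * 5e9, wavelength, theta_b)  # ~2e-17 N
```

Multiplying this force by a lever arm of the order of the particle radius (~1.7 × 10⁻⁷ m) gives a few 10⁻²⁴ N m, the same order of magnitude as the maximum torque quoted in the text.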
This approximates equation (4), which gives the net torque τ as a function of position r(x, y) of a spherical particle of radius R in a two-dimensional Gaussian beam centered at the origin,

τ(r) = ∫_V r′ × [I(r + r′) n̂] d³r′,

where the integral runs over the sphere of radius R, I is the local Gaussian beam intensity (converted to a force per unit volume through the per-element force above) and n̂ is the unit vector anti-parallel to k_f − k_i. The torque is primarily along the z-axis at the particle location in Fig. 6(b) and will have the net effect of rotating the crystal about an axis almost parallel to the incident beam. The same simulation procedure is repeated at each position in the incident Gaussian beam, giving the torque on a crystal at each location, shown in Fig. 6(c). When the crystal is shifted along the x-axis from the center of the beam, the torque is parallel to y, which rotates the crystal about the y-axis. This rotation will move the particle across the Debye-Scherrer cone and out of diffraction of the monochromatic beam. As the crystal is shifted along the y-axis from the center of the beam, there will be a significant component of torque (on the scale of 10⁻²⁴ N m) parallel to the X-ray beam (z-axis) direction. The induced rotation would cause the Bragg peak to move around the Debye-Scherrer cone, resulting in the trajectories that we measure with RXT.

[Fig. 6 caption: A spherical crystal in a Gaussian X-ray beam oriented at a Bragg angle will experience a radiation-pressure-induced force due to the reflected beam. (a) The force and torque exerted on each volume element as a result of the Bragg-scattered beam. (b) The torque vector field for each volume element of the crystal; the sum gives the net torque of magnitude 2 × 10⁻²⁴ N m, illustrated as a yellow arrow, on the crystal at a given location in the Gaussian beam. (c) The net torque due to Bragg diffraction from a 340 nm alumina crystal at each position in a Gaussian X-ray beam of 1 μm FWHM; the Bragg angle is in the x-z plane and the vector color represents the z-component of the torque on the crystal at that location.]
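The integration described above can be sketched numerically. The following is a simplified model (arbitrary force units, uniform cubic sampling; the grid size and Bragg angle are illustrative assumptions) that reproduces the qualitative symmetry reported for Fig. 6(c): zero net torque at the beam centre, a torque parallel to y for an x-shift, and a dominant z-component for a y-shift:

```python
import numpy as np

def net_torque(offset, R=170e-9, fwhm=1.0e-6, theta_b=0.277, n=25):
    """Sum r' x F over volume elements of a sphere of radius R whose
    centre sits at `offset` (x, y) in a 2-D Gaussian beam. Force per
    element is proportional to the local beam intensity and points
    anti-parallel to k_f - k_i (Bragg reflection in the x-z plane).
    Returns the net torque vector in arbitrary units."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    two_t = 2.0 * theta_b
    n_hat = -np.array([np.sin(two_t), 0.0, np.cos(two_t) - 1.0])
    n_hat /= np.linalg.norm(n_hat)
    ax = np.linspace(-R, R, n)
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    inside = X ** 2 + Y ** 2 + Z ** 2 <= R ** 2
    x, y, z = X[inside], Y[inside], Z[inside]
    # local Gaussian intensity at each element's lab-frame (x, y) position
    w = np.exp(-((x + offset[0]) ** 2 + (y + offset[1]) ** 2)
               / (2.0 * sigma ** 2))
    r = np.stack([x, y, z], axis=1)     # positions relative to the centre
    F = w[:, None] * n_hat              # force per element, arb. units
    return np.cross(r, F).sum(axis=0)
```

The intensity-weighted centroid of the sphere sets the lever arm, so the torque vanishes by symmetry when the particle is centred and grows with the transverse offset.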
Steady-state viscosity

By modelling the torque field on the particle in a beam, we have the opportunity to glean additional information about the decanoic acid/alumina colloidal gel beyond what was obtained by Liang et al. (2014), by estimating the steady-state viscosity. The rotational drift ω of a spherical particle of radius R, embedded in a medium with steady-state viscosity η and subject to a torque τ, is described by ω = τ/(8πηR³) (Perrin, 1928). A colloidal gel is viscoelastic and has no simple, single viscosity value but rather viscous and elastic components that are frequency (ω) dependent. The viscoelasticity as a function of frequency, studied with traditional rheology and with microrheology via the motion of embedded particles, is well established (Ferry, 1980; Mason & Weitz, 1995). The viscous and elastic moduli for alumina/decanoic acid gels were previously calculated from the MSD versus time using RXT, showing good agreement with rheometry measurements (Liang et al., 2014). In principle, the ω → 0 limit of the viscous modulus is an estimate of the steady-state viscosity value (Ferry, 1980), but in practice the elastic and viscous moduli of the material are often described by a power-law fit, which makes the steady-state value elusive for traditional rheology data. A type of steady-state viscosity can be obtained from an estimate of the torque from the radiation pressure of a Gaussian beam as calculated above. If we assume that the particles are uniformly distributed with a cut-off at twice the FWHM (beyond which there would be little diffraction intensity from these crystals), we see that the 'average' torque is 1.5 × 10⁻²⁴ N m. This torque and the full-flux drift value of 1.75 × 10⁻⁶ rad s⁻¹ found above give a steady-state viscosity value of 7.0 Pa s for the alumina/decanoic acid colloidal gel.
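The Perrin relation and the quoted numbers combine directly; a one-line sketch (the particle radius of 170 nm, half the 340 nm diameter, is the only assumption added here):

```python
import math

def steady_state_viscosity(torque, omega, radius):
    """Perrin (1928): torque = 8*pi*eta*R^3*omega for the rotational
    drift of a sphere, so eta = torque / (8*pi*R^3*omega)."""
    return torque / (8.0 * math.pi * radius ** 3 * omega)

# 'average' torque and full-flux drift from the text, R = 170 nm
eta = steady_state_viscosity(1.5e-24, 1.75e-6, 170e-9)  # ~7.0 Pa s
```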
In conclusion, we studied the thermal and pressure effects of synchrotron beams on systems of suspended particles by imaging the rotational motion using rotational X-ray tracking. X-ray-generated thermal effects on the rotational motion of a particle are well understood, given the change in viscosity and the differences in thermal energy in the system. We have investigated in detail the torque caused by a Gaussian beam profile on constituent alumina particles in a colloidal gel without strong temperature-dependent viscoelasticity. The results show the effects of radiation pressure manifesting as rotational drift due to torques on the order of 10⁻²⁴ N m. Quantifying these radiation effects also presents a way to normalize for radiation pressure from a non-uniform beam in RXT data by measuring the system at two different known fluxes. We note that the situation of very high X-ray intensities in small focal spots will be important and more common in the future. Obtaining such 'nanoprobe' beams is precisely the long-term goal of many X-ray facilities, motivating machine upgrades of their electron storage rings. The driven-rotation mechanism explained in this work will be exacerbated considerably by these technological advances. It represents a new damage mechanism for crystalline materials exposed to the high flux gradients associated with strongly focused beams. It has been observed, for example, that particle rotations can be driven in materials normally considered 'solids', such as baked enamel paints, where the crystalline pigment particles are found to rotate whenever they are examined with focused X-ray beams.
Coexistence of superconductivity and charge-density wave in the quasi-one-dimensional material HfTe3

We present the first experimental evidence for metallicity, superconductivity (SC) and the coexistence of charge-density waves (CDW) in the quasi-one-dimensional material HfTe3. The existence of such phenomena is a typical characteristic of the transition-metal chalcogenides; however, without the application of hydrostatic pressure or chemical doping, it is rare for a material to exhibit the coexistence of both states. Materials such as HfTe3 can therefore provide us with a unique insight into the relationship between these multiple ordered states. By improving on the original synthesis conditions, we have successfully synthesised single-phase HfTe3 and confirmed the resultant structure by performing Rietveld refinement. Using low-temperature resistivity measurements, we provide the first experimental evidence of SC at ~1.4 K as well as a resistive anomaly indicative of a CDW formation at ~82 K. With the application of hydrostatic pressure, the resistivity anomaly shifts to higher temperature. The results show that HfTe3 is a promising new material to help study the relationship between SC and CDW.

However, it is rare that materials without chemical/physical modification exhibit the coexistence of both states. ZrTe3 is a material which shows the coexistence of a CDW at ~63 K and filamentary SC at 2 K [11], as does NbSe2 [3]. In the case of ZrTe3, by the application of pressure, the intercalation of Cu [12] and Ni [13], or the substitution of Se at the Te site [14], the CDW can be suppressed and bulk SC induced at ~5 K [15]. The electronic structure of ZrTe3 is unique amongst the MQ3 family owing to the strong contribution of the Te-Te pσ* band in the vicinity of the Fermi level [16]; therefore the inter-chain interactions affect the electronic structure as well as the physical properties.
Similar cross-chain interactions are absent in other members of the MQ3 family (where M = group IV transition metal and Q = S/Se) [17]. Of the MTe3 materials, HfTe3 is the only other member whose existence is expected theoretically [18,19]; there are no known reports of TiTe3 or Nb/TaTe3. However, using the reaction conditions outlined by Brattås et al., we found that the successful synthesis of HfTe3 [18,19] was irreproducible. Therefore, although theoretical band-structure calculations have predicted HfTe3 to be metallic [16,20,21], there is currently no experimental confirmation. As far as the authors are aware, the available experimental data for HfTe3 comprise the original structural characterization [18] and the determination of its basic magnetic properties (temperature-independent diamagnetism) [19]. In addition, it has recently been reported from scanning tunneling spectroscopy that Hf/HfTe5/HfTe3 films exhibit superconducting gap-like spectra [22]. HfTe3 and ZrTe3 are iso-structural materials, which raises the possibility that HfTe3 may also exhibit the coexistence of SC and CDW states. Therefore, it is an important task to synthesize the high-quality bulk compound and to explore the aforementioned electrical phenomena. By modifying the original synthesis conditions [18,19], polycrystalline HfTe3 samples have been successfully synthesized. The crystal structure has been analyzed using Rietveld refinement and the first experimental evidence of metallicity in this material is reported. The resistivity data exhibit an anomaly suggestive of a CDW formation at ~82 K and, subsequently, zero resistivity below 2 K. With the application of hydrostatic pressure, the resistivity anomaly shifts to higher temperature. In addition, we note that HfTe3 is highly air-sensitive: the behaviour of its ρ-T characteristics changes from metallic to insulating upon exposure to air (see Supplementary Information).
Results and Discussion

Key requirements to synthesise single-phase HfTe3. Suitable reaction conditions to produce single-phase HfTe3 crucially depend on the maximum reaction temperature [19]. During this investigation it was found that a slow cooling rate is also a key requirement. In brief, the favoured phase was HfTe2 in the higher temperature range (≥ 530 °C) and HfTe5 in the lower temperature region (≤ 470 °C). As reported by Brattås et al., we confirmed that a sintering condition of ca. 500 °C indeed favours the growth of the HfTe3 phase [18]. However, when rapid cooling from 500 °C (e.g. quenching in water) was applied [19], the majority phase became HfTe2 together with unreacted tellurium. On the other hand, when slow cooling was performed (approx. −0.25 °C/h) down to 470 °C, after which the ampoules were cooled to room temperature at a rate of approx. −5 °C/h, single-phase HfTe3 could reproducibly be synthesised. The results suggest that HfTe3 primarily forms by reaction with the tellurium vapour upon cooling. If the reaction vessel is quenched, the solidification of the tellurium prevents its uptake and HfTe2 becomes the preferred phase. Namely, it is found that HfTe3 is the least thermodynamically stable phase among the Te-rich Hf alloys, and as a result, in order to inhibit the formation of trace amounts of HfTe2/HfTe5, it is necessary to control precisely both the sintering temperature and the cooling rate. Figure 1(a) shows the powder X-ray diffraction (PXRD) result for HfTe3 together with the result of the Rietveld refinement using ZrSe3 as a reference model [23], where the result was consistent with monoclinic crystal symmetry (space group P21/m). Figure 1(b) represents the crystal structure of HfTe3, which is a pseudo-one-dimensional (1D) structure. As seen in Fig. 1(b), MQ6 trigonal prismatic units propagate along the b-axis, resulting in chain-like anisotropic crystal growth.
By projection down the b-axis it can be clearly seen how the chains are bonded together by van der Waals forces (see Fig. 1(c)). Reasonable values of R_wp = 8.47%, R_p = 6.60% and χ² = 1.544 were obtained. Refined lattice parameters of HfTe3, a = 5.8797(9) Å, b = 3.8999(9) Å, c = 10.0627(3) Å, agreed with the previously reported values [19]. On the other hand, the angle β = 98.38(8)° showed a slight expansion from the originally reported angle of β = 97.98° [18]. The refinement results are summarized in Table 1. It was confirmed from X-ray fluorescence (XRF) results that the composition ratio of our HfTe3 was Hf:Te = 26:74 (at%).

Coexistence of SC and CDW. The resistivity of non-air-exposed HfTe3 reproducibly exhibited metallic behaviour in the temperature range between 0.3 and 300 K, as shown in Fig. 2(a). The residual resistivity ratio (RRR), defined as ρ(275 K)/ρ(4 K), is ~2.4, which is lower than that of single-crystal ZrTe3 [11] but larger than that of polycrystalline ZrTe3 [24], for which the lower RRR value is thought to arise from strong grain-boundary effects. Therefore the influence of grain boundaries is likely to play a role in the reduction of the RRR. The inset of Fig. 2(a) shows the temperature derivative of the resistivity, dρ/dT, and reveals a resistivity anomaly at 82 K assumed to be indicative of a CDW formation, where the CDW formation temperature T_CDW is defined as the temperature at which dρ/dT exhibits a minimum. At T_CDW the CDW gap develops and the resistance anomaly appears owing to a reduction in the density of states at E_F due to the CDW formation. Below 2 K, the resistivity showed a sharp drop exhibiting a SC transition at 1.8 K (T_c^onset) and reached zero (T_c^zero) at 1.45 K, as can be clearly seen in Fig. 2(b). On increasing the applied current, a broadening of the SC transition was observed, accompanied by a downward shift in T_c^onset and T_c^zero, whereas the normal-state resistivity remained unchanged.
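The T_CDW criterion defined above (the minimum of dρ/dT) is straightforward to express numerically; the sketch below applies it to a synthetic resistivity curve with an anomaly placed at 82 K (illustrative data, not the measured curves):

```python
import numpy as np

def t_cdw(temps, rho):
    """CDW anomaly temperature: the temperature at which the derivative
    drho/dT has its minimum (the criterion used in the text)."""
    return temps[np.argmin(np.gradient(rho, temps))]

# Synthetic metallic resistivity whose slope dips near 82 K
T = np.arange(4.0, 300.0, 0.5)
drho = 0.01 - 0.05 * np.exp(-(((T - 82.0) / 4.0) ** 2))
rho = np.cumsum(drho) * 0.5          # integrate the slope on a 0.5 K grid
tc = t_cdw(T, rho)
```

On real data one would typically smooth ρ(T) before differentiating, since numerical derivatives amplify measurement noise.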
The result suggests a weakening of the SC state as well as a decoupling of the Josephson junctions between individual SC grains of the polycrystalline material. I-V characteristics measured at T > T_c and T < T_c revealed ohmic and non-ohmic behaviour, respectively. N.B. In the present study, we observe that HfTe3 shows a rapid weakening of its metallic state within minutes of exposure to air (see Supplementary Fig. S1). This is likely the result of an insulating layer (such as tellurium oxides) forming around the individual grains of the polycrystalline material. The results emphasize that if one is to observe the intrinsic properties of HfTe3, any measurements must be conducted in the absence of air.

Behaviour under high pressure. With the application of hydrostatic pressure (P), the resistivity anomaly gradually shifted to higher temperatures, up to ~99 K for P approaching 1 GPa, as shown in Fig. 3. Similar behaviour has been reported for ZrTe3, where for P ≤ 2 GPa the T_CDW was increased and the SC suppressed. At P ≥ 5 GPa the CDW was fully quenched and gave way to re-emergent SC, with T_c increasing to ~4.5 K at P ≈ 11 GPa [15]. In addition, in the case of HfTe5, SC appeared on applying P ≈ 5 GPa and a maximum T_c of 4.8 K was attained at P ≈ 20 GPa [25]. This suggests that HfTe3 is likely to follow the same pattern as other members of the group IV MTe_x alloys: namely, with further application of pressure, it is expected that T_CDW will eventually be suppressed and T_c enhanced.

Electronic structure. Studies regarding the electronic structure of HfTe3 are limited, but the issue is briefly reported by Felser et al., who determined an electronic structure similar to that of ZrTe3, i.e. a metallic state resulting from a large contribution of the Te p-bands at the Fermi level [16]. These characteristics are supported by later density-of-states (DOS) calculations [20,21].
ZrTe3 exhibits a multi-component Fermi surface, with the Te contributing quasi-1D electronic sheets at the boundary of the Brillouin zone and the Zr a 3D hole-character sheet around the Γ point. The resultant nesting characteristics of the Fermi surface have been determined to be responsible for the CDW formation in ZrTe3 [16,26-28]. If one considers the iso-structural/electronic relationship between HfTe3 and ZrTe3, it is likely that similar inter-chain interactions between neighbouring Te(2) and Te(3) atoms (see Fig. 1(c)) play a dominant role in the metallicity of HfTe3 [16], which in turn would give rise to a Fermi surface with nesting features similar to those reported for ZrTe3. However, it cannot be categorically asserted from our results alone that the observed resistivity anomaly is due to a CDW formation. As in the case of ZrTe3, it would be necessary to confirm any coincident low-temperature lattice distortions [29] as well as to observe the features of the Fermi surface around the temperature of the anomaly [26]. However, the similarities between HfTe3 and ZrTe3 in the electronic structure, as well as the results of the temperature/pressure dependence of ρ, are a strong indication that the observed resistivity anomaly in HfTe3 is indeed the result of a CDW formation.

Conclusion

In summary, we have established a reproducible synthesis method for high-quality polycrystalline HfTe3 and showed that it is an acutely air-sensitive material. Using high-quality HfTe3, we found that quasi-1D HfTe3 is a novel SC with T_c ~ 1.4 K, and that the SC state coexists with the CDW state which appears at T_CDW ~ 82 K. Furthermore, we provided the first accurate crystallographic data by Rietveld refinement of the PXRD of HfTe3.

Methods

Single-phase polycrystalline HfTe3 samples have been prepared using standard chemical vapour transport techniques.
Ground mixtures of a 1:3 molar ratio of powdered Hf and Te were sealed in silica ampoules under a vacuum of ca. 3 mTorr using a rotary pump. The ampoules were heated in a box furnace using the reaction procedure described in the Results and Discussion. To prevent exposure to air, all sample preparation was conducted in an argon-filled glovebox. PXRD was carried out using a Rigaku SmartLab diffractometer in flat-plate geometry with Cu Kα radiation (λ = 1.54056 Å). Diffraction data were typically collected for 5° ≤ 2θ ≤ 80° with a 0.01° step size and scan times of 3 hours. Rietveld refinement was performed using the GSAS software package via the EXPGUI interface [30,31]. XRF analysis was performed using a JEOL JSX 1000 S ElementEye. Resistivity measurements were performed on cold-pressed pellets using a standard four-terminal setup. Measurements for sample #A were performed between 0.3 and 300 K using an Oxford Instruments ³He cryostat; data were collected by an AC method using a low-noise amplifier and two lock-in amplifiers. Measurements for sample #B were performed by a DC method between 0.3 K and 15 K using a Quantum Design PPMS equipped with an adiabatic demagnetization refrigerator. The resistivity of samples #C and #D was also measured by a DC method between 2 and 300 K using a closed-cycle helium refrigerator. High-pressure resistivity measurements (up to 1 GPa) were performed using a BeCu/NiCrAl clamped piston-cylinder cell with Daphne 7373 as the fluid pressure-transmitting medium and Pb employed as a manometer.
Study of historical Byzantine seal images: the BHAI project for computer-based sigillography

BHAI (Byzantine Hybrid Artificial Intelligence) is the first project based on artificial intelligence dedicated to Byzantine seals. The scientific consortium comprises a multidisciplinary team involving historians specialized in the Byzantine period, specialists in sigillography, and computer science experts. This article describes the main objectives of this project: data acquisition of seal images, text and iconography recognition, and seal dating, as well as our current achievements and first results on character recognition and spatial analysis of personages.

INTRODUCTION

The successful development of artificial intelligence (AI) approaches to image understanding has promoted their application to the humanities. For example, several projects involving experts and computer scientists are devoted to deciphering and dating ancient written artifacts from their images, e.g. ILAC for reading dates or Roman imperial names on coin images [15] and ARCADIA for recognizing drawing patterns on pottery sherds [3]. Other projects aim at emulating paleographers by classifying scripts from sample images only, without the aid of codicological (material) data, e.g. ancient Hebrew scripts [7] or medieval Latin scripts [5]. Other initiatives seek to date handwriting [6], reconstruct papyri from fragments [16], or search for meaningful objects (heads, letters) [12]. Most of the systems mentioned above now use AI for completing tasks such as script classification, character recognition, named-entity recognition, or document reconstruction. AI systems rely on a training step that requires annotated data, but collections may be hard to interpret even for human beings, mainly due to their damaged state and the lack of ancient-language knowledge and historical context [18]. The ongoing BHAI project, devoted to digital sigillography based on AI approaches, is described in Section 2.
Section 3 examines the correlation between seal diameters and their hierarchical significance from a historical perspective. Section 4 details the data collection of images and their annotation by skilled experts. Section 5 presents one of our first achievements: a character recognition system that provides a plain transcription of the seal's text. Section 6 describes our second accomplishment, which concerns preliminary interpretations of Byzantine seals using spatial relations.

BHAI PROJECT

BHAI (Byzantine Hybrid Artificial Intelligence) is the first project dedicated to computer-based sigillography. The French Research Agency (ANR) funded this project, which promotes the study of historical seal images of the Byzantine period. Byzantine seals are small circular objects used in the Middle Ages to identify the sender of documents. They carry much of the knowledge we have gained about the Byzantine administration, aristocracy, and the cult of saints. Since most preserved Byzantine seals are made of lead, they suffer from corrosion and are often damaged. The work of interpretation by historians is difficult when seals have been crushed or broken, making their inscriptions difficult or impossible to read. However, given their intrinsic properties, such as the consistency between an image and its associated text, as well as the similarities between different seals belonging to the same person, historians can make assumptions about the missing parts. Epigraphy, numismatics, and sigillography are disciplines working on inscribed objects. The writing is different from that used in manuscripts. Therefore, a font with variants for each Byzantine character was necessary. At Dumbarton Oaks, J.
Kalvesmaki [10] created the OpenType and Unicode Athena Ruby font. Figure 1 shows the obverse and reverse sides of a sample seal image. The obverse side (Figure 1-a) includes iconography, while the reverse side (Figure 1-b) includes a text written in Byzantine Greek in capital letters. Figure 1-c shows the transcription using the Athena Ruby font. However, the text is abbreviated: some words and characters are missing. Moreover, words are sometimes not separated. Only historians who are experts in sigillography can derive the complete text (Figure 1-e) from the abbreviated one (Figure 1-c). The English translation, "Lord help thy servant Paul protospatharios (a dignity, the first to be a member of the Senate) and taxiarch (military officer)", is shown in Figure 1-f. BHAI proposes to combine different AI approaches to automatically extract the content of Byzantine seals and restore the damaged or missing text (see Section 5). A second objective consists in dating seal images according to their content, i.e., the restored text and the interpreted iconography (see Section 6). To interpret iconography, specialists can use icons, wall paintings, and other artifacts bearing iconography. Seals often reproduce a well-established iconography, making the figure easily recognizable. After the second council of Nicaea (787 A.D.), an identification accompanies figures and scenes.
Although the object's surface is small on seals, abbreviated or complete names are provided to identify the image. These letters are not placed identically on each seal but above, on one side, or on both sides of the figure. Some saints are more frequent than others. Mary, the mother of God (called the Theotokos), is a very frequent figure. Saint Nicolas, Saint Theodore, and Saint Demetrius are all frequently chosen saints. The seal owner can decide what saint to place on his or her seal. Women chose the Theotokos. For men, numerous factors come into play when choosing a saint: profession, baptismal name, and location. In the army, officers often choose a military saint. In the administration, Saint Nicolas is a first choice. Clerics, especially bishops, have to choose the saint of the cathedral church. The location also plays a role. In Antioch, the figures of Peter and Paul (apostles of Christ) are favorites, but also a local saint called Saint Symeon Stylites. Saint Paul is chosen in Tarsus, while in Thessaloniki it is Saint Demetrius. Some seal owners change the iconography on their seals. Michael Cerularios, patriarch of Constantinople, chose the Theotokos, who protects the city of Constantinople, but then chose the archangel Michael because he was called Michael. During the iconoclastic period (8th-9th c.), the emperors and the Church forbade images of saints but allowed crosses. Numerous crosses were created on seals with different shapes or ornaments. Finally, the Byzantines love monograms; they create a shape with the combined letters of their baptismal or family name. All letters of a name must be present; some can be used twice, and others are combined to produce two letters. Monograms are like puzzles.
SEAL DIAMETERS AND HIERARCHICAL SIGNIFICANCE

Byzantine seals range in diameter from 8-9 mm to over 70 mm. While most measure between 20 and 30 mm, the seals belonging to the category of the ekklesiekdikoi stand out for their remarkable dimensions, ranging between 42 and more than 70 mm. Therefore, a question arises: is there a relation between the seals' diameters and their hierarchical significance? To answer this question, we examined a sample of 1500 ecclesiastical seals from a historical perspective [14]. The study revealed that there is no correlation between the diameter of ecclesiastical seals and their hierarchical significance. The ekklesiekdikoi were priests, members of the tribunal founded by Justinian I and attached to the Church of Saint Sophia in Constantinople. The ecclesiastical tribunal sent sealed official documents concerning their deliberations, which may explain a particular concern with being identifiable and with representing authority and institutional weight. These aspects could also be conveyed by the size and appearance of the bullae that accompanied the documents [1].

DATA COLLECTION AND ANNOTATION

The first stage of a computer-based system dedicated to sigillography is devoted to creating a cleaned and annotated corpus of seal images. The images were taken some years ago, only to illustrate the books presenting the collections. Therefore, they were not captured with a professional setup (high resolution, staged illumination, HDR, etc.), and the acquisition protocol is neither fixed nor well known, which induces a lot of variability in the images. It can be noted that the background and characters have the same color. In addition, shadows may be present depending on the position of the light source during digitization.
We started with the Tatiş image collection [4]. Then, after having established the annotation protocol (labels, choice of software, characters, and objects to be annotated), the manual segmentation began with the participation of domain experts. They worked first on the transcription of texts at line level (but without providing line positions) and at character level (with character positions) using the Supervisely platform (https://supervisely.com/). Characters have been annotated by manually setting points on their contours, both the outside contour and the inside one (see Figure 2). From these contour points, a pixel-level annotation can be derived, as well as the bounding box of each character (see Figure 3). Finally, the annotation process was repeated for the iconography to isolate objects (such as crosses, clothes, or body parts) and scenes (such as the Annunciation to the Blessed Virgin Mary). Then, jointly with Byzantine sigillographers, we decided to annotate: a) personages (such as Christ, the Virgin Mary, or the Archangel Michel), b) objects (such as globes, swords, and books), c) body parts (head, hands, wings), d) crosses (including fleurons, steps, and ornaments), e) clothes (such as veils or loros), and f) elements around the head (such as nimbi and crowns). We did not fully annotate the Tatiş images, only a subset of reverse images. For these, in contrast to pixel-based annotation, only character bounding boxes were considered, to expedite the annotation process. In summary, a total of 102 annotated seal images were collected, as well as 2313 character images along with their annotations (pixel and/or character level).
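Deriving a pixel-level mask and a bounding box from outer/inner contour points, as described above, amounts to rasterizing a polygon with a hole. A self-contained sketch (the helper names and the toy "O"-like contour are ours; a production pipeline would use an image library instead of this naive scanline test):

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            xin = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xin:
                inside = not inside
    return inside

def contour_to_mask_and_bbox(outer, inner, width, height):
    """Rasterize an outer contour minus an optional inner (hole) contour
    into a binary mask, and return the bounding box of the outer contour
    as (xmin, ymin, xmax, ymax)."""
    mask = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if point_in_polygon(x + 0.5, y + 0.5, outer) and not (
                inner and point_in_polygon(x + 0.5, y + 0.5, inner)
            ):
                mask[y][x] = 1
    xs = [p[0] for p in outer]
    ys = [p[1] for p in outer]
    return mask, (min(xs), min(ys), max(xs), max(ys))

# A hypothetical "O"-like character: a square outer contour with a square hole.
outer = [(2, 2), (10, 2), (10, 10), (2, 10)]
inner = [(4, 4), (8, 4), (8, 8), (4, 8)]
mask, bbox = contour_to_mask_and_bbox(outer, inner, 12, 12)
print(bbox)  # (2, 2, 10, 10)
```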
SEAL CHARACTER RECOGNITION Transcribing the text of seal images is one of our main objectives. In Byzantine seals, the characters are made by an engraver who creates reliefs on a boulloterion, the tool used to strike lead, silver, and gold bullae. Traditional OCR approaches therefore cannot be applied, since they require more contrast between foreground (characters) and background than lead seals offer. To face these difficulties, we propose an approach based on deep learning to read seal characters and produce a transcript. There are several transcription levels: plain text and restored text (including text hidden because of lack of room, and damaged text). The text is difficult to read because of absent words or characters. Thus, the transcription task is decomposed into: (1) locating and recognizing characters, (2) producing a plain transcript, (3) recovering words, and (4) recovering missing text. Steps 1 and 2, which are our current main focus, have to cope with the lack of contrast between characters and background: we actually perceive characters through their shading. Moreover, characters can be damaged. To address these issues, deep neural networks have been built and trained to localize and recognize characters, using transfer learning and data augmentation. Figure 4 shows our learnable approach for obtaining plain seal transcriptions. Since we have few annotated images, we opted for a two-step approach: i) localizing character bounding boxes in the image, and ii) reading out the previously localized characters as a simple image classification problem. This splits a larger problem into two inherently simpler sub-problems, each of which can be solved by learnable models trained over far fewer annotated samples. Note that the chosen models perform well; a comparison with other potential architectures and models is left for future work.
To obtain plain transcriptions, we use the outputs of both networks and apply a robust Hough-based approach that groups character bounding boxes into text lines [11]. To detect and extract character crops from seal images, we rely on YOLO v5 [9], a deep convolutional architecture trainable end-to-end in a single shot. Here we use the small version of YOLO v5; its learnable parameters were initialized by training the network on the COCO dataset (328,000 images) and then fine-tuned for 300 epochs. Because of the limited amount of data, we resort massively to data augmentation, consisting of geometric transformations (image shifts, scale variations) to increase the diversity of the training set. We optimize the network parameters with SGD (Stochastic Gradient Descent) for 300 epochs (the apparently large epoch count is due to the limited size of our actual training set) and a linear learning-rate schedule from 0.01 to 0.00001. For character classification, we rely on a ResNet18 architecture [8] pre-trained on the ImageNet dataset [17]. The network is trained on isolated character crops extracted from the training images and resized to 256×256 pixels. To face data scarcity, we train the classifier on both obverse and reverse character images, and we additionally rely on augmentation strategies to increase the effective size of the training set. Figure 5 shows the 29 character classes used in this work with their glyphs in the Athena Ruby font. The classes Xi, Psi, Zeta, and Closed Beta are quite infrequent. The Upsilon and Nu classes each include two different glyphs. The character S is the abbreviated form of a word, and there are also two ligatures: CT (infrequent) and OU. When transcribed into standard Greek, the number of character classes drops to 24, the number of characters in the Greek alphabet. The croisette special symbol was also included, since its shape and size are quite similar to those of characters.
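The linear learning-rate schedule mentioned above (0.01 down to 0.00001 across 300 epochs) can be written as a tiny function; the function name is ours, and in practice a framework scheduler (e.g., PyTorch's `LinearLR`) would play this role:

```python
def linear_lr(epoch, total_epochs=300, lr_start=0.01, lr_end=0.00001):
    """Linearly interpolate the learning rate from lr_start at epoch 0
    down to lr_end at the last epoch (total_epochs - 1)."""
    if total_epochs <= 1:
        return lr_end
    t = epoch / (total_epochs - 1)
    return lr_start + t * (lr_end - lr_start)

print(linear_lr(0))    # 0.01
print(linear_lr(299))  # 1e-05 (up to floating-point rounding)
```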
While the ResNet18 is trained on character-exact crops from the annotated training images, in the complete localize-and-classify pipeline it is expected to operate on crops extracted by the YOLO-based localization stage. We therefore added an extra non-character class corresponding to bounding boxes containing no character or damaged characters: 150 non-character images were cropped and added to the character image set. We follow a K-fold evaluation framework, dividing seal images and character samples into folds, training on K − 1 folds, testing on the remaining fold, and cycling over the folds. In addition, we constrain training and testing characters to belong to distinct seal images. The results in Table 1 concern the YOLO-v5 character localization task. The recall value means that about 90% of predicted bounding boxes match a ground-truth box; there is a match if the two boxes overlap enough, i.e., if their IOU (Intersection Over Union) is greater than the typical threshold of 0.5. The precision is high but falls short of the ideal value of 1, meaning that a few predicted bounding boxes contain not a character but an ornament, an incomplete character, or background. To evaluate the whole pipeline, we adopt the CER metric (Character Error Rate) [13], which compares the predicted character sequence with the ground-truth sequence. Following the K-fold cross-validation framework, we compute the CER for each seal of the testing fold and average over the seals to obtain the CER associated with that fold. The cross-validation CER, obtained by averaging over the folds, is equal to 0.31. This highlights the challenge of reading characters in the difficult context of ancient seals, but also the potential of this plain transcription as a source of the underlying text once it is processed using dictionaries and character/word embeddings.
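The two metrics used above are standard and easy to state precisely. A self-contained sketch of IOU between axis-aligned boxes and of CER as a length-normalized Levenshtein distance (the example strings are made up):

```python
def iou(box_a, box_b):
    """Intersection over union of two (xmin, ymin, xmax, ymax) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def cer(predicted, reference):
    """Character Error Rate: Levenshtein edit distance between the two
    character sequences, normalized by the reference length."""
    m, n = len(predicted), len(reference)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if predicted[i - 1] == reference[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n] / n if n else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 ≈ 0.333
print(cer("KONSTANTINOS", "KONSTANTINOU"))  # 1 substitution / 12 ≈ 0.083
```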
REASONING WITH SPATIAL RELATIONS FOR INTERPRETING BYZANTINE SEALS This section presents preliminary results on interpreting Byzantine seals using spatial relations. The hypothesis is that the spatial organization of personages and objects provides useful information for their interpretation. Initially, we selected the seals with a personage (or object) in the center of the seal. The central personage (or object) usually has the largest area coverage. Therefore, the first step of our pipeline is to calculate the area coverage of the different personages (or objects), then sort and compare the values to determine whether there is a dominant personage (or object) in the seal. Once the central personage (or object) is determined, we calculate the directional relations between the other objects and the central personage; in other words, we want to know whether a particular object is to the left of, to the right of, above, or below the central personage (or object). We applied the fuzzy landscape method proposed in [2], where the degree of satisfaction of the relation to the reference object at any point in space is computed using a morphological dilation. Once the fuzzy landscape of the central personage (or object) is built, the analysis of the directional relations between any other object and the central personage (or object) is completed in O(N) time, with N the number of pixels in the image. Figure 6 presents an example analysis of the four basic directional relations between the Empress Théodora and a labarum. The high degree of satisfaction of the relation "the labarum is to the left of the Empress" is characteristic of this type of seal and helps interpret it. Similar results have been obtained on other seals and other spatial configurations.
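The angle-based definition underlying fuzzy landscapes can be sketched directly: for each target pixel, take the best-aligned reference pixel and score 1 − 2θ/π, where θ is the angle between the reference-to-target vector and the direction of interest. This naive version is O(|target|·|reference|); the morphological-dilation implementation of [2] is what makes the computation linear in the number of pixels. The pixel layouts below are made-up stand-ins for the segmented empress and labarum:

```python
import math

def directional_membership(target_px, reference_px, direction):
    """Average fuzzy degree to which target pixels lie in `direction`
    from a reference pixel set: per target point, the best reference
    point scores max(0, 1 - 2*theta/pi), theta being the angle between
    the reference-to-target vector and the direction vector."""
    ux, uy = direction
    scores = []
    for (tx, ty) in target_px:
        best = 0.0
        for (rx, ry) in reference_px:
            dx, dy = tx - rx, ty - ry
            norm = math.hypot(dx, dy)
            if norm == 0:
                continue
            cos_t = (dx * ux + dy * uy) / norm
            theta = math.acos(max(-1.0, min(1.0, cos_t)))
            best = max(best, max(0.0, 1.0 - 2.0 * theta / math.pi))
        scores.append(best)
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical (col, row) pixel sets: the "labarum" sits directly to the
# left of the "empress", so the LEFT relation scores highest.
empress = [(10, 5), (10, 6), (11, 5), (11, 6)]
labarum = [(2, 5), (2, 6), (3, 5), (3, 6)]
LEFT, RIGHT = (-1, 0), (1, 0)
print(directional_membership(labarum, empress, LEFT))   # close to 1.0
print(directional_membership(labarum, empress, RIGHT))  # close to 0.0
```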
CONCLUSIONS AND FUTURE RESEARCH This article presented the BHAI project, an innovative approach in Artificial Intelligence applied to Byzantine sigillography. Our proposal combines computer vision, knowledge engineering, and mathematical modeling of spatial relationships to help interpret Byzantine seals. Up to now, the proposed methods have provided encouraging results and are currently being further developed. To the best of our knowledge, no other AI-based project has been proposed for Byzantine seal datasets, and no existing software helps sigillography students read seals. We are convinced that such a tool could be a great support for introducing beginners to the challenging field of sigillography and for supporting experts by corroborating their theories with mathematical evidence. While the proposed approaches have so far been applied to Greek characters and Byzantine iconography, they could be extended to other alphabets and historical periods. Figure 1: Sample seal images. In this case, the obverse side (a) includes iconography, while the reverse side (b) includes an abbreviated text in Greek. We show the plain transcriptions (c) and (d), the text to recover (e), and the English transcription (f). Figure 2: Annotated contours of characters and abbreviation marks. Figure 4: Pipeline of the proposed two-stage character recognition approach. CNN1 is an object detector, while CNN2 is a deep classifier. The output is the plain transcription of the input reverse seal image. Figure 5: Byzantine Greek characters with their corresponding glyphs in the Athena Ruby font.
Figure 6: Example of assessment of the spatial relation between the labarum and the Empress Théodora, represented on the seal in (a) and segmented in (b). Four basic relations (c, d, e, f) to the Empress Théodora, shown as maps where high gray values indicate high degrees of satisfaction of the relation, with the corresponding satisfaction degrees on the labarum. The highest degree (averaged over the points of the labarum) is obtained for the relation "left of the Empress", which is the expected result. Table 1: Character localization evaluation. Recall and precision metrics (in %) obtained by cross-validating over K = 10 folds; the IOU threshold is equal to 0.5. The results in Table 2 concern the ResNet18 network (CNN2) evaluated on ground-truth character crops for the 20 most represented classes (those with at least 50 samples) plus the non-character class. Top-1 to Top-3 accuracies are provided.
Equilibrium Investment Strategy for DC Pension Plan with Inflation and Stochastic Income under Heston's SV Model : We consider a portfolio selection problem for a defined contribution (DC) pension plan under the mean-variance criteria. We take into account the inflation risk and assume that the salary income process of the pension plan member is stochastic. Furthermore, the financial market consists of a risk-free asset, an inflation-linked bond, and a risky asset with Heston's stochastic volatility (SV). Under the framework of game theory, we derive two extended Hamilton-Jacobi-Bellman (HJB) equation systems and give the corresponding verification theorems for both the accumulation and the distribution periods of the DC pension plan. The explicit expressions of the equilibrium investment strategies, the corresponding equilibrium value functions, and the efficient frontiers are also obtained. Finally, some numerical simulations and a sensitivity analysis are presented to verify our theoretical results. Introduction Nowadays, the application of stochastic control theory to portfolio selection problems of pension funds has become an active topic in actuarial research. Basic pension plans come in two types: the defined benefit (DB) pension plan and the defined contribution (DC) pension plan. In recent years, with the rapid development of the equity market and the lower mortality level of the population, the DC pension plan has become more favored than the DB plan by most countries in the world. This is because, in a DC pension plan, the payment pressure caused by the uncertainty of investment earnings and by the members' longevity risk is transferred from the sponsoring company to the members themselves. Thus, the majority of the pension-fund literature focuses on the DC pension plan.
As we know, for a DC pension plan, the contribution is a predetermined constant or a fixed proportion of the member's income, while the benefit is distributed based on the accumulated value of the contributions and the return of the pension fund portfolio until retirement. Thus, many scholars have been devoted to the optimal investment problem of the DC pension plan. For example, [1] considered a discrete-time multiperiod DC pension plan model to minimize the expected deviation between the pension fund account and a predetermined target by using a quadratic loss function; to maximize the expected utility of the wealth at retirement time, [2] investigated the optimal investment strategies for a DC pension plan both before and after retirement in the continuous-time framework; [3] extended the above model to the case with stochastic interest rate and stochastic labor income; and [4] obtained the closed form of the optimal investment strategy for a DC pension plan under the logarithmic utility function. Besides, [5] investigated the optimal asset allocation problem for a DC pension plan with downside protection under stochastic inflation, and [6] studied the same problem under the stochastic interest rate and stochastic volatility framework. In the regime-switching environment, [7] considered an optimal asset-liability management problem for a pension fund.
All the literature mentioned above focuses on the optimal investment strategy under the objective of maximizing the expected utility or minimizing the expected quadratic loss. In recent years, some scholars have paid attention to the portfolio selection problem of the DC pension plan under the mean-variance (MV) criteria. This is because the optimal investment problems under the MV criteria in a multiperiod or continuous-time framework have been successfully solved only recently. It is well known that multiperiod and continuous-time MV problems are time-inconsistent, in the sense that the Bellman optimality principle does not hold; hence, the dynamic programming approach cannot be used. In the existing literature, two methods are usually suggested to deal with this problem. The first is to find the precommitment strategy (the breakthrough work for this method was made by [8,9]), which means that the manager derives an optimal strategy at the initial time 0 and commits to performing this strategy in the future, even if it does not remain optimal at later times. In practice, however, this strategy is not easily performed, since the preferences of the MV manager change with time and he/she has an incentive to deviate from the precommitment strategy later on. In addition, finding a time-consistent strategy is a basic requirement for a rational decision-maker. Thus, the second method is to find a time-consistent equilibrium strategy under the framework of game theory (the most representative work was made by [10][11][12]). In this case, we regard the decision-making process as a noncooperative game between an infinite number of distinct players: at each time t, there is a player t, representing the future incarnation of the manager at time t. We are interested in finding a subgame perfect Nash equilibrium point for the game and formulating an equilibrium strategy; this strategy is time-consistent. For the MV problem in the background of
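The time inconsistency comes from the variance term: unlike the expectation, the MV objective does not obey the tower property, because by the law of total variance Var[X] = E[Var[X|Y]] + Var[E[X|Y]]. A toy discrete example, with made-up numbers, shows that optimizing the objective "now" and averaging the objective "later" disagree:

```python
# Two equally likely intermediate states Y; in each state, X then takes
# two equally likely values. Compare the MV objective evaluated at time 0
# with the average of the MV objectives evaluated after observing Y.
gamma = 1.0
outcomes = {  # Y -> list of equally likely X values
    "up":   [4.0, 6.0],
    "down": [0.0, 2.0],
}

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

all_x = [x for xs in outcomes.values() for x in xs]
mv_now = mean(all_x) - gamma / 2 * var(all_x)
mv_later = mean([mean(xs) - gamma / 2 * var(xs) for xs in outcomes.values()])

print(mv_now)    # 3.0 - 0.5*5.0 = 0.5
print(mv_later)  # (4.5 + 0.5)/2 = 2.5 -> the two evaluations disagree
```

The gap between the two values is exactly (γ/2)·Var[E[X|Y]], the term the tower property would need to vanish.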
pension funds, [13] considered a DC pension plan with the return of premiums clauses in the game-theoretic framework and obtained an equilibrium investment strategy for the period before retirement. In the same framework, [14] investigated equilibrium investment and contribution strategies for a DB pension plan. Research on the precommitment investment strategy can be found in [15][16][17] and the references therein. Meanwhile, for the portfolio problem, many scholars have begun to pay attention to the influence of background risks, such as interest rate risk, inflation, and the volatility of the risky asset, on the optimal decision-making process. On the one hand, as we all know, inflation affects the real value of wealth; especially for problems with a long investment horizon, higher inflation leads to lower real wealth. For the DC pension plan, [18] obtained the optimal investment strategy that maximizes the expected utility of the DC pension plan under inflation risk. A similar problem was studied in [5] in the environment of stochastic interest and inflation rates with a minimum guarantee. Under the MV criteria, [17] investigated the optimal precommitment investment strategy of a DC pension plan with inflation risk using the Lagrange method. Recently, the equilibrium investment strategies for a DC pension plan with inflation risk were obtained by [19]. On the other hand, many empirical studies have shown that the volatility of the risky asset is stochastic. Scholars have proposed a variety of SV models, such as the constant elasticity of variance (CEV) model [20], Heston's SV model (the volatility satisfies a Cox-Ingersoll-Ross process) [21], and the Stein-Stein model (the volatility satisfies an Ornstein-Uhlenbeck process) [22]. The CEV and Heston SV models have been widely considered in investment and reinsurance problems, such as [23][24][25] and the references therein. For stochastic volatility models in the DC pension plan, the interested
reader is referred to [6,26,27]. In this paper, we consider the MV portfolio problems for a DC pension plan both before and after retirement. The main difference between this paper and the existing literature is that both the inflation risk and the stochastic volatility risk are considered in our model. To the best of our knowledge, under the MV framework, there is no literature considering both of these risks in DC pension fund management. We assume that the financial market consists of a risk-free asset, an inflation-linked bond, and a stock with Heston's stochastic volatility. The salary income of the DC pension plan member is also assumed to be stochastic because of the influence of the inflation rate. Under the framework of game theory, two MV problems are formulated for the periods before and after retirement, respectively. For each problem, by solving an extended HJB equation system, the equilibrium strategy, the equilibrium value function, and the corresponding equilibrium efficient frontier are obtained. We find that the equilibrium investment in the inflation-linked bond depends on the current wealth, while the inflation risk and the contribution of the salary income have no influence on the equilibrium investment in the stock. Finally, using the Monte Carlo method, we investigate the evolution of the equilibrium strategy over time before retirement under different parameters and present the sensitivity of the efficient frontier to the corresponding parameters.
The remainder of the paper is organized as follows. In Section 2, we introduce the financial market and the wealth processes both before and after retirement. In Section 3, an MV portfolio allocation problem is formulated for the period before retirement; under the framework of game theory, an extended HJB equation system and an equilibrium investment strategy are obtained. In Section 4, we consider an MV portfolio problem after retirement and derive the corresponding verification theorem and equilibrium strategy. Some numerical simulations and a sensitivity analysis of our results are presented in Section 5. Section 6 concludes the paper and outlines further research. Assumption and Model Let (Ω, F, F, P) be a filtered probability space with filtration F = {F_t}_{t∈[0,T+N]} satisfying the usual conditions; that is, {F_t}_{t∈[0,T+N]} is right-continuous and P-complete. The time horizon [0, T] represents the accumulation period of a DC pension plan member, and [T, T+N] is the distribution period of the member after he/she retires. Let F_t represent the information available until time t. Suppose that all stochastic processes and random variables are defined on the filtered probability space (Ω, F, F, P). In addition, we assume that there are no transaction costs or taxes in the financial market, that trading can take place continuously, and that short selling is permitted. Financial Market.
As we all know, for a DC pension plan manager, the objective is to maximize the terminal wealth by investing the pension fund in the market. Generally speaking, the investment horizon of a pension fund lasts decades, so the inflation risk, as an important kind of background risk, has a strong influence on the real value of the pension fund. In economics, the CPI (Consumer Price Index) is the typical index representing the inflation level of the market. Following [28,29], we assume that the price level Λ(t) satisfies the diffusion process dΛ(t)/Λ(t) = i(t) dt + σ_Λ dW_Λ(t), where i(t) represents the instantaneous expected inflation rate, σ_Λ > 0 is the instantaneous volatility of the inflation rate, and W_Λ(t) is a standard Brownian motion, which generates the uncertainty of the price level. We assume that the financial market consists of three kinds of assets: an inflation-linked bond, a money market account, and a stock. (1) The inflation-linked index bond has the same risk source as the price level process Λ(t) and can be freely traded in the market. Following [18,29], its price process I(t) satisfies the stochastic differential equation (SDE) dI(t)/I(t) = (r(t) + i(t)) dt + σ_Λ dW_Λ(t), where r(t) represents the real interest rate at time t. The expected yield of the inflation-linked bond is r(t) + i(t): a higher expected inflation rate i(t) leads to a higher expected yield, so the bond can hedge the inflation risk. (2) The price dynamics of the risk-free money market account is given by dS_0(t) = R(t) S_0(t) dt, where R(t) is the nominal interest rate.
(3) The risky asset in the market is a stock, whose price process S(t) follows a geometric Brownian motion with Heston's stochastic volatility: its diffusion part is driven by √V(t) dW_S(t) + σ_S dW_Λ(t), where W_S(t) and W_V(t) are two standard Brownian motions, and the market prices of the risk sources W_S(t) and W_Λ(t) are proportional to √V(t) and to the inflation volatility, respectively. Since the inflation risk may influence the price evolution of the stock in direct or indirect ways (related empirical research can be found in [30,31] and the references therein), we assume that the stock price is driven not only by its own risk source W_S(t), but also by the inflation risk source W_Λ(t), where W_Λ(t) is the Brownian motion driving the price level Λ(t). We further assume that W_Λ(t) is independent of W_S(t) and of W_V(t), while W_S(t) and W_V(t) are dependent, with E[dW_S(t) dW_V(t)] = ρ_V dt, where ρ_V ∈ [−1, 1] is the correlation coefficient. The variance V(t) follows a CIR (Cox-Ingersoll-Ross) mean-reverting process, dV(t) = κ(θ − V(t)) dt + σ_V √V(t) dW_V(t), which models the stochastic volatility of the stock price. Here we suppose 2κθ > σ_V² to ensure that V(t) > 0 holds. In addition, we assume that the DC pension plan member receives a salary income with nominal value L(t) at time t until the retirement time T. The income is stochastic and dynamically influenced by the price level Λ(t); that is, the salary income process is driven by the source of uncertainty from inflation and satisfies dL(t)/L(t) = μ_L dt + σ_L dW_Λ(t), where μ_L is the average growth rate of the income and σ_L is its volatility rate. Remark 1. To simplify the model, achieve tractability, and give a detailed analysis of the optimal strategies, we assume that the real interest rate r(t), the nominal interest rate R(t), and the expected inflation rate i(t) are all deterministic functions of time t. Remark 2. If we denote by Q the risk-neutral measure, then the bond SDE (2) can be rewritten under Q, and by the pricing theory of derivatives (to avoid arbitrage) we obtain a no-arbitrage relationship between the nominal rate, the real rate, the expected inflation rate, and the market price of the inflation risk. Remark 3.
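The CIR variance dynamics can be simulated with a standard full-truncation Euler scheme; the parameters below are illustrative, not taken from the paper, and the function names are ours. The scheme's max(V, 0) inside drift and diffusion keeps it well defined even when a discretized path dips below zero, while the Feller condition 2κθ > σ_V² keeps the continuous-time process strictly positive:

```python
import math
import random

def simulate_cir(v0, kappa, theta, sigma_v, horizon, steps, rng):
    """Full-truncation Euler discretization of the CIR variance process
    dV = kappa*(theta - V) dt + sigma_v*sqrt(V) dW."""
    dt = horizon / steps
    path = [v0]
    v = v0
    for _ in range(steps):
        vp = max(v, 0.0)  # truncate before using V in drift/diffusion
        v = v + kappa * (theta - vp) * dt \
              + sigma_v * math.sqrt(vp * dt) * rng.gauss(0, 1)
        path.append(max(v, 0.0))
    return path

# Illustrative parameters (not from the paper).
kappa, theta, sigma_v = 2.0, 0.04, 0.3
assert 2 * kappa * theta > sigma_v ** 2  # Feller condition: 0.16 > 0.09
rng = random.Random(42)
path = simulate_cir(v0=0.04, kappa=kappa, theta=theta, sigma_v=sigma_v,
                    horizon=1.0, steps=252, rng=rng)
print(min(path) >= 0.0)  # True: the truncated path never goes negative
```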
In empirical research, the evolution of the stock price and of its volatility is usually negatively correlated, since, in general, as the stock price declines the volatility gradually increases (cf. [32]); for this reason, in Section 5 we assume that the correlation parameter ρ_V between the stock and variance Brownian motions is negative. Wealth Process. In this paper, we consider the optimal portfolio problems of a DC pension plan member both before and after retirement. The member contributes part of his/her salary to the pension fund account before retirement and receives a benefit after retirement, so we need to consider the wealth processes in two different periods. (I) Before Retirement. During the accumulation period [0, T], we assume that the DC pension plan member contributes continuously to his/her pension fund account at a fixed rate of his/her salary income L(t); that is, he/she contributes an amount proportional to L(t) at time t. The pension fund is invested in the financial market by the manager. Denote by u_I(t) and u_S(t) the proportions of wealth invested in the inflation-linked index bond and the stock, respectively; then 1 − u_I(t) − u_S(t) is the proportion in the risk-free money market account. Denote u(t) = (u_I(t), u_S(t)), t ∈ [0, T], as the decision-making process. Rewriting the SDEs (2) and (4) in matrix form, and denoting by X(t) the nominal value of the wealth process associated with the investment strategy u, the nominal wealth process satisfies an SDE combining the asset dynamics with the contribution flow; the last equality in its derivation holds because of (7). Let x_0 be the nominal wealth at time 0.
Since the real value of the wealth reflects the real purchasing power in the current market, the nominal wealth should be converted into a real value, which is obtained by discounting the nominal wealth by the price level process Λ(t). Denoting the real wealth by X̄(t) = X(t)/Λ(t) and the real salary by L̄(t) = L(t)/Λ(t), Itô's formula yields the real wealth process X̄(t) and the real salary income process L̄(t). For X(0) = x_0, the real initial wealth and initial salary income are X̄(0) = x_0/Λ(0) and L̄(0) = l_0/Λ(0), respectively. Note that the dynamics of X̄(t) depend on the real salary income process L̄(t) and the stochastic volatility process V(t), but are independent of the price level process Λ(t). We denote by Π(t, x, v, l) the set of all admissible strategies with respect to the initial condition (t, x, v, l) ∈ G. (II) After Retirement. When the member of the pension plan retires, the accumulated value of the pension fund is usually used to purchase an annuity, and in practice the member has two ways to do so. The first is to purchase the annuity directly at the retirement time; for example, as considered in [2,27], the accumulated pension fund is used to purchase a paid-up annuity once the member reaches retirement age, and the manager also has to decide how much of the remaining mathematical reserve to invest in the market. The second is not to purchase the annuity directly but to choose the income drawdown option at retirement: the member withdraws a fixed income and invests the rest of the wealth until reaching the age at which purchasing the annuity becomes compulsory. The interested reader can see [33,34] for details.
Here we consider the first way. Similarly to [2], we assume that the member purchases, at the retirement time T, a paid-up annuity which guarantees the benefit only on a fixed time horizon [T, T+N], and invests the rest of the wealth continuously in the market. We denote by Q the part of the fund used to purchase the annuity (Q ⩽ X(T)). The continuous benefit (in real value) paid from T to T_1 = T + N is b = Q / a_{N|δ}, where a_{N|δ} = (1 − e^{−δN})/δ and δ is the continuous technical rate. Let B(t) denote the nominal total payment value of the benefit over the time interval [T, t], t ∈ (T, T_1]. For simplicity, the dynamics of B(t) can be modeled as B(t) = ∫_T^t bΛ(s) ds, i.e., dB(t) = bΛ(t) dt, where Λ(t) is the price level. This setting is reasonable and realistic, and a similar one is used in [17]. It means that after retirement the nominal benefit is adjusted dynamically according to the inflation at that time; that is, at time t, the nominal benefit rate is bΛ(t), which keeps the purchasing power of the retirees from decreasing. We assume that once the retired member dies during [T, T_1], the pension fund continues to be invested, and the remaining annuity benefit and the terminal wealth X(T_1) are paid to his/her offspring. If the member is alive at T_1, the wealth X(T_1) is left for his/her later life. Let π_I(t) and π_S(t) represent the proportions of the pension fund wealth invested in the inflation-linked index bond and the stock, respectively, and denote π(t) = (π_I(t), π_S(t)), t ∈ [T, T_1], as the decision-making process. Then, after retirement, the nominal wealth dynamics are analogous to those before retirement, with the benefit outflow bΛ(t) dt in place of the contribution inflow; using the same method as before retirement, we obtain the real wealth process. Now we give the definition of the admissible strategy for the decision-making process π(t). Definition 5 (admissible strategy). We denote by Π(t, x, v) the set of all admissible strategies with respect to the initial condition (t, x, v) ∈ G_1.
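Using the reconstructed notation above (our symbols: purchase amount Q, horizon N, continuous technical rate δ), the annuity-certain factor and the resulting benefit rate can be computed directly; the numbers below are illustrative:

```python
import math

def annuity_certain(n_years, delta):
    """Continuous annuity-certain factor a_{N|delta} = (1 - e^{-delta*N})/delta:
    present value of a unit-rate continuous payment over N years at
    continuous technical rate delta."""
    if delta == 0:
        return float(n_years)  # limit as delta -> 0
    return (1.0 - math.exp(-delta * n_years)) / delta

def benefit_rate(purchase_amount, n_years, delta):
    """Benefit rate b = Q / a_{N|delta}, paid continuously over [T, T+N]."""
    return purchase_amount / annuity_certain(n_years, delta)

# Illustrative numbers: Q = 100 units of real wealth, N = 20 years, delta = 3%.
a = annuity_certain(20, 0.03)
print(round(a, 4))  # (1 - e^{-0.6}) / 0.03 ≈ 15.0396
print(round(benefit_rate(100, 20, 0.03), 4))
```

As a sanity check, the benefit rate times the annuity factor returns the purchase amount Q.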
Problem Formulation before Retirement and Verification Theorem In this section, we assume that the pension plan manager hopes to maximize the expectation of the total wealth at the retirement time T while minimizing the variance of the wealth at retirement, so the manager faces a continuous-time MV portfolio problem. However, since the MV objective function does not satisfy the iterated-expectation property, Bellman's optimality principle does not hold in the continuous-time framework; hence, this problem is time-inconsistent. This means that a pension fund manager without precommitment should take into account that he/she may have different objective functions at different times. We can thus view this problem as a noncooperative game: at each time t, there is one player, named player t, representing the future incarnation of the manager at time t. At any state (t, x, v, l), the manager faces an optimal control problem with objective function J(t, x, v, l, u) = E_{t,x,v,l}[X̄^u(T)] − (γ/2) Var_{t,x,v,l}[X̄^u(T)], where γ represents the risk aversion level of the manager, X̄^u(T) is the real terminal wealth under strategy u, and Var_{t,x,v,l}[⋅] refers to the conditional variance. Since the objective of the pension manager is updated with the state, the manager's decision process is like a game between an infinite number of distinct players. Thus we need to look for a subgame perfect Nash equilibrium point of the game and determine an equilibrium strategy u*. The associated verification theorem is stated in terms of the auxiliary function f(t, x, v, l, y, z) = y − (γ/2)(z − y²). The proof of this theorem is given in the Appendix. Equations (19), (21), and (22) are called an extended HJB equation system; by solving this system, we obtain the equilibrium strategy and the corresponding equilibrium value function. We thus obtain the following so-called equilibrium efficient frontier, which reflects the relationship between the investment risk and return under the equilibrium strategy. Corollary 9.
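The auxiliary function f (as reconstructed here) is the standard device for rewriting the MV objective through the first two conditional moments: with y = E[X] and z = E[X²], the term z − y² is exactly Var[X], so f(…, y, z) = y − (γ/2)(z − y²) equals E[X] − (γ/2)Var[X]. A quick numeric check with made-up values:

```python
gamma = 2.0

def f(y, z, gamma=gamma):
    """f(..., y, z) = y - (gamma/2)*(z - y^2); with y = E[X] and
    z = E[X^2], the bracket z - y^2 is Var[X]."""
    return y - gamma / 2 * (z - y * y)

# Made-up discrete distribution for X.
xs = [1.0, 2.0, 6.0]
ps = [0.5, 0.25, 0.25]
ex = sum(p * x for p, x in zip(ps, xs))       # E[X]   = 2.5
ex2 = sum(p * x * x for p, x in zip(ps, xs))  # E[X^2] = 10.5
var = ex2 - ex ** 2                           # Var[X] = 4.25

print(f(ex, ex2))            # MV objective via f
print(ex - gamma / 2 * var)  # direct E[X] - (gamma/2) Var[X]; identical
```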
For the MV problem (P 1 ), based on the state (, , V, ), the equilibrium efficient frontier is where Remark 10. By (38) and (39), we find that the equilibrium investment proportion in the stock has a similar expression to that in [24], although their study focuses on the optimal investment and reinsurance problem. In addition, the equilibrium investment amount in the stock is only a function of time , independent of the current wealth. However, for the inflation-linked bond, the equilibrium investment amount depends on the current wealth, and the more the wealth, the more is invested in the inflation-linked bond. This result is consistent with economic intuition: when the manager has more wealth, the sensitivity to inflation risk becomes higher, and the corresponding hedging demand increases. Remark 11. In view of (38), we find that if > , that is, if the volatility rate of the inflation is larger than that of the salary income, the equilibrium investment amount in the inflation-linked bond has a positive relationship with the salary level. Otherwise, it has a negative relationship with the salary level. Remark 12. From (42) and (43), if Var ,,V, [ ()] = 0, the pension fund manager bears no risk at all under the state (, , V, ); this is equivalent to the case → ∞; that is, the pension fund manager is fully averse to risk. In this case, * () = 0 and * () = 1 − (( − )/ )() − ∫ (). This indicates that the manager will invest all the fund in the inflation-linked bond and the risk-free money market account and obtain an expected real terminal wealth of ∫ () + () at the retirement time . It consists of two parts: the first part is the accumulation of the initial wealth , and the second part can be seen as the accumulation of the contributions.
Remark 13. If we do not consider the risk of stochastic volatility of the stock price, that is, if we let = 0, V = 0, then the equilibrium strategy is simplified as follows. In this case, we find that * () is consistent with the result of the ordinary MV portfolio problem in the Black-Scholes market (see [11]). This indicates that the contribution of the salary income has no influence on the equilibrium investment amount in the stock. Furthermore, if we do not consider the risk of inflation, that is, () = = = 0, then () = () and the inflation-linked bond degenerates to the risk-free money market account. Problem Formulation after Retirement and Verification Theorem After retirement, we assume that the pension plan member purchases a paid-up annuity over the time horizon [, + ]. Thus the objective of the pension plan manager is to maximize the expected surplus after paying off the benefit to the member and minimize the variance of the surplus at time 1 = + . So, at the state (, , V) ∈ G 1 , the manager faces the following optimal control problem. Similar to the method of Section 3, we denote where f(, , V, , ) = − (/2)( − 2 ). Verification Theorem.
In this subsection, we first give the definition of the equilibrium strategy after retirement and then give the verification theorem satisfied by the equilibrium value function. Now, for any (, , ) such that all first-order partial derivatives of (⋅, , V) satisfy the polynomial growth condition on O 1 }, we denote the differential operator as Definition 14. For any given initial state (, , V) ∈ G 1 , consider an admissible strategy π * (, , V) and choose three real numbers > 0, , and ; one defines the following strategy, which is called an equilibrium strategy, and the equilibrium value function is defined accordingly. Theorem 15 (verification theorem). For the optimal asset allocation problem (P 2 ), if there exist three real-valued functions F(, , V), Ĝ(, , V), and Ĥ(, , V) ∈ D (G 1 ) satisfying the following conditions: (, , V) = Ĥ(, , V), then π * is the equilibrium strategy, where The proof is similar to that of Theorem 7 and is omitted here. Solution to the Equilibrium Strategy. Using Theorem 15, we now calculate the equilibrium strategy and the corresponding equilibrium value function. By a method similar to that in Section 3, we can obtain the equilibrium strategy as follows. We suppose F(, , V) and Ĝ(, , V) have linear forms with respect to and V. Then, from (52) with (, , V) = 0, we get the following equation. Separating the terms in , V, and the remaining terms, we obtain the following ODEs: + () [ () − () + () ] = 0; Now we insert π * () into L π * Ĝ(, , V) = 0 and obtain the following equation. Similarly, separating the terms in , V, and the remaining terms, we have the following ODEs: (56) Using the boundary conditions, we solve the above ODEs and obtain the equilibrium strategy and the corresponding equilibrium value function, where () and () are given by (59) and (60). After some calculations, we have the following corollary. Corollary 17. Based on the state (, , V), for the MV problem (P 2 ), the equilibrium efficient frontier is where Remark 18.
From (62) and (64), we find that, at the state (, , V), if the manager does not undertake any risk in the later period, that is, if Var ,,V [ * ( 1 )] = 0 or → ∞, he/she will invest all the surplus in the inflation-linked bond after paying off a continuous annuity with real value for each time. Then at the terminal time 1 the expected real net surplus includes two parts: the first part is the accumulation of the initial real wealth, and the second part is the accumulated payment of the real benefit until time 1 . Sensitivity Analysis In this section, we analyze the influence of the different parameters on the equilibrium strategy and the equilibrium efficient frontier. Here we only analyze the case before retirement and suppose that (), (), and () are all deterministic constants for simplicity; the basic parameters are set in Table 1. In addition, the initial values are 0 = 2, 0 = 1, V 0 = 0.15, and 0 = 0.5. Note that the parameter = 0.015 is based on (7). To obtain the evolution of the investment strategy over time, we use the Monte Carlo method to simulate the trajectory of the optimal wealth process 300 times and obtain the mean investment proportion of the three assets. The Analysis of Equilibrium Strategies. This subsection analyzes the influence of the inflation, the salary income, and the risk aversion level of the manager on the equilibrium strategy in the period before retirement. Since the sign of the first-order derivative of the investment strategies with respect to some parameters cannot be obtained directly, we only give an intuitive analysis.
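The Monte Carlo step described above can be sketched as follows. This is a simplified stand-in: the wealth dynamics are reduced to a constant-proportion geometric model, and the drift, volatility, and proportion values are illustrative assumptions rather than the paper's calibrated parameters.

```python
import math
import random

def mc_mean_wealth(n_paths=300, horizon=10.0, n_steps=120, x0=2.0,
                   r=0.03, mu=0.08, sigma=0.2, prop_stock=0.4, seed=42):
    """Simulate n_paths wealth trajectories under a constant stock
    proportion and return the mean terminal wealth."""
    dt = horizon / n_steps
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            # Euler step: dX = X [(r + pi (mu - r)) dt + pi sigma dW]
            x *= 1.0 + (r + prop_stock * (mu - r)) * dt \
                     + prop_stock * sigma * dw
        total += x
    return total / n_paths

print(mc_mean_wealth())   # near x0 * exp(0.05 * 10) ≈ 3.30
```

Averaging over 300 paths, as in the paper, keeps the Monte Carlo error on the mean at a few percent for these volatility levels.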
Figure 1 depicts the mean equilibrium investment proportion of the three assets under the basic parameter setting. We find that the proportion of the stock decreases gradually from 40% to almost 20% over the time horizon [0, 10], and the proportion of the money market account also decreases with time. However, the proportion of the inflation-linked bond is relatively low at the beginning; short selling even occurs at time 0, but, after about 10 years, the proportion increases to almost 20%. In the following analysis, we use Figure 1 as a benchmark. Figure 2 shows the mean investment proportion when = 0.8. Compared with Figure 1, the whole equilibrium investment proportion of the stock moves down when the risk aversion level of the manager increases from = 0.6 to 0.8 (this result can also be seen from (39); under the basic parameter setting of this section, * / < 0). On the contrary, the proportion of the inflation-linked bond increases under the higher risk aversion level. The case of the expected inflation rate = 0.042 is illustrated in Figure 3. Since is higher than in Figure 1, the manager invests more wealth in the inflation-linked bond to hedge the inflation risk. Similarly, Figure 4 shows that if the expected salary growth rate increases, the investment proportion in the inflation-linked bond also increases. This means that the more wealth is contributed to the pension fund, the higher the demand to hedge the inflation risk. We note that the proportion of the stock is independent of and in (39), so the investment proportion in the stock is the same as in Figure 1. The Analysis of the Equilibrium Efficient Frontier. In this subsection, we analyze the influence of different parameters on the equilibrium efficient frontier in the investment period before retirement. Without loss of generality, we only analyze the efficient frontier at time 0.
Figure 5 shows the efficient frontiers for different values of the expected inflation rate . Generally speaking, the larger the value of , the lower the real value of the wealth. So, as shown in Figure 5, the expected real terminal wealth has a negative relationship with the expected inflation rate when the variance of the real terminal wealth is fixed. Figure 6 reveals the relationship between the efficient frontier and the volatility of the inflation rate . We find that a higher leads to the efficient frontier moving upwards. The influence of on the efficient frontier is illustrated in Figure 7, which shows that the expected volatility of the stock has a positive relationship with the efficient frontier. Figure 8 depicts the influence of V on the efficient frontier. This indicates that a stronger negative relationship between the stock price () and the volatility () leads to the efficient frontier moving upwards. Figure 9 shows that the expected growth rate of the salary has a positive influence on the efficient frontier; that is, as increases, the efficient frontier moves upwards. In addition, the intersection of the efficient frontier with the E[ * ()] axis is larger if is larger. This result can also be obtained from Remark 12. In Figure 10, the influence of the salary volatility on the efficient frontier is revealed. We find that the larger the , the greater the uncertainty about the salary income, which leads to the efficient frontier moving downwards.
Conclusion In this paper, we investigate the MV portfolio problems for a DC pension plan both before and after retirement. Background risks such as the stochastic volatility of the stock price and the inflation rate are also considered in our model. We assume that the pension fund can be invested in a financial market consisting of an inflation-linked bond, a stock, and a money market account. Under the framework of game theory, we obtain the equilibrium strategies and the corresponding equilibrium efficient frontiers for both periods. Finally, using the Monte Carlo method, we give some numerical analysis and find some interesting results. To obtain the closed-form solution, we assume that the risk aversion parameter of the manager is a constant in this paper. In fact, it may depend on the current wealth or the investor's current investment return (see [35]) in practice. In addition, in the period after retirement, the investment time horizon is assumed to be fixed in our paper. However, since the lifetime of the pension plan member is stochastic, a portfolio problem with an uncertain investment time horizon may be more realistic and reasonable; these problems will be studied in the future. Figure 5: The effect of on the efficient frontier. Figure 8: The effect of V on the efficient frontier. Figure 10: The effect of on the efficient frontier. Table 1: The basic parameter values.
A Search for Radio Technosignatures at the Solar Gravitational Lens Targeting Alpha Centauri Stars provide an enormous gain for interstellar communications at their gravitational focus, perhaps as part of an interstellar network. If the Sun is part of such a network, there should be probes at the gravitational foci of nearby stars. If there are probes within the solar system connected to such a network, we might detect them by intercepting transmissions from relays at these foci. Here, we demonstrate a search across a wide bandwidth for interstellar communication relays beyond the Sun's innermost gravitational focus at 550 AU using the Green Bank Telescope (GBT) and Breakthrough Listen (BL) backend. As a first target, we searched for a relay at the focus of the Alpha Centauri AB system while correcting for the parallax due to Earth's orbit around the Sun. We searched for radio signals directed at the inner solar system from such a source in the L and S bands. Our analysis, utilizing the turboSETI software developed by BL, did not detect any signal indicative of a non-human-made artificial origin. Further analysis excluded false negatives and signals from the nearby target HD 13908. Assuming a conservative gain of 10^3 in L-band and roughly 4 times that in S-band, a ~1 meter directed transmitter would be detectable by our search above 7 W at 550 AU or 23 W at 1000 AU in L-band, and above 2 W at 550 AU or 7 W at 1000 AU in S-band. Finally, we discuss the application of this method to other frequencies and targets. Hippke (2020b) finds that, for an aperture on the order of 1 meter, the optimal communication wavelength range would be from 100 µm to 1 nm, and that a distance of 1,000 AU from the Sun is preferred. Hippke (2020b) also notes that, in this proposed configuration, separate exploration probes would be required in order to study the inner solar system.
Kerby & Wright (2021) explore the engineering requirements and sustainability for an SGL relay to remain in proper position, finding that such technology is feasible and noting that another observable property of such probes may be the by-products of station-keeping propulsion. They also argued that single stars (i.e. those with minimal gravitational perturbations from companions) were the best hosts for such probes because their station-keeping costs were smallest. Gillon et al. (2021) searched for a probe on the Solar focal line in communication with the Wolf 359 system, the third nearest star to our Sun and one from which Earth's transit across the Sun would be observable, at a time when Earth could have intercepted the transmissions. Their search for optical signals from the solar system toward the star, as well as for an object with the position and motion hypothesized within the extent of Uranus' orbit (20 AU), did not reliably identify any such probe. Solar Gravitational Lensing for Interstellar Communication Massive objects in the universe, such as black holes and stars, bend the trajectories of nearby photons. This process can create a lensing effect similar to a focusing element of a telescope (Einstein 1936). Gravitational lensing warps a source object's apparent shape into two distorted images. The distortion effect centers on a ring around the lens object, called the Einstein ring, which has an angular radius of θ_E = [(4GM/c²) D_LS/(D_L D_S)]^(1/2), where G is the gravitational constant, M is the mass of the lens object, c is the speed of light, D_L is the observer-lens distance, D_S is the observer-source distance, and D_LS is the lens-source distance. In a solar gravitational lens system, the Sun is the lens, the distant star is the source, and the probe is our origin/observer. The physical Einstein ring radius (R_E = D_L θ_E) of the solar lens should be at least the radius of the Sun.
This works out to a minimum D_L of roughly 550 AU, which we adopt as our minimum possible probe distance. Figure 1 shows the layout for a probe utilizing the SGL to send or receive messages to/from α Cen. We note that Gillon et al. (2021) argue that the use of out-of-focus nodes at smaller separations may also be worthwhile. It may be possible to detect signals from a relay spacecraft at a distance d ≥ 550 AU from the Sun if the Earth is contained within the opening angle of the probe's outgoing transmission beam at any point in its orbit. For target stars that lie along the ecliptic plane, the Earth will always pass through the beam when it transits and eclipses the Sun. However, from the view of a relay off of the ecliptic plane, Earth will always have some non-zero separation from the Sun, which its beam may not encompass. From the point of view of a probe at a distance of 550 AU, the maximum angular separation between the Earth and the Sun is 6.3 arcminutes, regardless of its ecliptic latitude. A probe with an ecliptic latitude of b sees a minimum angular separation φ_E,min ≈ (d/550 AU)^(-1) sin(b) × 6.3 arcmin. The Earth's entire orbit would be in the beam of 10-cm wavelength signals for any transmitter with diameter ≲ 34 m. However, Earth's point of closest approach to the SGL line may fit into smaller beams. Figure 1. Visualization of an SGL probe opposite α Cen, with the Sun at the origin of the coordinate system, Earth's orbit in the xy-plane, the probe indicated in green, and its outgoing beams for various dish sizes (20 m, 35 m, and 50 m) in gray (see Section 1.2). This is a unique feature of performing this search at radio wavelengths. At optical wavelengths, the focus of prior searches, the beam sizes for dishes on the order of 10 m would be much less than 6.3 arcminutes. Thus, eavesdropping on optical SGL signals would only be feasible for target stars in the ecliptic plane.
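Both the ~550 AU minimum focal distance and the 6.3 arcminute maximum Earth-Sun separation can be checked in a few lines; the physical constants below are standard values.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.957e8     # solar radius, m
AU = 1.496e11       # astronomical unit, m

# Setting the Einstein ring radius R_E = sqrt(4 G M D_L / c^2) equal to
# the solar radius gives the innermost usable focal distance.
d_min_au = R_SUN**2 * C**2 / (4.0 * G * M_SUN) / AU
print(round(d_min_au))          # → 548, i.e. roughly 550 AU

# Maximum Earth-Sun angular separation seen from 550 AU, in arcminutes.
phi_max = math.degrees(math.atan(1.0 / 550.0)) * 60.0
print(round(phi_max, 1))        # → 6.3
```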
Following Equation 2, for a probe opposite α Cen (b ∼ 44°), the minimum visible beam size at the point of closest approach would be 4.4 arcminutes. This corresponds to a transmitter diameter of roughly 50 m. The directional gain for a transmitting relay depends on r_t, the radius of the transmitter, k_t, its efficiency (which we assume to be of order unity), and λ, the emitting wavelength (Maccone 2011). Hippke (2020b) suggests a ∼1 meter transmitter size for sub-mm signals, but a larger transmitter would be required to produce similar gains at radio wavelengths. A 50 m transmitter could achieve directional gains on the order of 65 dB, and even a 20 m transmitter could achieve 55 dB. From Equations 8-9 of Maccone (2011), we find the gain resulting from the solar lens: for λ ∼ 10 cm, the solar lens gain is roughly 60 dB. Combining the directional gain of the relay probe and the lensing of the Sun, a transmitting spacecraft could achieve a total gain of G_tot = G × G_SC of over 120 dB, overcoming the difficulties of transmission across interstellar distances by focusing transmissions into tight parallel beams. There have been many suggestions in the literature of purposes for probes in the solar system. The SGL foci of nearby stars are special locations for probes because they allow access to a "Galactic Internet," but the purpose and functions of such probes are otherwise unconstrained. In contrast to classical radio SETI searches for intentional messages sent toward Earth from interstellar distances, this method primarily attempts to eavesdrop on communications between two technological structures. We offer here three suggestions from the literature on such functions and the resulting implications for our search, but emphasize that our search is not dependent on and does not assume that the probes we seek serve such functions.
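The quoted decibel figures can be reproduced with the usual aperture-gain formula G = k (πD/λ)² and, for the solar lens, the commonly quoted G_Sun = 8π²GM/(c²λ). Both formulas are sketches consistent with the values stated above, not reproductions of the paper's exact Equations 8-9.

```python
import math

G = 6.674e-11       # gravitational constant
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def dish_gain_db(diameter_m, wavelength_m, k=1.0):
    """Aperture directional gain G = k * (pi * D / lambda)^2, in dB."""
    return 10.0 * math.log10(k * (math.pi * diameter_m / wavelength_m)**2)

lam = 0.10                         # 10 cm wavelength
g50 = dish_gain_db(50.0, lam)      # ~64 dB for a 50 m transmitter
g20 = dish_gain_db(20.0, lam)      # ~56 dB for a 20 m transmitter

# Solar-lens gain, taken here as G_Sun = 8 pi^2 G M / (c^2 lambda).
g_sun = 10.0 * math.log10(8.0 * math.pi**2 * G * M_SUN / (C**2 * lam))

print(round(g50, 1), round(g20, 1), round(g_sun, 1), round(g50 + g_sun, 1))
```

The combined transmitter-plus-lens gain for the 50 m case comes out above 120 dB, matching the figure in the text.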
Direct interception of outgoing communication through the SGL is the obvious signal to search for under the interstellar communication network hypothesis, but it suffers from some technical limitations: the stellar relay must be actively transmitting when we search, such searches can only be performed when the Earth happens to be in the beam, and the beam could be tightly focused on a portion of the Einstein ring that the Earth does not transit. A constantly transmitting stellar relay is more likely to be detected from Earth than a relay that only intermittently sends signals to the target star or to local probes. When searching for a relay, one major obstacle is determining which wavelengths would be most appropriate for SGL lensing. Radio wavelengths are an obvious choice for interstellar communication, due to their low energy and extinction. However, the solar corona can interfere with long-wavelength radio signals. Turyshev & Toth (2019) found that photons with λ ≥ 157 cm (ν ≤ 0.2 GHz) are completely blocked. At shorter wavelengths, plasma refraction steers some photons away from the focal line, decreasing signal strength (Hippke 2020b). Moving from 3 cm to 200 µm (10 GHz to 1.5 THz), the effect gradually decreases. At wavelengths below ∼100 µm (ν ≳ 3 THz), the decrease in solar lens gain is negligible. For wavelengths non-negligibly but not fully blocked, the SGL could still provide significant gains, so these are not completely eliminated from consideration, but this does make searches for intercepted transmissions at radio or microwave frequencies less well motivated. On the other hand, shorter wavelengths produce smaller beam widths, which would make eavesdropping on an SGL signal impossible for reasonable transmitter sizes and targets not near the ecliptic plane.
In addition to sending messages via the SGL, a relay spacecraft might send signals to other probes in the inner solar system. Indeed, such a scheme is recommended by Maccone & Antonietti (2022) as a way to retrieve information from interstellar probes, and is reminiscent of relay schemes used by humans' interplanetary probes, for instance for communication with Mars landers. There is no reason such secondary communications might not happen at different wavelengths than the interstellar communications through the SGL, since coronal interference is not a problem for them. It is thus possible they would occur at radio frequencies and with beam sizes and directions that would allow them to be intercepted at Earth year-round. Figure 2 shows the view of the solar system from a probe at the Sun's focal point for communication with α Cen. Figure 2. View of the solar system from a probe opposite α Cen at 550 AU on the night of our observations. The Sun lies at the origin, and the inner solar system planets' orbits are shown. Gray filled circles indicate beam widths for various transmitter diameters at 10-cm wavelengths. For a probe located at a distance d > 550 AU, these orbital axes would scale down by a factor of d/(550 AU). Bracewell (1960) suggested that the purpose of a solar system probe would be to serve as a beacon, and Freitas (1983) and others considered places where such beacons would reside. Success in intentional communication SETI requires a determination of how to find another group of beings without being able to communicate beforehand. This requires the determination of optimal places, times, frequencies, and other aspects of possible communication which may be chosen by the other group. In game theory, these optimal choices are referred to as "focal points" (Schelling 1960), but we use the term "Schelling points" in order to avoid confusion with the optical definition of focal points (Wright 2020). 
The SGL focal line is a recognizable location for both humans and the residents of nearby stars, and it is a relatively static location in the sky. Thus, it may be an optimal place to place a beacon to intentionally attempt to make contact with technological life in the Solar System. Beacons sent to other star systems may be a preferred method of interstellar first contact, as it does not immediately reveal the location of the sender, reducing risk of aggressive retaliation (Gertz 2018). An interstellar communication relay only reveals the position of the next node in the web. By their nature, beacons are intended to be found, so a beacon at the SGL might transmit at any frequency its constructors thought potential recipients would guess, further justifying our choice of frequencies near the "water hole" (Oliver 1979) for this study. For this work, our first observations of the SGL for nearby stars as a proof-of-concept, we have chosen to observe in L and S bands. We chose these bands for three reasons: (1) the size of beams at these frequencies allowed us to check them all in a single 1 hour session, (2) these are in or near the "water hole" (to search for beacons), and (3) these are or are near frequencies humans use for interplanetary communication (to search for communications with probes in the solar system). Where to Look To first order, we expect this type of probe to lie at the antipode of some nearby star, around which another interstellar communication network probe could reside. Initially, we neglect light travel time considerations and place the probe at the exact antipode of the target star as we currently see it in the sky. This position on the sky, viewed from Earth, will vary slightly based on the probe's assumed distance from the Sun because, being a small fraction of a parsec distant, the probes suffer significant parallax. For observation time t, we consider the positions of the Sun and Earth in the barycentric frame, S(t) and E(t) respectively. 
We also define the unit vector x(t) as the direction of the target star from the Sun. We consider Sun-probe distances, z, from 550 AU to infinity. We then calculate the barycentric-frame position of the probe. Finally, on-sky coordinates of the probe are calculated from the Earth-probe vector P_E = P − E(t). Finite light travel time requires some corrections to this expected position (Seto & Kashiyama 2020). We must consider the positions of our celestial objects at other points in time. This calculation can be performed for two probe types: receivers and transmitters. For a probe transmitting signals through the SGL to a star, the signal must travel to the position the star will occupy one Sun-star light travel time in the future. Since our naive calculation uses the apparent position of the star at our observation time, which is where the star was one light travel time ago, we must advance the star's position by two light travel times. In principle, we must also consider that the probe must aim at where the Sun will be when the signal arrives, and advance its position by one Sun-probe light travel time. We neglect this effect because the maximum change it makes to the position of the probe is of order 0.1 arcseconds. We thus calculate the transmitting probe position accordingly (c is the speed of light). For a probe near the Sun receiving signals from a star, our naive calculation of the star's position is appropriate, because the signal will arrive from the apparent (retarded) position of the star, just as other light from the star does. But for a distant probe, we need to consider that we see the probe where it was one Earth-probe light travel time ago, when it was sitting in a position corresponding to the star's position one probe-Sun travel time before that.
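The geometry described above can be sketched as a single function; all helper names here are hypothetical, and the star's apparent direction is modeled abstractly as a function of time rather than with an ephemeris library.

```python
import math

AU = 1.496e11     # m
C = 2.998e8       # m/s
YEAR = 3.156e7    # s

def probe_vector(t, sun_pos, earth_pos, star_dir_at, z_au, mode,
                 star_ly=4.37):
    """Earth->probe vector for an SGL relay (sketch, hypothetical names).

    star_dir_at(t) returns the unit vector from the Sun toward the star's
    apparent position at time t (seconds); the probe sits at the star's
    antipode, z_au AU from the Sun.  A transmitting probe uses the star's
    position advanced by two Sun-star light travel times; a receiving
    probe uses it retarded by roughly 2 z / c (approximating |PE| by z).
    """
    z = z_au * AU
    t_star = star_ly * YEAR                 # Sun-star light travel time, s
    if mode == "transmit":
        x = star_dir_at(t + 2.0 * t_star)
    elif mode == "receive":
        x = star_dir_at(t - 2.0 * z / C)
    else:
        raise ValueError(mode)
    probe = [s - z * xi for s, xi in zip(sun_pos, x)]   # antipode of star
    return [p - e for p, e in zip(probe, earth_pos)]    # Earth->probe

# For a star whose apparent direction is constant, the corrections vanish:
fixed = lambda t: (0.0, 0.0, 1.0)
v = probe_vector(0.0, (0.0, 0.0, 0.0), (AU, 0.0, 0.0), fixed, 550.0,
                 "transmit")
print(math.hypot(v[0], v[2]) / AU)   # ~550 AU from Earth
```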
Formally, we should then retard the star's apparent position by a time (z + |P_E|)/c; however, for computational simplicity we approximate |P_E| ≈ z because the difference in times is small compared to the timescale over which the star's apparent position changes significantly. Our receiving probe position then follows. This light travel time complication means that the probe position does not converge at infinite distance. Here, we assume that such a probe would not be placed farther than a tenth of the Sun-star distance. For both types of probe we have a locus across the sky of possible locations for a probe at varying distance z from the Sun. In general, the full extent of this line across the sky should be observed in order to detect or rule out active communication from a probe at the target star's SGL location. The SGL for α Cen In this work, we demonstrate the SGL relay SETI search method described above with a search for a hypothetical probe in contact with α Centauri. Figure 3 shows the positions of transmitting and receiving probes for α Cen A (the two components of α Cen are separated by less than 10 arcseconds, which is so much smaller than our beam that our search encompasses both components). The observations are described in Section 2. We present the analysis of our data in Section 3. In Section 4, we discuss the results of our analysis and additional artifact SETI searches of this type. We conclude in Section 5. α CENTAURI OBSERVATIONS WITH GBT We selected a set of positions corresponding to the possible locations of a relay probe in communication with α Centauri, considering both the receiver position and the transmitter position. We took observations with the Green Bank Telescope (GBT) in the L and S bands using the Breakthrough Listen (BL) backend (MacMahon et al. 2018).
As shown in Figure 3, we observed the region of the sky directly opposite α Cen, placing observations along a line to account for the parallax of a probe at finite distances from the Earth and Sun. A probe at infinite distance from the Sun, with no accounting for light travel time, would be positioned exactly opposite α Cen on the sky and have no parallax, but a probe at 550 AU will be offset from the exact antipode by up to six arcminutes, depending on the position of Earth in its orbit. We trace two of these lines, as receiver and transmitter probes require different treatments of the light travel time between the Sun and α Cen, as described in Section 1.3.2. The GBT pointings were selected to cover regions along the lines of possible positions on the sky, leveraging the different beam widths of the various bands in each case. We observed our target area on UT 2021 November 6. The Earth is at its closest position to the vector connecting the Sun to the target point in early November. In addition to minimizing the beam size required for the hypothesized signal to be detectable from Earth, this also minimizes the number of pointings required to cover the entire line of possible probe locations. Data were taken at 300 s per scan using an ABABAB pattern. The ABABAB sequence consists of three "nods" between our "ON" source target and an "OFF" target. The OFF target is used to identify and correct for radio frequency interference (RFI). We chose HD 13908 as our OFF target because it has the smallest on-sky separation from the ON target of any known planet-hosting star. The separation between the ON and OFF targets is 317 arcminutes, which is equivalent to 37 beam widths at L-band and 58 at S-band. Additionally, a scan of a pulsar was completed at the beginning of each night of observations to confirm that the telescope and instrument were performing as expected.
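As a rough sanity check on the quoted beam-width counts, the GBT's 100 m aperture gives approximately these numbers using the common FWHM ≈ 1.22 λ/D approximation (so the counts come out slightly smaller than the paper's exact beam model, but of the right size).

```python
import math

def beam_fwhm_arcmin(freq_ghz, dish_m=100.0):
    """Approximate FWHM beam width, 1.22 * lambda / D, in arcminutes."""
    lam_m = 0.2998 / freq_ghz            # wavelength in meters
    return math.degrees(1.22 * lam_m / dish_m) * 60.0

sep_arcmin = 317.0                       # ON-OFF target separation
print(round(sep_arcmin / beam_fwhm_arcmin(1.4)))   # ~35 L-band beams
print(round(sep_arcmin / beam_fwhm_arcmin(2.2)))   # ~55 S-band beams
```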
We used turboSETI (Enriquez & Price 2019) to analyze our observations by searching for signals with specific drift rates. In the rest frame of the Sun, the proposed communication relay is effectively stationary, so a monochromatic transmission viewed from Earth should only show Doppler shifts due to the orbital motion and the rotation of the Earth. To obtain a drift rate for a signal from the relay, we used barycorrpy (Kanodia & Wright 2018) to calculate the radial acceleration between GBT and the probe. Given this radial acceleration dv_r/dt, we calculate a maximum drift rate ḟ_max = f_max (dv_r/dt)/c in Hz/s, where f_max is the maximum rest-frame frequency observed. The magnitude of the drift rate depends on the frequency and the relative acceleration at the time of observation. The largest contribution to the relative acceleration term is due to the rotation of the Earth. A signal from a probe at rest relative to the Sun should exhibit a negative drift rate to an observer on Earth (Sheikh et al. 2019). We calculate drift rates for the hypothetical probe relative to the barycentric frame at the time of observation based on the pointing of each beam. These drift rates are around −0.124 to −0.127 Hz/s in L-band, and −0.18 Hz/s in S-band. To account for uncertainty in the probe's position within a generous margin, we multiply the calculated drift rate for each pointing by a small numerical factor (×1.3) and use these modified drift rates as the maximum drift rates in our search through the data. Without specifying a minimum, we employ turboSETI to search for hits up to this maximum drift rate. Since the maximum drift rate of the hypothetical probe is quite small, it is computationally feasible to conduct a high-sensitivity search. By default, turboSETI searches for signals above an SNR threshold of 25. For our analysis, we lower this threshold to 10.
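The magnitude of the quoted drift rates can be recovered from the dominant acceleration term alone; the paper uses barycorrpy for the full calculation, while the sketch below keeps only the Earth-rotation contribution, with standard constants.

```python
import math

C = 2.998e8                   # speed of light, m/s
R_EARTH = 6.371e6             # Earth radius, m
OMEGA = 7.2921e-5             # Earth sidereal rotation rate, rad/s
LAT_GBT = math.radians(38.43) # GBT latitude

# Dominant term of dv_r/dt: centripetal acceleration of the telescope
# due to Earth's rotation (an upper bound, reached toward the horizon).
a_max = OMEGA**2 * R_EARTH * math.cos(LAT_GBT)

def max_drift_hz_s(f_hz):
    """Maximum Doppler drift rate, f * (dv_r/dt) / c, in Hz/s."""
    return f_hz * a_max / C

print(round(max_drift_hz_s(1.4e9), 3))   # L-band, ~0.124 Hz/s
print(round(max_drift_hz_s(2.0e9), 3))   # S-band, ~0.18 Hz/s
```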
Following an initial search for signals, turboSETI performs an additional check on the preliminary detections using three different "filter thresholds" to remove RFI from the pool of candidates based on the ON/OFF observing cadence. The first filter threshold simply looks for signals above the specified SNR. The second threshold looks for signals above the SNR that appear in at least one ON pointing, but not in any OFF pointings. And the third looks for signals that appear consistently in all ON pointings and no OFF pointings. It has been noted by the BL team (Enriquez et al. 2017) and others (Margot et al. 2021) that turboSETI has two potential weaknesses. First, frequency binning causes a significant drop in sensitivity for drift rates above 0.16 Hz/s. Fortunately, for these observations the expected drift rates are of the same order as the drift rate where this drop in sensitivity occurs. Thus, we do not expect this issue to cause significant sensitivity loss in this case. Second, it has been suggested that the filtering algorithm within turboSETI may be prone to missing signals that overlap with easily identifiable RFI. However, the narrow range of possible drift rates predicted by our theory allows for a much more sensitive search (Wright 2020). We were able to inspect all of the signals within this range by eye (3864 hits in L-band and 448 hits in S-band) to ensure that no signals of interest were discarded along with RFI during filtering.

L-band

The band pass filter used on GBT in L-band has sensitivity over the range of about 1.07 to 1.87 GHz, with a notch filter from 1.25-1.35 GHz. The RFI environment for L-band is notoriously "noisy," and the notch filter eliminates some of the powerful RFI that frequently dominates the band. The "hits" plotted in Figure 4 show the crowded RFI environment.
Hits outside regions of detector sensitivity, including the notch filter, have been included in greyed out regions to provide a more complete picture of the data set and RFI environment. The abundance of signals detected with multiple drift rates within narrow ranges of frequencies is characteristic of RFI. We refer to signals with the same morphology spanning multiple frequencies as combs. Although turboSETI is designed to not re-trigger on the same signal, it often triggers at multiple drift rates on the same comb when that comb spans a large enough frequency range and its morphology is sufficiently complex. The addition of the light blue vertical lines in Figure 4 illustrates the presence of these combs by highlighting hits closely grouped in frequency, separated by less than 0.1% of the maximum frequency in the given filter. The orange dashed lines show the expected drift rates for the ON source pointings as discussed in Section 3.1. From the entire L-band data set, turboSETI identified 27995 hits passing the first filter threshold, 23306 of which were within the frequency ranges the band pass filter is sensitive to, as described above and shown in the white regions of Figures 4 & 5. 9929 hits passed the second filter threshold, 9585 of which were within the frequency ranges the band pass filter is sensitive to, and only 3 signals made it through the third filter threshold, all within the sensitivity range. Figure 5 isolates all the hits within the barycentric drift rate window shown in Figure 4. Only the drift rates for the ON source targets are relevant, so we considered only those drift rates calculated by turboSETI near the expected values. The marker types indicate the highest filter threshold that each hit passed. Out of the 3 hits passing the third filter threshold, only 1 was within the drift rate range we were searching for.
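The three filter thresholds can be illustrated with a toy cadence check; this is a simplified sketch of the logic described above, not turboSETI's actual implementation:

```python
def filter_threshold(snr: float, seen_in_on: list, seen_in_off: list,
                     snr_min: float = 10.0) -> int:
    """Return the highest filter threshold (1-3) a hit passes, or 0.

    seen_in_on / seen_in_off: booleans, one per ON / OFF pointing,
    marking whether the signal appears in that pointing.
    """
    if snr < snr_min or not any(seen_in_on):
        return 0
    if any(seen_in_off):
        return 1                  # above SNR, but also seen in an OFF
    if all(seen_in_on):
        return 3                  # in every ON and in no OFF
    return 2                      # in some ONs, in no OFF

print(filter_threshold(25.0, [True, True, True], [False, False, False]))  # 3
```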
Figure 6 shows the output plot of the dynamical spectrum for this best candidate and more generally illustrates the ON/OFF observing cadence. Figure 7 zooms out to show this signal over a wider frequency range, which clearly identifies both what turboSETI likely erroneously triggered on, and that the signal is part of a ∼80 kHz wide sweeping RFI signal. While we cannot be certain about the signal's origin, its frequency, brightness, and drift structure are consistent with what would be expected from an IRIDIUM satellite downlinking in its L-band allocation (Maine et al. 1995). To be thorough, we concatenated all of the hits in the data from the ON sources and manually selected them by drift rate, over the range of -0.110 to -0.127 Hz/s. We plotted all 3864 of these signals and examined them by eye to confirm that turboSETI only filtered out RFI, finding nothing of interest.

Figure 5. All of the signals above a SNR of 10 detected using the turboSETI algorithm around the barycentric drift rates, between -0.124 Hz/s and -0.127 Hz/s in the ON target pointings, for these observations in L-band. This shows the SNR for all the hits around the dashed orange lines shown in Figure 4 within the detector sensitivity range. Hits passing only the first filter threshold (any signal above the SNR detected in any ON source pointing) are marked as blue dots. The purple triangles indicate hits passing the second filter threshold (any signal above the SNR in at least one ON and no OFF source windows). The lone golden star indicates the one hit within this drift rate window that passed the third filter threshold (signal detection in all ONs and no OFFs). Figure 6 shows the dynamic spectrum of this signal.

Figure 6.
The single L-band hit that passed turboSETI's strictest filtering threshold, intended to select for narrowband signals that appear in every ON source pointing and none of the OFF source pointings, in the target drift rate range near -0.124 Hz/s. Each rectangular block labeled "ON L" and "OFF HD13908" stacked on top of each other shows the ON/OFF observation cadence. This is plotted over a narrow frequency range of 602 Hz centered at the detected signal frequency of 1626.076447 MHz. The red dashed line indicates turboSETI's best-fit drift rate for this signal. The color mapping indicates power, normalized to the maximum power in each panel. It is difficult to determine what turboSETI picked up from this plot alone, but Figure 7 shows that it is RFI with a bandwidth of ∼80 kHz. Many hits in turboSETI show similar features, where a zoomed-out look clarifies that the signal is drifting RFI.

Figure 7. The same signal as in Figure 6, covering more than 80 kHz in L-band (as compared to 602 Hz in the previous plot). The red box indicates the plot window in Figure 6. From this plot it is easier to see what the algorithm likely picked up as a potential signal. This shows our most promising result is simply an RFI signal.

S-band

The band pass filter used on GBT in S-band is sensitive over the range of 1.80 to 2.80 GHz, with a notch filter from 2.30 to 2.36 GHz (Lebofsky et al. 2019). Figure 8 provides a look at the RFI environment in the S-band observations with all of the hits passing the first filter threshold of turboSETI according to their drift rate and frequency. Similar to Figure 4, regions where the detector has little to no sensitivity are greyed out. The RFI environment for S-band is not as crowded as L-band, but turboSETI still detects multiple frequency combs, again highlighted in vertical bands.

Figure 8. All of the signals above a SNR of 10 detected at the first filter threshold (any signal above the SNR detected in any ON source pointing) of turboSETI in S-band.
Each signal is sized by the log of the SNR. The drift rates of each plotted point, corresponding to a signal above the SNR, are determined by the turboSETI algorithm. Hits passing only the first filter threshold (any signal above the SNR detected in any ON source pointing) are marked as blue dots. The purple triangles indicate hits passing the second filter threshold (any signal above the SNR in at least one ON and no OFF source windows). The golden stars indicate the hits that passed the third filter threshold (signal detection in all ONs and no OFFs). The orange dashed line corresponds to the barycentric drift rate, of -0.18 Hz/s, for each of the 3 ON source pointings. The light blue vertical strips correspond to signals within 2.8 MHz (0.1% of the maximum frequency in this band) of each other, likely indicating they are part of a RFI comb. Data outside regions of detector sensitivity as well as those falling within the notch filter (2.30-2.36 GHz) have been greyed out.

Figure 9 isolates all the hits within the barycentric drift rate window shown in Figure 8. Only the drift rates for the ON source targets are relevant, so we considered only those drift rates calculated by turboSETI near the expected values. The marker types indicate the highest filter threshold that each hit passed. Out of the 6 hits passing the third filter threshold, none were within the drift rate range expected for an SGL probe. From the entire S-band data set, turboSETI identified 24298 hits passing the first filter threshold, 23563 of which were within the frequency ranges the band pass filter is sensitive to, as described above and shown in the white regions of Figures 8 & 9. 2226 hits passed the second filter threshold, 2169 of which were within the frequency ranges the band pass filter is sensitive to, and only 6 signals made it through the third filter threshold, all within the sensitivity range.
To be thorough, we concatenated all of the hits in the data from the ON sources and manually selected them by drift rate, over the range of -0.16 to -0.18 Hz/s. We plotted all 448 of these signals and examined them by eye to be sure no signals were being filtered out with the RFI by turboSETI. We found no signals of interest within the frequency ranges of this band pass.

Figure 9. All of the signals above a SNR of 10 detected using the turboSETI algorithm around the barycentric drift rate of -0.18 Hz/s for these observations in S-band. This shows the SNR for all the hits around the dashed orange lines shown in Figure 8 within the detector sensitivity range. Hits passing only the first filter threshold (any signal above the SNR detected in any ON source pointing) are marked as blue dots. The purple triangles indicate hits passing the second filter threshold (any signal above the SNR in at least one ON and no OFF source windows). No hits within this drift rate window passed the third filter threshold (signal detection in all ONs and no OFFs).

RESULTS & DISCUSSION

Our analysis reveals no signals near the expected drift rate for a stellar relay opposite α Cen in L or S bands. Although turboSETI output 3 signals in L-band and 6 signals in S-band that passed all of its filters, only 1 of these was found around the expected drift rate, and we determined that all candidate signals are RFI. This was confirmed by visually inspecting the waterfall plots produced by turboSETI, paying particular attention to signals around the expected drift rate for each pointing and examining the RFI environment for each band. Figure 6 shows the only one of these candidates identified with a drift rate around what would be expected. Figure 7 shows this to be RFI. To address concerns that a positive signal could be filtered out with the noise, we examined all of the signals detected around the barycentric drift rate for each band without applying turboSETI filter thresholds.
Although not practical for many radio SETI searches, our narrow range of drift rates provided the opportunity to scrutinize the data more closely. The result of this search provided additional confidence that there were no true positives filtered out by the turboSETI algorithm.

HD 13908

In addition to our SGL search, we used the OFF source pointings to observe a nearby star, HD 13908. We ran another analysis up to the default drift rate of 10 Hz/s on these observations in L-band and S-band at a SNR of 10. As this was not our primary target of interest, we did not address the limitations of turboSETI for this additional target. We report for completeness that no convincing ETI signals were found in these bands using turboSETI during the HD 13908 observations. In L-band, turboSETI detected 4932 hits passing the first filter threshold, 1085 passing the second filter threshold, and 7 hits passing the third filter threshold. None of the hits that passed the third filter threshold were of interest. In S-band, turboSETI detected 1733 hits passing the first filter threshold, 392 passing the second filter threshold, and no hits passing the third filter threshold.

Detection Sensitivity

Sensitivity for radio SETI observations is often measured with Equivalent Isotropic Radiated Power (EIRP) (Tarter 2001), which is a function of the observing instrument, the SNR threshold, the observing time, and the distance to the transmitter:

EIRP_min = 4πd² · SNR · SEFD · √(∆ν/t),

where d is the distance to the transmitter, SNR is the signal to noise ratio, ∆ν is the frequency resolution of the detector (where here we assume the transmitter bandwidth is narrower than or equal to the resolution of the detector), and t is the on-source integration time. The System Equivalent Flux Density (SEFD) is specific to the observing instrument in a given band. The SEFD is reported as 10 Jy for the L-band receiver and 12 Jy for S-band at GBT. In both bands, the BL backend achieves a frequency resolution ∆ν of 2.79 Hz.
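The EIRP limits can be sketched numerically. The sketch below uses the single-polarization form EIRP_min = 4πd² · SNR · SEFD · √(∆ν/t); the exact prefactor (e.g., whether two polarizations are summed) is an assumption here, so its output agrees with the paper's quoted figures only to within tens of percent:

```python
import math

AU_M = 1.495978707e11   # astronomical unit in metres
JY = 1e-26              # 1 Jansky in W m^-2 Hz^-1

def eirp_min_w(d_au, snr=10.0, sefd_jy=10.0, dnu_hz=2.79, t_s=300.0):
    """Minimum detectable EIRP (W) of an isotropic narrowband transmitter,
    assuming the single-polarization sensitivity expression."""
    d = d_au * AU_M
    flux = snr * sefd_jy * JY * math.sqrt(dnu_hz / t_s)   # W m^-2
    return 4.0 * math.pi * d ** 2 * flux

# Isotropic limit at 550 AU with the L-band SEFD:
print(f"{eirp_min_w(550.0) / 1e3:.1f} kW")
```

As expected, the limit scales with distance squared, so moving the probe from 550 AU to 1000 AU raises the minimum detectable power by a factor of (1000/550)² ≈ 3.3.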
The integration time for each of our pointings was 300 seconds, and we analyzed the data with a minimum SNR of 10. At the shortest theoretical distance of 550 AU, the minimum detectable powers of an isotropic transmitter in L-band and S-band are EIRP_L = 6.6 kW and EIRP_S = 8.1 kW, respectively. At the 1000 AU suggested by Hippke, EIRP_L = 22 kW and EIRP_S = 27 kW. However, we anticipate a probe utilizing the SGL to be a directed transmitter. Assuming a conservative gain of 10³ in L-band and roughly 4 times that in S-band (see Section 1.2), a ∼1 meter transmitter would be detectable by our search above 7 W at 550 AU or 23 W at 1000 AU in L-band, and above 2 W at 550 AU or 7 W at 1000 AU in S-band. These transmitter powers are comparable to cell phones, which emit at 0.6 or 3 W, as well as CB radios at 4 W. A micro-broadcasting FM transmitter puts out 7 W, and Voyager 1 transmits at 23 W. Using GBT to make observations affords studies of this kind some of the most sensitive radio reception in the world, allowing us to search for relatively low power transmitters. Conducting searches for more powerful transmitters may feasibly be carried out by less sensitive facilities, down to the scale of amateur hobbyists.

Future Applications

Although α Cen belongs to our nearest neighboring star system, it may not be the best star to study in the SGL network context. Kerby & Wright (2021) found that close binary star systems are unsuitable for stellar gravitational lens relays due to the high delta-v cost required to maintain alignment. Kerby & Wright instead suggest that the best candidate stars for hosting relay systems are without close companions, without large gas giant planets, relatively low in mass, and not spinning rapidly enough to deviate from sphericity. However, a significant number of stars do not fit these criteria, and the difficulties may often be overcome with some extra delta-v or by settling for lesser but still significant stellar lens gains.
Hippke (2021) created a first attempt at a prioritized list of targets for these searches, based on factors including the lack of massive companions and the presence of confirmed planets. Another consideration in selecting stellar relay probe search targets is whether our observations would be made from within the probe's beam, though we have shown that for relatively small spacecraft in radio wavelengths this is not of much concern. Focusing on stars in Earth's transit zone (Heller & Pudritz 2016;Sheikh et al. 2020;Kaltenegger & Faherty 2021) would allow for observations guaranteed to be in such a beam when Earth is eclipsed by the Sun from the star's viewpoint, even for large optical laser transmitters. We observed the SGL position opposite α Cen over a single night. Such a limited search may miss extant probes if they are not constantly transmitting. A thorough search for stellar relay probes in the Solar System would involve the monitoring of the antipodes of many stars over longer periods of time. Some progress on this may, in fact, be possible with serendipitous archival observations that happen to include the antipodes of nearby stars (see Palumbo et al., in prep). Our search was limited to the region of sky that would be occupied by probes at or past the Sun's focal point of 550 AU. However, Gillon et al. (2021) propose that searching for closer probes may still be worthwhile. Gillon et al. propose that an array of 1 m lasers may be placed much nearer the Sun, because even at 10 AU, the loss in gain is only a factor of 33. Such transmitters would be subject to stronger perturbations due to planets, but they would also receive much more solar irradiation for power. As discussed in Section 1.3.1, wavelengths longer than 100 µm are subject to signal strength loss due to the solar corona. Hippke (2020b) suggests that the optimal wavelength range for SGL signals would be between 100 µm and 1 nm. 
We observed at relatively long radio wavelengths, 1-4 GHz or 30-7.5 cm. Here, losses due to the solar corona are not prohibitive, but are nevertheless significant. The difficulty with using turboSETI to analyze observations at shorter wavelengths is, as mentioned in Section 3.1, that the 1-to-1 binning ratio is 0.16 Hz/s. The drift rate of a signal is a function of the frequency of that signal, so higher frequencies (shorter wavelengths) increase the drift rate beyond the 0.16 Hz/s limit, which causes a significant reduction in sensitivity (Margot et al. 2021). Sensitivity loss is a trade-off that may be worthwhile for the opportunity to cover more parameter space, especially when specific power levels are not well motivated. It is also possible to recover some of the lost sensitivity for higher drift rates through "frequency scrunching," wherein frequency bins are effectively summed together. The next generation dedoppler and hit search algorithm from the BL team is in production with the intention to improve search sensitivity.

SUMMARY & CONCLUSION

In this work, we have described the history and physics behind the proposed use of stars as gravitational lenses to create an interstellar communication network, which motivates an artifact SETI search for such probes in our own solar system. We presented our method for searching for radio transmission from a probe in the solar system in communication with nearby stars and demonstrated this method on α Cen. Our analysis found no signals near the expected drift rate during our observations. Therefore, if such a probe exists, it is not transmitting constantly in the 1-3 GHz range. We also present statistics regarding the radio frequency interference environment in this region of sky in the L and S bands. In the future, both archival data and new observations can use similar methods to further search for relays that make use of the Sun's lensing ability for interstellar communication.
Chloride Ion Corrosion Pattern and Mathematical Model for C60 High-Strength Concrete after Freeze-Thawing Cycles

In this study, the porosities of C60 high-strength concrete after 0, 30, 60, and 90 freeze-thaw cycles determined via the water retention method are 1.30%, 3.65%, 5.14%, and 7.34%, respectively. Furthermore, a mathematical model of porosity varying with the number of freeze-thaw cycles is established. Using an artificial environment simulation experimental system and the natural diffusion method, the chloride diffusion law of C60 high-strength concrete after 0, 30, 60, and 90 freeze-thaw cycles is obtained. The corresponding diffusion coefficients are calculated based on the experimental results and Fick's law, where values of 0.3431×10, 0.5288×10, 0.6712×10, and 0.8930×10 m²/s are obtained, respectively, and a mathematical model of the diffusion coefficient with freeze-thawing is established. Transport control equations comprising solution flow and solute migration control equations are established for chloride ions in concrete after freeze-thawing cycles. The equations consider the effects of freeze-thawing, solution pressure, solution concentration, solution density, convection, mechanical dispersion, and chemisorption on chloride ion transport in concrete. Using COMSOL numerical software, the transport control equations for chloride ions are solved using a real concrete numerical model, and the chloride ion corrosion process in concrete after freeze-thaw cycles is simulated. The simulation results are consistent with the experimental values.

Introduction

With the continuous increase in energy demand, mineral resources extraction has gradually shifted to deeper strata, and the development depth of vertical shafts has increased consequently, along with the increase in the cross-sectional size of the shaft [1][2][3][4][5][6][7][8][9][10][11][12].
For the development of deep and thick unstable aquifer mines, the artificial ground freezing method is widely used [13][14][15]. When constructing using the freezing method, the freezing shaft lining is crucial, and its main function is to block groundwater and withstand the temporary load to ensure the safety and smooth operation of shaft sinking construction [16][17][18][19]. The structural forms of freezing shaft lining primarily include single-layer reinforced concrete, double-layer reinforced concrete, and reinforced concrete composite shaft lining. Currently, concrete materials are essential in the construction of all freezing shaft lining. With the increase in mining depth, high-strength and high-performance concrete materials such as C60 and C70 have been used for freezing shaft lining, and many scholars have conducted relevant studies [20][21][22][23][24]. As the freezing shaft lining is generally cast on site, pouring concrete will produce a large amount of heat of hydration, and the heat of hydration will contribute to part of the freezing lining temperature. After the heat of hydration dissipates, the freezing lining will refreeze, and the temperature will decrease; therefore, the freezing shaft lining will be in a freeze-thaw environment with a changing temperature field, posing a significant challenge to the strength and stability of the freezing lining. Hence, many scholars have investigated the effect of temperature on the rupture of vertical shaft lining [20][21][22][23][24][25][26][27][28][29][30][31][32]. Reinforced concrete structures for freezing shaft lining not only are affected by temperature and stress fields but also are susceptible to the corrosive effects of the subsurface environment.
For example, sulfate has a certain corrosive and destructive effect on concrete materials, and the intrusion of chloride ions accelerates the rusting of reinforcing steel bars, thereby affecting the stability of the structure; this is closely associated with the durability problem of concrete structures and materials [33][34][35][36][37]. By understanding the corrosion law of chloride ions in C60 high-strength concrete after freeze-thawing, we aim to establish control equations for chloride ion transport in concrete considering freeze-thawing, solution pressure, solution concentration, solution density, convection, mechanical dispersion, and chemisorption and verify them by numerical calculations to provide some theoretical basis for the design, construction, and safe use evaluation of freezing shaft lining.

Materials and Mix Proportions. The materials are as follows. Cement: 42.5 grade ordinary silicate cement supplied by Xuzhou Zhonglian Cement Plant, the composition of which is shown in Table 1; fine aggregate: river sand with a fineness modulus of 2.8, good gradation, apparent density of 2679 kg/m³, and silt content of 2.49%; and coarse aggregate: stone, particle size of 5-20 mm, apparent density of 2719 kg/m³, and silt content of 0.54%. The mix proportions of concrete were 453 kg/m³ cement, 740 kg/m³ sand, 1112 kg/m³ stone, and 145 kg/m³ water; the water-cement ratio was 0.32. The compressive strength of the sample was 57.0 MPa after 28 d of standard curing.

Test Procedure. The test procedure, as shown in Figure 1, is as follows. Three cylindrical concrete samples of diameter d = 100 mm and height h = 50 mm (standard curing for 28 d) were used. Their actual height was measured, and then they were subjected to 0, 30, 60, and 90 freeze-thaw cycles (based on the GBT50082-2009 standard, China, shown in Figure 1(a)) and porosity tests (shown in Figures 1(a)-1(d)). The porosity test was performed using the water saturation method.
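The water-saturation porosity determination reduces to ϕ = V_ws/V_cs, with V_ws = (m_1 − m_0)/ρ_w and V_cs = π(d/2)²h_c. A minimal sketch of the arithmetic (the sample masses below are hypothetical, chosen only for illustration):

```python
import math

def porosity(m_saturated_g, m_dry_g, d_mm=100.0, h_mm=50.0,
             rho_w_g_mm3=1e-3):
    """Porosity by the water-saturation method:
    phi = V_ws / V_cs, with V_ws = (m1 - m0)/rho_w
    and V_cs = pi * (d/2)^2 * h_c."""
    v_ws = (m_saturated_g - m_dry_g) / rho_w_g_mm3   # mm^3 of absorbed water
    v_cs = math.pi * (d_mm / 2.0) ** 2 * h_mm        # mm^3 of sample
    return v_ws / v_cs

# Hypothetical masses for a d = 100 mm, h = 50 mm cylinder:
print(round(porosity(952.5, 947.4), 4))  # ~0.013, i.e., about 1.3%
```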
The freeze-thaw treated samples were first dried at 40°C for 48 h (shown in Figure 1(b)) and then weighed using an electronic balance, placed in a water tank with a 16 mm diameter PVC pipe underneath, and filled with room-temperature water such that the water surface was 4-5 mm above the bottom of the sample (shown in Figure 1(c)). The samples were removed at 5, 10, 30, 60, 120, 720, and 1440 min, respectively, and then weighed (shown in Figure 1(d)) and placed back immediately until all intervals were completed. Finally, the samples were completely submerged, and their water-saturated mass was measured after another 48 h.

Porosity. Porosity ϕ is calculated using equation (1):

ϕ = V_ws / V_cs, (1)

where ϕ is the porosity, V_ws is the volume of water in the sample when saturated, and V_cs is the actual volume of the sample. V_ws and V_cs are calculated using equations (2) and (3), respectively:

V_ws = m_ws / ρ_w, (2)

V_cs = π (d/2)² h_c, (3)

In equation (2), m_ws is the mass of water in the sample when saturated, and ρ_w is the density of water; in equation (3), the diameter of the sample d = 100 mm, and h_c is the actual height of the sample. m_ws is calculated using equation (4):

m_ws = m_1 − m_0, (4)

where m_1 is the mass of the concrete sample when saturated and m_0 is the initial mass of the concrete sample after drying. The porosity of concrete after 0, 30, 60, and 90 freeze-thaw cycles was calculated to be 1.30%, 3.65%, 5.14%, and 7.34%, respectively, and a model of porosity ϕ_n varying with the number of freeze-thaw cycles n was obtained, as shown in equation (5), where ϕ_n is the porosity of concrete after freeze-thawing; ϕ_0 is the initial porosity of concrete before freeze-thawing; η and θ are the material parameters, set at 1.61459 and 0.0143, respectively; n is the number of freeze-thaw cycles. The experimental values were compared with the calculated values, as shown in Figure 2.

Corrosion Pattern of Chloride Ions in Concrete after Freeze-Thawing Cycles

3.1. Test Methodology.
The sample measured 100 mm × 100 mm × 400 mm; the materials and mix proportions are shown in Section 2.1, and the test procedure is shown in Figure 3. In the test, four groups (three samples in each group) were formed and treated with 0, 30, 60, and 90 freeze-thaw cycles, respectively (based on GBT50082-2009, shown in Figure 3(a)); subsequently, a 100 mm × 400 mm rectangular surface of each sample was used as the corrosive surface, and the other surfaces were sealed with epoxy resin (shown in Figure 3(b)). The sealed samples were placed into an artificial marine climate simulation laboratory for the salt spray corrosion test (shown in Figure 3(c)), and the humidity in the environmental chamber was controlled at 75%-80%. The salt spray corrosion settings were as follows: salt spray "on" for 12 h/d (hour/day) + salt spray "off" for 12 h/d and NaCl mass concentration of 5% in the salt spray solution. When the corrosion time reached 30, 110, 190, and 270 d, the samples were removed, concrete powder was obtained by drilling to depths of 5, 10, 15, 20, 25, 30, 35, and 40 mm, respectively (shown in Figure 3(d)), and the chloride ion concentration values were measured using a DY-2501A chloride ion concentration rapid tester (shown in Figure 3(e)). After each drilling of powder, the borehole was sealed with a sealant. Subsequently, the samples were placed back into the artificial marine climate simulation laboratory and continued to be corroded until the next testing time point (shown in Figure 3(f)), that is, until the end of 270 d of corrosion.

Chloride Ion Concentration. Figures 4(a)-4(d) present the variation patterns of chloride ion concentration value c with depth l after 30, 110, 190, and 270 d of chloride ion corrosion, respectively, the analysis of which shows that c of the samples decreased with increasing l.
In this experiment, the salt spray cycle corrosion mechanism was used, and the driving force for chloride ion intrusion into concrete was primarily provided by capillary action and concentration gradient. The chloride ion concentration gradient in the surface layer of concrete was relatively large, whereas the chloride ion concentration gradient in the deep layer was relatively small; hence, more chloride ions were accumulated in the surface layer of concrete. As the corrosion time progressed, the total chloride ion corrosion and corrosion depth increased. As the number of freeze-thaw cycles increased, the porosity of the concrete increased, the water and air permeability of the samples increased, chloride ions were more easily accessible and stored, and the corrosion concentration of chloride ions increased (at the same depth).

Chloride Ion Diffusion Coefficient. Fick's second law was used to describe the corrosion pattern of chloride ions in concrete, as shown in equation (6):

C(l, t) = C_0 + (C_s − C_0)[1 − erf(l / (2√(Dt)))], (6)

where C_0 is the initial chloride ion mass concentration at any section of concrete, C_s is the chloride ion mass concentration at the concrete surface at any moment t, D is the chloride ion diffusion coefficient, t is the time, l is the distance from the concrete surface, and erf is the error function. The inverse solution of equation (6) yields the equation for the chloride ion diffusion coefficient as follows:

D = l² / {4t [erf⁻¹((C_s − C)/(C_s − C_0))]²}, (7)

The mass concentration of chloride ions on the concrete surface C_s is calculated as follows:

C_s = ϕ ρ_w ω / ρ_s, (8)

where ϕ is the porosity of concrete; ρ_w is the density of sodium chloride solution, that is, 1034 kg/m³; ρ_s is the density of concrete, that is, 2400 kg/m³; ω is the mass concentration of sodium chloride in the solution, that is, 5%. Based on equations (7) and (8), the chloride ion diffusion coefficient D at l = 5 mm was calculated using MATLAB, and the results are shown in Table 2.
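Inverting Fick's second law for D only needs the inverse error function; the sketch below stays in the Python standard library by bisecting math.erf, and assumes C_0 = 0. The concentration values are hypothetical:

```python
import math

def erfinv(y, lo=0.0, hi=6.0, iters=80):
    """Inverse error function on (0, 1) by bisection of math.erf."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def diffusion_coeff(c, c_s, l_m, t_s):
    """D from Fick's second law with C_0 = 0:
    c(l, t) = c_s * (1 - erf(l / (2*sqrt(D*t))))
    =>  D = l^2 / (4 * t * erfinv(1 - c/c_s)^2)."""
    z = erfinv(1.0 - c / c_s)
    return l_m ** 2 / (4.0 * t_s * z ** 2)

# Hypothetical: c = 0.4 * c_s measured at l = 5 mm after 30 d of corrosion:
t = 30 * 24 * 3600.0
print(f"{diffusion_coeff(0.4, 1.0, 0.005, t):.3e}")  # m^2/s
```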
A mathematical model correlating the chloride ion diffusion coefficient D_n with the number of freeze-thaw cycles n was developed, as shown in equation (9), where D_n is the diffusion coefficient of concrete after freeze-thawing; D_0 is the diffusion coefficient of concrete before freeze-thawing; ζ and ψ are the material parameters, which were set to 1.02151 and 0.00916, respectively; n is the number of freeze-thaw cycles. The experimental and calculated values were compared, as shown in Figure 5.

Basic Hypotheses. To derive the control equations for chloride ion transport in freeze-thaw corroded concrete, the following hypotheses are proposed: the liquid in concrete is an ideal liquid; the effect of temperature on liquid flow is not considered; and only a single solute (chloride ion) is considered. The transport of chloride ions in concrete can be classified into two main processes: solution flow and solute migration, in which the effects of freeze-thawing on both primarily affect the porosity and diffusion coefficient.

Porosity. Introducing the parameter ξ, pore deformation due to pore pressure changes can be characterized (equation (10)), where U_w denotes the pore volume. The volume of each component phase of the porous medium is expressed as follows:

U_w = ϕU, U_s = (1 − ϕ)U, (11)

where U is the total volume of the porous medium, U_s is the volume of the solid skeleton of the porous medium, and ϕ is the porosity of the porous medium. Based on the above hypothesis that the solid particles do not undergo deformation, U_s is a constant; that is, dU_s = 0.

Solution Flow Rate. It is assumed that the solution flow in the porous medium conforms to Darcy's law and that the effect of gravity is not considered:

u⃗ = −(k/μ)∇P, (13)

where k is the permeability of the solution and μ is the dynamic viscosity of the solution.
The relationship between permeability and porosity is described using the Kotyakhov [38] model, as shown in equation (15), where d is the effective diameter of the porous medium particles. Based on equation (15), equation (16) can be derived, where k_0 is the initial permeability of the solution. Because 1 − ϕ_n ≈ 1 and 1 − ϕ ≈ 1, equation (16) can be simplified to equation (17). Therefore, based on equations (13) and (17), the solution velocity can be expressed accordingly.

Solution Density. As chloride ions are transported through concrete, the pore pressure and solution concentration change, resulting in changes in the solution density. Hence, the density of the solution ρ is a function of the pore pressure P and the concentration c; that is, ρ = ρ(P, c). Taking the derivative of both sides of equation (19) yields

dρ = (∂ρ/∂P) dP + (∂ρ/∂c) dc   (20)

To effectively characterize the density as a function of pore pressure and solution concentration, two parameters β_P and β_c were introduced, the pressure compression coefficient and the concentration compression coefficient, respectively. β_P is the ratio of the change in density due to a unit change in pressure to the initial density, and β_c is the ratio of the change in density due to a unit change in concentration to the initial density. Substituting equation (21) into equation (20) and integrating both sides yields

ρ = ρ_0 exp[β_P (P − P_0) + β_c (c − C_0)]   (23)

where ρ_0 denotes the initial density, P_0 denotes the initial pore pressure, and C_0 denotes the initial concentration.

Control Equation for Solution Flow. Using equations (10), (13), (18), and (23), the flow control equation for the chloride ion solution in the concrete porous medium was obtained.

Control Equation for Solute Migration. Chloride migration in concrete is governed by convection, molecular diffusion, mechanical dispersion, precipitation-dissolution, complexation, and chemisorption.
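Integrating dρ/ρ = β_P dP + β_c dc gives the exponential density law of equation (23). Together with the cubic porosity scaling k/k_0 = (ϕ/ϕ_0)³ that follows from the simplified Kotyakhov relation when (1 − ϕ) ≈ 1, it can be sketched as below; the parameter values in the usage (β_P, β_c, the sample states) are illustrative assumptions, not the paper's calibration.

```python
import math

def solution_density(P, c, rho0, P0, c0, beta_P, beta_c):
    # Equation (23): rho = rho0 * exp(beta_P * (P - P0) + beta_c * (c - c0)).
    return rho0 * math.exp(beta_P * (P - P0) + beta_c * (c - c0))

def permeability(phi, k0, phi0):
    # Simplified Kotyakhov scaling with (1 - phi) ~ 1: k / k0 = (phi / phi0)^3.
    return k0 * (phi / phi0) ** 3
```

The cubic scaling explains why even a modest freeze-thaw increase in porosity strongly accelerates solution flow: doubling ϕ multiplies the permeability by eight.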
However, only the effects of convection, mechanical dispersion, and chemisorption on chloride migration were considered in this study.

Convective Effects. The convective migration of chloride ions is described by the convective flux

J⃗_v = c u⃗   (25)

where J⃗_v = (J_vx, J_vy, J_vz)^T is the convective flux vector and u⃗ = (v_x, v_y, v_z)^T is the solution flow velocity vector.

Mechanical Dispersion. The mechanical dispersion migration flux of chloride ions is expressed by Fick's first law:

J⃗_d = −D_d ∇c   (26)

where J⃗_d = (J_dx, J_dy, J_dz)^T is the dispersion flux vector and D_d = (D_dx, D_dy, D_dz)^T represents the mechanical dispersion coefficients in the x-, y-, and z-directions, respectively. If we assume that the porous medium is isotropic, then D_dx = D_dy = D_dz.

Chemisorption. The Langmuir adsorption theory was used to describe the chemisorption of chloride ions during transport in concrete, as shown in equation (27), where ρ_s is the dry density of the skeleton material, c_k is the isothermal adsorption concentration, K_L is the Langmuir isothermal nonlinear adsorption partition coefficient, and C_max is the maximum adsorption capacity.

Control Equation for Solute Migration. Equation (28) is the chloride ion convection-diffusion equation, where [D] denotes the chloride ion diffusion coefficient tensor. It is assumed that the diffusion coefficient is uniform everywhere in the concrete; therefore, the diffusion coefficient can be expressed as shown in equation (9).
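The Langmuir bound-chloride term used in the adsorption model above can be written as a small function. This is a sketch of the standard Langmuir isotherm c_k = K_L·C_max·c/(1 + K_L·c); the parameter values in the usage are illustrative assumptions, not the paper's calibration.

```python
def langmuir_adsorbed(c, K_L, C_max):
    # Langmuir isotherm: bound concentration saturates at C_max for large c.
    return K_L * C_max * c / (1.0 + K_L * c)
```

The isotherm is monotonically increasing in the free concentration c and approaches C_max asymptotically, which is why chemisorption retards the chloride front most strongly at low concentrations and matters less once the binding sites saturate.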
Considering adsorption, the mass of chloride ions in the representative elementary volume is expressed as ϕc + (1 − ϕ)ρ_s K_L C_max c/(1 + K_L c), and the convection-diffusion equation for chloride ions in concrete considering adsorption can be obtained by replacing ϕc in equation (28) with this expression; that is, equation (29). Substituting equation (13) into equation (29) yields the convection-diffusion equation for chloride ions expressed in terms of the pressure P and the concentration c, as in equation (30).

Control Equations for Chloride Ion Transport in Concrete. The control equations for chloride ion transport in concrete considering the freeze-thaw action comprise two parts, the solution flow control equation and the solute migration control equation, shown in equations (31a) and (31b), respectively. They contain the effects of convection, diffusion, and chemisorption, reflecting the coupling between the seepage and solute migration fields, where ϕ and D are given by equations (13) and (9), respectively.

The initial and boundary conditions are as follows. The solution flow and solute migration can be described using Dirichlet and Neumann boundary conditions, where P_0 and c_0 are the initial fluid pressure and chloride ion concentration distributions over the solution domain Ω, respectively. The numerical calculation model is shown in Figure 6. To study chloride ion corrosion of freeze-thawed concrete, numerical simulation scenarios involving 0, 30, 60, and 90 freeze-thaw cycles and a 5% chloride salt concentration were investigated. Because the coupled model and boundary conditions are complex, the model parameters were partially simplified in the calculation; for example, the porosity ϕ, solution density ρ_w, chloride ion diffusion coefficient D, and other parameters were set as constants.
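To illustrate how transport equations of this kind can be discretized, the sketch below solves only the diffusion part of the control equation in 1D with an explicit finite-difference scheme (Dirichlet surface concentration, approximate zero-flux far boundary). The grid, time step, and boundary treatment are this sketch's assumptions; convection, adsorption, and the pressure coupling that the authors solved in COMSOL are omitted.

```python
def simulate_diffusion(D, Cs, L=0.05, nx=51, t_end=270 * 86400):
    # 1D explicit finite differences for dc/dt = D * d2c/dx2 on [0, L].
    # Boundary conditions: c(0) = Cs (exposed surface), zero flux at x = L.
    dx = L / (nx - 1)
    dt = 0.4 * dx * dx / D              # respects explicit stability r <= 0.5
    c = [0.0] * nx
    c[0] = Cs
    t = 0.0
    while t < t_end:
        step = min(dt, t_end - t)
        r = D * step / (dx * dx)
        new = c[:]
        for i in range(1, nx - 1):
            new[i] = c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
        new[0] = Cs
        new[-1] = new[-2]               # zero-flux approximation at the far end
        c = new
        t += step
    return c
```

With the post-90-cycle diffusion coefficient, a 270 d run produces a concentration profile that decreases monotonically from the surface value toward zero at depth, qualitatively matching the eroded-from-the-surface behavior described for Figure 8.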
As shown in Figure 7, the processed numerical plots can accurately reflect the pore-fissure distribution in concrete, with smaller porosity corresponding to coarse aggregates and larger porosity corresponding to cracks. When the number of freeze-thaw cycles was 0, 30, or 60, no cracks were observed in the concrete; the pores were therefore relatively dispersed, and large pores dominated the mortar, as shown in Figures 7(a)-7(c). When the number of freeze-thaw cycles was 90, wider diagonal cracks appeared in the concrete, and the distribution of cracks can be observed clearly, as shown in Figure 7(d). Figure 8 shows the cloud plots of the chloride ion concentration distribution after 30, 110, 190, and 270 d of chloride ion corrosion of concrete at a corrosive surface concentration of 854 mol/m³ (5% NaCl solution) when the number of freeze-thaw cycles n was 0, 30, 60, and 90, respectively. As shown, the chloride ions eroded nonuniformly from the surface to the interior; with an increasing number of freeze-thaw cycles, chloride ion corrosion accelerated significantly, primarily because of the increased porosity of the concrete. Meanwhile, chloride ion diffusion and permeation in the concrete after 90 freeze-thaw cycles were significantly faster than in the concrete after 60 freeze-thaw cycles, primarily because of the increased porosity of the concrete and the generation of microcracks. Figure 9 shows the numerical simulation curves of the chloride ion corrosion concentration c in concrete with depth l (along the horizontal model midline) after different numbers of freeze-thaw cycles compared with the experimental values, where the scatter points represent the experimental values and the curves represent the simulated values. As shown, the simulated chloride ion concentration with depth was not a smooth decreasing curve, owing to the introduction of the real concrete numerical model.
Furthermore, the results reflected the effect of the coarse aggregates, as shown in Figure 9(d). The simulated curves deviated slightly from the experimental values, particularly for smaller numbers of freeze-thaw cycles, as shown in Figures 9(a) and 9(b).

Conclusion
The porosities of C60 high-strength concrete after 0, 30, 60, and 90 freeze-thaw cycles determined using the water retention method were 1.30%, 3.65%, 5.14%, and 7.34%, respectively, and a mathematical model of porosity varying with the number of freeze-thaw cycles was developed. The chloride ion diffusion patterns of C60 high-strength concrete after 0, 30, 60, and 90 freeze-thaw cycles were obtained using an artificial environment simulation experimental system and the natural diffusion method, and the corresponding diffusion coefficients were calculated to be 0.3431 × 10⁻¹², 0.5288 × 10⁻¹², 0.6712 × 10⁻¹², and 0.8930 × 10⁻¹² m²/s, respectively. Furthermore, a mathematical model of the diffusion coefficient varying with the number of freeze-thaw cycles was developed. Transport control equations for chloride ions in concrete after freeze-thawing, comprising a solution flow control equation and a solute migration control equation, were developed, in which the effects of freeze-thawing, solution pressure, solution concentration, solution density, convective action, mechanical dispersion, and chemisorption on the transport of chloride ions in concrete were considered. The chloride ion transport control equations were solved using a real concrete numerical calculation model in the COMSOL numerical software to simulate the chloride ion corrosion process after different numbers of freeze-thaw cycles, and the simulated values of chloride ion concentration agreed well with the experimental values.

Data Availability
Some or all data, models, or code generated or used during the study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
EFL learners' grit, classroom enjoyment and their willingness to communicate: Iranian public school versus private English language institute learners

The pivotal role of communication in second language (L2) learning has triggered plethoric research to identify factors that may influence learners' willingness to communicate (L2 WTC). However, there is a dearth of comparative research on L2 WTC, especially among EFL learners studying English at different educational institutions. To this end, the present study investigates the role of 'grit' and 'classroom enjoyment' (CE) in learners' L2 WTC in two different educational settings: public schools and private language institutes. Grit includes two lower-order constructs, namely perseverance of effort (POE) and consistency of interest (COI), which were examined separately in this study. A total of 269 Iranian students from both public schools and private institutes completed an online survey. Data analysis through the Mann-Whitney U test, Spearman's rho, and multiple regression revealed that private institute learners enjoyed higher levels of WTC than public school students. While POE and CE exerted a significant effect on L2 WTC in both educational settings, COI failed to do so. The findings of this study are discussed from a socio-educational perspective with regard to the difference between these two educational contexts. Given the need to examine influential variables with regard to L2 WTC, conducting this study is of critical importance. Given the above gaps, and to deepen our understanding of L2 WTC and provide pedagogical implications, this study examines the following research questions:

L2 WTC
For a long time, SLA was dominated by structural approaches, focusing mainly on the mastery of vocabulary and grammar. However, with the blooming of communicative approaches, there has been a significant shift in scholars' perspectives toward SLA.
In this regard, WTC, as an influential aspect of communication in L2, has attracted unprecedented attention from researchers. WTC has its roots in the first language studies and was applied to L2 communication by MacIntyre and Charos (1996). In the beginning, WTC was considered a stable personality variable across time and in different situational contexts. More recently, however, L2 WTC has been presented not only as a stable tendency toward communication, but also as a situational trait. Nowadays, researchers believe that both internal variables, such as L2 motivation and attitude toward L2, and external variables, such as inter-group climate and social support, can influence learners' L2 WTC (Zhang et al., 2020). As the pioneers of L2 WTC, MacIntyre et al. (1998) proposed a groundbreaking model of L2 WTC known as the Heuristic model of WTC in L2 (Fig. 1), which emphasized the complex interactions of varying factors such as context, personality traits, cognition, and emotion. This model can illustrate the multifaceted effects of different variables (individual and situational), including the variables of interest to the present study (i.e., grit and CE), in each layer on the top of the pyramid (i.e., L2 communication). This model consists of six layers which include 12 constructs. In this model, the top layers (I, II, & III) include "more changeable, and context-dependent" (state) variables, and at the bottom layers of the pyramid (IV, V, & VI), more "stable social individual context" (trait) variables are placed (Dewaele & Pavelescu, 2021, p.3). Following this line of research, many researchers have adopted this model to examine the relationship between L2 WTC with a plethora of variables, including motivation (Wu & Lin, 2014), language mindset (Zarrinabadi et al., 2021), anxiety (Zhou et al., 2020), enjoyment (Lee, 2020), and L2 motivational self-system (Lee & Lee, 2020). L2 grit Unlike cognitive skills, non-cognitive skills are relatively new to the SLA field. 
The scarcity of studies in the literature about non-cognitive skills can be rooted in the intangible nature of these skills, which may complicate the process of measuring and quantifying them (Teimouri et al., 2020). In recent years, however, non-cognitive skills have garnered vast research attention in the SLA domain. Grit as a non-cognitive variable highlights the differences between the achievements of students with the same intellectual talent. Researchers have established that grittier students succeed in achieving higher scores (Khajavy et al., 2021a, 2021b; Strayhorn, 2014), employ persistent effort while learning an L2 (Lake, 2013), and are more likely to experience positive classroom emotions (Wei et al., 2019). Since grit is not a fixed personality trait and can be considered malleable and teachable, the importance of measuring and estimating grit is gaining unprecedented attention. Duckworth et al. (2007) view grit as a domain-general variable that can be applied to different fields. Teimouri et al. (2020) extended the concept to L2 learning and developed an L2 domain-specific grit notion. They explained that grit is a higher-order factor consisting of two lower-order elements, POE and COI, which should be examined separately.

Classroom enjoyment
As the external positive variable, the concept of CE, first developed by Dewaele and MacIntyre (2014), is employed in this study. Boudreau et al. (2018, p. 153) claim that enjoyment is different from routine experiences of pleasure, since "if pleasure can occur simply by performing an activity or completing an action, enjoyment takes on additional dimensions such as an intellectual focus, heightened attention, and optimal challenge." Unlike pleasure, CE focuses on more permanent feelings of joy in the classroom (e.g., "In our EFL classroom, I'm a worthy member of the class").
Many researchers have established CE in relation to L2 WTC as a unique variable different from temporary classroom pleasure (e.g., Lee, 2020; MacIntyre et al., 1998, p. 547).

Ebn-Abbasi and Nushi, Asian J. Second Foreign Lang. Educ. (2022) 7:24

Public schools versus private English language institutes in Iran
The present research was conducted in Iran, where Persian is the official language and medium of communication, and English is taught as a foreign language (FL). English enters the country's school curriculum in the 7th grade as a mandatory subject and is therefore studied by everyone who attends school. Since Iranians hold a positive attitude toward English (Abdolahzadeh & Nia, 2014; Rezaei et al., 2019), it is also taught and learned outside public school contexts (e.g., in private English institutes). The centralized English courses taught in public schools have been criticized widely by many language researchers. Some of these criticisms concern the employment of inappropriate methodology, the use of low-quality coursebooks, highly crowded classrooms, a lack of necessary equipment, inefficient teachers, and failure to achieve the proposed objectives, including helping students to acquire basic communicative proficiency and learn English at a survival level (Farhady & Hedayati, 2009; Moradkhani & Shirazizadeh, 2017; Rahimi, 2009). Private English institutes in Iran, on the other hand, employ different policies toward language learning. These institutes have been established to promote language use among learners by offering better content and methodology than those of public schools. FL instruction provided in private centers is fundamentally different from that in public schools in several ways (Moradkhani & Haghi, 2017). From a curriculum perspective, the Ministry of Education in Iran has published certain books and assigned the same syllabus to public schools across the country.
Khoshsima and Hashemi Toroujeni (2017) revealed that the Iranian public educational system primarily uses the Grammar Translation Method (GTM) as its teaching methodology. For many decades, the methodology of such schools consisted of "reading, translation, memorization, and grammar" (Hosseini Goodrich, 2020, p. 9). This approach was replaced with Communicative Language Teaching (CLT) in 2013; nonetheless, criticism was still directed at the quality and effectiveness of the new English teaching method and its concomitant textbooks (Hosseini Goodrich, 2020). More specifically, the course materials taught at high school are still focused on reading comprehension, grammar, and vocabulary development (Sadeghi & Richards, 2016). However, by employing a decentralized system, private institutes utilize various international coursebooks and different approaches to teaching English. CLT principles are the favorite choice for designing private institutes' curricula, indicating that more attention is given to communication than in public schools. These institutes are commonly better equipped and allow fewer students per classroom than public schools. In these institutes, teachers are encouraged to promote communication in the target language. They are required to attend teacher training courses and are mostly assessed via interviews before being given the job (Moradkhani & Haghi, 2017). Khajavy et al. (2018) investigated the effects of FL learners' emotions and a positive classroom environment in relation to L2 WTC. They studied WTC with regard to the effects of specific positive and negative emotions, namely enjoyment and anxiety. To this end, they employed doubly latent multilevel analysis as a quantitative methodology.
They found that at the individual level both positive (CE) and negative (anxiety) variables influenced L2 WTC, suggesting that L2 learners who experience CE and less anxiety while learning an L2 are more likely to initiate or participate in the act of communication. At the classroom level, Khajavy et al. (2018) observed a relationship between CE and WTC that is equal at both the individual and classroom levels. However, this relationship was observed only at the individual level in the case of anxiety. It is worth mentioning that the correlation between positive emotions and WTC was stronger and more consistent than that found for negative emotions.

Previous studies
The findings of Khajavy et al.'s (2018) study align well with those of Dewaele et al.'s (2019) study conducted in a different context. They confirmed that FL anxiety and CE are influential negative and positive predictors of L2 WTC, respectively. Employing multiple regression analysis on data gathered from Spanish students, these researchers suggested that frequency of use can also create a sense of WTC among students. Regarding grit and SLA, Karlen et al. (2019) investigated the role of implicit theories about self-regulated learning (beliefs regarding whether or not intelligence can be expanded) on learners' (N = 1215) grit as a two-factor model (POE and COI), goal achievement, achievement on challenging academic tasks, and learning motivation. They also examined the relationship between POE and COI and learners' goal achievement, intrinsic and extrinsic motivation, and academic achievement. To this end, these researchers employed a longitudinal self-report design in Switzerland. In a nutshell, Karlen et al. (2019) concluded that incremental theory (the belief that intelligence is a malleable trait that can be improved through hard work) is highly related to grit and adaptive motivational patterns.
Interestingly, they observed a weak indirect correlation between POE and academic achievement, while COI showed none. Furthermore, different motivational correlation patterns were obtained between POE and COI and achievement goals and intrinsic motivation in this study. Wei et al. (2019) observed a direct positive link between grit, foreign language proficiency (FLP), classroom environment, and foreign language enjoyment (FLE). Noting that the level of English among middle school students is still low while their classroom anxiety is high, the researchers found that by putting consistent effort into FL learning, students could increase their FL self-efficacy, which leads to overcoming challenges and setbacks on their path to success. Their study also indicated that grit positively influences FLP through CE and FLE. The researchers explained that one of the reasons behind this finding may be Asian students' lack of interest in learning another language. Therefore, grit can be fostered through a positive environment and an enjoyable atmosphere in the classroom that attracts students' interest, which can lead to putting more effort into learning and achieving better FLP. Another finding of the study was the moderating effect of CE between grit and FLE, and also between grit and FLP. Simple slope test results showed that when CE was good, students' FLE and FLP increased significantly with increasing grit. However, when CE was poor, an increase in grit did not significantly affect FLE and FLP. This shows that although grit is a positive personality trait, without good CE its effect on FLE and FLP is minimized. A possible explanation for this could be the effect of environmental factors. Lee (2020) explained that students with higher levels of POE who experience a positive environment in the classroom are more likely to show higher levels of WTC. COI, on the other hand, was not observed to be a significant predictor of L2 WTC.
Another significant finding of this study was the positive relationship between L2 WTC and the length of time devoted to learning English. In other words, students with more experience of learning English tend to communicate more than others; in contrast, Lee found a negative correlation between students' age and their level of L2 WTC. University students showed lower levels of WTC than students at lower levels of education. In a recent case study by Dewaele and Pavelescu (2021), the link between FLE, foreign language classroom anxiety (FLCA), and WTC was examined among two high school students of English in Romania. The qualitative data collection procedure included observations, a written task, and semi-structured interviews with these students. The analysis of the obtained data revealed that FLE and FLCA affect L2 WTC in different ways. An important finding of this study was the direct and indirect influence of previous experiences of using English in and outside the classroom, and of the students' different personalities, on their classroom emotions and WTC in English. While observing a direct link between FLE and FLCA and the students' L2 WTC during the interviews and observations, these researchers found that infrequent participation cannot be taken as a sign of low levels of WTC; it can be a normal response to the context, uninteresting topics, and presumed responses from classmates.

Participants
A total of 269 Iranian EFL learners from both public schools and private institutes were surveyed in this study. The participants were selected based on convenience sampling. Participants from the schools were studying in the 11th and 12th grades at the time. As for the institute sample, language learners with elementary (A1) and pre-intermediate (A2) levels of English were selected for the study.
These levels were based on the Common European Framework of Reference for Languages (CEFR), which describes language ability on a six-point scale, from A1 for beginners to C2 for advanced students. Gender was not considered a variable in our study. Table 1 presents the demographic characteristics of each group that took part in this study.

Instruments
The data for each of the variables in the present study were collected using an online survey consisting of three different questionnaires, namely the L2 WTC questionnaire (Peng & Woodrow, 2010), the grit questionnaire (Teimouri et al., 2020), and the CE questionnaire (Dewaele & MacIntyre, 2014). The survey started with a section that elicited the participants' demographic information. To control for some extraneous variables, the researchers added two items to the public school version of the questionnaire: the first to detect and remove public school students who were simultaneously attending private institutes, and the second to identify students' grades. The questionnaire administered to the learners in the private institutes contained an additional item to detect learners' current English proficiency level. Moreover, to prevent age differences from affecting our results, the data collected from institute learners who were older than 18 or younger than 15 were excluded from the analysis. The details of the three questionnaires comprising the survey are given below.

The L2 WTC questionnaire
A modified version of the L2 WTC scale by Weaver (2005), adopted by Peng and Woodrow (2010), was used in this study (10 items). The participants answered questions such as 'I am willing to ask my group mates in English the meaning of words I do not know' on a 5-point Likert scale ranging from 1 (definitely not willing) to 5 (definitely willing). The Cronbach's α for this scale in Peng and Woodrow (2010) was 0.85.

The grit questionnaire
Teimouri et al.
(2020) developed an L2 domain-specific grit scale that consisted of two sections, POE and COI, in the classroom. This questionnaire consisted of 9 items: 5 items related to POE (e.g., 'I am a diligent English language learner') and 4 items covering COI (e.g., 'I think I have lost my interest in learning English'). The participants could select their responses on a 5-point Likert scale ranging from 1 (not like me at all) to 5 (very much like me). The Cronbach's alpha reported in Teimouri et al.'s (2020) research was 0.80.

The classroom enjoyment questionnaire
CE data were collected using a modified version of the original FLE scale developed by Dewaele and MacIntyre (2014). The original questionnaire included 24 items on a 5-point Likert scale (from strongly disagree to strongly agree), from which 16 items (e.g., 'I enjoy our EFL classroom') that were appropriate for our context were chosen. The mean Cronbach's α reported for this scale in Dewaele and MacIntyre's (2014) study was 0.86.

Data collection procedure
Google Forms was chosen to host the survey instrument and collect the data. To help the participants comprehend the items better, the questionnaire was translated into Persian by two professional translators. After obtaining the approval of the school (N = 5) and institute (N = 6) boards, the link to the survey was sent to the EFL learners.

Descriptive statistics
Cronbach's alpha was used to estimate the reliability of each questionnaire comprising the survey. As seen in Table 2, the questionnaires enjoyed high levels of reliability in eliciting WTC, grit, and CE from all the respondents in public schools and private institutes. For the first research question, the mean and standard deviation for each group were calculated. The results of the descriptive analysis revealed that the mean of L2 WTC in public schools was 2.3, and the standard deviation was 1.16 (Table 3).
Table 4 displays the descriptive statistics for L2 grit and CE in public schools. The respondents' mean performance on those variables was above 2, which indicates close to medium performance on L2 grit and CE. The mean of L2 WTC in private institutes was 3.1, and the standard deviation was 1.31, revealing that the participants' level of L2 WTC was also medium. Table 5 presents the descriptive statistics for POE, COI, and CE among private institute learners. The means for POE and COI were 2.86 and 3.28, respectively, indicating that the participants' POE was slightly below and their COI slightly above the medium range. Regarding CE, the learners' responses yielded the highest mean (3.29) in comparison with that obtained for grit (POE & COI), which was also medium (Table 4). To investigate how public school students and private institute learners differ in their level of L2 WTC, Kolmogorov-Smirnov and Shapiro-Wilk tests were first conducted to check whether the data were normally distributed. The results showed that the data were not normally distributed (Table 6); therefore, following McKnight and Najab's (2010) suggestion, the Mann-Whitney U test, which assumes no specific distribution and is the nonparametric equivalent of the independent t-test, was employed in this study. As shown in Table 7, the Mann-Whitney U test showed a significant difference between the levels of L2 WTC of private and public school learners (U = 7382.500, p = 0.013). The effect size was small (0.151) according to Cohen's (1992) classification of effect sizes.

Correlational analyses
Spearman correlation was used to investigate the relationship between L2 WTC, L2 grit (POE & COI), and CE among the EFL learners in public schools and private institutes (research questions two and three). The results of the Spearman correlation revealed that, in public schools, there was a significant positive association between CE and WTC, r(148) = 0.31, p < 0.001 (Table 8).
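The Mann-Whitney U test reported above can be sketched in a few lines of pure Python. This is an illustrative reimplementation on made-up data, not the study's data or software; it uses the normal approximation for the p-value, which is crude for small samples.

```python
import math

def mann_whitney_u(a, b):
    # U statistic: count pairs where a beats b, ties count 0.5.
    u1 = 0.0
    for x in a:
        for y in b:
            if x > y:
                u1 += 1.0
            elif x == y:
                u1 += 0.5
    n1, n2 = len(a), len(b)
    u = min(u1, n1 * n2 - u1)
    # Two-sided p-value via the normal approximation (no tie correction).
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2.0))
    return u, p
```

Because the test works only on rank order, it needs no normality assumption, which is exactly why it was chosen after the Kolmogorov-Smirnov and Shapiro-Wilk checks failed.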
In the case of grit, the results indicated a significant positive correlation between grit (POE) and L2 WTC, r(148) = .37, p < 0.001. Grit (COI), on the other hand, did not show a significant correlation with L2 WTC, r(148) = .14, p = 0.078. Regarding the private institute learners, the results indicated significant positive associations of CE, r(121) = 0.31, p < 0.001, COI, r(121) = 0.24, p = 0.007, and POE, r(121) = 0.53, p < 0.001, with private institute EFL learners' L2 WTC.

Regression analyses
Two separate regression analyses were also performed to estimate the predictive power of L2 grit (POE & COI) and CE for L2 WTC. Table 9 shows information about the regression model as a whole. The three variables together positively correlated with the total score at 0.55, which is fairly high. The adjusted R² indicated that the model significantly predicted 29 percent of the variance in the population. To check the significance of these relationships, the results of the ANOVA are required (Table 10). The results indicated that POE and CE significantly predict learners' WTC (F = 37.24, p < 0.001). Moreover, the coefficients table (Table 11) showed that POE as grit's first component (B = 0.30, p < 0.001) and CE (B = 0.23, p < 0.001) wielded significant predictive power for public school students' L2 WTC. On the other hand, COI did not show any significant predictive power for L2 WTC among learners (B = 0.010, p = 0.872). Table 12 demonstrates that the three variables together positively correlate with the total score at R = 0.720, which is high. R-squared (R²) shows that 51 percent of the variance was predicted.

Discussion
This study was heuristic in that it is among the first efforts to compare the relationship between positive individual (grit) and situational (CE) variables and L2 WTC among EFL learners in both public schools and private institutes. The results of the present study, to some extent, replicate but mainly extend previous findings on EFL learners' L2 WTC.
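Spearman's rho, used for the correlational analyses above, is simply the Pearson correlation computed on ranks. A minimal sketch (with average ranks for ties; the data in the test are illustrative, not the study's):

```python
import math

def _ranks(v):
    # Assign 1-based ranks, averaging ranks over tied values.
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    # Pearson correlation of the rank vectors.
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den
```

Like the Mann-Whitney U test, this rank-based statistic is appropriate here because the questionnaire scores are ordinal Likert data and were not normally distributed.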
Ebn-Abbasi and Nushi Asian. J. Second. Foreign. Lang. Educ. (2022) 7:24

Our approach was consistent with the goal of positive psychology (Seligman & Csikszentmihalyi, 2000) and L2 WTC theory (MacIntyre et al., 1998) to understand the possible links between positive emotions and success (L2 WTC in this case). Several significant points were observed after analyzing the data, which will be discussed in the following sections.

Public school and private institute learners' L2 WTC

The descriptive data and the Mann-Whitney U test demonstrated that learners in private institutes enjoy higher levels of L2 WTC compared with public school students. The reasons for this finding can be explained from a socio-educational perspective. First, Iranian public schools and private institutes differ in their teaching methodology. The former syllabus for English in public schools was mainly based on the GTM (Rahimi, 2009). Although the current so-called 'revised' version (in effect since 2013) is claimed to be in line with CLT perspectives, it focuses on inductive grammar teaching, reading comprehension, partial vocabulary instruction, and limited speaking exercises in the form of drills (Sadeghi & Richards, 2016), and it has failed to promote communicative proficiency among students. On the other hand, private institutes possess a decentralized system, enabling them to select their own textbooks and methodology. Almost all of the institutes fashion their methodology based on the principles of the communicative competence approach (Zhang & Rahimi, 2014) so as to prepare learners to communicate in both spoken and written modalities (Rahimi & Zhang, 2015). Naturally, the different teaching methodologies of the two enterprises can be a justifiable reason why the WTC of the learners would differ in the two contexts. Second, the majority of classrooms in Iranian public schools are teacher-dominated. This teacher-centered method at schools can deter students' participation in the classrooms.
Zarei et al. (2019) suggested that the dominant teacher-centered style in Iran's educational system, teachers' roles in the classroom, and institutional expectations are debilitating factors concerning WTC. Moreover, in teacher-centered classrooms, pair/group work, which is likely to trigger active student-student interaction in the L2, is ignored to a great extent. Many researchers (e.g., Ahlquist, 2019; Fushino, 2010) have shown that group work significantly affects the ease of language use. Private institutes' classrooms are less teacher-dominated in comparison to public schools, and pair/group work is more common. In Cao and Philp's (2006) words, group work can easily advantage language learners and give them a sense of security to initiate and participate in communicative acts. Third, as Robertson et al. (2000) pointed out, culture can affect teachers' and students' interactions. Students entering the EFL classrooms for the first time in public schools have already formed a perception of the roles normally held by the teachers based on the cultural norms and their experiences with teachers in other subjects. They are likely to find it natural to hold their teachers in high regard, which brings us to the issue of 'power distance.' As Hofstede (2001) puts it, power is distributed unequally, yet this is 'accepted' and 'expected.' Public schools' classrooms are teacher-centered, where teachers' role as the dominant authority is accepted and expected. This often hinders a possible yet needed friendly atmosphere in the classrooms, which could enhance students' willingness to participate in or initiate communication. Dewaele et al. (2019) confirmed that teachers should be supportive, funny, and friendly to enhance the effects of novel and exciting classroom activities.
Regarding item 30 in this study's questionnaire, which addressed 'teachers' friendliness,' the researchers observed a large mean difference between the learners' responses in public schools and private institutes. This suggests that learners in private institutes find their teachers friendlier. This is understandable considering the smaller age difference between the EFL learners and their teachers in the private institutes, the less teacher-dominated classrooms, and the employment of exciting activities in this context, which reduce the power distance between teachers and the learners and increase the possibility of classroom communication. Fourth, Iranian public schools are test-oriented, and performance-based tasks are not a part of the criteria for the final assessment (Farhady & Hedayati, 2009). This is understandable when considering the Iranian public schools' context. English study for most public school students in Iran is undertaken to meet the need for passing examinations rather than for communicative purposes. A similar situation has been reported in the Chinese context. Peng and Woodrow (2010) found that both EFL teachers and learners prioritize test-related skills such as vocabulary, reading, and writing over speaking in an exam-oriented context. This contrasts with private institutes, where performance-based tasks are included as a part of the examination criteria; therefore, learners are more willing to participate in communication. These possible reasons resonate with those hinted at by Yashima et al. (2017), who realized that inserting performance-based activities in the assessment criteria can enhance students' WTC in the Japanese and Korean contexts, which are also counted as exam-driven countries.

CE and WTC

The results of this study revealed that CE positively correlates with and predicts L2 WTC in both contexts. There was, however, a small difference between the perceived CE among learners in these two settings.
The EFL learners in public schools experienced CE slightly less than those in private institutes. Adding to the nascent but growing literature focused on CE, these findings bring the important relationship between CE and L2 WTC into the spotlight. The positive relationship between CE and WTC can be explained in terms of Fredrickson's (2003) 'broaden and build' theory. The 'broaden' side of this theory suggests that positive emotions can motivate learners to explore new experiences and seek opportunities for learning more efficiently. In other words, learners would be more willing to participate in communicative activities when they enjoy the classroom. Moreover, through creating active approaches to peers and teachers, positive emotions such as CE can also diminish the effect of negative emotions that act as hindering factors in relation to L2 WTC (Dewaele & Alfawzan, 2018). These findings are in line with those of prior studies that investigated the possible link between positive emotions and L2 WTC (Khajavy et al., 2018, 2021a, 2021b; Jiang & Dewaele, 2019; Peng, 2015).

Grit and L2 WTC

Grit (POE) displayed a positive correlation and significant predictive power in relation to L2 WTC in both contexts. By comparing the reported results, one can notice that POE was higher among private institutes' learners. We can infer from the analysis that grittier students who show constant effort can be more inclined to engage in communication. This is in agreement with the findings of Dörnyei and Ushioda (2013), who explained that success in learning an L2 is highly reliant on learners' sustained effort. These findings can be reviewed in light of the incremental theory of intelligence (Costa & Faria, 2018; Lindblom, 1950), which explains that intelligence is a malleable quality that can be developed.
In this regard, EFL learners with a malleable perspective toward language learning are expected to exert themselves to achieve goals despite setbacks. The findings of other studies that have established positive associations between grit and SLA further confirm this claim. Teimouri et al. (2020) asserted that grittier students are more likely to engage in class discussions than their less gritty peers. In relation to L2 WTC, Lee (2020) also demonstrated that POE is predictive of L2 WTC. Furthermore, the results suggested no significant predictive power or correlation for COI with L2 WTC in the public schools' setting. In the private institutes, however, a significant correlation, but no predictive power, was observed between COI and L2 WTC. This finding also implies that learners' interest in language learning may wax and wane during the process of language learning, a process which is naturally replete with lengthy and tedious activities. Nevertheless, learners might still work hard to achieve their goals. Similar findings were reported in prior research in other contexts such as Korea (Lee, 2020), Switzerland (Karlen et al., 2019), and China (Feng & Papi, 2020). Credé et al.'s (2017, p. 502) conclusion that "perseverance is a much better predictor of performance than either consistency or overall grit …" further buttresses our findings regarding COI.

Conclusion

WTC has been studied from a variety of perspectives, from being examined as a personality trait disposition to a novel situational approach. The results of this study showed that students in public schools suffer a lower level of L2 WTC compared to EFL learners in private institutes. Many researchers have suggested various ways to nurture L2 WTC in EFL classrooms. For instance, Cao and Philp (2006) suggested familiarity with interlocutors, familiarity with the topics, and self-confidence as important factors to increase learners' WTC.
Public schools' EFL teachers, therefore, should know that although speaking in the target language is not a part of the examination criteria, they can assign grades to this skill. Yashima et al. (2017) confirmed that including performance-based tasks can help students be more willing to participate in communicative acts. Iran's Education Ministry authorities and curriculum designers should also make changes in the overall English education system so as to reinforce WTC among EFL learners. The language learning process can be, at times, tedious and challenging (e.g., participating in communicative activities). EFL teachers should be aware of the importance of grit (POE) and how it can help learners during this process. These teachers should introduce their students to the malleable nature of intelligence and inform them that besides talent, more valuable factors such as diligence and perseverance can help them achieve their goals. According to Keegan (2017), being 'gifted' is not the only predictor of success in language learning. In relation to L2 WTC, since many teachers, regardless of where they are teaching, struggle to engage their students in communication, the results of this study, which depicted POE as a positive predictor of L2 WTC, can be of great interest. Teachers can ensure that their learners are familiar with the positive role of POE by giving lectures, introducing successful people, and encouraging them to be persistent in achieving their objectives. This study revealed the positive link between an external positive emotion (CE) and L2 WTC. EFL teachers, especially in public schools, which displayed lower levels of CE, should know that besides boosting students' L2 WTC (Cheng, 2021; Lee, 2020), positive emotions such as enjoyment can attenuate the hampering effects of negative emotions on L2 WTC.
To promote a positive and enjoyable atmosphere in the EFL classrooms, teachers should be encouraged to reduce the power distance by having a friendly tone, employing exciting activities, and helping learners feel secure enough to start communicating in the target language. Some limitations need to be taken into account when interpreting the results of this study. First, a close-ended questionnaire was employed to collect the data due to the restrictions imposed by the Covid-19 pandemic at the time of conducting this study and the practicality of online questionnaires at that critical time. This deprived the present study of qualitative data. Future studies can use multiple data collection methods such as field observations and interviews to ensure the integrity of the data. Second, since the researchers intended to compare the results obtained from public schools with those of private institutes, students with the same level of proficiency in the two contexts had to be selected. Due to the limited curriculum in Iranian public schools, English proficiency among public school students remains low, almost equal to the lowest level of proficiency in private institutes. Therefore, this can limit the generalizability of the findings to other proficiency levels. The present study encourages further research on public schools' EFL learners' L2 WTC, which has been neglected to an unfortunate degree. Other investigations with a larger sample size regarding the roles and effects of grit and CE on L2 WTC would be worthwhile and generalizable to Iranian EFL learners. Third, it is worth mentioning that Iran is a multilingual and multicultural country. The present study was conducted from a socio-educational perspective. Therefore, future studies examining the same variables with regard to cultural issues would yield more sustainable results.
Fourth, the effects of gender on these variables were not addressed in this study and, to the best of our knowledge, not at all in the Iranian context. Further research can compare the variances due to gender differences.
Towards an extended framework for analysing technology policy

This paper analyses technology policy as a scholarly concern and political practice that needs to be taken beyond the present somewhat singular focus on innovation and deployment. We also need to include an interest in the making of infrastructure, the provision of regulations, and democratic engagement. Consequently, this paper introduces the concepts of socialisation and domestication to overcome the instrumental, economic framing of technology policy. These concepts highlight the importance of embedding and enacting new technology. The suggested conceptual framework is used in a brief synthetic analysis of four examples of technology policy and technological development in the Norwegian context: cars, wind power, hydrogen for transport, and carbon capture and storage (CCS).

Introduction: what is technology policy?

Technology plays a prominent role in many kinds of discourses concerned with improving human conditions and the political management of challenges like global warming, sustainability and employment. In particular, this is expressed through widespread use of concepts like 'innovation' and 'knowledge-based society', which form the basis of much of today's public policies and governance. Arguably, the development of technology has become a sublime that focuses the hope for a better future in a particular manner. This paper is concerned with how we may conceptualise the scope of policy issues involved in pursuing technological development as a way of improving modern societies. Presently, many scholars agree about the need to supersede the present dominance of a fairly singular focus on technological innovation for economic growth, albeit for different reasons, like the need for sustainable transitions (Schot and Geels 2008, Steward 2012), the impact of non-technological regulations (Paraskevopulo 2012), concerns for the role of activists (Hess 2007), the need to include broader political economy
perspectives (Tyfield 2012), or the importance of pursuing public engagement and perceptions of risk (Felt et al. 2007). In early science and technology studies (STS), the analysis of science and technology policy was a main concern (Spiegel-Rösing and Price 1977). However, the main focus of these efforts was science-government relations centred on R&D, in particular the analysis of how social interests shaped such policies (Cozzens and Woodhouse 1995, Elzinga and Jamison 1995). While these are important issues, this paper moves in a different direction. Rather than emphasising the role of science policy as an articulation of social interests and power to influence innovation, I want to pursue what may be considered "downstream" issues arising from efforts to integrate technologies in society. Thus, the intention is to complement the efforts of broadening science and technology or innovation policy analysis by developing an inclusive concept of technology policy. This concept should help provide a comprehensive agenda with respect to what the analysis of policy-making with respect to technology may involve. As a scholarly term, technology policy is not widely used in the social sciences, including policy analysis. The concept is not common in public political discourses either. For example, using 'technology policy' (in Norwegian: 'teknologipolitikk') to search Norwegian news media through the comprehensive database Retriever, we find that the term is rare, in striking contrast to 'science policy' or 'innovation policy'. Maybe 'technology policy' triggers unpopular images of governmental planning and thus runs counter to the present dominance of neoliberalism and the belief in the all-powerful market? Or is it that the concept does not fit the heralded visions of globalisation since it seems to refer to the nation state? What should we mean by 'technology policy'? Lewis M.
Branscomb (1993:3) provides the following definition: A technology is the aggregation of capabilities, facilities, skills, knowledge, and organization required to successfully create a useful service or product. Technology policy concerns the public means for nurturing those capabilities and optimizing their applications in the service of national goals and the public interest […]. Technology policy must include not only science policy … but also all other elements of the innovation process, including design, development, and manufacturing, and the infrastructure, organization, and human resources on which they depend. In a similar vein, Charles Edquist (1994:68) defines technology policy as "all public intervention in the process of technical change. More specifically technology policy is implemented by a number of public policy-making bodies that use specific instruments to influence the process of technical change". Further, Edquist makes a distinction between direct and indirect technology policy. The first is expressly intended to influence technical change, while the latter includes policies that are not primarily designed to shape technical change, but still have such effects. This includes trade policies, military policy and industrial policy. Thus, supposedly, technology policy is a comprehensive scholarly concern, but how comprehensive in practice? Both Branscomb and Edquist frame technology policy as primarily an economic issue. Branscomb (ibid.) states that: "Technologies are created for economic reasons and the investments they call for must be economically justified". Edquist (p. 70) claims that: "The most important goal of (civilian) technology policy is in practice increased productivity growth and competitiveness". This suggests a limited and fairly instrumental interpretation of technology policy as mainly science and/or innovation policy to serve economic interests. A twist on such an interpretation is found in Mowery et al.
(2010). If we turn to STS, we find a number of studies that are relevant to the understanding of technology policy, like work on standards (Bowker and Star 1999), genetics (Wright 1994, Jasanoff 2005) or gender (Sørensen, Faulkner and Rommes 2011, Wajcman 2004). Arguably, the co-production approach of STS (Jasanoff 2004) could be useful, for example by considering technology policy as a co-production of technology and policy or of development and deployment. However, the concept of technology policy is usually not part of these scholarly contributions. Some efforts have been made to provide a more policy-oriented version of STS (Sørensen & Williams 2002, Hommels et al. 2007, Raven et al. 2009). One way of doing this is to extend the concept of technology policy to be more concerned with downstream issues, like use or domestication of technology (Sørensen 2002a). Sørensen (2002b) suggests that studies of technology policy should have four main concerns to transcend the dominant economic framing and focus on R&D: (1) support for innovation, (2) the provision of infrastructure, (3) regulation, and (4) public engagement. This paper will use the latter effort as a stepping stone to develop a framework for conceptualising and analysing technology policy. In doing so, there is a need to be reflexive with respect to the relationship between technology policy as an analytic and as a normative concept. Since we find relatively few instances where policy-making efforts are characterised by the practitioners as technology policy, relevant efforts have to be re-assembled (Latour 2005). Scholarly contributions have to be treated in the same manner.
Using 'technology policy' as a generic term for issues of governance with respect to technology and technological development is meant to emphasise the need to study such governance as a set of possibly interrelated activities. This is intended to provide analytic benefit, but it is also a normative stance in the sense of an implied critique of policy-making efforts that appear to be split up or are rendered invisible. As suggested above, technology policy issues related to research and innovation have been fairly thoroughly researched. This is above all true with respect to the literature on innovation systems (Archibugi and Lundvall 2001) but also through the concept of the triple helix (Etzkowitz 2008). To go beyond innovation-centred perspectives, this paper starts by moving downstream to consider what Mowery et al. call deployment issues, the rate of adoption of given technologies. I argue that from an STS perspective deployment is closely related to the processes of socialisation and domestication of technology, and thus to sense-making and use. However, as we shall see, this is not a one-way trip, but rather involves complex navigation upstream, downstream and sideways. The next section introduces some relevant theoretical perspectives that may help in the navigation. Then, I turn to some empirical examples mostly related to sustainable energy to demonstrate potential achievements from drawing on an inclusive concept of technology policy.

Technology in use: deployed or domesticated?
It is a truism that demand plays a crucial role in successful innovations. This is considered to be related to understanding user needs as well as user experiences and the related processes of learning (Andersen and Lundvall 1988). Kline and Rosenberg (1986) introduced the chain-linked model to transcend linear understandings of innovation by emphasising how knowledge and information moved through a variety of chains involving a diversity of actors. Lundvall (1988) proposed an interactive learning model, where innovations were shaped by producer-user interactions. The more recent national system of innovation literature integrates these and supplementary perspectives (Lundvall et al. 2002, Fagerberg and Sapprasert 2011), as does triple helix-oriented research (Etzkowitz 2008). Still, the innovating company or organisation is at the centre of attention, in some ways similar to classic actor-network theory's understanding of translation as being performed by entrepreneurial scientists or engineers (Latour 1987). The concept of deployment transcends this focus through the acknowledgement of the need for policy actions to bring new technologies into use. Deployment policies are concerned with changing the premises of demand as well as users' engagement with given technologies, rather than with analysing consumption and use. Such policies may of course affect innovation efforts, for example by leading to increasing investments in innovation (Hoppmann et al. 2013), but that is not the prime target. The main aim is getting new or existing but underutilised technologies in place so that they can contribute to, for example, production of energy without emissions of CO2. This list of barriers covers a broad spectrum of technology policy issues, which makes the thinking with respect to deployment pretty comprehensive. Nevertheless, there is an ontological problem in the identification of deployment and barriers. The concern of Müller et al.
seems to be to identify and remedy features that may curb the diffusion of renewable technologies through the stages of initiation, take-off and consolidation (see for example p. 50-51). The resulting frame of interpretation largely black-boxes technology through the use of quantitative indicators. Deployment is measured by counting the number of installations, energy production, investment levels, etc. Thus, the concept becomes predominantly economic with a singular focus on market competition. The actual dynamics of the appropriation processes are overlooked, like what happens when new technologies are moved into "the real world", where the concern for demand might be extended into a concern about use. In a sense, the deployment perspective also black-boxes demand by making it into an issue only of accounting, overlooking the potentially dynamic and reinforcing effects of creative use. As already suggested, an alternative to the fairly instrumental deployment thinking is to be concerned with processes of appropriation of technology: the ways in which technologies are embedded in society, and how technologies are affected by the processes of embedding, including cycles of embedding, dis-embedding and re-embedding (Giddens 1990). This would be in line with basic tenets of STS. How may we theorise such processes of technological change, focusing on use and the ways in which a diversity of publics engage with new technologies? STS offers a host of overlapping possibilities. In the light of the focus on R&D, so prevalent in technology policy studies, an interesting proposal is to study the socialisation of scientific and technological research (Bijker and d'Andrea 2009). This could mean reframing the policy issues related to innovation and deployment as a need also to develop specific socialisation policies to provide what Mowery et al.
(2010) call R&D support programmes. Actually, the socialisation perspective goes further in its insistence that the embedding of new technologies potentially implies a very comprehensive set of tasks, distributed over many areas. Bijker and d'Andrea identify six such socialisation areas: (1) scientific practices, (2) scientific mediation, (3) scientific communication, (4) evaluation, (5) governance, and (6) innovation. Consequently, potentially, there is a manifold of agents of socialisation, which should be found in scientific institutions, NGOs, government agencies, etc. The problem is, according to Bijker and d'Andrea, that the work of socialisation is not done: "(I)n Europe, the "agents of socialisation" seem to be few; they often work in hostile environment, where resistance and hindrances limit the "systemic" impact of their action; the degree of acknowledgement that they receive from public institutions varies country by country, but overall it appears to be limited; they prevalently act in an "atomised" way, or create short and scarcely visible operation chains" (ibid, p. 22-23, emphasis in the original). Compared to the deployment perspective, with an ontology characterised by an economic framing and a focus on barriers, the socialisation approach as outlined by Bijker and d'Andrea is broader and more concerned with the potential for facilitation of societies' and social communities' appropriation of science and technology. Their concept of 'agents of socialisation' is helpful in identifying who should be expected to do the work of bringing science and technology out of scientific institutions and into use.
Of course, the idea that scientific and technological research or technology needs to be socialised is a basic STS tenet. Technologies only exist as sociotechnical entities. They are developed through reflections about achievements and use, including commercial intentions. As Latour (2005) reminds us, there is a lot of work by a diversity of actors involved in the translation efforts through which new embedded technologies emerge. Thus, actually, much socialisation is and has to be done. However, this work as well as the technologies involved are often rendered invisible and forgotten (Winner 1977). This means that the efforts of the agents of socialisation are easily overlooked. Bijker and d'Andrea are correct in their call for more and improved socialisation efforts. Still, if we are aware of the lack of visibility of the efforts of agents of socialisation, we may be able to observe more of it. This is important when we are concerned about the potential scope of technology policy. We should also recognise that non-human actors too may be important agents of socialisation. While we may discuss how we should understand the ways in which humans and non-humans interact (Pinch 2012), we should not overlook the importance of infrastructure in shaping and facilitating the shaping as well as embedding of new technologies, including how new technologies are interpreted (Bowker and Star 1999). For example, fuel-cell cars will not be socialised without a network of hydrogen filling stations, which facilitate the practice of refuelling hydrogen as well as signifying that fuel-cell cars are a viable alternative to petrol-powered cars. Equally important are regulations, which set standards and provide risk management that are vital socialisation efforts. Thus, we need to multiply the number of socialisation areas that Bijker and d'Andrea identify.
To summarise, the paper has argued for an extended conceptualisation of technology policy to include concerns about socialisation, together with innovation and deployment, as well as the interaction of these sets of activities. However, we need to explore the processes through which new technologies are embedded in society; how they may be enacted and made sense of by users. This concern points towards domestication theory as an approach to study such enactments and sense-making (Sørensen 2006). Domestication takes place in many areas and involves a multitude of actors. This means that the study of domestication provides measures from which we may assess innovation, deployment and socialisation. With respect to innovation, the understanding of user needs is vital. Technologies have to be domesticated to be considered deployed, and domestication failures may indicate socialisation flaws. However, these relationships may be contested, competitive and filled with conflict. Technology policy is a field of articulation of interests and thus of controversy. Thus, it has to be approached with this in mind. There may be good reasons that some technologies do not become deployed, socialised and/or domesticated, and anti-deployment and anti-socialisation strategies may be acceptable, even fruitful, for a host of reasons.
In a concept of technology policy concerned with innovation, deployment, socialisation and domestication, it is important to note that in relation to new technologies the public may play a complex of roles, as consumers, citizens and users. Technology policy may address these roles more or less explicitly, depending on scope and aims. This may be contrasted to electric vehicles, where current technology policy in Norway includes comprehensive socialisation efforts to make such vehicles attractive as well as to facilitate an interpretation of them as environmentally and climate friendly (Ryghaug and Toftaker forthcoming). In this way, the population is addressed as citizens (to understand and accept the special conditions provided to electric cars), as consumers (making electric cars attractive) and as users (providing meaning to as well as some suggestions about the use of electric cars). An interesting example of a non-embedded technology in Norway

In the next section, the aim is to demonstrate the potential of the proposed framework to analyse technology policy activities, with an emphasis on socialisation and domestication. We shall also see that such policy-making is complex, multi-sited and multi-actor.
Such observations are not new to policy analysis, but they are still important to make. The choice of these examples is partly a pragmatic one; I have been involved in studying them. However, as hopefully will become clear, they display interesting diversities with respect to scope, aims, achievements, timing and policy efforts. The analysis is synthetic and draws on printed sources to explore theoretical considerations. I do not present full-blown empirical accounts but try to demonstrate how the extended concept of technology policy brings forward observations that are more difficult to make with a singular economic focus on innovation and deployment. Thus, as a consequence, the paper highlights socialisation efforts. This is done by identifying the areas, actors and strategies involved in socialisation, as well as considering domestication activities and their effects. First, we turn to a fairly long-term historical example: that of the motorcar in Norway.

The embedding of the car in Norway 1

The appropriation of the car in Norway during the 19th and 20th centuries provides many lessons about the role of socialisation in technology policy, as well as regarding deployment and domestication. It also points to the possible problem of conflicting policy aims. Initially, the story of the introduction of the car in Norway was very much about the development of regulation and the provision of suitable infrastructure, neither of which really predated the automobile. The first legal term for a car was 'a rail-free vehicle', contrasting it to the railway. Thus, the making of non-human socialisation actors was critical and the main element of technology policy with respect to cars. Initial regulations meant cars could only be considered a hobby for the wealthy, since the expensive vehicles were slow and not very comfortable, while the rules for driving them were very strict. This changed, and a main socialisation actor was the Directorate of Public Roads, whose managing director Hans
Hagerup Krag in the late 19th century publicly demonstrated car driving and sent employees abroad to learn about making roads suitable for automobiles. Regulatory efforts developed to become more accommodating, including the making of traffic rules as well as a system for certification of drivers and vehicles. In combination with advertising efforts and newspaper coverage - done by socialisation actors outside of policy-making circles - this resulted in extensive sense-making with respect to cars as well as the development of driver practices. Infrastructure was built to include petrol stations, car repair shops, car dealers, etc. The result is that cars became a pervasive feature of modern Norwegian society, with a comprehensive infrastructure as well as a diversity of car-related practices of individuals and communities. Policy-making activities related to providing regulations and infrastructure clearly were effective socialisation measures. This resulted in a widespread domestication of cars in Norway. For example, when Hans Hagerup Krag was head of the Directorate of Public Roads, he could be seen as developing a policy to deploy cars in Norway. This effort was made above all by being a socialisation actor, which included removing the barrier of unsuitable roads by improving transport infrastructure. On the other hand, politicians were not too keen on a speedy deployment. Norway began early to tax cars and car use relatively heavily. This was legitimised by labelling the car a luxury: a relatively expensive and unnecessary artefact. Since Norway was (and is) without its own car industry, cars are imported and, from an economic point of view, they are a negative item on the trade balance. Such considerations led to the introduction of import quotas on cars from 1945 to 1960. During this period, those who wanted to buy a private car had to apply for an import licence, and such licences were granted on the basis of assumed needs. This favoured people who could argue that
they needed a car to facilitate their professional activities, like doctors, shop-owners and craftsmen. Overall, the labelling of cars as luxury items represented a technology policy that at least partly employed an anti-socialisation strategy. Thus, technology policy with respect to cars could be seen as ambivalent: a mix of deployment and impediment efforts. Such ambivalence may be more common than most of the literature about technology policy suggests. Further, technology policy with respect to cars was not a concerted action. Rather, it was distributed, involving a multitude of actors with a diverse set of interests, objectives and instruments. Deployment was important to some, but most actors were socialisation agents contributing to the adaptation of cars and related technologies to Norwegian society - some policy-making insiders, others outsiders. However, one cannot understand the predominant role of the car in transporting people in Norway without acknowledging car owners' domestication of their vehicles as a combination of a necessary good and an object of comfort, identity and freedom. In this sense, deployment and socialisation had a strong tailwind, despite import quotas (lifted in 1960) and high taxes.
Cars are definitively technology policy objects, but we have to be aware - as suggested above - that the technology policies that are meant to influence the deployment and use of cars may not be confluent. Some measures, like building better roads, may stimulate ownership and use of cars. Other measures, like taxes or road pricing, may work as anti-socialisation strategies. The lack of confluence may also be due to different interpretations of the public, like environmentally concerned citizens versus impatient consumers. Moreover, considering the historical process of appropriation of cars in Norway, it should be clear that the massive deployment was not mainly a policy outcome. It is easier to see socialisation agents - inside and outside of policy-making - that facilitated sense-making, but arguably, Norwegians were easily persuaded to become car owners and drivers. In this sense, the outcome of the Norwegian domestication of cars has shaped technology policy with respect to transportation, most obviously so by motivating anti-socialisation strategies.

1 This section is based on Sørensen (1991) and Østby (1995).

Wind power development - in headwind?
Like the car, the deployment of wind power in Norway is basically about imported technology. Technological innovation has been a marginal and backstage issue. Moreover, deployment has been slow, mainly because of a general lack of investment in the production of electricity. Compared to hydro power, wind power has always been considered too costly, and technology policy with respect to wind power has mainly been an issue of how, and how much, to subsidise. In 2012, Norway joined Sweden in establishing a system of so-called green certificates to stimulate investments in renewable electricity through subsidies. While this deployment effort seems particularly beneficial to hydro power, it has also spurred increased willingness to invest in wind power. As of 2012, there were only 315 wind turbines in Norway, with a capacity of 704 MW. The capacity is expected to reach between 3,000 and 3,500 MW in 2020. 2 With respect to socialisation efforts, the situation is more complex. Existing regulations provide a licensing system that calls for developers of wind power to inform and engage the local public, while the power grid infrastructure has imposed limitations with respect to construction (Gjerald 2012). Gjerald shows that industrial actors working with wind power see the licensing system as bothersome because it is time-consuming, but they also acknowledge the usefulness of the system exactly because it acts as socialisation machinery. Two public institutions are part of the system as socialisation and deployment actors: the Norwegian Water Resources and Energy Administration (NVE) and the energy transformation directorate Enova. Enova oversees funding support while NVE grants licences.
For a long time, news media together with environmental organisations were the most important socialisation agents with respect to the interpretation of wind power. In the 1980s, wind power was framed positively as an environmentally friendly technology, but this changed during the 1990s. Increasingly, the framing of wind power became critical, with an emphasis on wind turbines being in conflict with the conservation of nature: as noisy, ugly and dangerous to birds (Bye and Solli 2007, Solli 2010). Some scientists have tried to counter these views, and according to surveys, the general public is quite positive toward wind power (Karlstrøm 2012). Moreover, most of the constructed wind power parks have met with little local resistance. Actually, local communities may want such parks because of benefits in terms of income, employment and improved roads. To some extent, this is the result of local governments acting as socialisation agents (Rygg 2012). It is interesting and important to note that the Plan and Building Act - a legal instrument that regulates all kinds of major construction work in Norway - actually works as a piece of important socialisation machinery for wind power technology, and for many other technologies as well. This shows how technology policy to some extent has been automated in a way that has little visibility. The lack of concern for grid capacity, which has been and still is a bottleneck for wind power, is another indication that policy-makers may have thought financial measures, including R&D investments, to be sufficient to achieve deployment of wind power. The existence of standard institutional procedures like the Plan and Building Act may cloud the issue of what technology policy should accomplish. The situation with offshore wind, a priority area in Norwegian energy research, reflects a similarly narrow technology policy focus.
Policy-makers have granted funding for R&D, which is, so to speak, the end of the story. The involved R&D institutions, together with their industrial partners, have been left with the task of innovating and commercialising offshore wind technology. There are no policy efforts to support any kind of training ground, like a home market for offshore wind electricity. While industry is complaining about a lack of government support (Hansen and Steen forthcoming), the involved scientists appear reluctant to take on any kind of responsibility to socialise the technology besides talking to their industry partners (Heidenreich forthcoming). Presently, there are no visible public deployment efforts, and socialisation initiatives are meagre. There are no concrete plans to build offshore wind parks in Norway either.

A hydrogen road to nowhere? 3

The HyNor project was established in 2003 as an effort to construct a network of filling stations for hydrogen that would provide infrastructure for fuel-cell vehicles to drive the 343 miles between Oslo and Stavanger along the south coast of Norway. The idea underlying the project was to provide a basis for a realistic experiment with the use of hydrogen for transport by building an early-stage infrastructure for the provision of hydrogen, which could later become part of something more permanent. The project also included local experiments with the production of hydrogen, trying out several technological options like making hydrogen from waste gas or reforming natural gas.
The initiative to make the Hydrogen Road came from Norsk Hydro, a company that had large quantities of hydrogen available. It gained support from other interested parties, like bus companies, and obtained funding from the Research Council of Norway and the Ministry of Transport. The project was presented as a user-directed, market-close innovation project. The main innovations foreseen were linked to the set-up of a filling station network and related technologies. As a technology policy initiative, the HyNor project has increasingly been presented as a deployment effort with respect to hydrogen vehicles. HyNor is presently applying for funding "for a new permanent fleet of hydrogen cars in Norway, which through the project will identify remaining barriers for a larger introduction of hydrogen cars". 4 Support for such initiatives is sought through Transnova, a public technology policy institution set up to provide grants and advice for pilot and demonstration projects to encourage new sustainable mobility solutions. From my technology policy perspective, it seems more pertinent to interpret HyNor as a socialisation effort than as an innovation or deployment initiative. The project has not been linked to any short- or medium-term plan to introduce fuel-cell cars in Norway on a commercial basis. Of course, HyNor could be said to have contributed to innovations regarding the supply, storage and filling of hydrogen for vehicles. However, the main issue has been the construction of a sociotechnical imaginary (Marcus 1995) of hydrogen for transport, which includes an image of hydrogen vehicles as clean, safe and with a long range. The extent to which this imaginary, this socialisation effort, has been picked up by the public is unclear. Of course, one should not dismiss the technological learning achieved through HyNor; surely, useful experiences have been reaped. Nevertheless, in the long run, the socialisation gains will certainly prove more important.
CCS - the Norwegian "moon landing" project

The idea that climate change mitigation could be achieved through technologies for capturing, transporting and storing CO2 has played a vital role in Norwegian politics in creating broad consensus around energy and climate policy (Tjernshaugen and Langhelle 2009). When Prime Minister Jens Stoltenberg, in his televised annual New Year speech in 2007, announced CCS as Norway's "moon landing" project, he launched a large innovation initiative while also performing an important socialisation effort. Still, the technology policy with respect to developing CCS for natural gas power plants has been carried through mainly as innovation policy, through large R&D investments with little public visibility. To be fair, the underlying sociotechnical imaginary - gas power plants without CO2 emissions and thus a climate-friendly use of an abundant source of fossil energy - has also been communicated, but mainly by the ENGOs Bellona and Zero. These ENGOs, together with news media, have been the main socialisation agents. News media coverage has mainly been a mix of recirculating the sociotechnical imaginary of CCS as a strategy for climate-friendly fossil energy and complaints that the innovation and deployment efforts have been half-hearted. There have been nearly no critical voices with respect to whether CCS technology actually can deliver on its promises (Klimek and Sørensen forthcoming). On the other hand, the scientific expertise working with CCS technology is not particularly eager to engage in socialisation efforts, claiming that this is a job for somebody else (Klimek forthcoming). There is little doubt that the international situation with respect to CCS is quite challenging (Scott et al. 2013) and that a supportive technology policy needs to be comprehensive (Markusson et al.
2012). However, Norwegian CCS technology policy is fairly narrowly focused on innovation, with little visible reflection among policy actors with respect to the socialisation of CCS, including the challenges of providing infrastructure and a regulatory framework. It seems that CCS technology is believed to mitigate climate change in a way that, to the public, is 'out of sight, out of mind'. Thus, socialisation efforts are left to news media and ENGOs. This suggests that current CCS technology policy is not geared to embed CCS in Norway, but rather to innovate for use in other countries.

Conclusion: Technology policy as an embedding effort

Innovation policy may be described as a broad set of activities (Borrás and Edquist 2013); deployment policies similarly (IEA 2011). Still, as I have argued in this paper, a focus on innovation and deployment is too narrow a point of departure for analysing, as well as making, effective technology policies. When innovation and deployment are the main concerns, this facilitates an economic, R&D-centred approach that overlooks the challenges emerging from the need to embed new technologies in the relevant social practices. Thus, we need to extend the focus by including the concepts of socialisation and domestication of technology. Innovation, deployment, socialisation and domestication represent overlapping areas of concern, but also distinct issues that need to be considered separately. 'Innovation' is about the development of technology (or other goods) that has economic and/or social significance. 'Deployment' concerns putting innovations to use. 'Socialisation' points to the activities needed to embed new technology in society, as well as to processes affecting the embedding (Skjølsvold 2012). 'Domestication' focuses on the enactment of technologies in specific contexts, with a view to the development of practices and sense-making.
These interrelated concepts are important for identifying and understanding the policy actions that are taken to make sociotechnical change happen (or not). They are also helpful as a basis from which to criticise missing features of a given technology policy, like the lack of emphasis on socialisation identified in the case of CCS above. For example, an effective technology policy to reduce the use of petrol-fuelled cars should be based on an understanding of the ways in which such cars have been domesticated in Norway. It may include support of innovations to reduce emissions, develop new fuels or new ways of conducting transport, as well as efforts to deploy more environmentally friendly practices. However, in the end, socialisation efforts are needed as an ongoing concern to help pave the way for technologies that may mitigate climate change and reduce pollution - in parallel with anti-socialisation measures directed at technologies that should be phased out. This is needed to foster demand for the new technologies, but also to actually change the currently well-embedded practices as well as the culture of transportation in the context of everyday life. Thus, technology policy should address innovation, deployment and socialisation by supporting, mobilising and limiting human as well as non-human actors. Further, technology policy should be informed by concerns as well as knowledge about the domestication of the technology or set of technologies that are to be affected.
Thus, domestication has a different role than the three other concepts. Understanding domestication - the activities undertaken by consumers, citizens and users to finally embed the technology in question - is important for being able to select and shape measures that effectively stimulate innovation, deployment and socialisation towards intended outcomes. In particular, socialisation efforts should be developed from insights into the performance of domestication, or at least in dialogue with such performances. Bijker and d'Andrea (2009) rightly observe that socialisation in most cases is given insufficient attention or even neglected. To some extent, this may be due to the assumption that there are systems already in place that cater to socialisation, so that policy-makers may remain unconcerned about such issues (cf. the wind power example). On the other hand, such systems of socialisation also need to be acknowledged when we analyse technology policy practices. Analysing these systems may also remind us of their existence and allow assessments of their effectiveness. For example, there is a well-articulated expectation that scientists should engage in explaining their research to the public, but the systems set up to achieve this are not working very well (cf. the CCS example). The neglect of socialisation challenges is probably also related to policy-makers' way of understanding demand as primarily an economic issue of consumption, downplaying the fact that consumers are also citizens and users. As citizens, the public may want to be involved in the innovation and deployment of new technologies, at least to feel informed to the extent that they trust innovation and deployment actors. As users, people want to understand and make sense of the practices they may develop from new technologies. Socialisation efforts should cater to both needs.
The four examples discussed above may be analysed to show - unsurprisingly - that technology policy actions are multi-sited, multi-actor and multi-purpose. This complexity has not been dealt with in this paper, because the main concern has been to argue the need to include more sites and actors - in particular related to the inclusion of socialisation concerns. In order to deal with technology policy-making processes, further development is necessary to provide a better understanding of the role of non-human actors. One avenue to explore, given the emphasis on socialisation and the need to think about domestication, would be a concept of reflexive policy-making regarding technology. This could draw upon suggestions found in Beck (2006) and Latour (2007) to study policy-makers' processes of learning about and interpreting the embedding of new technologies. Thus, there is considerable need for scholarly work to explore and systematise the analysis of technology policy as theory as well as practice. Hopefully, this might benefit the doing of technology policy. When technology is seen as sublime with respect to the society of the future, one may hope that the embedding happens in ways that increase the probability that the assumed sublime qualities are realised. Mowery et al. propose developing a technology policy approach aimed at managing the threat of global climate change. They
argue the need for a large-scale, concerted effort to develop and deploy energy technologies that can be tools for climate change mitigation, and they criticise suggestions that such efforts can be modelled after the Manhattan Project or the Apollo programme. They call their alternative an R&D support programme, making R&D the core of the effort of climate change mitigation. The problem of deploying technologies for sustainable energy is mainly conceived as a challenge for governments to stimulate the demand for such technologies. Again, we observe the dominance of an economic framing, even if Mowery et al.'s main concern is global warming. Müller et al. (2011) perceive this aim above all as a need to remove barriers to deployment. They classify such barriers in the following way (pp. 32-33):

1) Techno-economic barriers related to relative costs compared to competing technologies.
2) Non-economic barriers related to factors preventing deployment or increasing costs:
   o Regulatory and policy uncertainties
   o Institutional and administrative barriers
   o Market barriers, for example inconsistent pricing structures
   o Financial barriers
   o Infrastructure barriers
   o Lack of awareness and skilled personnel
   o Public acceptance and environmental barriers
results in practices with regard to use, provides meaning to the technology in question, and depends on users managing cognitive challenges related to learning and understanding the technology. Some technologies are domesticated swiftly across a broad spectrum of the population, while other technologies become domesticated slowly and/or by small communities, and some technologies are not domesticated at all. Arguably, socialisation efforts should help technologies, or scientific knowledge for that matter, become domesticated. Domestication of a given technology means that it has been deployed, but the observation that the technology has been deployed tells us nothing about sense-making and the development of practices. To get such knowledge, we need to study the actual process of domestication. If we are to improve our understanding of technology policy, we need to study how the different roles are catered for - if at all. Let us briefly consider some examples. Today, nearly everyone in Norway is familiar with SMS (short message service), which is an integral part of mobile telephones and was developed as part of the telecommunication standard called GSM (Global System for Mobile Communications). The domestication of SMS happened incredibly swiftly through young mobile phone users, who discovered this application as a cheap, quick and handy way of communicating with each other. The emergent practices, including shorthand and symbols, were produced by the collective of users in a distributed fashion where nobody may credibly claim intellectual property rights. This collective of users socialised SMS without any policy effort outside the standard regulation of mobile telecommunication. In this case, technology policy with respect to mobile telecommunication did not really address any of the three potential roles of the public.
Another interesting example of a non-embedded technology is nuclear power. Norway got its first atomic reactor in 1951, as the fifth country in the world. The reactor was primarily intended for research and experiments, and the director of the Institute for Atomic Energy (today, the Institute for Energy Technology), Gunnar Randers, made a very substantial effort to socialise atomic energy (Randers 1975). However, Norway and Norwegians never domesticated nuclear power, and the Parliament eventually decided against the construction of nuclear power in Norway. Relatively speaking, no other technology has received as much funding as atomic energy in Norway, but as a technology policy object it became a failure, because neither the practices involved in producing nuclear power nor the meaning attributed to the technology was considered attractive. The anti-socialisation efforts of the anti-nuclear movement (the public enacting the role of citizens) stopped innovation and deployment and thus made the roles of consumers and users unavailable. These examples also nicely illustrate some consequences of domestication with regard to technology policy. In the case of SMS, a quick, successful domestication based on a distributed, collective, user-driven socialisation effort made any form of policy intervention superfluous. With electric cars, policy-makers saw a need for facilitating actions and launched an active technology policy for deployment and use, leaning on explicit socialisation efforts. Nuclear power exemplifies the potential role of conflict in rendering technology policy ineffective. The comprehensive socialisation efforts, in particular by the research community throughout the 1950s and 1960s, failed when confronted with strong anti-socialisation actions. Thus, nuclear power did not lend itself to be domesticated by the general public, or even by energy companies. So far, this paper has provided an argument for analysing technology policy in a comprehensive manner by going beyond innovation and adding the issues of deployment,
socialisation - including infrastructure and regulation - and domestication. Deployment should be considered because, often, policy efforts are made to get technologies employed. Socialisation is similarly important as a set of actors and activities that may or may not be mobilised in order to embed new technologies in society, while the analysis of domestication throws light on the effectiveness of policy in achieving employment and embedding. Above all, socialisation efforts should be thought of as means to facilitate domestication. In this section, I analyse four examples of the development of technology in Norway from a technology policy perspective: (1) the appropriation of the car in the 19th and 20th centuries, (2) wind power development, (3) the so-called Hydrogen Road as an experiment in supplying hydrogen for transport, and (4) the development of carbon capture and storage (CCS) technology.
Predicting Synergism of Cancer Drug Combinations Using NCI-ALMANAC Data

Drug combinations are of great interest for cancer treatment. Unfortunately, the discovery of synergistic combinations by purely experimental means is only feasible on small sets of drugs. In silico modeling methods can substantially widen this search by providing tools able to predict which of all possible combinations in a large compound library are synergistic. Here we investigate to which extent drug combination synergy can be predicted by exploiting the largest available dataset to date (NCI-ALMANAC, with over 290,000 synergy determinations). Each cell line is modeled using primarily two machine learning techniques, Random Forest (RF) and Extreme Gradient Boosting (XGBoost), on the datasets provided by NCI-ALMANAC. This large-scale predictive modeling study comprises more than 5,000 pair-wise drug combinations, 60 cell lines, 4 types of models, and 5 types of chemical features. The application of a powerful, yet uncommonly used, RF-specific technique for reliability prediction is also investigated. The evaluation of these models shows that it is possible to predict the synergy of unseen drug combinations with high accuracy (Pearson correlations between 0.43 and 0.86 depending on the considered cell line, with XGBoost providing slightly better predictions than RF). We have also found that restricting to the most reliable synergy predictions results in at least 2-fold error decrease with respect to employing the best learning algorithm without any reliability estimation. Alkylating agents, tyrosine kinase inhibitors and topoisomerase inhibitors are the drugs whose synergy with other partner drugs are better predicted by the models. Despite its leading size, NCI-ALMANAC comprises an extremely small part of all conceivable combinations.
Given their accuracy and reliability estimation, the developed models should drastically reduce the number of required in vitro tests by predicting in silico which of the considered combinations are likely to be synergistic.

INTRODUCTION

Drug combinations are a well-established form of cancer treatment (Bayat Mokhtari et al., 2017). Administering more than one drug can provide many benefits: higher efficacy, lower toxicity, and at least delayed onset of acquired drug resistance (Sugahara et al., 2010; Holohan et al., 2013; Crystal et al., 2014). Serendipitous discovery in the clinic has been a traditional source of effective drug combinations (Zoli et al., 2001; Kurtz et al., 2015). Yet systematic large-scale efforts to identify them have only recently been pursued, with a growing number of preclinical experimental efforts to identify synergistic combinations (Zoli et al., 2001; Budman et al., 2012; Lieu et al., 2013; Kashif et al., 2015; Yu et al., 2015; Kischkel et al., 2017) being reported in the literature. The sheer number of available and possible drug-like molecules (Polishchuk et al., 2013), and an exponential number of their combinations, make the process of finding new therapeutic combinations by purely experimental means highly inefficient. An efficient way of discovering molecules with previously unknown activity on a given target is using in silico prediction methods. Quantitative Structure-Activity Relationship (QSAR) models establish a mathematical relationship between the chemical structure of a molecule, encoded as a set of structural and/or physico-chemical features (descriptors), and its biological activity on a target. Such methods have been successfully used in a wide variety of pharmacology and drug design projects (Cherkasov et al., 2014), including cancer research (Chen et al., 2007; Mullen et al., 2011; Ali and Aittokallio, 2018).
QSAR models were traditionally built using simple linear models (Sabet et al., 2010; Pick et al., 2011; Speck-Planche et al., 2011) to predict the activity of individual molecules against a molecular target. In the last 15 years, non-linear machine learning methods, such as Neural Networks (NN) (González-Díaz et al., 2007), Support Vector Machines (SVM) (Doucet et al., 2007) or Random Forest (RF) (Singh et al., 2015), have also been employed to build QSAR models. More recently, QSAR modeling has also achieved accurate prediction of compound activity on non-molecular targets such as cancer cell lines (Kumar et al., 2014). To extend QSAR modeling beyond individual molecules, the sets of features from each molecule in the combination must be integrated. Various ways exist to encode two or more molecules as a feature vector, e.g., SIRMS descriptors (Kuz'min et al., 2008) for properties of combinations or the CGR approach for chemical reactions (de Luca et al., 2012). Rigorous validation strategies for the resulting models have been developed too (Muratov et al., 2012). The most common representation of a drug pair is, however, the concatenation of the features of both molecules (Bulusu et al., 2016). On the other hand, modeling drug combinations requires the quantification of their synergy. Several metrics exist to quantify synergy (Foucquier and Guedj, 2015), e.g., Bliss independence (Bliss, 1939), Loewe additivity (Chou and Talalay, 1984), the highest single agent approach (Greco et al., 1995) or the Chou-Talalay method (Chou, 2010). These are implemented in various commercial and publicly available software kits for the analysis of combination data, e.g., Combenefit (Di Veroli et al., 2016), CompuSyn (http://www.combosyn.com) or CalcuSyn (http://www.biosoft.com/w/calcusyn.htm). One major roadblock in drug synergy modeling has been the lack of homogeneous data (i.e., datasets generated with the same assay, experimental conditions and synergy quantification).
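Among these synergy metrics, Bliss independence (later modified into NCI-ALMANAC's ComboScore) is the simplest to state. A minimal sketch, with function names of our own choosing and effects expressed as growth-inhibition fractions in [0, 1]:

```python
def bliss_expected(fa: float, fb: float) -> float:
    """Expected fractional effect of drugs A and B under Bliss
    independence, i.e., assuming independent mechanisms of action."""
    return fa + fb - fa * fb

def bliss_excess(fa: float, fb: float, fab_observed: float) -> float:
    """Observed combination effect minus the Bliss expectation:
    positive suggests synergy, negative antagonism, zero additivity."""
    return fab_observed - bliss_expected(fa, fb)

# Drug A alone inhibits 40% of growth and drug B alone 30%:
# Bliss expects 0.4 + 0.3 - 0.4 * 0.3 = 0.58 for the pair.
print(bliss_expected(0.4, 0.3))      # ~0.58
print(bliss_excess(0.4, 0.3, 0.70))  # ~0.12, i.e., synergistic
```

A combination observed to inhibit 70% of growth thus exceeds its Bliss expectation by about 0.12, which this metric reads as synergy.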
This has, however, been alleviated by the recent availability of large datasets from High-Throughput Screening (HTS) of drug combinations on cancer cell lines. For instance, Merck has released an HTS synergy dataset (O'Neil et al., 2016) covering combinations of 38 drugs and their activity against 39 cancer cell lines (more than 20,000 measured synergies). This dataset has been used to build predictive regression and classification models using multiple machine learning methods (Preuer et al., 2018). AstraZeneca carried out a screening study spanning 910 drug combinations over 85 cancer cell lines (over 11,000 measured synergy scores), which was subsequently used for a DREAM challenge (Li et al., 2018; Menden et al., 2019). Very recently, the largest publicly available cancer drug combination dataset has been provided by the US National Cancer Institute (NCI). This NCI-ALMANAC (Holbeck et al., 2017) tested over 5,000 combinations of 104 investigational and approved drugs, with synergies measured against 60 cancer cell lines, leading to more than 290,000 synergy scores (ComboScores). NCI-ALMANAC datasets have recently been modeled to predict the best growth inhibition of a given drug combination-cell line tuple (Xia et al., 2018). However, the question remains of how well ComboScores can be predicted on each NCI-60 cell line, which is important given that ComboScore-based screening has led to the identification of novel synergistic drug combinations in vivo (Holbeck et al., 2017). Here we present a large-scale study addressing this question. We build an individual model for each cell line using the popular RF algorithm (Breiman, 2001). We also build a second model per cell line using XGBoost (XGB for short) (Chen and Guestrin, 2016), a recent machine learning method that has helped win numerous Kaggle competitions as well as to generate highly predictive QSAR models (Sheridan et al., 2016).
We validate these models for commonly encountered prediction scenarios, e.g., an unseen drug combination or an unseen partner drug. We also introduce and validate reliability estimation techniques to further improve the prediction of drug combination synergy. Lastly, we assess the suitability of NCI-ALMANAC datasets for predictive modeling depending on the screening center where they were generated.

Data

NCI-ALMANAC is the largest-to-date phenotypic drug combination HTS. It contains the synergy measurements of pairwise combinations of 104 FDA-approved drugs on the 60 cancer cell lines forming the NCI-60 panel (Shoemaker, 2006). The drugs include a wide array of small organic compound families, as well as several inorganic molecules (cisplatin and related platinum-organic compounds, arsenic trioxide). A similarity clustering dendrogram (Figure 1) shows the high diversity of the drugs in NCI-ALMANAC. Indeed, only 3 clusters comprising 8 drugs are formed with a Tanimoto score threshold of 0.8 (Vinblastine with Vincristine, Sirolimus with Everolimus, and the Daunorubicin-Doxorubicin-Idarubicin-Epirubicin cluster), while the remaining 96 drugs have smaller similarity among them. NCI-ALMANAC aggregates synergy data from three screening centers: NCI's Frederick National Laboratory for Cancer Research (screening center code 1A, 11,259 synergy determinations), SRI International (FF, 146,147 determinations), and the University of Pittsburgh (FG, 136,129 determinations).

FIGURE 1 | Sketch of the workflow for drug combination modeling. Training data comes from NCI-ALMANAC, which comprises over 290,000 synergy measurements from pairs of 104 drugs tested on the 60 cell lines. Structural and physico-chemical features are calculated for each drug from its chemical structure. The similarity clustering diagram for the 104 NCI-ALMANAC drugs is on the left. Each drug is characterized by MFPC features complemented with physico-chemical features, using the Tanimoto score on these features as the similarity metric. Hierarchical agglomerative clustering was carried out (ward.D2 algorithm in the R hclust function). Closely related compounds form tight clusters (e.g., doxorubicin and its analogs, analogs of paclitaxel, etc.). By contrast, inorganic compounds such as cisplatin and arsenic trioxide appear as outliers (their highest similarity coefficients to other drugs being 0.156 and 0.125, respectively). The concatenated feature vectors of the two drugs are used to build and test predictive models with machine learning techniques. The predictive accuracies of the models are determined by multiple cross-validation experiments.

The synergy of drug pairs is measured in these screening centers against the NCI-60 panel, which includes cell lines from nine cancer types: leukemia, melanoma, non-small-cell lung, colon, central nervous system, ovarian, renal, prostate, and breast. In total, synergy is measured for 293,565 drug combination-cell line tuples, which represents a matrix completeness of 91.35%. Each center follows its own protocol, and some drugs are absent from the combination pool depending on the screening center. Since there is no overlap in drug combination-cell line tuples between the three centers, it is not possible to estimate inter-center batch effects, and therefore we must use the data from the different screening centers separately. The combination benefit is quantified in NCI-ALMANAC by the so-called ComboScore (a modified version of the Bliss independence model). From the entire dose-response matrix of the considered drug combination and cell line tuple, the gain (or loss) of effect achieved by the combination over the theoretically expected value if the effect were additive is calculated. Positive values of ComboScore indicate a synergistic effect of the combination, whereas negative values correspond to an antagonistic effect (purely additive combinations obtain a zero ComboScore).
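The Tanimoto-threshold grouping behind Figure 1's dendrogram can be illustrated with a dependency-free toy sketch. The bit sets and the single-pass grouping below are ours and simpler than the ward.D2 hierarchical clustering the paper runs in R, but they show why close analogs cluster at a 0.8 threshold while inorganic outliers such as cisplatin stay apart:

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity of two fingerprint bit sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

# Toy bit sets standing in for MFPC fingerprints (hypothetical values;
# in the paper these are computed with RDKit).
fps = {
    "vinblastine": {1, 2, 3, 4, 5, 6},
    "vincristine": {1, 2, 3, 4, 5, 6, 7},  # close analog: 6/7 ~ 0.86
    "cisplatin":   {20, 21},               # inorganic outlier
}

def cluster(fps: dict, threshold: float = 0.8) -> list:
    """Greedy single-linkage grouping: a drug joins a cluster when its
    Tanimoto similarity to any member reaches the threshold."""
    clusters = []
    for name, fp in fps.items():
        for c in clusters:
            if any(tanimoto(fp, fps[m]) >= threshold for m in c):
                c.add(name)
                break
        else:
            clusters.append({name})
    return clusters

# The two vinca alkaloids share a cluster; cisplatin stays alone.
print(cluster(fps))
```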
Further description of the NCI-ALMANAC data is available in the Supplementary Information.

Features

For use in machine learning, the structures of compounds must be encoded as vectors of numerical features, known in chemoinformatics as molecular descriptors (Todeschini and Consonni, 2000). Several types of chemical structure features have been considered in this work: (1) Morgan FingerPrints (MFP) are topological descriptors describing the connectivity of the molecular structure, which take values 0 or 1 depending on whether the pattern is present in the molecule or not (Rogers and Hahn, 2010). They have been calculated with the RDKit library (Landrum, 2015) using the following parameters: length 256 bits, radius 2. (2) Morgan FingerPrint Counts (MFPC) are a non-binary version of MFP that take integer values equal to the number of times the pattern is detected in the molecule (256 features per drug, also calculated with RDKit). (3) MACCS keys encode the presence or absence of 166 predetermined substructural fragments as binary vectors (calculated with RDKit). (4) ISIDA fragments encode the structure as a vector of numbers of occurrences of substructural fragments of a given nature and topology in the molecule (Varnek et al., 2005), calculated with ISIDA/Fragmentor (Ruggiu et al., 2015). Only one type of fragment is considered here: sequences of atoms and bonds of length 2 to 6 (1,325 features per drug in total). (5) SIRMS fragments are the numbers of occurrences of 4-atom fragments of varying topology in a molecule, including bonded and non-bonded atoms (Kuz'min et al., 2008). Calculated with the SiRMS python library (github.com/DrrDom/sirms), they led to 1,454 features per drug. In addition to these sets, 7 physico-chemical features are calculated by RDKit: total polar surface area (TPSA), molecular weight, logP, number of aliphatic and aromatic rings, H-bond donors and acceptors.

FIGURE 2 | Performance gain across cell lines for each introduced modeling choice during the exploratory analysis of FG data. Each boxplot represents the distribution of the cell line models' test set performances (R_p) at any given step. Analysis steps are carried out sequentially: I, RF with 1,000 trees with all n features tried to split a node, 80% training set, 20% test set, MACCS (Molecular ACCess System) keys as features; II, MFPC (Morgan fingerprint counts) used as features instead; III, physico-chemical features added for each drug; IV, training set rows duplicated with the reverse order of drugs (data augmentation); V, 90% training set and 10% test set used instead of the initial 80/20 partition; VI, RF with 250 trees with n/3 features tried to split a node; VII, XGB models with recommended settings; VIII, tuned XGB models. Note that I-V employ RF with the same values for its hyperparameters (RF is tuned in VI) and V-VIII use the same training and test sets. The modeling choices introducing the largest improvements are the choice of molecular features and the data augmentation strategies.

Machine Learning (ML) Workflow

Models are built using two ML algorithms: Random Forest (RF) (Svetnik et al., 2003) and Extreme Gradient Boosting (XGBoost; XGB for short) (Sheridan et al., 2016). The entire modeling workflow is sketched in Figure 1. Further details about how the ML models were built are available in the Supplementary Information.
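The pair encoding and the reversed-order data augmentation described above can be sketched with stand-in feature vectors (the 263 features per drug mirror the paper's 256 MFPC counts plus 7 physico-chemical properties; the values here are random, whereas the paper computes them with RDKit):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-drug feature vectors: 256 MFPC counts plus 7
# physico-chemical properties = 263 features per drug.
n_drugs, n_feat = 5, 263
drug_feats = rng.integers(0, 4, size=(n_drugs, n_feat)).astype(float)

def pair_features(i: int, j: int) -> np.ndarray:
    """A drug pair is encoded by concatenating both drugs' features."""
    return np.concatenate([drug_feats[i], drug_feats[j]])

# Training matrix for three hypothetical pairs and their ComboScores.
pairs = [(0, 1), (0, 2), (3, 4)]
scores = np.array([12.0, -3.5, 40.1])
X = np.stack([pair_features(i, j) for i, j in pairs])

# Data augmentation: duplicate every row with the drug order reversed,
# so the model sees that (A, B) and (B, A) are the same combination.
X_aug = np.vstack([X, np.stack([pair_features(j, i) for i, j in pairs])])
y_aug = np.concatenate([scores, scores])
print(X.shape, X_aug.shape)  # (3, 526) (6, 526)
```

The augmented row keeps the original ComboScore, since swapping the order of the two drugs does not change the combination being described.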
Predictive Performance Metrics

To evaluate the performance of a model, several metrics (RMSE, R², R_p, and R_s) are calculated from the observed (y_obs) and predicted (y_pred) ComboScore values. Among them, Spearman's rank-order correlation coefficient (R_s) is the Pearson correlation (R_p) of the rank-transformed values:

R_s = R_p(rank(y_obs), rank(y_pred))

We use R_p between the observed and predicted ComboScore values of a dataset not used to train the model as the primary metric of its accuracy. For a proper estimation of the generalization error, these metrics are always calculated here on a test set not used to train or select the model.

Exploratory Modeling of NCI-ALMANAC Data

First, we perform an exploratory modeling of the FG datasets in order to determine optimal settings for synergy prediction by assessing various types of features, data augmentation schemes and machine learning methods. The summary of performance improvements is shown in Figure 2. The best median R_p across cell lines for RF was obtained with 250 trees, a third of the features evaluated at each tree node, training data augmentation, and MFPC fingerprints complemented by physico-chemical properties (256 and 7 features per drug, respectively). The gain in performance with RF is substantial: the median R_p increases from 0.530 (I) to 0.634 (VI). XGB models are generated with the same features and data set partitions. Changing the machine learning algorithm from RF to XGB does not improve the median test set R_p, although both the minimum and maximum R_p are higher with XGB (boxplots VI and VII of Figure 2, respectively). After tuning the XGB hyperparameters per cell line, a small gain in overall performance is obtained: the median R_p of tuned XGB rises to 0.641 (boxplot VIII). In comparison, Y-randomization (Tropsha et al., 2003) tests using the same learning algorithm only obtained a median R_p of −0.016 (−0.024 when using RF).

FIGURE 3 | Observed vs. predicted ComboScore for all the drug combinations in the test set. This is presented for the best- and worst-performing models with both ML methods, RF and XGB (these models correspond hence to the extremes of Figure 2's boxplots VI and VIII, respectively). In the left column, the best-performing cell line models from each method; on the right, the worst-performing ones. All performance metrics are shown. Each point represents a drug combination in that test set.

Figure 3 shows the degree of accuracy achieved by each algorithm for the best and the worst predicted cell line. The cell lines with the worst predictions (OVCAR-8 for RF and SF295 for XGB) have substantially smaller variance in observed ComboScore than those with the best predictions (SK-MEL-5 for both algorithms).

Estimating the Reliability of Drug Synergy Predictions

For the prospective use of models, it is paramount to calculate not only the predicted drug combination synergies, but also how reliable these predictions are (Mathea et al., 2016). With this purpose, we have applied an RF-specific reliability prediction approach, where the degree of agreement between the diverse trees in the forest serves as a reliability score. This is quantified here as the standard deviation (SD) of the RF tree predictions (250 per drug combination and cell line) and referred to as tree_SD. tree_SD has been pointed out as one of the most powerful metrics to assess the reliability of predictions in regression problems (Mathea et al., 2016). We thus assemble test subsets with the 25% most reliable ComboScore predictions per cell line (i.e., the combinations with the 25% lowest tree_SD scores). Likewise, we assemble test subsets with the 25% least reliable predictions per cell line. Figure 4 presents the test set performances of each cell line model in three scenarios: the 25% most reliable predictions, all predictions regardless of estimated reliability, and the 25% least reliable predictions.
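The tree_SD score and the most-reliable-25% subset can be sketched with scikit-learn's `RandomForestRegressor`, whose `estimators_` attribute exposes the individual trees. The data below is synthetic, standing in for drug-pair features and ComboScores:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic regression data standing in for drug-pair features (X)
# and ComboScores (y).
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
y = 3.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=400)
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

# 250 trees with a third of the features tried per split, as in the paper.
rf = RandomForestRegressor(n_estimators=250, max_features=1 / 3,
                           random_state=0)
rf.fit(X_tr, y_tr)

# tree_SD: the spread of the individual tree predictions serves as a
# reliability score -- low spread means the trees agree.
per_tree = np.stack([tree.predict(X_te) for tree in rf.estimators_])
tree_sd = per_tree.std(axis=0)

# Keep the 25% most reliable predictions (lowest tree_SD), then compare
# the RMSE on that subset with the RMSE on the whole test set.
keep = tree_sd <= np.quantile(tree_sd, 0.25)
pred = rf.predict(X_te)
rmse_all = float(np.sqrt(np.mean((pred - y_te) ** 2)))
rmse_rel = float(np.sqrt(np.mean((pred[keep] - y_te[keep]) ** 2)))
print(rmse_all, rmse_rel)
```

On the paper's data, the low-tree_SD subset is the one that obtains the lowest RMSE in every cell line; nothing in the code enforces this, it is an empirical property of the predictions.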
The top and bottom 25% of predictions in terms of reliability obtain the lowest and highest RMSE, respectively, in every cell line, which demonstrates the accuracy and generality of tree_SD as a reliability score for drug synergy predictions. Test set RMSE varies greatly across cell lines; e.g., models built on leukemia cell lines obtain in general a higher error. This, however, comes from the higher range of ComboScores observed in these cell lines. Indeed, the larger this range, the higher the range of predicted ComboScores, which together tend to make the RMSE larger. A similar RMSE is only obtained on the K-562 leukemia cell line, which is consistent with the fact that it has the lowest ComboScore range among leukemia cell lines, similar to that of other cancer types.

FIGURE 4 | Ten percent test set RMSE of RF cell line models trained on 90% of the FG data. Gray squares represent the model's RMSE on all the test combinations (RF predictions as usual). Black triangles mark the RMSE of the 25% least reliable (highest tree_SD) combinations, whereas white inverted triangles correspond to the RMSE of the 25% most reliable (lowest tree_SD). In each cell line, the reliability score tracks test RMSE and hence it can be used to identify a priori the most accurate predictions. Each cell line name on the horizontal axis is preceded by its cancer type ID: breast (BR), central nervous system (CNS), colon (CO), non-small-cell lung (LC), leukemia (LE), melanoma (ME), ovarian (OV), prostate (PR), and renal (RE).

Reliability estimation is evaluated in terms of RMSE rather than R_p. While RMSE is not as intuitive as a correlation, correlations may be misleading when comparing the performances of models across test sets with distinct variances. Figure 5 illustrates this issue with the test performances of the HL-60 models, which benefit the most from reliability estimation. The test set with the most reliable combinations is predicted with half the RMSE of the entire test set (RMSE of 41 vs.
80) and a third of that of the least reliable combinations (RMSE of 41 vs. 117). This more accurate prediction can be observed visually too, but the other metrics (R², R_p, and R_s) do not capture this increase in accuracy due to the substantially different ComboScore variance between the compared test sets. Importantly, RF with reliability prediction provides a much larger reduction in RMSE than that introduced by XGB (Figure 5, bottom right), both with respect to RF without reliability prediction (bottom left). These results strongly suggest that, in cases where it is not necessary to test all positive predictions (here synergistic drug combinations), selecting the most reliable predictions is more effective than using the most suitable ML algorithm.

FIGURE 5 | Observed vs. predicted ComboScore plots for the HL-60 leukemia cell line test set (10% of the data). Models are built on 90% of the FG dataset corresponding to this cell line using the RF and XGB methods, both tuned. Each circle is a drug combination from the entire test set, with its shade of gray indicating one of the three scenarios (as in Figure 4). All performance metrics are displayed in each plot. The subset with the most reliable ComboScore RF predictions (top left plot) achieves half the RMSE of the entire test set (bottom right). Importantly, this is a much larger reduction in RMSE than that introduced by XGB (bottom right) with respect to RF (bottom left). Furthermore, the most reliably predicted ComboScores (top right) obtain a third of the RMSE of the least reliable predictions (top left).

Performance in Predicting Synergies With Drugs Not Included in NCI-ALMANAC

The random data splits that we have used so far may overestimate the model's performance in the case of drug combinations. This would be due to the presence of the two drugs of a combination in both training and test sets, albeit with other partners (Muratov et al., 2012). In order to assess to what extent this is the case, we also carry out Leave-One-Drug-Out (LODO) cross-validation experiments for each cell line. In LODO cross-validation, every combination containing the considered left-out drug is placed in its test set, and the model is built on the remaining combinations tested on that cell line. Thus, there are as many folds as drugs in the dataset. In this way, LODO cross-validation simulates the model's behavior when presented with a new chemical entity outside of the model's scope, as if it were not included in the dataset. Figure 6 shows the outcome of LODO cross-validation for XGB per cell line. We henceforth use XGB with the recommended values for its hyperparameters, as tuning them for each LODO cross-validation fold and cell line is prohibitive and would only provide marginal gains (see Figure 2). The LODO results show that combinations associated with 75% of the left-out drugs can be predicted with an accuracy of at least R_p = 0.3 against any cell line. This accuracy rises to at least R_p = 0.5 for 50% of the left-out drugs. The latter is not much worse than the median R_p across cell lines when using 90/10 data partitions (R_p = 0.641, as shown in Figure 2's boxplot VIII). k-fold cross-validation results are available for comparison in Supplementary Figure 3. Figure 7 shows the analysis of the LODO cross-validations in terms of RMSE. About 75% of the models demonstrate at least moderate accuracy (RMSE < 50). The exceptions are mostly leukemia cell line models, which obtain a higher RMSE due to having the highest ComboScore variances among cancer types. An important result is that using RF models restricted to the most reliable predictions allows us to reduce the prediction error further in every cell line (RMSE < 40), in full agreement with the findings from random 90/10 partitions (see Figure 4) and also outperforming the best models without reliability prediction.
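The LODO splitting scheme described above can be sketched in a few dependency-free lines (a toy five-drug panel stands in for the 104 NCI-ALMANAC drugs):

```python
from itertools import combinations

# Hypothetical drug panel; NCI-ALMANAC has 104 drugs, one LODO fold each.
drugs = ["A", "B", "C", "D", "E"]
pairs = list(combinations(drugs, 2))  # all 10 pairwise combinations

def lodo_folds(pairs):
    """Leave-One-Drug-Out: every combination containing the left-out
    drug forms the test set; the model trains on the remaining pairs."""
    for held_out in drugs:
        test = [p for p in pairs if held_out in p]
        train = [p for p in pairs if held_out not in p]
        yield held_out, train, test

for held_out, train, test in lodo_folds(pairs):
    # In the paper, a model is fit on `train` and scored on `test` here.
    assert all(held_out in p for p in test)
    assert all(held_out not in p for p in train)
print(len(pairs), len(drugs))  # 10 pairs, 5 folds (one per drug)
```

Unlike a random split, no combination in a LODO test fold shares either of its drugs' identities with the held-out drug in training, which is what makes it a fair proxy for a new chemical entity.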
Analyzing the LODO results per left-out drug instead of per cell line reveals that synergy prediction is much worse for certain left-out drugs in each cancer type. The LODO performance of each drug across cell lines is shown in Supplementary Figure 4. This figure shows that models for arsenic trioxide, highly dissimilar to the other drugs, have the lowest performance across cell lines and partner drugs (the median R_p of the models concerning this drug is −0.28). Conversely, partners of tyrosine kinase inhibitors, well-represented in these datasets (e.g., Imatinib, Nilotinib or Lapatinib), are predicted with high accuracy (e.g., models for imatinib have a median R_p = 0.82). Topoisomerase inhibitors (Teniposide and Etoposide) are also among the best-predicted left-out drugs. These in silico models could be used to anticipate how the synergies of a drug in combination with its partner drugs would vary across the NCI-60 cell lines. However, since high accuracy is only obtained for those left-out drugs well-represented in NCI-ALMANAC, such selectivity predictions should only be accurate for drugs with a chemical structure similar to those in NCI-ALMANAC.

FIGURE 6 | Although there are no large differences in how well different cancer types are predicted, left-out drugs on melanoma (ME, in red) and leukemia (LE, in green) cell lines obtain slightly higher average performance (the median R_p of the drug-out models for the corresponding cell lines is 0.554 and 0.524, respectively).

FIGURE 7 | Median RMSE in LODO cross-validation for XGB with the recommended values for its hyperparameters (gray squares) and for the RF top 25% most reliable predictions (white inverted triangles) for each cell line (grouped by cancer type). As the plot shows, combinations with one left-out drug can be predicted with at least moderate accuracy across cell lines (RMSE < 50 for XGB, RMSE < 40 for RF with reliability estimation; both being approximate thresholds).
As models predicting drug-induced cell line response have been shown to improve by integrating drug features with multi-omics cell features (Menden et al., 2013; Xia et al., 2018), we expect that predicting drug synergy across cell lines will also improve by following such a multi-task learning approach on this closely related problem.

Comparing Predictive Models Built With Data From Different Screening Centers

So far we have exclusively employed data from the FG screening center, which represents about half of the NCI-ALMANAC data. Practically all the remaining ComboScores come from the FF screening center and are also determined with a 3 × 3 grid of non-zero concentrations. Thus, we evaluate here the predictive potential of the FF datasets. We start by building RF models from FF data using the same 90/10 partitions as with FG. Surprisingly, the FF-based models obtained worse performance in every cell line (Figure 8) and thus were objectively worse at predicting ComboScores. In trying to understand this unexpected result, we started by investigating whether this was due to modeling differences, but this was not the case.

FIGURE 8 | RF model performance comparison for the FG and FF datasets. Models are built following the final setup of the exploratory analysis (i.e., a 90/10 data partition is employed for each cell line; MFPC with physico-chemical features as well as data augmentation are also used). On the left, boxplots of the cell line models' test set R_p (top row) and RMSE (bottom row) for both centers' data. On the right, the R_p (top row) and RMSE (bottom row) of models trained on the FG dataset against models trained on the FF dataset; each point shows the two model performances for one cell line. FF models obtain consistently lower performance than FG models. As the same modeling workflow was used, this strongly suggests that FF data is less predictive than FG data.
First, the FF training sets are slightly larger than the FG datasets (see Supplementary Table 1), which theoretically favors better performance on FF. Furthermore, using tuned XGB models led to essentially the same result (median R_p of 0.641 for FG vs. 0.368 for FF) as shown in Figure 8 with RF. In addition to these non-linear methods, we also used Elastic Net (EN), but the FF models were still substantially less predictive than the FG models (median R_p of 0.37 for FG vs. 0.23 for FF). When we carried out LODO cross-validations instead of 90/10 partitions, the same trend was observed (Supplementary Figures 5, 6 also show worse performance of FF-based LODO than that of the FG-based LODO in Figure 6). To shed light on this issue, we looked at the only factor that we can compare between these screening centers: the relative growth inhibition (PERCENTGROWTH) induced by a given concentration of a drug tested individually. Interestingly, by counting the different test dates, we observed that FG had on average tested a non-combined drug 3.77 times per cell line, whereas FF almost doubled this number (7.13 times per cell line). A higher number of tests is not in itself worrisome if the growth inhibition of the drug-concentration-cell line tuple is similar between dates. However, if the measurements from these tests are substantially different, this is a problem, because the set of ComboScores determined with variable measurements from the same tuple will be inconsistent as well. Consequently, synergy differences between such combinations will not only come from their intrinsic properties, but also from unrelated experimental variability. To show that the higher growth inhibition variability in FF data results in less predictive models, we analyzed five drugs (Thioguanine, Chlorambucil, Altretamine, Fluorouracil, and Melphalan) with a high number of different test dates in both centers.
We first consider these drugs on a cell line where only the FG models obtain high average accuracy in predicting synergy (NCI/ADR-RES), and subsequently on another where both the FF and FG models are on average predictive (NCI-H322M). On each cell line, each drug has a set of growth inhibition replicates per concentration and screening center (i.e., 15 sets per screening center). The performance on NCI/ADR-RES using FF data is indeed poor (R_p = 0.14 in the 90/10 partition by RF), but this cell line is much better predicted using FG data (R_p = 0.65, using the same partition and method). Fourteen of the fifteen sets have a higher standard deviation of growth inhibition with FF data (Figure 9), which is consistent with the lower accuracy in predicting synergy obtained with this dataset. Conversely, we repeated this operation with NCI-H322M, where synergy is well-predicted by RF with both FF (R_p = 0.61 in the 90/10 partition) and FG data (R_p = 0.66, on the same partition). The standard deviations from both screening centers are now similar (Figure 9).

FIGURE 9 | RF model performance comparison for the FG and FF datasets. Models are built following the final setup of the exploratory analysis (i.e., a 90/10 data partition is employed for each cell line; MFPC with physico-chemical features as well as data augmentation are also used). We analyzed five drugs (Thioguanine, Chlorambucil, Altretamine, Fluorouracil, and Melphalan) with a high number of different test dates in both centers. On the left, the results for the cell line that is worst predicted by RF with FF data (NCI/ADR-RES, with R_p = 0.14 in the 90/10 partition), which is much better predicted with FG (R_p = 0.65, using the same partition). The plot shows the standard deviation of the values of each set from FG against those from FF.

Taken together, these experiments suggest that the reason why FF data results in less predictive models is the noise introduced into the ComboScore determination by the larger variability of the growth inhibition measurements.
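The replicate-variability comparison above boils down to a per-set standard deviation, which can be sketched with pandas (the measurements below are hypothetical, not taken from NCI-ALMANAC):

```python
import pandas as pd

# Hypothetical single-agent measurements: the same drug-concentration-
# cell line tuple re-tested on several dates at two screening centers.
df = pd.DataFrame({
    "center":        ["FG"] * 3 + ["FF"] * 6,
    "drug":          ["Fluorouracil"] * 9,
    "concentration": [1e-6] * 9,
    "percentgrowth": [52, 50, 51, 60, 35, 70, 48, 25, 65],
})

# Replicate spread per (center, drug, concentration) set: a large
# standard deviation means inconsistent single-agent measurements
# feeding into ComboScore, and hence noisier training labels.
spread = (df.groupby(["center", "drug", "concentration"])["percentgrowth"]
            .std())
print(spread)
```

In this toy example the FG replicates are tight (standard deviation of 1) while the FF replicates are widely scattered, mirroring the 14-of-15 pattern reported for NCI/ADR-RES.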
DISCUSSION

NCI-ALMANAC is an extremely valuable resource for the discovery of novel synergistic drug combinations on the NCI-60 cell lines. First, it is by far the largest-to-date HTS of drug combinations, therefore allowing in silico models with much higher accuracy and a broader domain of applicability in predicting the synergy of other combinations. Second, some of the synergistic drug combinations discovered in vitro by NCI-ALMANAC were subsequently tested on human tumor mouse xenografts of the same cell line; 48% of them were also synergistic in at least one of these in vivo models (Holbeck et al., 2017), which has led to the launch of two clinical trials so far (NCT02211755 and NCT02379416). In this study, we have found that it is possible to predict the synergy of unseen drug combinations against NCI-60 panel cell lines with high accuracy by exploiting NCI-ALMANAC data. We have established a general ML workflow (types of structural features, data preprocessing strategy, ML method) to generate such models. When trained on FG data, the predicted synergies from these models match the observed synergies with R_p correlations between 0.43 and 0.86, depending on the considered cell line. Incidentally, these regression problems must be highly non-linear, as EN leads to substantially less predictive models than XGB or RF. Some cell lines and drug combinations can be predicted with higher accuracy than others. For example, models for the SK-MEL-5 cell line perform best with any method (Figure 6). However, if we use RMSE instead of R_p to reduce the influence of the ComboScore range, models for NCI/ADR-RES are now best (gray squares in Figure 7). Another explanatory factor for this variability is the adequacy of the employed ML technique to the problem instance to solve (each cell line constitutes here a different problem instance).
Even if the training set size, features and learning algorithm are the same, the modeled relationship between drug synergy and features depends (implicitly) on the training set composition and cell line properties. It is well-established that the performance of supervised learning algorithms varies depending on the problem instance in ways that cannot be anticipated without doing the actual numerical experiments (Fernández-Delgado et al., 2014). LODO cross-validation also revealed both the best and worst partner drugs. These differences are mainly due to the number of similar partner drugs. For example, it is difficult to predict the synergy of combinations containing arsenic trioxide, because its 103 partner drugs are highly dissimilar to it in terms of chemical structure and physico-chemical properties. Indeed, machine learning from dissimilar data instances tends to be less accurate, although here the dissimilarity can be partial, as arsenic trioxide's partner can be similar to other NCI-ALMANAC drugs. On the other hand, combinations containing some other drugs are better represented in NCI-ALMANAC and hence tend to be predicted with higher accuracy. This is the case for various alkylating agents, tyrosine kinase inhibitors and topoisomerase inhibitors (Supplementary Figure 4). Recent QSAR and drug combination modeling studies have evaluated the application of the latest machine learning algorithms (e.g., XGBoost, Deep Neural Networks). These studies have found that such algorithms provide better performance on average across targets than RF. However, these gains are small and hence do not always justify the much greater resources required for hyperparameter tuning (Sheridan et al., 2016; Preuer et al., 2018). The performance gains have also been found to be small here with NCI-ALMANAC data, as the average test set R_p of XGBoost across the 60 cell lines is just 0.007 larger than with RF.
An important result is that restricting to the most reliable RF predictions provides a much greater gain in predictive accuracy than that introduced by a more suitable learning algorithm (e.g., XGBoost). It is surprising that this powerful technique is so uncommonly used, as has already been pointed out (Sheridan, 2013; Mathea et al., 2016). In fact, we are not aware of any previous study applying reliability estimation to the prediction of drug synergy on cancer cell lines. Here, reliability prediction made it possible to reduce the RMSE by up to 50%, depending on the cell line. This is particularly exciting for virtual screening problems, where only a small subset of the predictions can be tested in vitro. In this scenario, it is useful to identify those combinations that are not only predicted to be synergistic but also reliably so, because this should provide higher hit rates. Lastly, highly synergistic combinations predicted with low reliability should also be tested, as the corresponding measurements would be those broadening the applicability domain of future models the most. We have also found that using FG datasets leads to substantially more predictive models than FF datasets. This result is robust in that it is observed with various types of models (XGB, RF, EN). Moreover, it occurs in spite of the availability of slightly more training data. Further investigation revealed that there are many more measurements of growth inhibition, and with greater variability, in FF than in FG. This in turn introduces more noise into ComboScore determinations in FF, thus impairing its modeling. Inconsistencies between centers measuring the response of cancer cell lines to drugs have been observed before (Haibe-Kains et al., 2013). There has been intense controversy about the extent, sources and impact of these inconsistencies (Stransky et al., 2015; Geeleher et al., 2016; Safikhani et al., 2016, 2017).
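One common way to estimate the reliability of an RF prediction is the spread of the individual tree predictions: where the trees agree, the ensemble is usually more trustworthy. This excerpt does not reproduce the study's exact reliability score, so the sketch below is an illustrative variant on simulated data, not the authors' implementation.

```python
# Sketch: use the standard deviation of per-tree predictions as a
# reliability score, then compare RMSE on the full test set vs. on the
# 25% most reliable predictions. Simulated data throughout.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(800, 20))
y = X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=800)

X_tr, X_te, y_tr, y_te = X[:600], X[600:], y[:600], y[600:]
rf = RandomForestRegressor(n_estimators=300, random_state=1).fit(X_tr, y_tr)

per_tree = np.stack([t.predict(X_te) for t in rf.estimators_])
pred = per_tree.mean(axis=0)
reliability = -per_tree.std(axis=0)  # higher = more tree agreement

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Keep only the 25% most reliable predictions and compare errors; in the
# study, this kind of filtering reduced RMSE by up to 50% per cell line.
top = np.argsort(reliability)[-len(y_te) // 4:]
print(rmse(y_te, pred), rmse(y_te[top], pred[top]))
```

In a virtual screening setting, one would rank candidate combinations by predicted synergy, then prioritize those whose predictions are also reliable.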
In any case, it is clear that such data permit the development of predictive models regardless of the screening center (Ammad-ud-din et al., 2014; Covell, 2015; Fang et al., 2015; Naulaerts et al., 2017), as has also been the case here with NCI-ALMANAC. Owing to this controversy over datasets from multiple screening centers, a better understanding of their limitations has emerged, along with protocols to generate them with improved consistency (Haverty et al., 2016). These protocols will ultimately allow datasets from different screening centers to be merged for further gains in predictive accuracy. CONCLUSION While NCI-ALMANAC measured the synergies of over 5,000 combinations per cell line, this still represents a minuscule part of all conceivable combinations. Even if we restricted ourselves to the set of 12,000 drug molecules estimated to have reached clinical development or undergone significant preclinical profiling (Janes et al., 2018), almost 72 million combinations per cell line would have to be tested in vitro to identify the most synergistic among them. Therefore, the developed in silico models are of great importance, as they can drastically reduce the number of required in vitro tests by predicting which of the considered combinations are likely to be synergistic. AUTHOR CONTRIBUTIONS PB conceived the study and wrote the manuscript. PS implemented the software and carried out the numerical experiments with the assistance of SN. All authors commented and proposed improvements to the manuscript.
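The combinatorial estimate in the conclusion above is a direct pairwise count, which can be checked in one line:

```python
# Back-of-envelope check of the estimate above: the number of unordered
# pairs of ~12,000 clinically profiled drug molecules.
import math

n_drugs = 12_000
pairs = math.comb(n_drugs, 2)  # n * (n - 1) / 2
print(pairs)  # 71994000, i.e., almost 72 million combinations per cell line
```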
2018-12-21T00:00:00.000
Therapeutic Effect of Dual CAR-T Therapy Targeting PDL1 and Anti-MUC16 Antigens on Ovarian Cancer cells Background: More favorable treatments against epithelial ovarian cancer (EOC) are urgently needed because of its insidious nature at an early stage and its low five-year survival rate. The primary treatment, extensive surgery combined with chemotherapy, exhibits few benefits for improving prognosis. Chimeric antigen receptor T (CAR-T) cell technology, a novel immunotherapy, has made breakthrough progress in the treatment of hematologic malignancies, and previous research has also shown partial benefits in solid tumors. Therefore, CAR-T cell technology may be a promising immunotherapeutic tool against EOC. However, targeting a single antigen has shown weaknesses in previous preclinical assays, such as on-target off-tumor cytotoxicity. Thus, a more specific dual-target CAR-T cell may be a better choice. Methods: We constructed tandem PD1-antiMUC16 dual-CAR, PD1 single-CAR, and anti-MUC16 single-CAR fragments by PCR and genetic engineering, and then prepared CAR-T cells via lentiviral infection. The expression of CAR molecules on single and dual CAR-T cells was detected by flow cytometry. The killing ability and activation of CAR-T cells were measured by cytotoxicity assays and cytokine release assays in vitro. The therapeutic capacity of CAR-T cells was assessed in tumor-bearing mouse models in vivo. Results: We successfully constructed CAR lentiviral expression vectors and obtained single and dual CAR-T cells. CAR-T cells demonstrated robust killing ability against OVCAR-3 cells in vitro. Meanwhile, CAR-T cells released abundant cytokines such as interleukin-2 (IL-2), interferon-γ (IFN-γ), and tumor necrosis factor-α (TNF-α). Moreover, CAR-T cells showed a therapeutic benefit in OVCAR-3 tumor-bearing mouse models and significantly prolonged the survival time of the mice.
Dual CAR-T cells proved to be two to four times more efficacious than single CAR-T cells in terms of survival time. Conclusion: Dual CAR-T cells exhibited an ability similar to that of single CAR-T cells against OVCAR-3 cells in vitro. However, dual CAR-T cells showed a more outstanding capacity against OVCAR-3 cells than single CAR-T cells in vivo and significantly prolonged the survival time of tumor-bearing mouse models. Thus, PD1-antiMUC16 CAR-T cells have more potent antitumor activity than single CAR-T cells in vivo, and they could be applied in the treatment of EOC. Background Epithelial ovarian cancer (EOC) accounts for approximately 90% of ovarian cancer (OC), which is the fifth most common tumor among female malignancies [1][2]. EOC is classified by tumor cell histology as serous, endometrioid, mucinous, clear cell, and unspecified types [3]. Serous carcinoma, accounting for more than half of cases [4], is the primary type of EOC; it is usually diagnosed at stage III (51%) or stage IV (29%) due to the absence of specific early symptoms [3]. Owing to the inadequacy of screening and detection methods at an early stage, more effective and less recurrence-prone therapies are urgently needed. The primary treatment of EOC to date is extensive surgery combined with platinum-based and taxane-based chemotherapy; however, it offers few benefits for improving prognosis [2][3][4]. Therefore, novel therapeutic methods are a promising direction for development. CAR-T cell therapy, one of the representative adoptive immunotherapies, has made unprecedented progress in the treatment of hematologic malignancies. At present, the Food and Drug Administration (FDA) has approved CD19 CAR-T products for leukaemia and lymphoma. In contrast, patients with solid tumors rarely benefit because of the deficiency of tumor-specific targets and physiologic barriers [5].
Surprisingly, researchers have engineered multiple CAR-T cells against OC, and many studies have demonstrated desirable results; for example, the NKG2D-CAR-T cell can specifically recognize and kill OC cells expressing the NKG2DL antigen [6]. CAR-T cells recognize and bind tumor cells expressing a specific antigen via their extracellular scFv fragment [7]. After recognizing the target cells, CAR-T cells release many cytokines such as IL-2, IL-6, TNF-α, and IFN-γ to activate T cells, stimulate NK cells, and promote the secretion of various factors that initiate a cascade of killing effects [8]. However, some weaknesses have been exposed: most CAR-T cells carry one specific CAR molecule that targets a single tumor antigen, which may cause on-target off-tumor toxicity, difficulty in homing, a lack of sustained T cell effect, and cytokine release syndrome (CRS) in vivo [9][10]. Besides, single-target CAR-T cells cannot improve the tumor microenvironment, so the immune escape driven by the tumor microenvironment cannot be avoided [11]. Overall, considering the previously reported lethal effects and the deficiencies of single-target CAR-T technology in many carcinomas, we hypothesized that a higher-specificity dual-target CAR-T cell would address these weaknesses and exhibit an outstanding lethal effect on EOC. For constructing a viable dual-target CAR-T cell, selecting specific antigens as targets is the crucial point. Mucin 16 (MUC16), the glycoprotein with the largest molecular weight in the mucin family, is a critical biomolecule that maintains intracellular balance and protects the epithelium [12]. It is expressed in a variety of tumor cells and is involved in their proliferation and metastasis. Studies have shown that 80% of EOC express MUC16, and its extracellular segment is cleaved and released into the peripheral blood, becoming a well-known tumor marker, namely CA125 [13]. Therefore, MUC16 is an ideal antigenic target for CAR molecules.
Programmed cell death-1 (PD1) is an immunosuppressive molecule widely expressed on the surface of activated T cells, B cells, antigen-presenting cells, and macrophages. It belongs to the CD28/cytotoxic T lymphocyte-associated antigen-4 (CTLA-4) family [14]. PD1 and its ligand PDL1 constitute the PD1/PDL1 signaling pathway, which plays an inhibitory role in T cell immunity. Current research suggests that T cells can secrete cytokines such as IL-10 and IFN-γ that, upon contact with OC cells, induce the expression of inhibitory ligands such as PDL1 on those cells. In turn, these ligands bind inhibitory receptors on the surface of T cells, thereby reducing the activity of effector T cells and redirecting T cells or driving T cell exhaustion to achieve immune escape [15][16][17][18][19]. In experiments on melanoma-bearing mice, it was found that upregulated expression of PDL1 in the tumor microenvironment suppressed the anti-tumor activity of T cells. After intraperitoneal injection of a PD1 antibody to block the PD1 pathway, T cell infiltration significantly increased [20][21][22]. Another study showed that the five-year survival rate of patients with low expression of PDL1 is significantly higher than that of patients with high expression of PDL1 [21][23][24]. From the above, we infer that PD1 would be another ideal targeting moiety. In this study, we developed a novel tandem-specific CAR-T cell that targets MUC16 and PDL1 antigens and investigated whether the extracellular domains of PD1-antiMUC16 CAR-T cells can effectively recognize the targeted antigens, kill tumor cells, release cytokines, and prolong the survival time of tumor-bearing mice. Cell lines The Lenti-X 293T cell line was provided by TaKaRa (Cat#632180, Osaka, Japan). Construction of CAR molecule After designing the sequences, primers and templates were synthesized by Sangon Biotech.
According to the PCR principle, the single-chain antibody fragment (scFv) fragments of the PD1, anti-MUC16, and PD1-antiMUC16 CARs were obtained. The main structures of PD1 and anti-MUC16 were PD-1ecto and 4H11-VH-(Gly4Ser)3-4H11-VL, respectively. The primary structure of PD1-antiMUC16 was the tandem of PD1 and anti-MUC16. The three scFv fragments were cloned into the pLVX-EF1α-IRES-mCherry plasmid (Clontech, TaKaRa, Osaka, Japan) through the EcoR I and Mlu I cloning sites and named PD1-antiMUC16 CAR, anti-MUC16 CAR, and PD1 CAR, respectively. The plasmid was genetically engineered as a second-generation CAR containing the CD8a hinge region, CD8 transmembrane region, 4-1BB costimulation domain, and CD3ζ domain. The plasmids were amplified in bacterial solution, and the positive samples were selected by agarose gel electrophoresis and verified via sequencing analysis. Lentivirus packaging Experiment groups (pLV-PD1-anti-MUC16, pLV-anti-MUC16, and pLV-PD1) (Addgene, Massachusetts, United States) and a control group (control T) were set up. In each group, 13.7 ug of plasmid was mixed with packaging plasmids containing 3.43 ug pMD2.G, 3.43 ug pMDLg/pRRE, and 3.43 ug pRSV-REV to make a DNA mix. 7 × 10 6 Lenti-X 293T cells were added to the DNA mix together with the same volume of polyethylenimine (PEI, Polyscience, Pennsylvania, United States), then cultured in the incubator overnight. Meanwhile, the control T well received 200 ul of GT-T551 H3 medium. All wells received 800 ul of PBMCs (2.5 × 10 5 cells/well) and were cultured in an incubator. After culturing for 6 hours, 2 ml of T cell complete growth medium (GT-T551 H3 + 5% FBS + 40 IU/ml IL-2) was added. The following day, T cells were re-infected.
After the second infection for 96 hours, 1 × 10 6 cells were taken from every well, washed twice with PBS, freed of magnetic beads, and stained with 5 ul of Percp-cy5.5 anti-human PD-1 antibody and 10 ul of FITC-Protein L antibody for 30 minutes in the dark, followed by washing with PBS twice; the positive rate of the CAR structure was then detected by flow cytometry. Enzyme-linked immunosorbent assay (ELISA) The human IFN-gamma ELISA kit, IL-2 ELISA kit, and TNF-alpha ELISA kit (all kits, Dakewe, Beijing, China) were used to measure the concentrations of IFN-γ, IL-2, and TNF-α, respectively. According to the instructions of the ELISA kits, three samples were processed and the standard curve was prepared. The fluorescence value was then measured by a microplate reader (TECAN, Mannedorf, Switzerland), and the cytokine quantities were calculated. Cytotoxicity Assay And Cytokine Release Assay Statistical analysis Statistical analysis was performed using SPSS 23.0 software. Data are shown as mean ± SD. In the in vitro killing assay, the significance of differences between groups was determined using nonparametric tests. For the in vivo assay, Student's t test was used to distinguish differences between groups. P < 0.05 was considered significant. The mouse survival curves were drawn using GraphPad Prism 8.3.0 software. Construction of dual-target CAR-T cells by lentiviral vector transduction According to the above protocol, we designed the PD1-antiMUC16 CAR molecule structure, which comprised a PD1-antiMUC16, PD1, or anti-MUC16 extracellular scFv fragment, a hinge region, and a transmembrane domain, followed by an intracellular 4-1BB co-stimulation domain and CD3ζ domain. The scFv fragment contained three parts, PD1 ecto, 4H11-VH (heavy chain), and 4H11-VL (light chain); these parts were connected by the linker peptide (Gly4Ser)3 and constituted the 1932 bp dual-target CAR molecule (Fig. 1a).
Through gene recombination, the CAR molecule sequences were combined with the 9367 bp lentiviral vector after digestion with two enzymes (EcoR I and Mlu I) (Fig. 1b). Tested by agarose gel electrophoresis, the PD1, anti-MUC16, PD1-antiMUC16, and plasmid skeleton fragment bands were observed at 510 bp, 1422 bp, 1932 bp, and 7435 bp, respectively (Fig. 2a). Meanwhile, the sequences of the positive samples were analyzed and confirmed to be entirely consistent with the designed ones (Fig. 2b). The plasmids were transferred into 293T cells to package lentivirus by polyethylenimine (PEI) transfection. The anti-MUC16 and PD1 antigens were stained with 10 ul of FITC-Protein L antibody and 5 ul of Percp-cy5.5 anti-human PD-1 antibody, respectively, followed by detection of the positive rates of the CARs through flow cytometry. Thereby, we found the positive rates of PD1-antiMUC16 CAR, PD1 CAR, and anti-MUC16 CAR on the surface of 293T cells to be 10.45%, 3.56%, and 18.54%, respectively (Fig. 3), and obtained the three viral titers 6.27 × 10 7 TU/ml, 2.14 × 10 7 TU/ml, and 1.11 × 10 8 TU/ml from formula (1). Using the same detection methods, the infection rates of PD1-antiMUC16 CAR-T cells, PD1 CAR-T cells, and anti-MUC16 CAR-T cells were obtained (52.36%, 46.03%, and 86.24%, respectively) (Fig. 3). This indicates that CAR-T cells with single and dual targets were successfully constructed. Overexpressing MUC16 and PDL1 antigens in target cells According to the above principles and methods, including PCR, genetic recombination, and lentiviral vector transduction, the positive rate of the PDL1 molecule on the surface of the Lenti-X 293T cells was found to be 4.40% by flow cytometry after staining with 5 ul of Percp-cy5.5 anti-human PD-1 antibody for 30 minutes in the dark. The lentiviral titer was 2.64 × 10 7 TU/ml, according to formula (1). After that, the infection rate of the PDL1 structure on OVCAR3-luc cells was acquired (40.72%).
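The reported construct sizes and titers can be cross-checked arithmetically. Note that "formula (1)" itself is not reproduced in this excerpt, so the last check below only verifies that the published titers scale with the flow-cytometry positive rates by a common factor, which is what a titer formula of the form TU/ml = C × positive rate would produce.

```python
# Quick arithmetic cross-check of the numbers reported above.

# Insert sizes: the tandem CAR should be the sum of its two scFv parts,
# and skeleton + dual-CAR insert should match the recombinant vector size.
assert 510 + 1422 == 1932   # PD1 + anti-MUC16 scFv = dual CAR (bp)
assert 7435 + 1932 == 9367  # plasmid skeleton + insert = vector (bp)

# Titer vs. positive rate for the four reported vectors:
# (titer in TU/ml, positive fraction on 293T cells)
pairs = [(6.27e7, 0.1045), (2.14e7, 0.0356), (1.11e8, 0.1854), (2.64e7, 0.0440)]
factors = [titer / rate for titer, rate in pairs]
print([f"{f:.3g}" for f in factors])  # all close to 6.0e8, a shared constant
```

The shared constant of about 6.0 × 10 8 is consistent with all titers having been computed from the same cell number and virus volume.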
The population consisting of OVCAR3-PDL1-luc cells and unstructured cells was named a pool. The positive samples were cultured and expanded, then purified to 100% (No. 3 monoclonal sample) (Fig. 4a). Meanwhile, the lentiviral titer of MUC16 in the MUC16 group was 1.24 × 10 8 TU/ml; the positive rate of MUC16-GFP in the OVCAR3-MUC16-GFP-luc pool, which was selected based on GFP expression, was 94.70%, and the MUC16 monoclonal sample was purified to 99.93% (Fig. 4b). Moreover, we created the MUC16-PDL1 antigen combination by structuring PDL1 directly on the MUC16 monoclonal sample. The lentiviral titer of PDL1 in the MUC16-PDL1 group was 2.52 × 10 7 TU/ml, the positive rate of PDL1 on the OVCAR3-MUC16-GFP-luc pool was 84.68%, and the PDL1 sample was purified to 99.75% (Fig. 4c). OVCAR3-luc cell lines overexpressing MUC16 and PDL1 antigens were thus successfully constructed, with a positive rate of tumor cell surface antigen above 99%. Functional activity of CAR-T cells in vitro To ascertain the cytotoxicity of CAR-T cells against MUC16- or PDL1-positive cancer cells in vitro, we co-cultured CAR-T cells with target cells; PD1 and anti-MUC16 single CAR-T cells showed almost the same cytotoxic efficacy as dual CAR-T cells (Fig. 5), except in specific assays. In the CAR-T versus OVCAR3-luc cell assay, PD1 CAR-T cells exhibited greater cytotoxic efficacy against target cells than the other CAR-T cells (P < 0.05) (Fig. 5a). In the other CAR-T versus target cell assays, the dual CAR-T cells always demonstrated cytotoxic efficacy no lower than that of single CAR-T cells. In the cytokine release test, to assess whether the CAR structure enhanced the anti-tumor activity of T cells, co-cultures were established between CAR-T cells and target cells (1 × 10 4 cells/well each) at a 1:1 ratio in a V-bottomed 96-well plate for 48 hours in an incubator. The results revealed that all CAR-T cells exerted a more robust capacity for secreting IL-2, IFN-γ, and TNF-α (Fig.
6), which were harvested from supernatants and measured by ELISA kits. Disappointingly, dual CAR-T cells did not reveal higher levels of cytokine production than single CAR-T cells. Functional activity of CAR-T cells in vivo To determine the efficacy of CAR-T cells against ovarian cancer cells in vivo, we established intraperitoneal tumor-bearing models using NPG mice, which were injected with OVCAR3-MUC16-GFP-PDL1-luc cells (5 × 10 5 cells) and 50 ul of Matrigel into the abdominal cavity and raised for 48 hours. As shown in Fig. 7a, all mice developed well-distributed tumors of stable size; fluorescence values were measured by IVIS Spectrum and analyzed with Living Image as pre-therapeutic evidence of tumor burden. Tumor-bearing mice were randomized into four groups (n = 5) and injected with CAR-T cells (1 × 10 6 cells) into the abdominal cavity on day 0; thereafter, fluorescence values were measured weekly to monitor tumor progression. Remarkably, the dual CAR-T group exhibited significant regression of ovarian cancer cells from day 7 to day 14 (Fig. 7b, c), followed by slow proliferation. However, the two single CAR-T groups did not show as excellent a therapeutic effect as the dual CAR-T cells, only restraining the rapid progression of the tumor. Across all models, we observed that the first week after injecting CAR-T cells exhibited the best treatment efficiency (Fig. 7d); this could help set the dose and frequency of subsequent clinical trials. As time passed, all four groups of mice died of ovarian cancer; however, their tumor-bearing survival times differed. The dual CAR-T group demonstrated an exceptionally longer survival time than the single CAR-T groups and the control group: its mean survival time reached 80.6 ± 10.33 days, compared with 45.2 ± 6.34 days for the PD1 CAR-T group and 23.0 ± 1.55 days for the antiMUC16 CAR-T group, while the control group had the shortest survival time, only 19.8 ± 2.14 days (Fig. 7e).
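The survival comparison above can be reconstructed from the published summary statistics (mean ± SD, n = 5 mice per group). The paper states that a Student's t test was used; the Welch variant below is an assumption, since the exact test settings are not given.

```python
# Hedged reconstruction of the in vivo survival comparison using a
# two-sample t-test from summary statistics (Welch's variant, assumed).
from scipy.stats import ttest_ind_from_stats

# Published values: (mean survival in days, SD, n).
dual = (80.6, 10.33, 5)
groups = {
    "PD1 CAR-T":       (45.2, 6.34, 5),
    "antiMUC16 CAR-T": (23.0, 1.55, 5),
    "control":         (19.8, 2.14, 5),
}

pvalues = {}
for name, (m2, s2, n2) in groups.items():
    t, p = ttest_ind_from_stats(*dual, m2, s2, n2, equal_var=False)
    pvalues[name] = p
    print(f"dual vs {name}: t = {t:.2f}, p = {p:.4g}")
```

All three comparisons come out well below the P < 0.01 threshold reported in the text, consistent with the paper's conclusion.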
From the perspective of extending the lifetime of tumor-bearing mice, the dual CAR-T group demonstrated a greater capacity to prolong the survival time of mice than the others (P < 0.01). Discussion Aggressive ovarian cancer, especially high-grade serous carcinoma, should be deemed an urgent unsolved problem in the 21st-century field of malignancies because of its low five-year survival rate, rapidly invasive progression, and high recurrence rate. CAR-T technology has exhibited practical antitumor activity in hematologic malignancies and has also shown potential in the treatment of OC. However, the paucity of specific antigens and the immune escape of OC are the primary obstacles. In our study, to increase target specificity and reduce immune escape, we adopted a tandem structure to design a CAR molecule for two antigenic targets, using the second-generation CAR-T design concept. Meaningfully, the results verified that both CARs exerted antitumor activity rather than interfering with each other, which may be explained by the hinge domain supplying space for scFv folding [25][26][27]. MUC16 and PDL1 are undoubtedly ideal target antigens for CAR-T technology against OC in our study. Additionally, both dual CAR-T cells and single CAR-T cells showed favorable cytotoxic efficiency against the various engineered OVCAR-3 cells in vitro, especially at a high E/T ratio. However, the destructive effect of dual CAR-T cells was not superior to that of the single CAR-T cells. Although there was no apparent difference between dual CAR-T cells and single CAR-T cells in cytotoxicity and cytokine production in vitro, dual CAR-T cells demonstrated a remarkable tumor therapeutic effect in vivo and prolonged the survival time of tumor-bearing mouse models more than single CAR-T cells. Why did the dual CAR-T cells exhibit this disparity between in vivo and in vitro performance?
We assume that the critical point is that PD1, which recognizes a target antigen correlated with the tumor microenvironment, can hardly exert its full capacity in vitro. This conjecture was supported in the assay: PD1 CAR-T cells exhibited potent cytotoxicity in mouse models and significantly prolonged survival, in contrast to their weak performance in vitro. Nonetheless, some questions remain open, such as how to reduce CRS, improve homing, and maintain persistence in OC patients. Previously, some researchers have proposed that selecting target antigens with high specificity, optimizing the function of CAR-T cells, and blocking immunosuppressive molecules, such as the PD1/PDL1 signal, could reduce the occurrence of CRS [28][29][30][31][32]. Moreover, combining higher-specificity CAR-T cells with chemokine receptors may improve homing and enhance therapeutic activity [5]. Nevertheless, this still needs to be verified by more cellular experiments, animal assays, and even clinical trials. Simultaneously, natural killer (NK) cells may be another good candidate for CAR drivers because they can recognize and kill tumor cells directly [33]. Conclusion We demonstrated that PD1-antiMUC16 dual-target and single-target CAR-T cells possess cytotoxicity against OVCAR-3 cells. Donors were not required to give informed consent to the study because they were all anonymous. Consent for publication Not applicable. Availability of data and material Data supporting the results in the article are available from the corresponding author upon reasonable request. Competing interests The authors declare that they have no conflict of interest. Authors' Contributions Conceptualization and statistics by TL; Methodology and design of experiment by JDW; Review and editing by JDW and TL; Project Administration by JDW. All authors read and approved the final manuscript.
2020-05-11T00:00:00.000
Dental service utilization and the COVID-19 pandemic, a micro-data analysis Background Global crises and disease pandemics, such as COVID-19, negatively affect dental care utilization through several factors, such as infection anxiety, disrupted supply chains, economic contraction, and household income reduction. Exploring the pattern of this effect can help policy makers prepare for future crises. The present study aimed to investigate the financial impact of COVID-19 disruptions on dental service utilization. Methods Data on the number of dental services offered in the Dental School Clinics of Tehran University of Medical Sciences were collected over a period of two years, before and after the initial COVID-19 outbreak in Iran. The School of Dentistry operates two clinics: one with competitive service fees and one with subsidized fees. Regression analyses were performed to determine the effect of the pandemic on the number of dental services, broken down by dental treatment group and clinic. The analyses were adjusted for seasonal patterns and the capacity of the clinics. Results There was a significant drop in dental services offered in both clinics across all dental groups in the post-COVID period (on average, 77 (39.44%) fewer services per day). The majority of the procedure loss happened in the Private clinic. Adjusting for seasonal patterns and service capacity, regression results documented 54% and 12% service losses in the Private and Subsidized clinics following the pandemic, respectively. Difference-in-differences analysis documented that the Subsidized clinic performed 40% more treatments than the Private clinic in the post-COVID period. Conclusions The pandemic-related reduction in dental care utilization could have long-term ramifications for the oral health of the population, and policymakers need to provide supportive packages to the affected segments of the economy to reverse this trend.
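The regression design described above can be sketched as a difference-in-differences model: daily service counts regressed on a COVID indicator, a PRIVATE-clinic indicator, their interaction, and month fixed effects. The sketch below uses simulated data with assumed effect sizes, not the study's micro-data; only the variable names follow the paper's tables.

```python
# Illustrative DID specification on simulated daily clinic data:
# services ~ COVID + PRIVATE + COVID:PRIVATE + month fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
days = 730  # two years of daily observations per clinic

df = pd.DataFrame({
    "month": rng.integers(1, 13, size=2 * days),
    # First year pre-COVID (0), second year post-COVID (1), per clinic.
    "COVID": np.tile(np.r_[np.zeros(365), np.ones(365)], 2),
    # First block Subsidized (0), second block Private (1).
    "PRIVATE": np.r_[np.zeros(days), np.ones(days)],
})
# Assumed effects: both clinics lose services post-COVID; the Private
# clinic loses substantially more (negative interaction term).
df["services"] = (100 - 12 * df.COVID - 42 * df.COVID * df.PRIVATE
                  + rng.normal(scale=5, size=2 * days))

fit = smf.ols("services ~ COVID * PRIVATE + C(month)", data=df).fit()
print(fit.params[["COVID", "PRIVATE", "COVID:PRIVATE"]])
```

The coefficient on COVID captures the Subsidized clinic's pandemic loss, while the COVID:PRIVATE interaction captures the Private clinic's additional loss, mirroring the 12% vs. 54% contrast the study reports.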
Supplementary Information The online version contains supplementary material available at 10.1186/s12903-023-03740-2. Table A1: Level of Dental Services and the Pandemic (S+P). The table presents the estimated slope coefficients for the number of services offered in the Dental School clinics of Tehran University of Medical Sciences by each dental treatment group and the pandemic indicator variable (COVID). The regressions also include the month fixed effects. The sample period is from April 21, 2019, to April 21, 2021 at the daily frequency. P-values are estimated using robust standard errors (reported in parentheses). ***, **, and * denote statistical significance at the 1%, 5%, and 10% p-value levels, respectively. Fig. A1. Service Capacity. The graph shows the histogram of the daily dental services offered in the Subsidized (S) and Private (P) clinics of Tehran University of Medical Sciences by all treatment groups. The top and bottom panels present the results for the pre-COVID and post-COVID periods, respectively. The data range from April 21, 2019 to April 21, 2021. Fig. A2. Treatments by Clinics. The graph shows the total number of dental treatments offered in the Subsidized (S) and Private (P) clinics of Tehran University of Medical Sciences, at the daily frequency. In addition, the fitted trend line for each group is shown. The data range from April 21, 2019 to April 21, 2021.
Services t = α + β Group j COVID t + γ m Month m t + e. Table A2: Level of Dental Services and the Pandemic (S). The table presents the estimated slope coefficients for the number of services offered in the Subsidized (S) clinic of Tehran University of Medical Sciences by each dental treatment group and the pandemic indicator variable (COVID). The regressions also include the month fixed effects. The sample period is from April 21, 2019, to April 21, 2021 at the daily frequency. P-values are estimated using robust standard errors (reported in parentheses). ***, **, and * denote statistical significance at the 1%, 5%, and 10% p-value levels, respectively. Services t = α + β Group j COVID t + γ m Month m t + e. Table A3: Level of Dental Services and the Pandemic (P). The table presents the estimated slope coefficients for the number of services offered in the Private (P) clinic of Tehran University of Medical Sciences by each dental treatment group and the pandemic indicator variable (COVID). The regressions also include the month fixed effects. Services t = α + β Group j COVID t + γ m Month m t + e. Table A4: Level of Dental Services and the Pandemic (S&P). The table presents the estimated slope coefficients for the number of services offered in the Dental School clinics of Tehran University of Medical Sciences by each dental treatment group and the pandemic indicator variable (COVID). PRIVATE is an indicator variable that takes the value of one for the services offered by the Private clinic. The regressions also include the month fixed effects. The sample period is from April 21, 2019, to April 21, 2021 at the daily frequency. The analysis includes the daily observations when both clinics and dental treatments are offering services. P-values are estimated using robust standard errors (reported in parentheses). ***, **, and * denote statistical significance at the 1%, 5%, and 10% p-value levels, respectively. Services t = α + β COVID t × PRIVATE Clinic i + γ m Month m t + e. Table A5: Level of Dental Services and the Pandemic (DID test). The table presents the estimated
slope coefficients for the number of services offered in the Dental School clinics of Tehran University of Medical Sciences by each dental treatment group and the pandemic indicator variable (COVID). PRIVATE is an indicator variable that takes the value of one for the services offered by the Private clinic. The regressions also include the month fixed effects. The sample period is from April 21, 2019, to April 21, 2021 at the daily frequency. The analysis includes all daily observations. P-values are estimated using robust standard errors (reported in parentheses). ***, **, and * denote statistical significance at the 1%, 5%, and 10% p-value levels, respectively. Services t = α + β COVID t × PRIVATE Clinic i + γ m Month m t + e. Table A6: Change in Dental Services and the Pandemic (S+P). The table presents the estimated slope coefficients for the percentage changes in the number of services in the post-COVID period offered in the Dental School clinics of Tehran University of Medical Sciences by each dental treatment group and the pandemic indicator variable (COVID). The growth rate of services for the observations in each month is measured relative to the average values in the same month of the year in the pre-COVID period. The regressions also include the month fixed effects. The sample period is from April 21, 2019, to April 21, 2021 at the daily frequency. P-values are estimated using robust standard errors (reported in parentheses). ***, **, and * denote statistical significance at the 1%, 5%, and 10% p-value levels, respectively. (Services t / pre-COVID average) − 1 = α + β Group j COVID t + γ m Month m t + e. Table A7: Change in Dental Services and the Pandemic (S). The table presents the estimated slope coefficients for the percentage changes in the number of services in the post-COVID period offered in the Subsidized (S) clinic of Tehran University of Medical Sciences by each dental treatment group and the pandemic indicator variable (COVID). The growth rate of services for the observations in each month is measured relative to the average values in the
same month of the year in the pre-COVID period. The regressions also include the month fixed effects. The sample period is from April 21, 2019, to April 21, 2021 at the daily frequency. P-values are estimated using robust standard errors (reported in parentheses). ***, **, and * denote statistical significance at the 1%, 5%, and 10% p-value levels, respectively. (Services t / pre-COVID average) − 1 = α + β Group j COVID t + γ m Month m t + e. Table A8: Change in Dental Services and the Pandemic (P). The table presents the estimated slope coefficients for the percentage changes in the number of services in the post-COVID period offered in the Private clinic of Tehran University of Medical Sciences by each dental treatment group and the pandemic indicator variable (COVID). The growth rate of services for the observations in each month is measured relative to the average values in the same month of the year in the pre-COVID period. The regressions also include the month fixed effects. The sample period is from April 21, 2019, to April 21, 2021 at the daily frequency. P-values are estimated using robust standard errors (reported in parentheses). ***, **, and * denote statistical significance at the 1%, 5%, and 10% p-value levels, respectively. (Services t / pre-COVID average) − 1 = α + β Group j COVID t + γ m Month m t + e. Table A9: Change in Dental Services and the Pandemic (S&P). The table presents the estimated slope coefficients for the percentage changes in the number of services in the post-COVID period offered in the Dental School clinics of Tehran University of Medical Sciences by each dental treatment group and the pandemic indicator variable (COVID). PRIVATE is an indicator variable that takes the value of one for the services offered by the Private clinic. The growth rate of services for the observations in each month is measured relative to the average values in the same month of the year in the pre-COVID period. The regressions also include the month fixed effects. The sample period is from April 21, 2019, to April 21, 2021 at the daily frequency. The
analysis includes the daily observations when both clinics and dental treatments are offering services.P-values are estimated using robust standard errors (reported in parentheses).***,**, and * denote statistical significance at the 1%, 5%, and 10% p-value levels, respectively.COVID t × PRIVATE Clinici + γ m Month m t + e Table A10 : Change in Dental Services and the Pandemic (DID test).The table presents the estimated slope coefficients for the percentage changes in the number of services in the post-COVID period offered in Dental School clinics of Tehran University of Medical Sciences by each dental treatment group and the pandemic indicator variable (COVID).PRIVATE is an indicator variable that takes the value of one for the services offered by the Private clinic.The growth rate of services for the observations in each month is measured by the average values in the same month of the year in the pre-COVID period.The regressions also include the month-fixed effects.The sample period is from April 21, 2019, to April 21, 2021 at the daily frequency.The analysis includes all daily observations.P-values are estimated using robust standard errors (reported in parentheses).***,**, and * denote statistical significance at the 1%, 5%, and 10% p-value levels, respectively.COVID t × PRIVATE Clinici + γ m Month m t + e Table A11 : Level of Relative Dental Services and the Pandemic.The table presents the estimated slope coefficients for the number of services the Subsidized clinic of Tehran University of Medical Sciences offered in excess of its Private clinic by each dental treatment group and the pandemic indicator variable (COVID).The regressions also include the month-fixed effects.The sample period is from April 21, 2019, to April 21, 2021 at the daily frequency.P-values are estimated using robust standard errors (reported in parentheses).***, **, and * denote statistical significance at the 1%, 5%, and 10% p-value levels, respectively.Group j COVID t + γ m Month m t + 
e Table A12 : Level of Relative Dental Services and the Pandemic.The table presents the estimated slope coefficients for the number of services the Subsidized clinic of Tehran University of Medical Sciences offered in excess of its Private clinic by each dental treatment group and the pandemic indicator variable (COVID).The regressions also include the month-fixed effects.The sample period is from April 21, 2019, to April 21, 2021 at the daily frequency.P-values are estimated using robust standard errors (reported in parentheses).***, **, and * denote statistical significance at the 1%, 5%, and 10% p-value levels, respectively.Group j COVID t + γ m Month m t + e
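The panel specification described in these captions can be sketched numerically. The following is an illustrative reconstruction, not the authors' code: a synthetic daily panel of service counts for two hypothetical treatment groups is regressed on Group × COVID interactions plus month fixed effects, with heteroskedasticity-robust (White/HC0) standard errors computed by hand. All variable names, magnitudes, and the pandemic cutoff date are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 731                                        # Apr 21, 2019 .. Apr 21, 2021
covid = (np.arange(n_days) >= 305).astype(float)    # assumed pandemic start day
month = (np.arange(n_days) // 30) % 12              # crude month-of-year index

# Two hypothetical treatment groups stacked into one daily panel.
group = np.repeat([0, 1], n_days)
covid_p = np.tile(covid, 2)
month_p = np.tile(month, 2)
services = 50 - 15 * covid_p + rng.poisson(5, 2 * n_days)   # synthetic counts

# Design matrix: intercept, Group_j x COVID_t interactions, month fixed effects.
X = np.column_stack(
    [np.ones(2 * n_days)]
    + [covid_p * (group == j) for j in (0, 1)]
    + [(month_p == m).astype(float) for m in range(1, 12)]  # month 0 omitted
)
beta, *_ = np.linalg.lstsq(X, services, rcond=None)

# Heteroskedasticity-robust (HC0 / White) standard errors.
resid = services - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)
V = XtX_inv @ (X.T * resid**2) @ X @ XtX_inv
se = np.sqrt(np.diag(V))
print("Group x COVID coefficients:", beta[1:3])
print("robust SEs:", se[1:3])
```

With the synthetic data generated above, the recovered Group × COVID coefficients sit near the simulated drop of 15 services per day; in practice one would replace the synthetic arrays with the clinics' actual daily counts.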
[tokens: 2,625 | created: 2024-01-04 | fields: Medicine, Economics]
Using Metrics in Stability of Stochastic Programming Problems Optimization techniques often enter as a mathematical tool into many economic applications. In these models, uncertainty is modelled via a probability distribution that, in real cases, is approximated or estimated. We then ask about the stability of solutions with respect to changes in the probability distribution. This work illustrates one of the possible approaches (using probability metrics), the underlying numerical challenges, and a backward glance at the economic interpretation. mathematical objects). Borrowed from the area of functional analysis, a tool known as a probability metric (a metric on some space of probability measures) is now widely accepted. The problem is still not trivial: choosing the right metric is crucial and cannot be done arbitrarily, as we show in the next section. Great attention has already been paid in the literature to stability in stochastic programming. For example, the notion of a minimal information (m. i.) metric introduced by Rachev (1991) and Zolotarev (1983) is now considered a starting point for further analysis. See Römisch (2003) for a basis and further references. Probability metrics Among the broad set of probability metrics we will now discuss two: the Kolmogorov and Wasserstein distances. Besides the fact that these two fall into the class of metrics with ζ-structure (considered as m. i. metrics for the stability of a certain specified class of stochastic programs), they both have an unquestionable virtue: they can be computed quite easily. This important property allows us to present a numerical illustration at the end of the paper. The following definitions can be found, for example, in Rachev (1991). Kolmogorov metric The Kolmogorov metric is defined on the space of all probability measures as the uniform distance between distribution functions, d_K(F, G) = sup_{z ∈ Ξ} |F(z) − G(z)|, where Ξ ⊆ R^s is the support of the probability measures and F, G are their distribution functions. 
Knowing both distribution functions, one can easily calculate the value of the corresponding Kolmogorov distance. Wasserstein metric The 1-Wasserstein metric for one-dimensional distributions is defined on the space of probability measures having a finite first moment as the area between their distribution functions, W_1(F, G) = ∫_Ξ |F(z) − G(z)| dz. More general definitions of multidimensional Wasserstein metrics can be found in Rachev (1991). The representation given here is again suitable for numerical treatment. It turns out that the problem of choosing the right metric is closely related to the properties of the original model and cannot be treated separately from it. The next example illustrates the different properties of the Kolmogorov and Wasserstein distances. Example 2 (Kaňková - Houda, 2002). The left part of Figure 1 represents the approximation of a discrete (here degenerate) distribution with an unknown mass point. In that case, the Kolmogorov metric takes the value of one and thus does not provide any useful information about how far apart the two distributions are. The right part of the figure represents a distribution with heavy tails: F and G take the forms F(z) = ε/(1 − z) on [−K; 0) with ε ∈ (0; 1) and K >> 0, and F(z) = G(z) arbitrary on [0; +∞). Now the Wasserstein metric can take arbitrarily high values, as illustrated. Problem formulation and stability results A general decision problem in stochastic programming is formulated for a (say unknown or original) probability distribution F as minimise E_F g(x; ξ) subject to x ∈ X, [3] where g is a uniformly continuous function measurable in ξ and X is a compact set of feasible solutions. In [3], x denotes the control (or decision) variable, ξ is the random input parameter with distribution F, and E is the symbol for mathematical expectation. This class of problems is known as problems with a fixed constraint set X. A generalisation to a "random" constraint set is possible but not treated in this paper (see e.g. Birge - Louveaux, 1997, Römisch, 2003). 
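Both distances introduced above admit a direct numerical treatment for one-dimensional samples. The sketch below (an illustration, not code from the paper) evaluates the two empirical distribution functions on the pooled sample grid and takes the sup-norm (Kolmogorov) and the area between the CDFs (1-Wasserstein):

```python
import numpy as np

def kolmogorov(x, y):
    """sup_z |F_x(z) - F_y(z)| for the empirical CDFs of samples x and y."""
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return float(np.max(np.abs(Fx - Fy)))

def wasserstein1(x, y):
    """Area between the two empirical CDFs (1-Wasserstein distance)."""
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return float(np.sum(np.abs(Fx - Fy)[:-1] * np.diff(grid)))

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 5000)
y = rng.normal(0.5, 1.0, 5000)        # the same shape, shifted by 0.5

print(kolmogorov(x, y))                # close to sup |Phi(z) - Phi(z - 0.5)|
print(wasserstein1(x, y))              # close to the size of the shift, 0.5

# Example 2 in miniature: two distinct point masses.
print(kolmogorov(np.array([0.0]), np.array([10.0])))    # always 1
print(wasserstein1(np.array([0.0]), np.array([10.0])))  # the distance, 10
```

The last two lines reproduce the degenerate case of Example 2: the Kolmogorov distance saturates at one regardless of how far apart the mass points are, while the Wasserstein distance grows with the separation. SciPy provides the same quantities as `scipy.stats.ks_2samp` and `scipy.stats.wasserstein_distance`.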
For example, g could be a function representing production costs that we want to minimise; but such a cost function g usually depends on a random element, so rather than minimising g directly we look at its expected value. In [3], let us replace the original distribution F with an estimate G and denote by ϕ(F) and ϕ(G) the optimal values of problem [3] for the two distributions. Stability theory asks about the difference between ϕ(F) and ϕ(G), i.e., it looks for an upper bound on the difference |ϕ(F) − ϕ(G)|. In Houda (2002), such a bound is given for g that is Lipschitz continuous in ξ, with respect to the Wasserstein metric: |ϕ(F) − ϕ(G)| ≤ L · W_1(F, G), [4] where L is the Lipschitz constant of g. As we can see, the right-hand side of [4] has a double structure: the first part depends on the model structure through the properties of g, here through the constant L; the second part measures the distance between F and G by the Wasserstein metric. This is the simplest result one can derive. Similar but more complicated formulas can be obtained for other stochastic programs, with respect to other metrics, and for solutions or solution sets of stochastic programs (see Römisch, 2003, and references therein). Empirical distributions Let us now consider the widespread approximations of probability distributions called empirical distributions. A one-dimensional empirical distribution function based on the sample ξ_1, ξ_2, ... of random variables with the common distribution function F is a random variable (function) defined by F_n(z) = (1/n) Σ_{i=1}^{n} I_{[ξ_i ≤ z]}, [5] where I_A denotes the indicator function of a set A. It is well known that the sequence of empirical distribution functions converges almost surely to the distribution function F under rather general conditions. For each realization of the sample ξ, F_n(z) is actually a distribution function. We now illustrate convergence properties of the Wasserstein metric using this notion of the empirical distribution. 
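To see the Lipschitz bound [4] in action, take g(x, ξ) = |x − ξ|, which is Lipschitz in ξ with constant L = 1. The short numerical check below (illustrative, not from the paper) confirms that the gap in optimal values stays below L · W_1(F, G); the two distributions and the decision grid are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)
xi_F = rng.normal(0.0, 1.0, 4000)       # sample standing in for F
xi_G = rng.normal(0.2, 1.0, 4000)       # perturbed estimate standing in for G

def phi(sample, X=np.linspace(-3.0, 3.0, 601)):
    """Optimal value of min_{x in X} E|x - xi| over a grid of decisions."""
    return min(np.mean(np.abs(x - sample)) for x in X)

def w1(x, y):
    """1-Wasserstein distance of two equal-size empirical distributions:
    the mean absolute difference of the sorted samples."""
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

gap = abs(phi(xi_F) - phi(xi_G))
bound = 1.0 * w1(xi_F, xi_G)            # L = 1 for g(x, xi) = |x - xi|
print(f"|phi(F) - phi(G)| = {gap:.4f} <= L * W1 = {bound:.4f}")
```

Here the gap is small because both distributions have the same shape, while the bound roughly equals the mean shift of 0.2; for empirical measures with equal numbers of atoms the inequality [4] holds exactly, since pairing the sorted samples gives an admissible transport plan.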
Simulation study and comments All necessary algorithms and computation outputs were created using the R programming language and interface. Here we present results only for the normal distribution (representing a "good" distribution) and a modified (cut) Cauchy distribution (representing a "bad" distribution with heavy tails). More precisely, we estimate the distribution of the Wasserstein distance W(F, F_n) for a given distribution (normal, Cauchy) and sample length n (100 and 1000). Each estimate is based on a sample of 200 realizations of ξ for each pair (distribution, length). The left-hand sides of Figure 2 illustrate the fact that the Wasserstein metric converges to zero as the sample length n tends to infinity. But the speed of the convergence is about four times slower for the Cauchy distribution (compare the values on the horizontal axis). This is what we actually expected. The right-hand sides of Figure 2 are estimates of the probability density of the process √n · W(F, F_n), representing the convergence rate. Theoretically, the limiting distribution of this process can be given explicitly only for the uniform distribution on [0; 1] (see Shorack - Wellner, 1986, chapter 3, p. 150). Again, the simulations show that for a ("good") normal distribution, the probability density stabilizes rapidly; however, this is not true for the Cauchy distribution. The reader can find analogous graphics for other distributions (uniform, exponential, and beta) in Houda (2004). From a practical point of view, if we can derive the Lipschitz constant of the model, we can directly compute the upper bound in [4] and thus comment on the influence of the estimated distribution on the resulting solutions. As seen in Figure 2, these estimates are useful only in the case of a "good" distribution, i.e., when the metric used and the original distribution "coincide". Otherwise, the obtained upper bounds are too high to allow reasonable conclusions. 
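The paper's Monte Carlo experiment was carried out in R; the outline can be reproduced in Python as sketched below. Two assumptions stand in for details not given in the text: a crude truncation of the Cauchy sample to [−50, 50] plays the role of the "cut" Cauchy distribution, and W(F, F_n) is computed as the area between the empirical and the true CDF on a fixed grid.

```python
import numpy as np
from math import erf, sqrt, pi

# True CDFs (no external dependencies needed).
normal_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))
cauchy_cdf = lambda z: np.arctan(z) / pi + 0.5

def w1_to_true(sample, F_grid, grid):
    """Area between the empirical CDF of `sample` and true CDF values F_grid."""
    Fn = np.searchsorted(np.sort(sample), grid, side="right") / len(sample)
    gaps = np.abs(Fn - F_grid)
    return float(np.sum(gaps[:-1] * np.diff(grid)))

rng = np.random.default_rng(3)
grid = np.linspace(-50.0, 50.0, 4001)
reps, results = 200, {}

for name, sampler, cdf in [
    ("normal", lambda n: rng.normal(size=n), normal_cdf),
    # crude stand-in for the paper's "cut" Cauchy: truncate to [-50, 50]
    ("cauchy", lambda n: np.clip(rng.standard_cauchy(n), -50.0, 50.0), cauchy_cdf),
]:
    F_grid = cdf(grid)
    for n in (100, 1000):
        dists = [w1_to_true(sampler(n), F_grid, grid) for _ in range(reps)]
        results[(name, n)] = float(np.mean(dists))

print(results)  # mean W(F, F_n): shrinks with n, but much larger for Cauchy
```

As in Figure 2, the mean distance decreases with the sample length for both distributions, but remains several times larger for the heavy-tailed case at the same n.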
Conclusion We have restated one of the numerous results from the area of quantitative stability in stochastic programming. In order to quantify differences in optimal values and optimal solutions when an estimate or an approximation replaces the original distribution of the random element, a suitable notion of "distance between probability distributions" has to be defined. The example illustrates that the right choice of this distance (or probability metric) is closely related to the properties of the original probability distribution in the model, and also has considerable consequences for the stability of the model. In real-life applications, this kind of post-optimization analysis - searching for errors with respect to changes in the probability distribution - cannot be thoughtlessly automated without keeping in mind the effective properties of the model. Prague for their invaluable help and suggestions in the area of stochastic optimization and related disciplines.
[tokens: 1,899.8 | created: 2005-03-01 | fields: Computer Science]
Journal of Insect Science: Vol. 2006 | Article 21 Alternative male morphologies are common in a wide range of organisms and particularly extreme in horned beetles. Here, large males (majors) commonly develop extravagant weaponry such as horns or enlarged mandibles, whereas small males (minors) develop only rudimentary traits. In some taxa, including the genus Onthophagus, the transition from minors to majors occurs over a very small range of body sizes causing intermediate morphologies to be rare or absent from natural populations. Several studies have shown that majors use horns as weapons during male combat over females and that the possession of horns increases male fighting success, and presumably fitness. However, the advantages of a hornless morphology, if any, have remained elusive. Here the alternative male morphs are examined in the horn-polyphenic beetle Onthophagus nigriventris. In particular, the hypothesis was tested that lack of horns in minors increases their maneuverability inside tunnel systems in which these males sneak matings from major males. Using a simple behavioral assay the effects of horn possession on maneuverability were quantified inside an artificial tunnel. Minors were found to be significantly more mobile compared to majors. No such differences were found in mobility between similarly small and large females, which always lack horns. This suggests that mobility differences observed among male morphs are due to the presence or absence of horns rather than differences in body size. This notion was further supported in a second experiment in which surgical removal of horns significantly improved maneuverability, while subsequent re-attachment of horns reversed this effect. These results suggest that lack of horns increases male maneuverability inside tunnels and may thus be advantageous in the context of the particular social niche inhabited by minor males. 
The results are discussed in the context of the evolutionary ecology of horn-polyphenic beetles. Introduction Alternative male phenotypes are widespread in species with intense sexual selection, and often involve the expression of alternative aggressive fighter and non-aggressive sneaker morphs among competing males (Andersson 1989; West-Eberhard 2003). In most cases studied thus far, fighter-sneaker dimorphisms are closely tied to male body size, with physically larger males engaging in fighting behavior to acquire mating opportunities, whereas smaller males engage in non-aggressive sneaking behaviors (Moczek and Emlen 2000). In many taxa such size-dependent expression of reproductive behaviors is also thought to have facilitated the evolution of corresponding alternative morphologies, such as exaggerated weaponry in fighter but not sneaker morphs (Shuster and Wade 2003). Discontinuous, size-dependent expression of male secondary sexual traits is particularly conspicuous in many beetle taxa, including the greatly exaggerated horns of many scarab beetles (Arrow 1951). Species in the genus Onthophagus often exhibit particularly extreme size-dependent expression of male horns, largely determined by differences in quantity and quality of food provisioned for larvae by their mothers in the form of brood balls. Typically, only males with access to optimal feeding conditions eclose at a large body size and express fully developed horns, whereas males with access to suboptimal feeding conditions eclose at a smaller body size and remain largely, or entirely, hornless (Emlen 1994; Hunt and Simmons 1997; Moczek and Emlen 1999). The transition from largely hornless (minor) morphs to fully horned (major) morphs often occurs over a surprisingly narrow body size range. As a consequence of this threshold action, natural populations often exhibit a bimodal distribution of horn lengths, and intermediate phenotypes are typically rare. 
Several studies have now shown that beetle horns are used primarily as weapons in male-male combat (Eberhard 1978, 1979, 1982; Otronen 1988; Rassmussen 1994) and that the possession of long horns measurably improves a male's chances of winning fights against other males (Emlen 1997; Moczek and Emlen 2000). Thus, large males that engage in fighting behavior clearly benefit from the expression of large horns. However, the absence of horns in small males and the paucity of intermediate morphologies in natural populations are far less well understood, and only two studies have provided some insight into the possible selective significance of hornlessness (Moczek and Emlen 2000; Hunt and Simmons 2001). Several hypotheses have been proposed to explain why smaller males exhibit greatly reduced horn expression, and why the transition from minor to major males often occurs over an extremely short range of body sizes, causing intermediate morphologies to be rare in nature. Hunt and Simmons (1997) observed a positive correlation between extent of horn expression, length of larval development, and larval mortality in the horn-polyphenic beetle Onthophagus taurus. This result suggested that by remaining hornless small males may be able to avoid these costs; however, a subsequent more detailed study (Moczek and Nijhout 2002) failed to replicate Hunt and Simmons's (1997) original correlation. Alternatively, Nijhout and Emlen (1999; see also Emlen 2001; Moczek and Nijhout 2004) showed that growth of horns appears to trade off with the growth of other structures during larval development, such as eyes, antennae, wings, or genitalia. This suggested that individuals that develop disproportionately large horns may be constrained to develop disproportionately smaller, and possibly less functional, versions of other traits, and that smaller males may be able to avoid such costs by remaining hornless. 
Both hypotheses help explain why small males may gain a selective advantage by expressing relatively smaller horns, but fail to explain the sudden transition from largely hornless to fully horned male shapes observed in many species. A third hypothesis, originally put forward by Emlen (Emlen 1997; Moczek and Emlen 2000), addressed this issue by suggesting that hornlessness may be a direct adaptation to the social niche inhabited by small males. Small male Onthophagus typically rely on a high degree of agility inside a complex tunnel system underneath dung pads to locate and mate with females in the presence of horned guarding males. The possession of horns may reduce male maneuverability inside tunnels and thus be directly detrimental to the performance of males that engage in sneaking behaviors. If correct, this would suggest that fighting and sneaking behaviors generate a disruptive selection environment, favoring long horns in males large enough to profitably engage in fighting behavior, but lack of horns in smaller males. In turn, this would also help explain the sudden transition from horned to hornless morphs observed in natural populations, as males with intermediate morphologies would be expected to be inferior fighters and sneakers and thus selected against in both contexts (Emlen and Nijhout 2000). However, although intuitively appealing, evidence in favor of a mobility handicap due to horn possession is largely anecdotal, and only a single study has been able to provide some supporting behavioral data on one species of horn-dimorphic beetle (Moczek and Emlen 2000). Furthermore, a subsequent study on the same species (Hunt and Simmons 2001) was unable to detect a significant negative effect of horn length on the fitness of minor males, contrary to what would be expected if disruptive selection was operating on male morphology. 
Thus, the evidence available to characterize the selective conditions that shape male morphological diversity in horned beetles remains scarce, and further studies on a wider range of species are clearly needed. Here we focus on a previously unstudied species of horn-dimorphic beetle, Onthophagus nigriventris, which expresses one of the more extreme male dimorphisms of the genus. A straightforward behavioral approach is used in combination with phenotypic manipulations to test experimentally whether horn possession measurably affects male maneuverability in this species. Large male Onthophagus nigriventris express a single, long and curved, medial pronotal horn, produced during a period of explosive growth in the prepupal stage late in larval development (reviewed in Moczek 2006). This large prothoracic horn is reduced to a short and pointy rudiment in small males and absent in all females. In addition, large male adults also express a small, more posterior thoracic outgrowth in a location similar to the horn rudiment of small males. However, unlike the long horn in large males and the horn rudiment in small males, this second, more posterior outgrowth of large males arises developmentally from sculpting and retraction of pupal horn tissue around the area of the final outgrowth rather than from active growth. Similar sculpting and retraction of pupal tissue do not take place in small males and females (Moczek 2006). Males exhibit great discontinuity in the scaling relationship between body size and horn length (Figure 1), and the transition from rudimentary to complete horn expression occurs over a narrow body size range. The sigmoid nature of the scaling relationship is typical for beetles in this genus, and effectively divides males into two relatively discrete morphs. Intermediate morphologies exist but are relatively rare (Figure 1). O. 
nigriventris breeds, develops, and behaves similarly to other onthophagine species studied previously (Cook 1990; Emlen 1997; Moczek and Emlen 2000; Moczek et al. 2002). Female O. nigriventris reproduce by provisioning dung in the form of brood balls in subterranean tunnels. Brood balls contain a single egg and represent the sole food available to developing larvae, which complete larval and pupal development inside. Male O. nigriventris compete for access to tunnels and females. Material and Methods Onthophagus nigriventris is a dung beetle native to the highlands of Kenya, but exotic populations exist in Australia and Hawaii. Animals used in the present study were part of a laboratory colony derived from a population on Manoa, Hawaii, and maintained in an insectary at Indiana University on a 16h:8h light:dark cycle under ad libitum food conditions. Maneuverability assay Maneuverability was quantified by allowing beetles to run through and turn around inside an artificial tunnel consisting of a clear plastic tube with a 13 mm interior diameter. Turning around inside tunnels is a task performed by males and females on a regular basis, as observed during pilot observations of O. nigriventris using ant farms (Moczek, unpublished; for similar observations in other species see Emlen 1997, Moczek and Emlen 1999). Tunnels dug by O. nigriventris vary in diameter from about 6mm (dug by small individuals) to approximately 18mm at tunnel intersections. Using trial runs at a variety of tunnel diameters, we determined 13mm as the tunnel diameter that allowed >95% of all males to eventually complete a turn-around inside the tunnel. At the beginning of the assay we placed a beetle head first into one end of the tunnel while a bright light stimulus was presented at the other end. Beetles responded reliably to the light stimulus by walking toward the light. Light orientation was then reversed, which invariably caused beetles to attempt to turn around inside the tunnel. 
We quantified maneuverability by measuring the time it took a beetle to completely turn around inside its artificial tunnel. Turn-around performances were recorded to the nearest 0.1 second using a handheld stopwatch. Experiment 1 To examine the effects of horn possession on maneuverability, correlated changes in body size had to be controlled for. To do so, female O. nigriventris were used as controls. Females exhibit the same range of body sizes as males yet are always hornless. We used handheld digital calipers to measure thorax widths (as an estimate of body size; for justification see Moczek and Emlen 1999) of male and female O. nigriventris and then divided them into four different categories: (i) large, horned males (6.2-7.0mm thorax width); (ii) small, hornless males (5.2-6.0mm); (iii) large females (6.2-7.0mm); and (iv) small females (5.2-6.0mm). Maneuverability of 20 individuals in each category was quantified using the assay described above. Each individual was only used once. If horn possession alone affects maneuverability, reduced performance was predicted in major males but not in minors, large females, and small females. Alternatively, if body size affects maneuverability, we predicted reduced performance in major males as well as large females. Experiment 2 In this experiment horn possession was experimentally manipulated in individual majors. Ten large, horned males were examined using the same assay as outlined above through three consecutive trials: i) with their horn intact, ii) with their horn removed, iii) with their horn re-attached. Each individual was tested three times within each experimental trial and an average was used for further analysis. We removed horns at their base using micro-scissors immediately following the first trial. Males were then given 60 minutes to recover. Horns were re-attached using CrazyGlue™ immediately following the second trial, and males were given at least 30 minutes to recover. 
Surgery did not appear to injure males. The medial horn is largely solid, contains neither muscles nor nerves, and little to no bleeding occurred following surgery. Animals were obtained from a colony fed ad libitum at the time of the experiment, and great care was taken to keep beetles in a moist environment throughout the entire experiment to minimize any effects of dehydration on performance. At the end of the experiment beetles were released back into the colony and remained alive for at least several weeks following the experiment, as observed through weekly clean-up of dead animals. If horn possession impedes male maneuverability, elevated performance (i.e. shorter turn-around times) was predicted in males whose horns had been removed, and a reversal of this effect after horns were re-attached. Statistical analysis Two-tailed t-tests were used to compare performances in experiment 1. Results are reported as the p-value, followed by the degrees of freedom and the critical t value. To analyze results from experiment 2, Wilcoxon paired signed-rank tests were used to test for differences in individual performance after horn removal and re-attachment, respectively. Results are reported as the p-value, followed by the W+ and W− test statistics. Results Pilot observations using ant farms similar to those used by Moczek and Emlen (2000) indicated that in order to access breeding females, O. nigriventris males use a behavioral repertoire largely similar to that of other Onthophagus species (Moczek, unpublished). In each of eight fights staged between two fully horned males, competitors assumed a characteristic fighting position (Figure 2). Opponents attacked each other head on, but the dorsoventral orientation of opponents within tunnels was opposite to one another (Figure 2). By attacking each other head on yet with backs oriented in opposite directions, males were able to interlock with their horns in a peculiar fashion. During fights, the long medial anterior horn of each male smoothly fit around the prothorax of his opponent. 
In some cases the tip of the horn is inserted into the space between thorax and abdomen, while the short, more posterior horn and associated hollowing in the cuticle anterior to it served as a receptacle of the short posterior horn of the opponent. Interlocked in this fashion, beetles engaged in shoving contests lasting 9.3 (±3.1) minutes (n = 8; for an excellent, detailed examination of a similar morphological situation see .). Males entering the tunnel were either expelled by the tunnel owner or managed to maneuver their opponent to a location in the tunnel where they could pass him, turn around, and expel the former owner themselves. In all 8 fights observed, winners remained with the female for at least the next 24 hours of brood ball production. In none of these cases did losers attempt to re-enter the tunnel for at least 2 hours following their initial defeat. Hornless, minor males, on the other hand, quickly withdrew from fights in each trial (mean fight duration 1.5 (±0.34) minutes; n = 6 staged fights between one horned and one hornless male) yet in each case remained within the vicinity of the tunnel entrance and repeatedly attempted to re-enter the tunnel, however without success in any of the 6 trials observed. An obvious use for the rudimentary horn in small males was not determined. Combined, these pilot observations suggested that apart from a species-specific fighting position, male O. nigriventris seem to rely on a qualitatively very similar behavioral repertoire as has already been documented in great detail for other onthophagine species (O. binodis: Cook 1990; O. acuminatus: Emlen 1997; O. taurus: Moczek and Emlen 2000; Moczek et al. 2002). We therefore focused our experiments to address a particular question largely unexplored by previous studies: does horn possession reduce maneuverability in horn-dimorphic beetles, or inversely, do hornless males experience increased maneuverability by not investing in the development of large horns? 
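The statistical procedures described in the Methods map directly onto standard SciPy calls. The sketch below is illustrative only: the study's raw turn-around times are not published, so the group means, spreads, and effect sizes here are invented placeholders, matching only the design (20 independent males per group in Experiment 1; 10 paired before/after measurements in Experiment 2).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Experiment 1: independent groups -- 20 horned vs 20 hornless males.
# Means and spreads are hypothetical, not the study's measurements.
horned = rng.normal(30.0, 8.0, 20)          # turn-around times (s)
hornless = rng.normal(18.0, 8.0, 20)
t_stat, p_val = stats.ttest_ind(horned, hornless)   # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Experiment 2: paired design -- the same 10 males before/after horn removal.
intact = rng.normal(30.0, 6.0, 10)
removed = intact - rng.normal(8.0, 2.0, 10)         # faster without the horn
w_stat, p_wilcoxon = stats.wilcoxon(intact, removed)
print(f"W = {w_stat:.1f}, p = {p_wilcoxon:.4f}")
```

The paired Wilcoxon test is the appropriate choice for Experiment 2 because each male serves as his own control across the intact/removed/re-attached trials, so the independence assumption of the t-test does not hold there.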
Experiment 1 Large, horned males required significantly more time to completely turn around inside artificial tunnels compared to their small, hornless counterparts (p = 0.006; df = 48, critical t = 2.01; Figure 3). Large and small females, however, performed equally well (p = 0.74; df = 48, critical t = 2.01) and similarly to small males (Figure 3). These results suggest that differences in performance between male morphs cannot be attributed to differences in body size, but instead appear to be due to the presence or absence of a horn. Experiment 2 Individual majors with intact horns performed similarly to the large, horned males in Experiment 1. Horn removal, however, significantly reduced turn-around times, and thus increased maneuverability (p = 0.039; W+ = 54, W− = 1). This effect was reversed once horns were re-attached, resulting in a significant reduction of maneuverability back to the original level (p = 0.0059; W+ = 2, W− = 53; Figure 4). These results suggest that horns alone may be sufficient to impose a drastic reduction in male maneuverability inside tunnels. Discussion The results of both experiments suggest that horn possession alone is sufficient to impose a possibly significant mobility handicap on horned males. In the first experiment small, hornless males consistently outperformed their large and horned counterparts. Interestingly, no difference was found in performance between small and large females, which instead both performed similarly to small hornless males, which suggests that size itself may have a negligible effect on beetle mobility. In the second experiment surgical removal of horns similarly improved male performance, an effect that was reversed once horns were re-attached. The results thus support the hypothesis that the absence of large horns in small males improves male maneuverability inside tunnels. However, increased maneuverability may clearly not be the only advantage hornlessness can convey to a minor male. 
By not initiating horn growth, small males may also be able to allocate resources to other structures whose normal function would either otherwise be compromised (such as eyes or antennae, as suggested by Nijhout and Emlen 1999; Emlen 2001), or whose function would be improved beyond that of large horned males. A particularly interesting candidate structure is the testes, whose size plays an important role in determining a given male's ability to increase his reproductive success via sperm competition (Simmons et al. 1999; Tomkins and Simmons 2000). While no data are available for O. nigriventris, studies on other onthophagine species have found that minor males have indeed developed significantly larger testes and ejaculate volumes compared to their major male counterparts (Simmons et al. 1999; Tomkins and Simmons 2000). If correct, this would make the increase in maneuverability due to lack of horns reported here an added advantage to minor males. Our study faces at least three possibly important limitations. First, the mobility assay relied on the use of an artificial, horizontal tunnel made of plastic tubing and light stimuli to manipulate beetle behavior. In nature, beetles run, compete, and mate inside subterranean tunnels dug through soil or sand (Cook 1990; Emlen 1997). Tunnels are of varying diameter and orientation, and may intersect with other tunnels (Moczek and Emlen 2000). Except near tunnel entrances, beetles typically behave in complete darkness. Even though the diameter of the experimental tunnel was within the range of natural tunnel diameters, the assay thus relied on a relatively artificial environment to quantify male maneuverability. On the other hand, a critical advantage of the assay was that it could be standardized reliably across treatment groups. Performance differences between treatment groups are therefore unlikely to be due to the assay; rather, they are likely to reflect real differences in agility as a function of male horn phenotype. 
Surprisingly, relatively moderate sample sizes for each experiment were sufficient to detect measurable, and highly significant, mobility differences between horned and hornless male morphs, suggesting that horn possession has an immediate and drastic effect on male mobility inside tunnels. While the magnitude of a mobility handicap is therefore clearly, at least in part, a function of the magnitude of horn development, we believe that the results can be extrapolated to other species within this genus.

A second, related consideration concerns the importance of the spatial context within which animals behave. A mobility advantage to small, hornless males may only exist, or be significant, in instances where individuals have to perform in confined spaces such as subterranean tunnels. This certainly applies to all species of the genus Onthophagus and to many other dung beetle genera (e.g. Phanaeus, Caccobius) that also feature horned species, but it does not apply to many other taxa of horned beetles, including the often spectacularly horned species in the subfamily Dynastinae (Siva-Jothy 1987; Kawano 1995, 2002; Mizunima 1999). Here, fights occur arboreally, outside the confines of a tunnel, and it remains to be investigated whether fights over access to entrances to nesting or feeding sites may have the potential to impose their own spatial constraints that could magnify mobility advantages to small, hornless, sneaking males.

The third and most conceptual challenge to our study lies in the fact that we were unable to measure the fitness consequences of reduced or enhanced mobility. While the results suggest that the absence of horns increases maneuverability inside tunnels, this increase may have no effect on the reproductive success of minors and thus be selectively neutral. This is particularly noteworthy since a previous study also found behavioral evidence in support of a mobility handicap in horned O.
taurus (Moczek and Emlen 2000), while a subsequent study on the same species failed to detect negative fitness effects of horn length on minor males (Hunt and Simmons 2001). It remains unclear at this point whether the absence of such negative fitness effects is indeed characteristic of onthophagine mating systems in general, confined to the particular species under study, or a limitation of the experimental design, which quantified fitness as fertilization success over a 5-day period inside plastic buckets (Hunt and Simmons 2001). On the other hand, results from several other studies suggest that the reproductive success of minor males, especially if quantified over an individual's lifetime and under more natural conditions, is likely to be profoundly impeded by the possession of horns. Minor males rely on speed, agility, and reduced copulation duration to sneak matings from physically superior horned, major males (O. binodis: Cook 1990; O. acuminatus: Emlen 1997; O. taurus: Moczek and Emlen 2000). In O. taurus, for example, mating success appears to be directly related to a hornless male's ability to circumvent a guarding male during a sneaking attempt. If successful, horned males appear to be unable to sense the presence of a sneaker male and ignore extra-pair copulations. However, if hornless males do make contact with horned guarding males, e.g. by failing to retreat fast enough after a successful sneaking attempt, this is invariably followed by a mating between the guarding male and the focal female (Moczek and Emlen 2000). This in turn is likely to severely detract from the sneaker male's fertilization success due to a generally high last-male fertilization advantage in onthophagine beetles (Hunt and Simmons 2000). The present study was able to detect a highly significant mobility difference as a function of horn possession using single replicates and a moderate sample size.
If these results are representative, the lifetime mobility advantage of hornless males is likely to be dramatic and, as a consequence, likely to positively affect the fitness of small, hornless males that do not rely on horns as weapons in male-male combat. If correct, this suggests that sneaking behavior in small males may indeed favor a hornless phenotype, opposite to the horned phenotype favored in males large enough to profitably engage in fighting behavior. In this scenario, males with intermediate horn sizes would be expected to be inferior fighters and sneakers, because an intermediate horn is not large enough for fighting with large, horned males yet would at the same time impose a significant handicap on a sneaking male, as shown here. Thus, intermediates should be selected against in the context of either reproductive tactic. Combined, this would help explain the selective advantage of genotypes capable of facultative, size-dependent expression of hornless and horned male phenotypes, and the often sudden, threshold-like transition between alternatives commonly observed in natural populations of onthophagine beetles. Clearly, direct estimates of male fitness as a function of body size and horn length in this and additional species will have to follow to further evaluate the significance of our results and of horns as mobility handicaps in horned beetles in general.