A Fast Fourth-Order Method for the 3D Helmholtz Equation with Neumann Boundary Conditions

We present a fast fourth-order finite difference scheme for the 3D Helmholtz equation with a Neumann boundary condition. We employ the discrete Fourier transform operator to divide the problem into independent subproblems. By means of Gaussian elimination in the vertical direction, the problem is reduced to a small system on the top layer of the domain. The computation of the numerical solution is accelerated by the sparsity of the Fourier operator, with a space complexity of O(M^3). Furthermore, the method makes it possible to solve the 3D Helmholtz equation on grids with a large number of points. The accuracy and efficiency of the method are validated on two test examples with exact solutions.

Introduction

The Helmholtz equation arises from the general conservation laws of physics and can be interpreted as a wave equation. It is widely applied in scientific and engineering design problems. Many methods have been proposed for solving Helmholtz equations, such as the finite difference method [1], the finite element method [2] [3] [4], the spectral method [5] [6] and other methods [7] [8] [9]. However, the computational cost of the finite element method increases greatly for large wave number problems. Additionally, the boundary element method is limited to constant-coefficient problems. Finite difference schemes provide the simplest and least expensive avenue for achieving high-order accuracy; some high-order algorithms are proposed in [10] [11] [12] [13]. In this paper, we derive a fourth-order finite difference scheme using 19 points for solving the three-dimensional Helmholtz equation.

The discretization of the fully three-dimensional Helmholtz equation contains a large number of unknowns and requires considerable memory. The time and space complexity grow rapidly as the number of grid points increases. In the meantime, to maintain a given accuracy, the mesh must be refined as the wave number increases. Some parallel algorithms are presented in [14] [15]; however, parallelism alone cannot settle the conflict between the number of grid points and the capacity of the computer hardware.

The fast Fourier transform is a powerful technique for solving the Helmholtz equation in both two and three dimensions [16] [17]; however, the fast algorithm in [18] still requires substantial computational cost. In light of this, we propose a fast algorithm for solving the three-dimensional Helmholtz equation. The fast operator applies an inexpensive transformation that breaks the large discretization matrix into small, independent systems, so the equation on the whole region is divided into small equations along the vertical direction. Meanwhile, the algorithm saves memory and requires less computational time thanks to the sparsity of the fast operator. By introducing Gaussian elimination and using the Neumann boundary condition in the vertical direction, the problem is reduced to one on the aperture.

The paper is outlined as follows. In Section 2, a fourth-order finite difference method for the Helmholtz equation is derived. In Sections 3 and 4, a fast algorithm is proposed based on the Fourier transformation and Gaussian elimination. Two numerical experiments with the fast fourth-order algorithm are presented in Section 5. The paper is concluded in Section 6.
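The fast operator rests on a classical fact: a tridiagonal Toeplitz matrix coming from a second-difference stencil is diagonalized by the discrete sine transform. The following sketch (a minimal illustration in Python; the 1D second-order stencil and the grid size are simplifications chosen for illustration, not the paper's actual fourth-order 19-point matrices) verifies this property numerically.

import numpy as np
from scipy.fft import dst

# Illustrative 1D size; the paper works with M x N x L grids in 3D.
M = 64
h = 1.0 / (M + 1)

# Second-difference matrix tridiag(1, -2, 1)/h^2 with homogeneous
# Dirichlet values at the walls (the sine-transform-friendly case).
A = (np.diag(np.full(M, -2.0))
     + np.diag(np.ones(M - 1), 1)
     + np.diag(np.ones(M - 1), -1)) / h**2

# Known eigenvalues of A under the type-I discrete sine transform.
k = np.arange(1, M + 1)
lam = (2.0 * np.cos(k * np.pi / (M + 1)) - 2.0) / h**2

# The orthonormal DST-I is symmetric and self-inverse, so S A S = diag(lam):
# applying A in physical space matches scaling by lam in sine space.
x = np.random.default_rng(0).standard_normal(M)
lhs = dst(A @ x, type=1, norm='ortho')
rhs = lam * dst(x, type=1, norm='ortho')
assert np.allclose(lhs, rhs)

In three dimensions the same transform acts direction by direction through the Kronecker structure, which is what decouples the global system into independent problems along the vertical direction.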
Fourth-Order Finite Difference Method

The model problem is posed on a cubic domain Ω with a Neumann boundary condition on Γ, where k is the wave number, Γ is one of the faces of the domain, and b(x, y, z) and g(x, y) are known functions. The Helmholtz equation is approximated by a fourth-order finite difference discretization: the 19-point stencil with mesh size h yields a linear system in which the symbol ⊗ denotes the Kronecker product; I_M, I_N, I_L and I_MNL are identity matrices whose subscripts denote their dimensions; A_M, A_N and A_L are M×M, N×N and L×L tridiagonal Toeplitz matrices, respectively; and Φ_B and F_B are the boundary parts of Φ and F.

A Fourier-sine transformation can be applied to these matrices to accelerate the algorithm. Multiplying A_M and A_N on both sides by the discrete Fourier-sine transformation matrices S_M and S_N diagonalizes them, and the transform of A_L can be defined in the same way. Therefore, multiplying both sides of Equation (4) by the transformation decouples the system into independent subproblems in the vertical direction, indexed by i = 1, 2, ..., M and j = 1, 2, ..., N. The sparse structure of S_M is shown in Figure 1.

In this paper, we take Γ to be the top surface of the domain; the treatment extends to the general situation. Since the solutions on the other surfaces are already known, we only need to extract S_B^top, which contains the corresponding part of S_B. Next, we use Gaussian elimination with row partial pivoting to solve Equation (7): first, an LU decomposition is constructed for each block P_ij, i.e. P_ij = L_ij U_ij.

Discretization of the Neumann Boundary Condition

The fourth-order finite difference discretization of the Neumann condition in Equation (2) is obtained by using a fourth-order substitution for φ_zzz. Multiplying both sides of Equation (12) by the Fourier-sine transformation and rewriting the result in matrix form, and transforming Equation (15) in the same way, we combine Equation (11) and Equation (17) to derive a linear system on the top layer. Solving this system and back-substituting in the vertical direction, we finally obtain the numerical solution of the 3D Helmholtz equation.
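After the sine transforms, each index pair (i, j) leaves a small tridiagonal system along the vertical direction. As an illustration of how cheaply such systems are solved, the sketch below uses banded LU elimination via SciPy (the matrix entries are hypothetical placeholders, not the paper's actual blocks P_ij).

import numpy as np
from scipy.linalg import solve_banded

# One decoupled vertical system: tridiagonal, size L (illustrative values).
L = 128
lower = np.ones(L - 1)              # sub-diagonal
diag = -2.0 * np.ones(L) + 0.3      # main diagonal, shifted by the Helmholtz term
upper = np.ones(L - 1)              # super-diagonal
rhs = np.random.default_rng(1).standard_normal(L)

# solve_banded expects banded storage: row 0 holds the super-diagonal,
# row 1 the main diagonal, row 2 the sub-diagonal.
ab = np.zeros((3, L))
ab[0, 1:] = upper
ab[1, :] = diag
ab[2, :-1] = lower

u = solve_banded((1, 1), ab, rhs)   # O(L) work for each of the M*N systems

Each system costs O(L) operations, so solving all M·N of them is linear in the total number of unknowns.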
Numerical Experiments

In this section, two numerical experiments are presented to test the validity and efficiency of the proposed method. Both experiments are implemented in MATLAB. All the equations are solved by the BiCG method. The equations in the two examples are solved on a cube, and the exact solutions involve products of sin(πx), sin(πy) and sinh(2πz) terms.

Table 1 fully corroborates the theoretical convergence rate of the proposed method. A good accuracy (10^-7) is achieved with a small number of grid points (16-32 in each direction). With a space complexity of O(M^3), the sparsity of the Fourier operator accelerates the solution of the three-dimensional Helmholtz equation. Moreover, a comparison of the computational times of the two transform operators, corresponding to applying the Fourier transformation three times and twice, respectively, is reported in Table 1. As we can see from Table 1, the algorithm proposed in this paper saves much computational time and makes it possible to solve the equation with a large number of grid points. Meanwhile, we give the numerical solutions of Equation (19) in the whole domain and on the top face; the numerical solutions U for different wave numbers are plotted in Figure 4 and Figure 5. As shown there, the solutions of the Helmholtz equation are highly oscillating for large wave numbers.

Conclusion

We propose a fast high-order method for solving the 3D Helmholtz equation with a Neumann boundary condition. The Fourier operator is used to generate the block-tridiagonal structure of the discretization of the Helmholtz equation. Moreover, by using Gaussian elimination in the vertical direction, the Helmholtz equation is reduced to a linear system on the top layer of the domain. The validity and efficiency of the method are tested by two numerical experiments.

Table 1. Convergence rate and comparisons of computational time (s) for solving Example 1 with different operators.
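The convergence rates reported in Table 1 are the standard observed orders computed from errors on successively halved meshes. A small helper of this kind reproduces the calculation (the error values below are hypothetical, not those of Table 1; a fourth-order scheme should give orders close to 4).

import numpy as np

def observed_order(errors):
    """Observed convergence order between successive grid halvings:
    p = log2(e_h / e_{h/2}) for each consecutive pair of errors."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# Hypothetical max-norm errors on grids with 8, 16, 32 points per direction.
errors = [3.1e-4, 1.9e-5, 1.2e-6]
print(observed_order(errors))   # approximately [4.03, 3.98]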
Urban Catchment-Scale Blue-Green-Gray Infrastructure Classification with Unmanned Aerial Vehicle Images and Machine Learning Algorithms

Green infrastructure (GI), such as green roofs, is now widely used in sustainable urban development. An accurate mapping of GI is important to provide surface parameterization for model development. However, accurately and precisely mapping GI at the small catchment scale is still a challenge. We proposed a framework for blue-green-gray infrastructure classification using machine learning algorithms and unmanned aerial vehicle (UAV) images that contained digital surface model (DSM) information. We used the campus of the Southern University of Science and Technology in Shenzhen, China, as a study case for our classification method. The UAV was a DJI Phantom 4 Multispectral, which measures the blue, green, red, red-edge, and near-infrared bands and DSM information. Six machine learning algorithms, i.e., fuzzy classifier, k-nearest neighbor classifier, Bayes classifier, classification and regression tree, support vector machine (SVM), and random forest (RF), were used to classify blue (including water), green (including green roofs, grass, trees (shrubs), bare land), and gray (including buildings, roads) infrastructure. The highest kappa coefficient was observed for RF and the lowest for SVM, with coefficients of 0.807 and 0.381, respectively. We optimized the sampling method based on a chessboard grid and obtained an optimal sampling interval of 11.6 m, which increased the classification efficiency. We also analyzed the effects of weather conditions, seasons, and different image layers, and found that images taken on overcast days or in winter could improve the classification accuracy. In particular, the DSM layer was crucial for distinguishing green roofs from grass, and buildings from roads. Our study demonstrates the feasibility of using UAV images in urban blue-green-gray infrastructure classification, and our infrastructure classification framework based on machine learning algorithms is effective. Our results could provide the basis for future urban stormwater management model development and aid sustainable urban planning.

INTRODUCTION

Green infrastructure (GI) is a collection of areas that function as natural ecosystems and open spaces (Benedict and McMahon, 2006; Palmer et al., 2015), and it can maintain and improve the quality of air and water and provide multiple benefits for people and wildlife (Palmer et al., 2015; Benedict and McMahon, 2006; Environmental Protection Agency, 2015; Hashad et al., 2021). As an important part of urban ecosystems (Hu et al., 2021), GI provides green spaces for cities and benefits people's physical and mental health (Venkataramanan et al., 2019; Zhang et al., 2021). In addition, GI can alleviate urban flooding and the urban heat island effect (Venkataramanan et al., 2019; Dai et al., 2021; Ouyang et al., 2021; Bartesaghi-Koc et al., 2020), and accelerate sustainable development (Hu et al., 2021). GI and other infrastructures are important land types that have different runoff coefficients, which are essential for stormwater management models and urban energy balance models (Cui and Chui, 2021; Yang et al., 2021). Nitoslawski et al. (2021) pointed out that it is valuable to use emerging technologies to study urban green infrastructure mapping. However, related studies to date have only classified and mapped part of the infrastructure. For example, Narziev et al.
(2021) mapped irrigation systems, while Man et al. (2020) and Furberg et al. (2020) mapped urban grass and trees. There is a need to perform a more comprehensive classification and mapping of infrastructures. Generally, infrastructures are facilities needed by society, whereas land covers are delineated based on their natural and physical characteristics (Environmental Protection Agency, 2019; Gregorio and Jansen, 2000). For example, a green roof, one kind of GI, can be used to reduce runoff and improve the aesthetics of a building, but it is not a land cover class. To the best of our knowledge, there are no specific methods for the classification of GI. Boonpook et al. (2021) pointed out that the distinction between green roofs and ground grass is difficult because their spectral information is similar. Moreover, GI is usually scattered over urban areas and takes different forms at a fine spatial scale. The mapping of GI based on remote sensing images with insufficient spatial resolution or fewer data features has a large uncertainty (Bartesaghi-Koc et al., 2020). In recent decades, using remote sensing for automatic classification and mapping of infrastructure has been valuable for avoiding manual identification, which is time-consuming and laborious (Shao et al., 2021). Satellites, airborne vehicles, and unmanned aerial vehicles (UAVs) have usually been used to obtain images as inputs for automatic classification and mapping. Satellites can collect data and make repeated observations at regular intervals, even in difficult-to-reach locations. Satellite images have been widely used for GI identification. For example, Gašparović and Dobrinić (2021) used satellite synthetic aperture radar images to identify water, bare soil, forest, and low vegetation, and Furberg et al. (2020) used satellite images to analyze the changes in urban grassland and forest. However, the acquisition of satellite images is strongly affected by atmospheric cloud conditions (Wang et al., 2019), and the accuracy and precision are limited by the spatial resolution (Bartesaghi-Koc et al., 2020; Wang et al., 2019), especially in urban areas with complex features (Furberg et al., 2020). Airborne vehicles provide high-resolution images and can adjust the angles, positions, and instruments as required (Alakian and Achard, 2020). For example, Man et al. (2020) extracted grass and trees in urban areas based on airborne hyperspectral and LiDAR data. However, the costs of airborne vehicles are prohibitive and they require logistics management (Bartesaghi-Koc et al., 2020). Nowadays, the cost of UAVs is decreasing (Wang et al., 2019) and UAVs can carry a variety of sensors (e.g., multispectral cameras, LiDAR, and thermal infrared cameras) that can obtain targeted high-resolution data on a centimeter scale (Jiang et al., 2021). The amount of research on GI using UAV data has been increasing; for example, urban GI thermal effects (Khalaim et al., 2021), GI vegetation health (Dimitrov et al., 2018), and the classification of plant species (Fan and Lu, 2021; Jiang et al., 2021; Miura et al., 2021) have been investigated. Therefore, UAV data are more suitable for small catchment studies.
Machine learning algorithms, such as fuzzy classifier (FC) (Trimble Germany GmbH, 2014a; Cai and Kwan, 1998), k-nearest neighbor classifier (KNN) (Bai et al., 2021; Li et al., 2016), Bayes classifier (Bayes) (Han et al., 2012; Brunner et al., 2021), classification and regression tree (CART) (Li et al., 2016; Zhang and Yang, 2020), support vector machine (SVM) (He et al., 2007; Ismail et al., 2021), and random forest (RF) (Li et al., 2016; Dobrinić et al., 2021) algorithms, have been widely used in automatic land surface classification, especially in land use/cover classification. For example, Zhang and Yang (2020) improved land cover classification based on the CART method, and Dobrinić et al. (2021) built an accurate vegetation map using an RF algorithm. However, these algorithms have not yet been effectively applied to infrastructure classification in small urban catchments, and the optimal algorithm is still unclear. At present, although UAVs have advantages in vegetation identification, such as the classification of crops, trees, and grass species (Garzon-Lopez and Lasso, 2020; Fan and Lu, 2021; Jiang et al., 2021; Miura et al., 2021; Sudarshan Rao et al., 2021; Wicaksono and Hernina, 2021), the application of UAVs to infrastructure classification is still rare, and machine learning algorithms have not been widely used for this task. In the present study, we take green infrastructure (all different green open spaces), blue infrastructure (surface water bodies), and gray infrastructure (artificial structures without vegetation) as the study objects, classify the blue-green-gray infrastructure using a UAV at a small catchment scale (i.e., 0.1-10 km²), and develop a high-resolution object-based method using machine learning algorithms. In addition, we optimize the sampling method and discuss the effects of weather conditions, seasons, and different image layers on classification.

Study Area and Data Acquisition

The Southern University of Science and Technology (SUSTech) is located in Shenzhen, China (Figure 1), which has a subtropical monsoon climate with an annual mean precipitation of 1935.8 mm (Meteorological Bureau of Shenzhen Municipality, 2021; Hu et al., 2008). The whole area of the campus is about 2 km². There are hills in the northern and central parts of the campus, and a river runs through the southern part (Figure 1). The combination of terrain and campus walls creates a small catchment. The vegetation consists mainly of plantation communities, including lemon eucalyptus (Eucalyptus citriodora Hook. f.) community, acacia mangium (Acacia mangium Willd.) community, and lychee (Litchi chinensis Sonn.) forest (Hu et al., 2008). Buildings with various forms of roofs, asphalt roads and permeable pavements form a mosaic across the campus, and there are also lakes, streams and a river. Although they were built in different periods, all of them were constructed within the last 10 years. Pictures of them can be found in Supplementary Figure S1. There are several types of GI (e.g., green roofs, trees, grass, and bare land), blue infrastructure (e.g., water), and gray infrastructure (e.g., buildings and roads) distributed across the campus (Figure 1). To test the application of UAV images and the method of classifying different types of infrastructure, the SUSTech campus was chosen as the study area. Flying missions were performed when the weather conditions were feasible (generally sunny) and the wind was below force 4 (wind speed <6 m/s), at regular intervals of 2-4 weeks.
Due to the battery capacity of the UAV, we divided the study area into nine subareas to perform the flying missions. Each mission was carried out between 11:00 and 13:30 on two adjacent days, and we acquired a total of about 24,600 images from 4,100 photo locations. The UAV records precise position information, which can be used for post-processed kinematics (PPK) to synthesize the images (https://www.drdrone.ca/pages/p4-multispectral). We used DJI Terra (version 3.0) with an orthophoto image correction algorithm to synthesize the images and generate the RGB orthophoto image, the spectral images for the five bands, and the digital surface model (DSM).

Classification Algorithms

In this study, six widely used machine learning algorithms were compared. Descriptions, advantages, and disadvantages of the algorithms are shown in Table 1. Detailed explanations and the hyperparameter settings associated with the algorithms are given in Supplementary material S1.

Methodology

Figure 2 shows the steps for extracting and classifying the blue-green-gray infrastructure based on the UAV images. Firstly, the input images and the RGB orthophoto image were retrieved from the UAV images. Secondly, using the RGB orthophoto image and a field survey, we created the training and validation samples for the different kinds of infrastructure. Thirdly, based on the input images and training samples, we trained the algorithms and obtained the trained results. Finally, the validation accuracy and classification results were produced from the validation samples and trained results.

Sample Creation

Based on the classification categories of the European Commission (Maes et al., 2016), we classified blue-green-gray infrastructure in the SUSTech campus as water, trees (shrubs), grass, green roofs, bare land, buildings (no vegetation), and roads. Samples of these seven types were created for model training and validation of the machine learning algorithms. ArcMap (version 10.6, included in ArcGIS for Desktop, Esri) was used to pre-process input images (Supplementary Figure S2) and make the sample shapefiles for training and validation (Figure 3A). To ensure that the samples were random and the process could be repeated, we applied an equidistant sampling method (chessboard grid sampling method, Figure 3A), which made the samples uniformly distributed (Zhao et al., 2017). To avoid overlap between samples, the grid for validation was obtained by shifting the grid for training, as shown in Figure 3A. To achieve a better trade-off between classification accuracy and efficiency, we compared the results derived from samples with different sampling intervals (i.e., 2.9, 5.8, 8.7, 11.6, 14.5, and 17.4 m) in the central part of the campus (referred to as the core area, which is subarea No. 10 in Supplementary Figure S3). The classification accuracies were evaluated at the different sampling intervals, and the optimal sampling interval was determined.

Algorithm Training and Validation

The different machine learning algorithms were assessed with Trimble eCognition Developer (eCognition) (version 9.0.2) based on the object-based image analysis method (Trimble Germany GmbH, 2014a). Multiresolution segmentation was used to divide the UAV images into small objects (Figure 3B). The classes were assigned to the objects corresponding to the training samples, and the features of the image layers were extracted to the objects. Each machine learning algorithm was used for training and classification (Figure 4).
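The study performs this training step inside eCognition's object-based workflow; as an illustration of the same idea outside that software, the following sketch (with randomly generated stand-in data, and scikit-learn in place of eCognition) trains a random forest on per-object features derived from the seven image layers.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in for the object-based workflow: each row is one segmented image
# object, columns are per-object mean values of the image layers
# (B, G, R, RE, NIR, NDVI, DSM); labels are the seven classes.
rng = np.random.default_rng(42)
X_train = rng.standard_normal((500, 7))
y_train = rng.integers(0, 7, size=500)
X_val = rng.standard_normal((200, 7))

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
pred = rf.predict(X_val)   # predicted class for each validation object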
The accuracy was assessed by the error matrix based on the validation samples and trained results. We used five widely used indices, producer's accuracy, user's accuracy, mean accuracy of different classes, kappa coefficient, and overall accuracy (OA), to evaluate the classification accuracy (Talebi et al., 2014; Dobrinić et al., 2021; Wang et al., 2019; Man et al., 2020). The producer's accuracy is the ratio of the number of correctly classified objects to validation objects for a class, and the user's accuracy is the ratio of the number of correctly classified objects to classified objects for a class (Talebi et al., 2014; Dobrinić et al., 2021). The mean accuracy is the average of the producer's accuracy and user's accuracy. The kappa coefficient uses information about the entire error matrix to evaluate the classification accuracy and is calculated as (Wang et al., 2019; Man et al., 2020)

kappa = (N * Σ_{i=1}^{k} n_ii − Σ_{i=1}^{k} n_i+ n_+i) / (N² − Σ_{i=1}^{k} n_i+ n_+i),   (1)

where N is the total number of objects, k is the number of classes of the classification, n_ii is the number of correctly classified objects in class i, n_i+ is the number of objects classified as class i, and n_+i is the number of validation objects of class i.

The OA is the proportion of all sample objects that are correctly classified (Man et al., 2020), and a larger OA value means a better classification result. OA is calculated as (Wang et al., 2019)

OA = (1/N) Σ_{i=1}^{k} n_ii.   (2)

The optimal algorithm was obtained by comparing the kappa coefficients and OAs. To test the stability of the best algorithm, we selected the core area and five subareas for accuracy comparison from the total 17 subareas of the study area using a random number generator (Supplementary Figure S3). According to the trial-and-error accuracy assessment, we optimized the selected image layers further by increasing or decreasing the number of layers (see Section 3.6 for details).

Optimization of Sampling Method

Our sampling method was first implemented with the RF algorithm in the core area, which covered an area of 126,857 m² and contained all seven types of infrastructure. The accuracies of the different sampling interval scenarios for the various infrastructure types are shown in Table 2. For all infrastructure types, a finer sampling interval corresponded to a higher classification accuracy. A kappa coefficient between 0.80 and 1.00 indicates almost perfect classification, whereas a coefficient between 0.60 and 0.80 indicates substantial classification (Landis and Koch, 1977). The number of samples represents the manual sampling load, and a smaller number indicates higher efficiency. Considering the trade-off between accuracy and sampling efficiency, an optimal sampling interval of 11.6 m was used in this study.

Common methods for creating sample data include manual selection of the region of interest (Wang et al., 2019), simple random sampling, and equidistant sampling (Zhao et al., 2017). Manual selection is subjective and arbitrary, and thus the results depend on the operator. Simple random sampling is easy to perform, but it may cause polarization and give poor training results (Zhao et al., 2017). The equidistant sampling method, chessboard grid sampling, is objectively random and repeatable. The method produces uniformly distributed samples and mitigates polarization (Zhao et al., 2017), so it can be widely used in other areas. The optimal sampling interval may vary due to differences in the infrastructure types in the target area (e.g., college communities versus typical urban areas) and the scale of the infrastructure. Similarly, prioritizing efficiency or accuracy requires different optimal sampling intervals. However, in areas of the same type (e.g., different college communities), the optimal sampling interval is representative.
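Both indices defined in Equations (1) and (2) drive the comparisons that follow and reduce to a few lines of code. A minimal implementation, evaluated on a hypothetical three-class error matrix:

import numpy as np

def kappa_and_oa(conf):
    """Kappa coefficient and overall accuracy from an error matrix where
    conf[i, j] = number of validation objects of class j classified as i."""
    conf = np.asarray(conf, dtype=float)
    N = conf.sum()
    diag = np.trace(conf)           # correctly classified objects
    row = conf.sum(axis=1)          # n_i+: objects classified as class i
    col = conf.sum(axis=0)          # n_+i: validation objects of class i
    chance = (row * col).sum()
    kappa = (N * diag - chance) / (N**2 - chance)
    oa = diag / N
    return kappa, oa

# Hypothetical 3-class error matrix.
conf = [[50, 3, 2],
        [4, 40, 6],
        [1, 5, 45]]
print(kappa_and_oa(conf))   # approximately (0.80, 0.87)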
Figure 5 shows the classification results of the six algorithms in the core area, with a sampling interval of 11.6 m for both training and validation samples. The kappa coefficient and OA gave the same ranking, so we took kappa as the representative index for our analysis.

Comparison of Classification Algorithms

The RF algorithm exhibited the best performance in the core area, with a kappa coefficient of 0.807, demonstrating the advantages of this algorithm in processing high-dimensional data. The next two best classification methods were FC and Bayes, which had similar kappa coefficients of 0.772 and 0.761, respectively. The results calculated with the CART and KNN algorithms showed slightly worse performance. The classification results for the SVM algorithm were the worst, with a low kappa coefficient of 0.381. The main reason for this poor performance may be that the SVM algorithm has difficulty in handling large samples and multi-class problems (Bai et al., 2021; He et al., 2007).

Validation in Other Subareas

The subareas selected for the stability test with the RF algorithm were Nos. 3, 5, 13, 14, 17, and 10 (the core area) (Supplementary Figure S3). The classification results, kappa coefficients, and OAs are shown in Figure 6 and Table 3. The kappa coefficients of five subareas and the OAs of all six subareas were greater than 0.8, which reflects an almost perfect performance (Landis and Koch, 1977). The kappa coefficient of subarea No. 3 was 0.592, which is moderate (0.4-0.6) (Landis and Koch, 1977), but the OA was as high as 0.867. This was because the proportion of trees in the training sample set reached about 75% (692/930), which reduced the relative consistency in the calculation (Eq. 1). The mean accuracies of bare land and roads in subarea No. 3 and of bare land in subarea No. 13 were below 0.6 (Table 3), which could be associated with an insufficient number of training samples for these infrastructures: each had fewer than 20 training samples, whereas classes with more than 30 training samples were classified well. From this perspective, to achieve a good infrastructure mapping result, each class should have more than 30 training samples.

Effect of Weather Conditions on Classification

Among the images taken between August 24, 2020 and December 26, 2020, there were eleven sunny days, three partly cloudy days, and one overcast day. The classification results with the RF algorithm were better for the images taken on the overcast day than for those taken on sunny days. We discuss the results from the images taken on December 17, 2020 (overcast) and December 21, 2020 (sunny) as an example. The kappa coefficient and OA for the images taken on the overcast day were greater than those for the sunny day (Table 4). The mean accuracies of the results derived from the overcast day were higher than those from the sunny day for grass, buildings, and roads. However, for green roofs, better results were obtained from the sunny day. The spectral similarity of grass, trees and green roofs is high (Boonpook et al., 2021).
Therefore, we used the DSM elevation to increase the difference between green roofs on buildings and vegetation on the ground. Analysis of the locations and image features of the error points (red circles in Figure 7) showed that on the sunny day, the distinction between grass in the shadow of trees and green roofs at the same elevation increased (red circles 6 and 7). In addition, stronger sunlight increased the reflection intensity of leaves on the sunny side of trees (red circles 5 and 6), so the distinction between trees and green roofs also increased. Therefore, the accuracy for green roofs was better on the sunny day than on the overcast day. On the sunny day, tree shadows had a strong shading effect on grass (red circle 8), which made it easy to confuse grass with the shaded side of trees. Similarly, shadows of tall buildings blocked out trees and grass (red circles 2 and 3); thus, vegetation in shaded areas could also be misclassified. Therefore, the accuracy of grass was better on the overcast day than on the sunny day. Shadows of tall buildings on the roads were easily misclassified as water on the sunny day, but not on the overcast day (red circle 1). Under the strong sun on the sunny day, some special coatings on the buildings, such as solar panels, reflected sunlight, which could easily lead to misclassification (red circle 4). On balance, these effects meant that the overcast day image data resulted in better classification results. If the purpose of the flying mission is to obtain spectral data for classification, we recommend performing it on an overcast day with plenty of light.

Figure 7. Results on a sunny day (left) and an overcast day (right), with the kappa coefficients and OAs. The red circles indicate the main differences between the two subfigures.

Effect of Seasons on Classification

The kappa coefficients and OAs with the RF algorithm in different months of the year (Figure 8) showed that the best classification results were obtained in February (winter) and June (summer), and the kappa coefficient and OA for February were better than those for June. Table 5 shows that the accuracies for grass, roads, and water in February were better than those in June. This may be because the grass was withered or dead in winter, and thus more easily distinguished from trees on the spectrum. In addition, the tree canopies shrank, so the increased light made it easier to identify grass in the gaps. As discussed in Section 3.4, shadows made the classification results worse. The area of shadows increases in winter, which could decrease the accuracy. However, grass and trees accounted for a large proportion of the total area, whereas the area of shadows was small, leading to a better classification result in winter. Because the sunlight was weaker in winter, there was a smaller difference between roads in shadow and roads in sunlight; therefore, the mean accuracy of roads was better in winter than in summer. In addition, when the sunlight was close to direct in summer, water surfaces tended to generate mirror-like reflections, which produced noisy points in the UAV images. Therefore, misclassification of water occurred easily, as shown in the center of the subfigures in Figure 8 (April, June, and August). Comparing the February and June classification results in Figure 8 and Table 5, the mean accuracy for green roofs in June (i.e., 0.699) was better than that in February (i.e., 0.646).
This may be because strong summer sunlight made other vegetation more distinct from the green roofs, which is consistent with the discussion in Section 3.4. Campus construction work changed the bare land area, which accounts for the difference between the two months.

Effect of Different Image Layers on Classification

The combination of seven image layers (B, G, R, RE, NIR, NDVI, DSM) was used as the benchmark for comparing the results with the RF algorithm. By comparing the classification accuracies after reducing the number of image layers, the effects of the different layers on the results were analyzed. For convenience, we refer to the results from the different layer combinations as cases I-VIII (Figure 9). Case I was the benchmark that included all seven layers. Except for green roofs, the mean accuracies in case I were the best. However, the classification result for water was not significantly affected by the different layers, with accuracies in all cases larger than 0.94. In most cases, the mean accuracies of green roofs and grass were less than 0.8, which indicates that they were easily confused classes.

Case II (B, G, R, RE, NIR, NDVI) did not include the DSM layer. Because green roofs and grass, as well as roads and buildings, have similar features (Boonpook et al., 2021), when the DSM layer was not included, the accuracies of green roofs, grass, buildings, and roads were considerably lower. Therefore, the DSM layer was key information for distinguishing green roofs from grass, as well as buildings from roads. To improve the classification accuracy further, methods for identifying these four types of infrastructure should be developed in future research. Because green roofs are located on buildings, it may be effective to first extract the buildings, and then identify vegetation on buildings as green roofs. For building recognition, some researchers have used manual extraction (Shao et al., 2021), which is time-consuming and laborious. Due to the different colors and types of building roofs, it is difficult to identify them effectively with only spectral images (Kim et al., 2011). Demir and Baltsavias (2012) and Wang et al. (2018) combined the slope from the DSM, spectral images, and other information to identify buildings, and the accuracy was above 0.9. Kim et al. (2011) analyzed LiDAR data to obtain a normalized digital surface model (nDSM), and then combined it with airborne images to identify buildings. The nDSM, which is the difference between the DSM and the digital elevation model, is created from a point cloud (Talebi et al., 2014; Sun and Lin, 2017; Kodors, 2019). Talebi et al. (2014) also used the nDSM to distinguish roads, building roofs, and pervious surfaces, and the mean accuracy of building recognition was above 0.8. In summary, using the slope from the DSM or the nDSM combined with spectral images is effective for identifying buildings, after which green roofs, grass, and roads can be classified accurately.

Case III (B, G, R, RE, NIR, DSM) did not include the NDVI layer. Comparing cases I and III, the accuracies of most classes in case I were higher, except for green roofs. The classification subfigures (Supplementary Figure S4) showed that green roofs were overclassified in case I, decreasing the accuracy for green roofs. In case IV (B, G, R, RE, NDVI, DSM), which did not include the NIR layer, the accuracy for green roofs was greatly improved, whereas the OA was still as good as that in case I.
The results for case V (B, G, R, NIR, NDVI, DSM), which did not include the RE layer, were similar to those for case IV, but the accuracies for bare land and buildings were lower. Case VI (B, G, R, NDVI, DSM) did not include the RE and NIR layers, and the accuracy for green roofs increased substantially, by 0.2. The accuracies for grass and roads decreased, but the other changes were small. The decrease in OA was small, and the classification results were good in general. Case VII (NIR, RE, NDVI, DSM) did not include the B, G, and R bands, and the accuracy for green roofs increased by 0.13. Cases VI and VII showed that appropriate removal of redundant spectral image layers helped to identify green roofs accurately. In case VIII (B, G, R, DSM), which did not include the NIR, RE, and NDVI layers, the accuracies of all classes other than green roofs and bare land decreased considerably. In the overall evaluation, the OA was below 0.8; in particular, the accuracy of grass dropped below 0.6. Case VIII demonstrates the problem of insufficient data layers.

This analysis demonstrated that the DSM layer is crucial for distinguishing green roofs from grass, as well as buildings from roads. Furthermore, appropriate removal of redundant spectral image layers increased the accuracy for green roofs. Case VI (B, G, R, NDVI, DSM) used five image layers, and the green roof accuracy increased by 0.2 at the cost of decreasing the accuracy for grass by 0.034. The accuracies for the other classes were maintained, which indicates that case VI was the most appropriate combination.

Limitations

Firstly, although several representative types of infrastructure were considered in this study, some other types of green infrastructure (e.g., rain gardens) are still missing. Besides, within gray infrastructure, roads can be further subdivided into asphalt roads and porous brick pavements; porous pavements could not be recognized in this study. The algorithms and multispectral data used in this paper are capable of recognizing green roofs but are not able to distinguish these finer types effectively. Secondly, in terms of the sampling method, we only obtained the optimal sampling interval for college communities. For different types of study areas, trial-and-error tests are needed to find the optimal sampling interval; this step still requires further improvement. Thirdly, the remote sensing data we used are limited. If LiDAR point cloud data or DEM data were available, in conjunction with the DSM and spectral data, the classification accuracy of green roofs, grass, trees, buildings and roads could be greatly improved.

CONCLUSION

We developed a method to classify blue-green-gray infrastructure accurately using machine learning algorithms and UAV image data. Because the resolution of UAV images is on the centimeter scale, this method can identify all types of infrastructure on a sub-meter scale. The chessboard grid sampling method was used to ensure the randomness and objectivity of the samples. Evaluating the accuracies at different sampling intervals showed that a sampling interval of 11.6 m ensured that the kappa coefficient and OA were in the almost perfect range (>0.8) while the number of samples was reduced, which increased working efficiency. There are many machine learning algorithms that can be used for infrastructure classification.
The different principles of the algorithms cause differences in their applicability. Evaluating the accuracies of the classification results from six widely used algorithms showed that the RF, FC, and Bayes algorithms were suitable for recognizing the different infrastructures; RF was the best algorithm because of its ability to process high-dimensional data well. In addition, the results in the other subareas, in which the kappa coefficient and OA were generally greater than 0.8, showed that the method has universal applicability. For any type of infrastructure, more than 30 training samples were needed to ensure the reliability of the classification.

Comparing the classification results on a sunny day and an overcast day showed that the overcast day data increased the recognition of grass, trees, and roads in shadow. The misclassification of roads in shadow as water was also reduced. The angle of sunlight changes with the seasons, which in turn alters the shadow area. In winter, the shadow area is larger, which may reduce the classification accuracy. However, because trees and grass were the main infrastructure types in the study area, shriveled grass in winter increased the spectral difference and the classification accuracy. The combination of the two effects resulted in more accurate classification in winter.

To obtain better classification results, we used seven image layers. Through trial and error, we showed that appropriate removal of redundant spectral image layers, such as RE and NIR, increased the recognition accuracy of green roofs. The DSM layer was crucial for considerably improving the distinction between green roofs and grass, and between buildings and roads. Using only five image layers (B, G, R, NDVI, and DSM) increased the accuracy for green roofs greatly at the cost of a small decrease in the accuracy for grass.

Our method can identify small facilities on a sub-meter scale, and can obtain a distribution map of blue-green-gray infrastructure in small urban catchments (0.1-10 km²) accurately and quickly. The classification of GI is fundamental for the rational management and planning of GI, and contributes to the sustainable development of urban areas. Combined with the rainwater-use characteristics of the various infrastructure types, an accurate GI distribution map can help to accurately simulate stormwater management and use effectiveness in small urban catchments.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.

AUTHOR CONTRIBUTIONS

JJ wrote the manuscript, built the method, acquired data, and conducted data processing and analysis. WC conceived the study, built the method, co-wrote the manuscript, and supervised the study. JL conceived the study, acquired funding, supervised the study, and contributed to the manuscript writing.
Optimal Stochastic Control Problem for General Linear Dynamical Systems in Neuroscience

1Big Data Research Center, Hunan University of Commerce, Changsha 410205, China
2Key Laboratory of High Performance Computing and Stochastic Information Processing (HPCSIP) (Ministry of Education of China), College of Mathematics and Computer Science, Hunan Normal University, Changsha 410081, China
3School of Business Administration, Hunan University, Changsha 410081, China
4School of Finance, Guangdong University of Foreign Studies, Guangzhou 510006, China

Introduction

The effective control of neuronal activity is one of the most exciting topics in theoretical neuroscience, with great potential for applications in healthcare. Nowadays, the application of stochastic control methods in neuroscience is becoming a significant portion of mainstream research. Among many studies, we refer, for example, to Holden (1976) for models of the stochastic activity of neural aggregates, to Iolov et al. [1] for the optimal control of single-neuron spike trains, and to Roberts et al. [2] for a review of the application of stochastic models of brain activity.

In this paper, we study trajectory planning and control in human arm movements. When a hand is moved to a target, the central nervous system must select one specific trajectory among an infinite number of possible trajectories that lead to the target position. The content of this paper comprises two parts: the first part models the activities as stochastic processes, and the second part quantifies task goals as cost functions and applies the tools of optimal control theory to obtain the optimal behavior. Feng et al. [3] reviewed two optimal control problems at different levels, neuronal activity control and movement control, and derived the optimal signals for these two control problems. Li et al. [4] considered the robust control of human arm movements. Based on the fuzzy interpolation of a nonlinear stochastic arm system, they simplified the complex noise-tolerant robust control of the human arm tracking problem by solving a set of linear matrix inequalities using Newton's iterative method via an interior-point scheme for convex optimization. Singh et al. [5] modeled reaching movements in the presence of obstacles and solved a stochastic optimal control problem consisting of probabilistic collision avoidance constraints and a cost function that trades off effort and end-state variance in the presence of signal-dependent noise. For more details, we refer the reader to Campos and Calado [6], Berret et al. [7], and Mainprice et al. [8].

Yet all the above studies deal with one-dimensional or low-dimensional spaces, whereas neuronal activity and movement trajectories may evolve in a higher-dimensional space. In this paper, motivated by Feng et al. [3], we consider a stochastic control problem for arm movement within the framework of a d-dimensional control space. Applying stochastic control theory, we solve the optimization problem explicitly and obtain exact solutions for the optimal trajectory, velocity, and variance.
The remainder of this paper is organized as follows. Section 2 introduces the basic model setup of the high-order linear stochastic dynamical systems for movement trajectories. In Section 3, we derive explicit expressions for the optimal trajectory, velocity, and variance. In Section 4, we provide a 3-dimensional optimization example, and concluding remarks are given in Section 5.

Model Setup

2.1. The Integrate-and-Fire Model. In this subsection, we recall the classical I&F (integrate-and-fire) model, following Feng et al. [3], which describes the activity of a single neuron:

dV(t) = -(1/γ) V(t) dt + dI_syn(t),   (1)

with V(0) = V_rest < V_thres, where γ is the decay time constant. The synaptic input current is

I_syn(t) = a E(t) - b I(t),   t ≥ 0,   (2)

with E(t) and I(t) Poisson processes with rates λ_E(t) and λ_I(t), and a > 0 and b > 0 the magnitudes of each excitatory postsynaptic potential (EPSP) and inhibitory postsynaptic potential (IPSP); a cell receives EPSPs at excitatory synapses and IPSPs at inhibitory synapses. Once V(t) crosses V_thres from below, a spike is generated and V is reset to V_rest. This model is termed the IF model.

Let V_rest = 0 and apply the usual diffusion approximation to the IF model (see Feng et al. [3] and Zhang and Feng [9]); then (1) can be rewritten as

dX(t) = (-(1/γ) X(t) + λ(t)) dt + c λ(t)^α dB(t),   (3)

where {B(t)}_{t≥0} is a standard Brownian motion and c > 0 is a constant. If α = 0.5, the input is derived from a Poisson process; inputs with α > 0.5 are the so-called supra-Poisson inputs, and those with α < 0.5 the so-called sub-Poisson inputs. In addition, a larger α leads to more randomness in the synaptic inputs.

2.2. General Linear Stochastic Differential Equations. In this subsection, we extend the one-dimensional IF model (3) to d-dimensional stochastic differential equations in which the solution process enters linearly. Such processes arise in the estimation and control of linear systems, in economics, and in various other fields (see Liu [10]):

dX(t) = [A(t) X(t) + Λ(t)] dt + Σ(t) dB(t),   (4)

where B is an r-dimensional Brownian motion independent of the d-dimensional initial vector ξ, and the d × d, d × 1, and d × r matrix functions A(t), Λ(t), and Σ(t) are nonrandom, measurable, and locally bounded. Now we define a d × d matrix function Φ(t) satisfying the matrix differential equation

dΦ(t)/dt = A(t) Φ(t),   Φ(0) = I,   (5)

where I is the d × d identity matrix. Equation (5) has a unique (absolutely continuous) solution defined for 0 ≤ t < ∞, and, for each t ≥ 0, the matrix Φ(t) is nonsingular. By Itô's rule, it is easily verified that

X(t) = Φ(t) [ξ + ∫_0^t Φ^{-1}(s) Λ(s) ds + ∫_0^t Φ^{-1}(s) Σ(s) dB(s)]   (6)

solves (4). We suppose that E‖ξ‖² < ∞ and introduce the mean vector m(t) = E[X(t)] and the covariance matrix function

ρ(s, t) = Cov(X(s), X(t)).   (7)

From (4), one can show that m and ρ satisfy linear equations for every 0 ≤ s, t < ∞; in particular, m(t) solves m′(t) = A(t) m(t) + Λ(t) with m(0) = E[ξ], and V(t) = ρ(t, t) solves V′(t) = A(t) V(t) + V(t) A(t)ᵀ + Σ(t) Σ(t)ᵀ.

2.3. Optimization Problem. For simplicity of notation, we let t₀ = 0. For a point x_d ∈ ℝ¹ (in fact, x_d expresses the arrival position component of the trajectory in the direction of the component X₁(t)) and two positive numbers T and T_s, we intend to find a control signal λ*(t) which satisfies the constrained condition that the mean of X₁(t) equals x_d on [T, T + T_s], and such that the variance of X₁(t) attains its minimum on [T, T + T_s]. Let Φ(t) = (φ_ij(t))_{d×d}; by (6), we have, for 0 ≤ t < ∞, an explicit expression for the mean and variance of X₁(t) in terms of the entries φ_1j(t) and the control signal, and therefore, by (15) and (17), the objective is a quadratic functional of the control. By direct calculation of the matrix Φ(t), we easily obtain the identities (19) and, in particular for d ≥ 2, the initial values (20).

Proof. Since Φ′(t) = A Φ(t), the definition of φ_ij (see (14)) and the multiplication of matrices give (19) at once. Since Φ(0) = I and Φ′(0) = A, it is easy to get (20).

Note. If μ has multiplicity m > 1 as an eigenvalue of A and A is diagonalizable, we can also choose independent functions of the form t^j exp(μt), j = 0, 1, …, m − 1. In this case, we can obtain the same result as in Theorem 2 by a similar approach.
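As a concrete illustration of the diffusion approximation (3), the following sketch simulates sample paths by the Euler-Maruyama method (all parameter values and the constant control signal are hypothetical choices for illustration, not values from the paper).

import numpy as np

# Euler-Maruyama for dX = (-X/gamma + lam(t)) dt + c * lam(t)**alpha dB.
# All parameter values below are hypothetical, chosen only for illustration.
rng = np.random.default_rng(0)
gamma, c, alpha = 20.0, 1.0, 0.5     # alpha = 0.5 corresponds to Poisson input
T, n = 1.0, 1000
dt = T / n

def lam(t):
    return 5.0                        # hypothetical constant control signal

def simulate_path():
    x = np.zeros(n + 1)
    for i in range(n):
        t = i * dt
        drift = -x[i] / gamma + lam(t)
        diffusion = c * lam(t) ** alpha
        x[i + 1] = x[i] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
    return x

paths = np.array([simulate_path() for _ in range(200)])
mean_traj = paths.mean(axis=0)        # compare with m'(t) = -m(t)/gamma + lam(t)
var_end = paths[:, -1].var()          # terminal variance

The ensemble mean can be checked against the moment equation m′(t) = −m(t)/γ + λ(t), and the terminal variance is precisely the quantity the optimal control problem seeks to minimize.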
To determine the optimal control signal, we define the admissible control set, in which the coefficients c₁, c₂, …, c_d are the unique solution of the linear system given by the result of Theorem 2. Similar equations hold for the other components of the d-dimensional system. From the results above, we can obtain the following conclusions.

Theorem 3. Under the optimal control framework set up here and with α > 1/2, the optimal mean trajectory is a straight line. When α ≤ 1/2 the optimal control problem is degenerate; that is, the optimal control signal is a delta function, and (29) with Λ = Λ* gives an exact relationship between time and variance.

Proof. The proof is similar to that of Theorem 1 in Feng et al. [3]; we omit it.

Remark 4. When d = 1, the results of Theorem 3 are consistent with Feng et al. [3]. The finding also agrees with the experimental Fitts law (see Fitts [11]): the longer the time of a reaching movement, the higher the accuracy of arriving at the target point.

For a point D = (d_x, d_y, d_z) ∈ ℝ³ and two positive numbers T and T_s, we intend to find a control signal Λ(t) which satisfies the constraints (38) and (39), where Λ ∈ L²[0, T + T_s] means that each component of Λ is in L²[0, T + T_s]. To stabilize the hand, we further require that the hand stays at D for a while, that is, over the time interval [T, T + T_s], which also naturally requires that the velocity be zero at the end of the movement. The physical meaning of the problem considered here is clear: at time T, the hand should reach the position D (see (38)) as precisely as possible (see (39)). Without loss of generality, we assume that d_x > 0, d_y > 0, and d_z > 0.

To use the results of the previous section, we rewrite the optimal control problem posed above as a second-order linear stochastic dynamical system in 2-dimensional space; similar equations hold for Y(t) and Z(t). If we let v_x(t) denote the moving velocity in the direction of the x-coordinate, (40) becomes a second-order linear SDE. Comparing with (12) and computing the corresponding matrix Φ(t), we obtain from (8) the explicit expressions for the optimal trajectory, velocity, and variance in (50).

Conclusion

Experimental studies of human movement have shown that voluntary reaching movements obey Fitts law: the longer the time taken for a reaching movement, the greater the accuracy of the hand arriving at the end point. In this paper, we study a stochastic control problem for a reaching movement within a d-dimensional space. We solve this stochastic control problem explicitly and obtain the analytical solutions for the optimal signals, optimal velocity, and optimal variance. Furthermore, we find that the optimal control is also consistent with Fitts law. This implies that the straight-line trajectory is a natural consequence of optimal stochastic control principles, under a nondegenerate optimal control signal.
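The moment equations of Section 2 are also easy to integrate numerically. The sketch below propagates the mean and covariance of the linear SDE (4) for a hypothetical constant-coefficient position-velocity system of the kind used in the reaching-movement example.

import numpy as np

# Moment propagation for dX = (A X + Lam) dt + Sig dB:
#   m' = A m + Lam,   V' = A V + V A^T + Sig Sig^T.
# The constant coefficients below are hypothetical illustration values.
A = np.array([[0.0, 1.0], [0.0, -0.05]])   # position-velocity pair
Lam = np.array([0.0, 1.0])                 # the control enters the velocity
Sig = np.array([[0.0], [0.2]])

dt, n = 1e-3, 1000
m = np.zeros(2)
V = np.zeros((2, 2))
for _ in range(n):                          # forward Euler on the moment ODEs
    m = m + dt * (A @ m + Lam)
    V = V + dt * (A @ V + V @ A.T + Sig @ Sig.T)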
Functional regression control chart for monitoring ship CO₂ emissions

On modern ships, the quick development of data acquisition technologies is producing data-rich environments where variable measurements are continuously streamed and stored during navigation and thus can be naturally modelled as functional data or profiles. Then, both the CO₂ emissions (i.e. the quality characteristic of interest) and the variable profiles that have an impact on them (i.e. the covariates) are called to be explored in the light of the new worldwide and European regulations on the monitoring, reporting and verification of harmful emissions. In this paper, we show an application of the functional regression control chart (FRCC) with the ultimate goal of answering, at the end of each ship voyage, the question: given the value of the covariates, is the observed CO₂ emission profile as expected? To this aim, the FRCC focuses on the monitoring of residuals obtained from a multivariate functional linear regression of the CO₂ emission profiles on the functional covariates. The applicability of the FRCC is demonstrated through a real-case study of a Ro-Pax ship operating in the Mediterranean Sea. The proposed FRCC is also compared with other alternatives available in the literature, and its advantages are discussed over some practical examples.

This problem falls within statistical process control (SPC), which provides a suite of methods to continuously address the pressing issue of evaluating the stability over time of functional quality characteristics. Recent contributions are Colosimo and Pacella,5 Grasso et al6 and Menafoglio et al.7 As in classical SPC, where data are scalars, profile control charts have the task of monitoring the functional quality characteristic and of triggering a signal when assignable sources of variation (i.e. special causes) act on it. When this happens, the process is said to be out of control (OC). Otherwise, when only normal sources of variation (i.e. common causes) apply, the process is said to be in control (IC).

Only recently, Centofanti et al8 introduced the functional regression control chart (FRCC) framework to monitor a functional quality characteristic when this is influenced by one or more functional covariates. In particular, the aim of the FRCC is to improve the monitoring of the quality characteristic by including the information coming from the covariates. In this scenario, if one of these covariates manifests itself with an extreme realization, the quality characteristic may wrongly be judged OC, even though it shows perfectly reasonable values given the covariates. Otherwise, there may be situations where the covariates are not extreme and the quality characteristic may wrongly appear IC because the information in the covariates is not used to increase the power of the monitoring. The FRCC framework is the functional extension of Mandel's basic idea,9 where the quality characteristic is monitored after being adjusted for the effect of covariates. That is, the control variable is the residual obtained from a regression of the quality characteristic on the covariates, and the focus is placed on the residual variability not explained by the knowledge of the observed value of the covariates. In a more direct phrasing, the FRCC answers the question: given the value of the covariates, is the quality characteristic as expected? Alternatively, the question could be phrased as: does the assumed model fit the reality of the quality characteristic?
If the answer is no, then special causes may have occurred that are beyond the information brought by the covariates through the chosen regression model. In particular, in the FRCC framework, the quality characteristic and the covariates are linked through a multiple functional linear regression (MFLR) model, where both the response and the explanatory variables can be described by functional data. Recent examples of the MFLR model can be found in Palumbo et al,10 Centofanti et al11,12 and Chiou et al.13

The idea of monitoring model residuals arises also in SPC with autocorrelated data, for example, time series.[14][15][16] In this setting, residuals from an autoregressive model are used to recover independence of the observations, because conventional control charts are known not to work well if the quality characteristic exhibits even low levels of correlation over time.17 In the regression control chart idea, residuals are used to adjust the quality characteristic for the information in the covariates.

In recent years, profile monitoring has emerged as an effective technique also in the field of maritime transport, where the issue of CO₂ emission monitoring is becoming of paramount importance.8,18,19 In view of the climate change and global warming crises, the maritime transport industry is currently facing new challenges related to CO₂ emissions. Indeed, the Marine Environment Protection Committee of the International Maritime Organization[20][21][22] has urged shipping companies to set up a framework for the monitoring, reporting and verification (MRV) of CO₂ emissions based on fuel consumption. In the face of these regulations, shipping companies are updating DAQ systems on their fleets, enabling large volumes of observational data to be automatically streamed and transferred to a remote server, bypassing human intervention. A large proportion of these data can be modelled as functional data, thus representing a new challenge for FDA and related SPC methods in this area. The DAQ system installed on modern ships in fact facilitates the collection of functional data related to CO₂ emissions as well as to other functional covariates affecting them, which include the speed of the ship, engine variables such as the propeller pitch variables, environmental variables that describe the wind and sea conditions, and the cumulative navigation time (we refer to Section 2.2 for more details on the variables considered in this work). However, most of the approaches that have already appeared in the maritime literature[23][24][25] do not take advantage of the potential help to managerial decision making represented by modelling the entire voyage profiles acquired, and usually compress the information into one or more scalar features extracted from them.

In this setting, a recurrent request posed by maritime engineers concerns the CO₂ emissions corresponding to given values of the other recorded covariates. That is, they want to assess whether CO₂ emissions are coherent with the values of the covariates, in order to identify unexpected behaviours and take corrective measures. Engineers are less concerned with identifying CO₂ emission profiles that are extreme with respect to their marginal distribution, if this can be explained by some extreme value in the covariates. They are rather interested in CO₂ emission profiles that are not consistent with the covariates affecting them because, for example, they can reveal anomalous ship performance. The FRCC can be used to meet this engineering need.
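Mandel's regression control chart idea, of which the FRCC is the functional extension, is easy to state in the scalar case: fit a regression of the quality characteristic on the covariate over an in-control sample, then chart the residuals against control limits. A minimal sketch with simulated data (all numbers hypothetical):

import numpy as np

rng = np.random.default_rng(1)

# Phase I: in-control reference sample of (covariate, quality characteristic).
x_ref = rng.uniform(0, 10, 100)
y_ref = 2.0 + 0.5 * x_ref + rng.normal(0, 0.3, 100)

# Fit the regression and set 3-sigma limits on the residuals.
beta1, beta0 = np.polyfit(x_ref, y_ref, 1)
resid = y_ref - (beta0 + beta1 * x_ref)
s = resid.std(ddof=2)
ucl, lcl = 3 * s, -3 * s

# Phase II: a new observation is judged given its covariate value.
x_new, y_new = 8.0, 7.5
r_new = y_new - (beta0 + beta1 * x_new)
out_of_control = not (lcl <= r_new <= ucl)   # signals if y_new is unexpected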
In recent years, profile monitoring has emerged as an effective technique also in the field of maritime transport, where the issue of CO2 emission monitoring is becoming of paramount importance.8,18,19 In view of the climate change and global warming crises, the maritime transport industry is currently facing new challenges related to CO2 emissions. In particular, the Marine Environment Protection Committee of the International Maritime Organization20-22 has urged shipping companies to set up a framework for the monitoring, reporting and verification (MRV) of CO2 emissions based on fuel consumption. In the face of these regulations, shipping companies are updating the data acquisition (DAQ) systems on their fleets, enabling large volumes of observational data to be automatically streamed and transferred to a remote server without human intervention. A large proportion of these data can be modelled as functional data, thus representing a new challenge for functional data analysis (FDA) and the related SPC methods in this area. The DAQ system installed on modern ships in fact facilitates the collection of functional data related to CO2 emissions as well as to other functional covariates affecting them, which include the speed of the ship, engine variables such as the propeller pitch, environmental variables that describe the wind and sea conditions, and the cumulative navigation time (we refer to Section 2.2 for more details on the variables considered in this work). However, most of the approaches that have already appeared in the maritime literature23-25 do not take advantage of the potential help to managerial decision making offered by modelling the entire acquired voyage profiles, and usually compress the information into one or more scalar features extracted from them. In this setting, a recurrent request posed by maritime engineers concerns the CO2 emissions corresponding to given values of the other recorded covariates. That is, they want to assess whether the CO2 emissions are coherent with the values of the covariates, in order to identify unexpected behaviours and take corrective measures. Engineers are less interested in identifying CO2 emission profiles that are extreme with respect to their marginal distribution, if this can be explained by some extreme value in the covariates. They are rather interested in CO2 emission profiles that are not consistent with the covariates affecting them because, for example, they can reveal anomalous ship performance. The FRCC can be used to meet this engineering need.

In this paper, we propose to use the FRCC to monitor ship CO2 emissions throughout each voyage, in order to identify special causes at given values recorded by the functional covariates. Specifically, we consider a particular implementation of the FRCC framework, where the functional quality characteristic, hereinafter also referred to as the response, and the functional covariates are related through the multivariate functional linear regression (MFLR) model. In this paper, the MFLR model is estimated based on multivariate functional principal component analysis (FPCA).13,26 Then, studentized residuals are monitored through the simultaneous application of the Hotelling's $T^2$ and the squared prediction error (SPE) control charts.4-6,27 The FRCC framework is used both retrospectively, as an aid to the practitioner in determining the IC state of the process under study and in identifying an IC reference sample (Phase I), and prospectively, to monitor any departure from the IC state at future voyages (Phase II). The applicability of the FRCC is demonstrated through a real-case study of a Ro-Pax ship operating in the Mediterranean Sea, courtesy of the shipping company Grimaldi Group. This work does not directly aim at real-time feedback control, that is, at online monitoring and immediate actions during an ongoing voyage. The FRCC framework is indeed used to signal OC voyages once they are completed, as the $T^2$ and SPE monitoring statistics are calculated on the entire profiles. The proposed FRCC instead focuses on automatically tracking (possibly across the whole fleet) future OC signals, or patterns and trends that may identify, for example, engine malfunctioning, the need for hull cleaning, or the need for any other energy efficiency initiative. Even though no action can be taken during an ongoing voyage, we believe this paper motivates the use of a functional data approach, as the FRCC can indeed be used by shipping companies to evaluate, at the end of a voyage, whether the observed CO2 emission profile is anomalous, given the value of the observed covariates. This information can help to diagnose and prevent possible problems in future voyages, or to schedule maintenance operations. The paper is structured as follows. In Section 2, the structure of the data and technological details of the ship equipment are provided for the real-case study at hand. In Section 3, the main materials and methods behind the particular implementation of the FRCC framework are summarized. In Section 4, by means of the real-case study mentioned before, the FRCC is practically applied to monitor CO2 emissions and compared with alternative methods that have already appeared in the SPC literature. In Section 5, we draw conclusions. All computations and plots have been obtained using the software environment R.28

TECHNOLOGICAL BACKGROUND AND DATA STRUCTURE

For confidentiality reasons, in what follows we omit the name of the ship considered in the real-case study. However, to give an idea of the ship type, in Section 2.1 we provide its main technical features, whereas in Section 2.2 we describe in detail all the variables and the data used for the analysis.

Technical features

The main technical features of the ship are illustrated in Table 1.
The considered ship is characterized by two engine sets, each consisting of two Wärtsilä Type 16ZAV40S four-stroke main diesel engines for propulsion, with a maximum continuous rating of 11,520 kW at 510 revolutions per minute (rpm), and by two variable-pitch propellers and a shaft generator for electric power supply. The main engine power is used both for propulsion and for electrical generation through the shaft generators, which are keyed on a gearbox. The gearbox has two fast inlet shafts powered by the engine shafts, a slow outlet shaft for the propeller and a faster one to which the shaft alternator is connected. The gear ratio between the engine shaft and the propeller shaft is equal to 3.24, whereas the gear ratio between the engine shaft and the shaft alternator is equal to 0.32. The main diesel engines can be powered by three types of fuel with different percentages of sulphur (S) content, in order to comply with the regulation in force in the geographical area to be sailed: heavy fuel oil, very low sulphur fuel oil (≤0.5% S), and ultra-low sulphur fuel oil or marine gas oil (≤0.1% S). The electrical power supply of the ship consists of three diesel generators (1840 kVA, 690 V), two shaft generators (2100 kVA, 690 V) and one emergency diesel generator (480 kVA). The main engines can supply power in two different modes, at fixed rpm (constant mode) or at variable rpm (combined mode). In the constant mode, the shaft generators can be used to supply electric power, even though the maximum speed cannot be reached because speed variations are only possible by changing the pitch of the propellers. In the combined mode, instead, the ship speed can be regulated by increasing both the propeller pitch and the engine rpm, but the possibility to engage the shaft generators is lost.
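As a quick arithmetic check of the quoted gear ratios (reading the 0.32 ratio as engine:alternator, consistent with the alternator shaft being the faster one; this direction is our reading of the text):

```r
# Shaft speeds implied by the gear ratios quoted above, at the engines'
# maximum continuous rating of 510 rpm.
engine_rpm     <- 510
propeller_rpm  <- engine_rpm / 3.24   # about 157 rpm (slow outlet shaft)
alternator_rpm <- engine_rpm / 0.32   # about 1594 rpm (fast outlet shaft)
c(propeller = propeller_rpm, alternator = alternator_rpm)
```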
Data description

Data come from a DAQ system installed on the ship and are transmitted to the cloud with frequencies varying from 2 to 5 min. The data refer to a specific route, that is, each profile in the data set corresponds to a voyage of the ship and all voyages have the same departure and arrival ports. A period of 11 consecutive months following a dry-dock operation on the ship is considered. Voyages in the first nine months (i.e. from the beginning of February 2020 to the end of October 2020) are used in Phase I, that is, to identify a reference data set and to estimate the model and the control chart limits. Voyages in the last two months (i.e. from the end of October 2020 to the end of December 2020) are used in Phase II to show the performance of the FRCC in monitoring new voyages. We start with a data set of 190 voyages for Phase I (Section 4.1); then we remove outliers as described in Section 4.1.2 and end up with a reference data set of 169 voyages; finally, we consider 22 voyages for Phase II (Section 4.2). Note that the data refer to the navigation phase. More specifically, the navigation phase begins with the finished with engine order (when the ship leaves the departure port) and ends with the stand by engine order (when the ship enters the arrival port). Moreover, we need to identify an adequate functional domain for each voyage. Even if time is naturally suitable as a functional domain, the total travel time can vary from voyage to voyage. Therefore, we prefer to use the fraction of distance traveled over the voyage as the common domain (0, 1) of the data.

All the signals acquired by the DAQ system are summarized into several variables, which we describe here in order to later select the functional covariates and response considered in the MFLR model. The ship is tracked by its global positioning system (GPS), which provides longitude and latitude coordinates. The course over ground (COG) is the actual direction of progress of a vessel between two points, with respect to the surface of the earth, measured in degrees. The sailed distance over ground (SDOG) is the distance travelled by the vessel between two points, measured in nautical miles (NM) and calculated from the GPS sensor through the haversine formula. The speed over ground (SOG) is measured in knots (kn) and is the ratio between the SDOG and the sailing time, measured in hours. The propeller pitch (P) is measured in degrees and represents the angle between the chord line of the blade section and a plane normal to the propeller axis. An anemometer sensor provides the true speed $V$ of the wind, measured in knots, and its direction $\theta$, measured in degrees; the latter is obtained as the difference between the true wind angle in the earth system and the COG. Additional information on the wind variables can be found in Bocchetti et al.25 From the two anemometer variables, the longitudinal component of the wind is calculated as $V \cos\theta$, while the transversal component is calculated as $V |\sin\theta|$. Note that a positive (respectively, negative) longitudinal component of the wind means that the wind blows from the stern (respectively, bow). Moreover, a data fusion process also allows the integration of marine data into the data set, that is, weather forecasts about the sea state furnished by a privately held weather service provider. The sea state is characterized by the provider through the typical parameters, namely height and period, used to model waves, which in turn are roughly divided into two components: wind-driven waves, or simply waves (generated by the immediate local wind), and swell (generated by distant weather systems and usually having a larger period). In particular, the height, measured in meters, is defined as the vertical distance from wave crest to wave trough, whereas the period, measured in seconds, represents the time between successive crests of a train of waves passing a fixed point, observed from the ship at a fixed angle of encounter.29
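For illustration, the two geometric computations just described (the haversine distance behind the SDOG and the wind decomposition) can be sketched in R as follows; the sign and unit conventions are our reading of the text, not the on-board implementation.

```r
# Illustrative helpers for two of the derived variables described above.
haversine_nm <- function(lat1, lon1, lat2, lon2) {
  # Great-circle distance between two GPS fixes, in nautical miles
  rad  <- pi / 180
  dlat <- (lat2 - lat1) * rad
  dlon <- (lon2 - lon1) * rad
  a <- sin(dlat / 2)^2 + cos(lat1 * rad) * cos(lat2 * rad) * sin(dlon / 2)^2
  r_nm <- 3440.065                     # Earth radius in nautical miles
  2 * r_nm * asin(pmin(1, sqrt(a)))
}

wind_components <- function(v, theta_deg) {
  # v: true wind speed (kn); theta_deg: true wind angle minus COG (deg)
  th <- theta_deg * pi / 180
  list(longitudinal = v * cos(th),     # > 0: wind from stern; < 0: from bow
       transversal  = v * abs(sin(th)))
}
```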
Regarding the CO2 measurement, the MRV regulation proposes direct and indirect methods. The direct method determines the amount of CO2 emitted by measuring the CO2 concentration in the exhaust gas and the volume of the exhaust gas flow per unit of time in the exhaust stacks; this method is very sensitive to the calibration of, and the uncertainty related to, the measurement devices. The indirect method, instead, calculates the CO2 emissions as the product of the whole amount of fuel consumed by the main and auxiliary engines, boilers, gas turbines and inert gas generators times the so-called emission factor, which is calculated as the average emission rate of a greenhouse gas (GHG) relative to the activity data of a source stream, assuming complete oxidation for combustion and complete conversion for all other chemical reactions.

In this paper, we use the indirect method and we focus on the main engines only. In what follows, we list the functional variables chosen for the analysis in this work, obtained from the signals acquired by the DAQ system. The functional response is the signal corresponding to the CO2 emissions per hour along the entire voyage. In order to select functional covariates among the available signals, a long preliminary investigation was carried out to identify the covariates that could better explain the CO2 emissions. In practice, however, many signals that could have played the role of covariates could not be measured accurately. The intersection between the set of candidate and truly measurable covariates was finally identified after an intensive exchange of information and experience with marine engineers, shipping managers and operators. The following nine functional covariates have been identified, and are thus assumed as a characterization of the ship operational conditions: SOG, left propeller pitch, right propeller pitch, transversal and longitudinal components of the wind, wave height, wave period, swell height and derivative of the cumulative navigation time. Table 2 reports the complete list of the functional covariates considered in this work. Moreover, Figures 1 and 2 show the profiles of the covariates and of the response, respectively, included in the reference data set used for model building and control chart limits estimation. In Section 4, we describe in more detail how smooth profiles are obtained from the discrete observations acquired by the DAQ system over time, with 2-5 min frequency.

METHODOLOGY

The FRCC is a general framework for profile monitoring that can be divided into three main steps.8 First, (i) define an MFLR model which links the functional response variable $\tilde{Y}$, defined on the compact domain $\mathcal{T}$, and a vector $\tilde{X} = (\tilde{X}_1, \dots, \tilde{X}_p)^\top$ of random functional covariates, defined on the compact domain $\mathcal{S}$. Secondly, (ii) define the estimation method of the chosen model and, thirdly, (iii) define the monitoring strategy for the functional residual, defined as the difference between the response and its fitted value. In what follows, we assume that $\tilde{X}_1, \dots, \tilde{X}_p$ and $\tilde{Y}$ have smooth realizations in $L^2(\mathcal{S})$ and $L^2(\mathcal{T})$, that is, the Hilbert spaces of square integrable functions defined on $\mathcal{S}, \mathcal{T} \subset \mathbb{R}$. A specific implementation of the FRCC can be obtained by assuming for step (i) the following MFLR model:

$$Y(t) = \int_{\mathcal{S}} X(s)^\top \beta(s,t)\, ds + \varepsilon(t), \quad t \in \mathcal{T}, \qquad (1)$$

where $X$ and $Y$ are the standardized versions of $\tilde{X}$ and $\tilde{Y}$, obtained through the transformation approach of Chiou et al.13 The regression coefficient $\beta = (\beta_1, \dots, \beta_p)^\top$ is a vector whose entries $\beta_i$ are square integrable bivariate functions defined on $\mathcal{S} \times \mathcal{T}$, and the random error function $\varepsilon$ has zero mean and variance function $\sigma^2$ and is independent of $X$. For step (ii), we use an estimation method based on the multivariate Karhunen-Loève theorem.30 In particular, we assume that the standardized covariate and response variables can be represented as follows:

$$X(s) = \sum_{m=1}^{\infty} \xi_m \psi_m(s), \quad s \in \mathcal{S}, \qquad (2)$$

$$Y(t) = \sum_{l=1}^{\infty} \eta_l \phi_l(t), \quad t \in \mathcal{T}, \qquad (3)$$

where $\psi_m$ and $\phi_l$ are the (multivariate) functional principal components (PCs) of $X$ and $Y$, with corresponding scores $\xi_m$ and $\eta_l$. An estimator of $\beta(s,t)$ can be readily obtained by considering the truncated versions of Equations (2) and (3), that is,

$$\beta(s,t) \approx \sum_{m=1}^{M} \sum_{l=1}^{L} b_{ml}\, \psi_m(s)\, \phi_l(t), \qquad (4)$$

with $b_{ml} = E(\xi_m \eta_l)/\lambda_m$, where $\lambda_m$ is the eigenvalue associated with $\psi_m$, and $M, L < \infty$. Plugging Equation (4) into Equation (1), due to the orthonormality of the PCs $\psi_m$ and $\phi_l$, we obtain

$$\eta = B^\top \xi + e, \qquad (5)$$

where $\xi = (\xi_1, \dots, \xi_M)^\top$, $\eta = (\eta_1, \dots, \eta_L)^\top$, $e = (e_1, \dots, e_L)^\top$ and $B = \{b_{ml}\}_{m=1,\dots,M;\, l=1,\dots,L}$, with $e_l = \int_{\mathcal{T}} \varepsilon(t)\, \phi_l(t)\, dt$.
Therefore, the problem of estimating $\beta$ reduces to the estimation of the matrix $B$, which can be obtained through least squares given a set of independent realizations $(\tilde{X}_j, \tilde{Y}_j)$, $j = 1, \dots, n$, of $(\tilde{X}, \tilde{Y})$. Then, given the least-squares estimator $\hat{B}$ of $B$, the estimator $\hat{\beta}$ of $\beta$ can be calculated as

$$\hat{\beta}(s,t) = \sum_{m=1}^{M} \sum_{l=1}^{L} \hat{b}_{ml}\, \hat{\psi}_m(s)\, \hat{\phi}_l(t), \qquad (6)$$

where $\hat{b}_{ml}$ are the entries of $\hat{B}$, and $\hat{\psi} = (\hat{\psi}_1, \dots, \hat{\psi}_M)^\top$ and $\hat{\phi} = (\hat{\phi}_1, \dots, \hat{\phi}_L)^\top$ are estimators of $\psi$ and $\phi$, respectively. Finally, an estimator $\hat{Y}$ of $Y$ is

$$\hat{Y}(t) = \sum_{m=1}^{M} \sum_{l=1}^{L} \hat{b}_{ml}\, \hat{\xi}_m\, \hat{\phi}_l(t), \qquad (7)$$

where $\hat{\xi}_m = \sum_{i=1}^{p} \int_{\mathcal{S}} X_i(s)\, \hat{\psi}_{im}(s)\, ds$ are the estimated scores. For step (iii), we can define the raw functional residual as

$$e(t) = Y(t) - \hat{Y}(t), \quad t \in \mathcal{T}. \qquad (8)$$

However, by following the remarks in Centofanti et al,8 we shall rather consider a scaled version of it, named the studentized functional residual and defined as

$$e^*(t) = \frac{Y(t) - \hat{Y}(t)}{\big[\widehat{\mathrm{Cov}}(Y - \hat{Y})(t)\big]^{1/2}}, \quad t \in \mathcal{T}. \qquad (9)$$

The residual variance function is estimated as $\widehat{\mathrm{Cov}}(Y - \hat{Y})(t) = \hat{\sigma}^2(t) + \hat{v}(t,t)$, for $t \in \mathcal{T}$, where $\hat{\sigma}^2$ is an estimator of $\sigma^2$ and $\hat{v}$ is defined as

$$\hat{v}(t_1, t_2) = \hat{\phi}(t_1)^\top\, \widehat{\mathrm{Cov}}(\hat{B}^\top \hat{\xi})\, \hat{\phi}(t_2), \qquad (10)$$

where $\hat{\xi}$ is the estimator of the score vector of $X$, $\widehat{\mathrm{Cov}}(\hat{B}^\top \hat{\xi})$ is the estimator of $\mathrm{Cov}(\hat{B}^\top \hat{\xi}, \hat{B}^\top \hat{\xi})$, obtained from the estimator of $\mathrm{Cov}(\hat{\xi})$, and $\hat{\phi}$ is the estimator of the vector of the first $L$ eigenfunctions of $Y$. As stated in Centofanti et al,8 the use of the studentized functional residual in place of the raw residual is needed to reduce the effect of covariate mean shifts on the ability of the FRCC to identify OC conditions of the quality characteristic, an effect that can be large especially when the coefficient function $\beta$ is poorly estimated. In that case, the interpretation of the FRCC becomes cumbersome: a point falling outside the FRCC control limits could wrongly be attributed to a mean shift in the quality characteristic, even though the latter shows a perfectly reasonable behaviour given the value of the covariates. This is problematic because the aim of the FRCC is to monitor the quality characteristic given the value of the covariates; thus, if a mean shift in the covariates causes an OC signal, the chart is stating that the value of the quality characteristic disagrees with those of the covariates, which, of course, is not true. In this respect, the studentized functional residual is less influenced by covariate mean shifts than the raw residual. Indeed, the aim of $\widehat{\mathrm{Cov}}(Y - \hat{Y})^{1/2}$ is to weight the raw residual on the basis of its uncertainty, so that for an extreme realization of $X$ the residual is heavily scaled down. Thus, the probability of the quality characteristic being judged IC, when no special causes act on the process, is larger than that corresponding to the raw residual. Note that, for sample sizes large consistently with the data set complexity, the studentized functional residual leads to the same results as the raw residual, because in that case it becomes independent of the values achieved by the covariates. Therefore, the use of the studentized residual allows controlling the false alarm rate in the presence of covariate mean shifts, at the cost of reducing the power of the FRCC in identifying OC conditions in the quality characteristic, especially for extreme values of the covariates. This behaviour is, however, inevitable, because it comes from the greater uncertainty in the model estimation originated by the limited number of extreme realizations in the reference sample.
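To make steps (i)-(ii) concrete, here is a minimal discrete-grid sketch in R, assuming curves observed on a common grid and using ordinary PCA of the discretized curves as a rough stand-in for multivariate FPCA; it illustrates Equations (5) and (8) on simulated data, and is not the funcharts implementation (the studentization of Equations (9)-(10) is omitted).

```r
# Score-space regression of Equation (5) on a common discretization grid.
set.seed(2)
n <- 169
X <- matrix(rnorm(n * 50), n)              # one standardized covariate curve set
Y <- 0.8 * X + matrix(rnorm(n * 50, sd = 0.2), n)  # standardized response curves

M <- 7; L <- 1                             # retained PCs (as in Section 4.1.3)
pcx <- prcomp(X, center = FALSE); pcy <- prcomp(Y, center = FALSE)
xi  <- pcx$x[, 1:M, drop = FALSE]          # covariate scores (xi in Eq. 5)
eta <- pcy$x[, 1:L, drop = FALSE]          # response scores (eta in Eq. 5)

B_hat <- qr.solve(xi, eta)                 # least-squares estimate of B
Y_hat <- (xi %*% B_hat) %*% t(pcy$rotation[, 1:L, drop = FALSE])
E_raw <- Y - Y_hat                         # raw functional residuals, Eq. (8)
```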
We use a monitoring strategy based on the Hotelling's $T^2$ and the SPE control charts4,6,27,32 applied to $e^*$ in Equation (9). In particular, the studentized functional residual is approximated as

$$e^*(t) \approx \sum_{k=1}^{K} \zeta_k\, \omega_k(t), \qquad (11)$$

where the scores are $\zeta_k = \int_{\mathcal{T}} e^*(t)\, \omega_k(t)\, dt$ and the PCs $\omega_k$ are the eigenfunctions, corresponding to the eigenvalues $\lambda^*_k$ in descending order, of the covariance function of $e^*$. The Hotelling's statistic $T^2$ is obtained as follows:

$$T^2 = \zeta^\top \Lambda^{-1} \zeta, \qquad (12)$$

where $\Lambda = \mathrm{diag}(\lambda^*_1, \dots, \lambda^*_K)$ is the variance-covariance matrix of $\zeta = (\zeta_1, \dots, \zeta_K)^\top$. Note that $T^2$ is the squared distance from the origin of the projection of $e^*$ onto the space spanned by the first $K$ PCs, standardized by the score variances. Analogously, changes along directions orthogonal to that space are monitored by the statistic

$$SPE = \int_{\mathcal{T}} \big(e^*(t) - \hat{e}^*(t)\big)^2\, dt, \qquad (13)$$

where $\hat{e}^*$ denotes the truncated approximation in Equation (11). The control charts are designed in Phase I by means of a set of functional studentized residuals $e^*_j$, $j = 1, \dots, n$, obtained from independent realizations $(\tilde{X}_j, \tilde{Y}_j)$ acquired under IC conditions. Phase I also includes the estimation of the unknown MFLR model parameters, of the PCs $\omega_k$ and of the matrix $\Lambda$ (calculated by means of the sample covariance), as well as the estimation of the control limits for both the Hotelling's $T^2$ and the SPE control charts. The latter can be obtained by means of the $(1-\alpha)$-quantiles of the empirical distribution of the two statistics, where $\alpha$ is chosen to control the overall type I error probability. In the monitoring phase (Phase II), the functional studentized residuals of new data are calculated and an alarm signal is issued if at least one of the corresponding $T^2$ and $SPE$ statistics violates the control limits.

RESULTS AND DISCUSSION

In this section, we show the results of the application of the FRCC to the data set described in Section 2.2. In particular, the retrospective and prospective phases, that is, Phase I and Phase II, are described in Section 4.1 and Section 4.2, respectively. Moreover, in Section 4.3, a comparison with simpler monitoring approaches is shown. We have used the R package funcharts to build the FRCC and to perform all the analyses shown in this paper. The package is available on CRAN at https://cran.r-project.org/web/packages/funcharts/index.html. Moreover, in the supplementary material we provide an R script and the data to reproduce the results in this paper. Note that the data have been scaled for confidentiality reasons.

Phase I

Phase I comprises the recovery of smooth functional data from the discrete observations for each voyage (Section 4.1.1), the identification of the reference data set of IC voyages (Section 4.1.2) and the estimation of the MFLR model as described in Section 3 (Section 4.1.3).

Data smoothing

The first step of the analysis is to obtain smooth functional data from the discrete observations for each voyage of the ship. We use a B-spline basis expansion and penalized least squares to estimate the corresponding basis coefficients. A common approach is to set a quite large number of basis functions and then select the optimal smoothing parameter by minimizing the generalized cross-validation (GCV) error.1 However, the number of available discrete points is above 200 for each voyage and, by following this approach, the GCV criterion leads in practice to choosing a smoothing parameter equal to zero for all functional variables. This is a typical overfitting problem, as also pointed out by Reiss and Ogden,33 who show that at finite sample sizes GCV is likely to develop multiple minima and to undersmooth. Therefore, we encourage parsimony and achieve regularization by choosing a small, efficient number of basis functions, with the smoothing parameter fixed to a small positive value (i.e. $10^{-10}$) to ensure identifiability. In Figure 3, we plot the GCV error against the number of B-spline basis functions. While increasing the number of basis functions reduces the GCV error, we select 25 basis functions for all functional variables, corresponding to the elbow point of these curves.
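A minimal sketch of this selection on simulated data, assuming an unpenalized B-spline fit so that the degrees of freedom equal the number of basis functions; the GCV formula is the standard one for linear smoothers, and the simulated profile is only illustrative.

```r
# Track the GCV error against the B-spline basis size and pick the elbow.
library(splines)
set.seed(3)
t_obs <- seq(0, 1, length.out = 220)            # > 200 points per voyage
y_obs <- sin(2 * pi * t_obs) + rnorm(220, sd = 0.1)

gcv_for_nbasis <- function(nb) {
  X   <- bs(t_obs, df = nb, intercept = TRUE)   # B-spline design matrix
  fit <- lm.fit(X, y_obs)
  n   <- length(y_obs)
  rss <- sum(fit$residuals^2)
  n * rss / (n - nb)^2                          # GCV for a projection smoother
}
nb_grid <- 5:60
gcv <- vapply(nb_grid, gcv_for_nbasis, numeric(1))
plot(nb_grid, gcv, type = "b",
     xlab = "number of basis functions", ylab = "GCV error")
# GCV keeps decreasing, so one selects the elbow (about 25 in the paper).
```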
Reference data set

Once functional data are obtained, in order to monitor a set of voyages in a given period, it is necessary to identify a reference data set of previous IC voyages that can be used for model building and for the estimation of the Hotelling's $T^2$ and SPE control chart limits. Starting from a historical data set, any voyage that is not representative of the IC conditions has to be removed from it. Specific Phase I techniques are designed for the problem of eliminating anomalous voyages from the historical data set, and they generally lead to control charts with limits different from the ones calculated for Phase II. However, the problem of selecting the best method to perform Phase I is beyond the scope of this paper, which is instead mainly focused on Phase II monitoring. In this paper, we use the same FRCC framework also in Phase I and rely on experts' opinion to establish which voyages are actually to be considered anomalous and excluded. That is, we first use the historical data set to build the FRCC and estimate the control chart limits, and we plot the FRCC for the same voyages; then, the voyages signalled as anomalous are carefully investigated by maritime engineers to understand whether special causes occurred; in these cases, the voyages are removed from the data set; the process is repeated until the final reference data set, containing only voyages considered as IC, is obtained (a simplified sketch of this loop is given below). Starting from the initial historical data set of 190 voyages, at the end of this iterative process of identifying outliers, detecting anomalies, removing anomalous voyages and refitting the model, we obtain a reference data set of 169 voyages. Figure 4 shows the FRCC applied to the initial data set. The x-axis label indicates the voyage number (VN), here intended as a progressive counting label denoting subsequent voyages in the data set. Moreover, Figure 5 shows some OC studentized residual profiles correctly signalled as anomalous.
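The screening step can be sketched as follows, under strong simplifications (empirical-quantile limits recomputed on the same sample, a fixed $K$, Riemann-sum integrals); in the actual study every signalled voyage was additionally reviewed by maritime engineers before removal, so this is not a fully automatic procedure. All names and data are illustrative.

```r
# One pass of T2/SPE screening on a matrix E of studentized residual curves
# (rows = voyages, columns = points of the common grid on (0, 1)).
phase1_t2_spe <- function(E, K = 8, alpha = 0.05) {
  pc  <- prcomp(E, center = FALSE)
  sc  <- pc$x[, 1:K, drop = FALSE]                 # scores zeta, Eq. (11)
  lam <- pc$sdev[1:K]^2
  T2  <- rowSums(sweep(sc^2, 2, lam, "/"))         # Hotelling's T2, Eq. (12)
  rec <- sc %*% t(pc$rotation[, 1:K, drop = FALSE])
  SPE <- rowSums((E - rec)^2) / ncol(E)            # Riemann sum for Eq. (13)
  lim <- c(T2  = unname(quantile(T2,  1 - alpha / 2)),
           SPE = unname(quantile(SPE, 1 - alpha / 2)))
  list(T2 = T2, SPE = SPE, limits = lim,
       flag = T2 > lim["T2"] | SPE > lim["SPE"])
}

set.seed(7)
E <- matrix(rnorm(190 * 50), 190)   # stand-in for 190 residual profiles
res <- phase1_t2_spe(E)
which(res$flag)                     # candidates for engineering review;
# confirmed anomalies are removed and the chart is refit until stable.
```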
Model building

The FRCC relies on the choice of $M$ and $L$ in Equation (4), as well as of $K$ in Equation (11). Figure 6 shows the cumulative fraction of variance explained by the functional PCs of the multivariate functional covariates, of the functional response and of the functional studentized residuals, respectively. Based on the results in Figure 6, we opt for a more parsimonious choice than the fractions of explained variance suggested in Centofanti et al,8 that is, we select $M = 7$, $L = 1$ and $K = 8$. The corresponding actual fractions of variance explained are 81%, 96% and 96%. To allow for a possible interpretation of the selected functional PCs, in Figure 7 we plot the eigenfunctions of the covariance operator of the standardized multivariate functional covariates. Since they all have unit norm, we multiply them by the square root of the corresponding eigenvalues, so that profiles with larger norm are PCs that explain a larger fraction of the total variance in the data. It can be readily seen that the first PC depends almost entirely on the two propeller pitch variables, the SOG and the navigation time. The latter is negatively correlated with the other variables and, for all these variables, the weights are almost constant over the entire functional domain. The second PC strongly depends on the sea state descriptors (swell height, wave height and wave period) and, among the wind variables, only on the transversal component. These functional variables all have positive weights, with some parts of the domain showing slightly larger weights than others. The third PC seems to depend mainly on the longitudinal component of the wind. In other words, the first PC describes how fast the ship is moving, while the second and third PCs capture two distinct aspects of the environmental conditions. Figure 8A shows the first PC (i.e. eigenfunction) of the covariance operator of the standardized functional response, which alone explains most of the variability in the data. This highlights that, after standardization, the response profiles vary essentially in their overall level along the voyage. Figure 8B shows the eigenfunctions of the covariance operator of the studentized residuals. The first PC depends on the average value over the entire voyage. The second PC accounts for the difference between the first and the second half of the voyage. Some of the following PCs seem to assign a larger weight to the boundaries of the functional domain, but their interpretation becomes less straightforward. Figure 9 shows the estimated functional coefficients obtained as in Equation (6). Since the functional response is approximated with a single functional PC, which is, in practice, constant over the entire domain, the functional coefficients show only vertical bands along the direction of $t$. The most important predictors are those associated with the first and third PCs of the functional covariates. As discussed in Section 3, the monitoring is based on the studentized rather than the raw functional residuals. Figure 10 shows the effect of this choice, where some of the more extreme raw functional residuals, shown in Figure 10A, are properly attenuated by the studentization (Figure 10B) when, as in this case, they correspond to more extreme profiles of the covariates, as described in Section 3.

Phase II

Figure 11 shows the FRCC used in Phase II, that is, the actual monitoring phase. The points correspond to 22 subsequent voyages, each denoted by its VN. For simplicity, we count and label the voyages again with VN 1 through 22, even though these voyages must not be confused with the first 22 ones of Phase I in Figure 4. Some Phase II voyages are signalled as OC in Figure 11. In particular, voyages 1, 3, 15 and 16 are OC in both the Hotelling's $T^2$ and the SPE control charts, while voyages 20 and 21 are OC in the SPE control chart only. OC voyages are generally characterized by some unexpected behaviour in the CO2 emissions, either because these have not been properly predicted by the MFLR model in some specific part of the domain, or because the prediction error was overall moderately large for a considerable part of the voyage. The functional studentized residuals for these OC voyages are plotted in Figure 12 against the studentized residuals of the reference data set. Voyages 1, 3 and 15 plot far above the upper control limits in both the Hotelling's $T^2$ and the SPE control charts. In particular, the functional studentized residual of voyage 1 is signalled as OC because it is larger than usual on average and at the end of the voyage. Voyage 3 shows an unusual studentized residual that is positive in the first part of the voyage, negative in the second part and extremely large at the very end of the voyage with respect to the Phase I reference profiles (depicted as grey lines). Voyage 15 shows the same unusual negative deviation at the end of the voyage as voyage 3, and its residual profile remains negative for the whole voyage. Voyages 16, 20 and 21 are signalled as OC in Figure 11, even though they are close to the upper control limits and show a less dramatic behaviour, as shown by the plots reported in the second row of panels in Figure 12. The studentized residual profiles corresponding to these voyages have in fact less prominent peaks and valleys.
In particular, it is worth noting that voyage 16 is signalled as anomalous plausibly because its residual profile achieves large positive values for almost the entire voyage. The residual profile of voyage 20, even if more regular, as it is near zero in the middle of the voyage, has a mild peak and valley at the extremes of the voyage. The residual profile of voyage 21, if considered pointwise, is instead almost entirely within the range of the Phase I studentized residuals; however, it is signalled as OC plausibly because of a sudden profile jump in the middle of the voyage. Finally, we want to highlight the importance of using both the Hotelling's $T^2$ and the $SPE$ monitoring statistics to detect OC voyages, by discussing their difference. With the FPCA model on the functional residuals, we use the first $K = 8$ components to approximate the data as in Equation (11), since they explain 95% of their variability (Figure 6C). In particular, note that the first component, which alone explains about 75% of the variability in the data (Figure 6C), in practice looks at shifts in mean during the entire voyage. Then, the Hotelling's $T^2$ statistic in Equation (12), which is calculated on the scores of the first eight components, monitors deviations within the FPCA model, and from the discussion above we can state that a functional residual with a particularly large shift in mean will have a large $T^2$. On the other hand, the $SPE$ statistic in Equation (13) monitors the remaining part, that is, the error made in Equation (11) when approximating the residual function with its projection based on FPCA, which should be small when the profile is consistent with the FPCA model. Therefore, a large $SPE$ value indicates that a profile is clearly different from the underlying FPCA model. In Figure 11, all the signalled voyages have $SPE$ above the upper control limit, but voyages 20 and 21 have an IC $T^2$. These OC voyages show smaller shifts in mean than the others in Figure 12, so it makes sense that the $T^2$ statistic is IC; however, they still show an anomalous behaviour, like the other OC voyages, which is well captured by the $SPE$ statistic.

Comparison with other methods

In this section, we assess whether it is actually convenient to use the FRCC rather than simpler approaches. Centofanti et al8 showed that the FRCC is more powerful than the index-based (INBA) control chart, which monitors the area under the response variable, and than the RESP control chart, which monitors the coefficients coming from the functional PC decomposition of the response via Hotelling's $T^2$ and $SPE$ control charts. We compare the FRCC with these two control charts and discuss whether they lead to different results in terms of detection of OC voyages in this specific application. Figure 13 shows the INBA control chart, which is not able to detect any of the voyages signalled by the FRCC and seems not appropriate for this type of application. Moreover, this control chart detects only voyage 22 as OC, which is signalled also by the RESP control chart in Figure 14 but, on the other hand, is IC in the FRCC. Apparently, voyage 22 could be an anomalous voyage that the FRCC is not able to correctly identify. By observing the functional response profile plotted in the left panel of Figure 15, it is plausible that this voyage is signalled as OC by the INBA and RESP control charts because, marginally, the CO2 emissions were particularly low during the entire voyage.
However, by observing the corresponding functional studentized residual plotted in the right panel of Figure 15, we can conclude that these low values of the response variable are predicted well by the MFLR model. Therefore, conditionally on the functional covariate profiles, the response variable profile of voyage 22 is correctly judged as IC by the FRCC. This example highlights the convenience of using the FRCC over simpler approaches when the interest is in monitoring a functional quality characteristic conditionally on functional covariates that influence it. The RESP control chart correctly detects some of the voyages identified by the FRCC, that is, voyages 1, 3, 15 and 16, but it misses two of them (i.e. voyages 20 and 21), while it signals voyage 11, which is IC in the FRCC. Note that voyage 11 is close to the upper control limits in both the RESP control chart and the FRCC.

CONCLUSIONS

A particular implementation of the FRCC framework proposed in Centofanti et al8 is applied in this paper to monitor ship CO2 emission profiles, in order to identify any special causes at given values of the functional covariates that may have an influence on them. The CO2 emission profiles are adjusted for the effect of these covariates by means of an MFLR model estimated via multivariate FPCA. That is, the residuals from the MFLR model are monitored by jointly using the Hotelling's $T^2$ and the $SPE$ control charts built on their functional PC decomposition. The specific implementation of the FRCC relies on the use of the studentized functional residual to take into account the different residual variance at different covariate values. The proposed FRCC proved effective in the identification of anomalous voyages in the real-case study presented, which concerns data collected during 2020 on a Ro-Pax ship operating in the Mediterranean Sea. Moreover, the particular implementation of the FRCC framework was compared against alternative approaches available in the literature, which, however, only look at the marginal distribution of the functional response, or only at some specific features of it. All these competing methods showed a lack of ability to signal some important OC conditions and, in other situations, provided false alarms. Since the monitoring statistics are calculated on the entire profiles acquired at the end of each voyage, this work does not directly allow for real-time feedback control, that is, for guiding actions during an ongoing voyage. Instead, the FRCC can be used to automatically track (possibly across the whole fleet) patterns and trends that may identify malfunctioning in the engines, the need for hull cleaning, or the need for any other energy efficiency initiative. This information can help to diagnose and prevent possible problems in future voyages, or to schedule maintenance operations. Finally, one important output achieved in our research is the technological transfer of the FRCC tool to the shipping company Grimaldi Group. The practical applicability of these statistical tools is in fact further supported by providing the energy saving department of the company with R code that is able to automatically import new data from the company server, to encapsulate the mathematical and numerical details provided in the paper, and to routinely produce automatic voyage reports for some Ro-Pax ships of interest from their fleet.
ACKNOWLEDGEMENTS

The authors are deeply grateful to the anonymous referees and to the editors for their suggestions and help in significantly improving the manuscript. The authors are also extremely grateful to the Grimaldi Group's Energy Saving Department engineers Dario Bocchetti, Andrea D'Ambra and Rosa Di Matteo for the access to observational data, the maritime domain insight and the general support over the course of these activities.

AUTHOR BIOGRAPHIES

Christian Capezza is a postdoc researcher at the Department of Industrial Engineering of the University of Naples Federico II. He works on advanced statistical methodologies for engineering applications and his research project regards the development of interpretable statistical methods for the analysis of complex systems in Industry 4.0.

Fabio Centofanti is a PhD student at the Department of Industrial Engineering of the University of Naples Federico II, Italy. His main research interests include functional data analysis and statistical process monitoring for industrial applications.

Antonio Lepore is an Assistant Professor at the Department of Industrial Engineering of the University of Naples Federico II, Italy. His main research interests include the industrial application of statistical techniques to the monitoring of complex measurement profiles from multisensor acquisition systems, with particular attention to renewable energy and harmful emissions.

Alessandra Menafoglio is an Assistant Professor at MOX, Department of Mathematics, Politecnico di Milano. Her research interests focus on the development of innovative statistical models and methods for the analysis and statistical process control of complex observations (e.g. curves, images and functional signals), possibly characterized by spatial dependence.

Biagio Palumbo is an Associate Professor in 'Statistics for experimental and technological research' at the Department of Industrial Engineering of the University of Naples Federico II, Italy. His major research interests include reliability, design and analysis of experiments, statistical methods for process monitoring and optimization, and data science for technology.

Simone Vantini is an Associate Professor of Statistics at the Politecnico di Milano, Italy. He has been publishing widely in functional and object-oriented data analysis. His current research interests include permutation testing.
2-OGC: Open Gravitational-wave Catalog of Binary Mergers from Analysis of Public Advanced LIGO and Virgo Data

We present the second Open Gravitational-wave Catalog (2-OGC) of compact-binary coalescences, obtained from the complete set of public data from Advanced LIGO's first and second observing runs. For the first time we also search public data from the Virgo observatory. The sensitivity of our search benefits from updated methods of ranking candidate events, including the effects of nonstationary detector noise and varying network sensitivity; in a separate targeted binary black hole merger search we also impose a prior distribution of binary component masses. We identify a population of 14 binary black hole merger events with probability of astrophysical origin >0.5, as well as the binary neutron star merger GW170817. We confirm the previously reported events GW170121, GW170304 and GW170727, and also report GW151205, a new marginal binary black hole merger with a large primary mass that may have formed through hierarchical merger. We find no additional significant binary neutron star merger or neutron star-black hole merger events. To enable deeper follow-up as our understanding of the underlying populations evolves, we make available our comprehensive catalog of events, including the subthreshold population of candidates and posterior samples from parameter inference of the 30 most significant binary black hole candidates.

Introduction

The Advanced LIGO (LIGO Scientific Collaboration et al. 2015) and Virgo (Acernese et al. 2015) observatories have ushered in the age of gravitational-wave astronomy. The first and second observing runs (O1 and O2) of Advanced LIGO and Virgo covered the period from 2015 to 2017 and provided a total of 171 days of multidetector observing time. To date, these instruments have observed a population of binary black holes (BBHs) and a single binary neutron star, GW170817, which has become one of the most observed astronomical events (Abbott et al. 2017a). Ten BBH mergers and a single binary neutron star merger have been reported in this period by the LIGO and Virgo Collaborations (Abbott et al. 2019a). Several independent analyses have examined the publicly released data (Antelis & Moreno 2019; Nitz et al. 2019a; Venumadhav et al. 2019a), including an analysis targeting BBH mergers that reported several additional candidates. The first Open Gravitational-wave Catalog (1-OGC) searched for compact-binary coalescences during O1 (Nitz et al. 2019a). We extend that analysis to cover both O1 and O2 while incorporating Virgo data for the first time. During the first observing run, only the two LIGO instruments were observing; joint three-detector observing with the Virgo instrument began in 2017 August, during the second observing run. We make additional improvements to our search by accounting for short-time variations in the network sensitivity and power spectral density (PSD) estimates directly in our ranking of candidate events. A similar procedure for tracking PSD variations was independently developed in Venumadhav et al. (2019a, 2019b) and Zackay et al. (2019b). We produce a comprehensive catalog of candidate events from our matched-filter search, which covers binary neutron star (BNS), neutron star-black hole (NSBH) and BBH mergers. While not individually significant on their own, subthreshold candidates can be correlated with gamma-ray burst candidates (Nitz et al. 2019c), high-energy neutrinos (Countryman et al.
2019), optical transients (Andreoni et al. 2019; Setzer et al. 2019) and other counterparts, to uncover new, fainter sources. In addition to our broad search, we conduct a targeted analysis to uncover fainter BBH mergers. It is possible to confidently detect BBH mergers that are not individually significant in the context of the wider search space by considering their consistency with the population of confidently observed BBH mergers. The collection of highly significant detected events constrains the astrophysical rates and distributions with relatively small uncertainties (Abbott et al. 2019b). For this reason, we do not yet employ this technique for the binary neutron star or NSBH populations, as their rates and their mass and spin distributions are much less constrained. We improve over the BBH-focused analysis introduced in Nitz et al. (2019a) by considering an explicit population prior (Dent & Veitch 2014). This focused approach is most directly comparable to the results of Venumadhav et al. (2019b), who consider only BBH mergers, rather than a broad parameter-space search such as that employed in Abbott et al. (2019a). We find eight highly significant BBH mergers at false alarm rates of less than 1 per 100 yr in our full analysis, along with the binary neutron star merger GW170817. No other individually significant BNS or NSBH sources were identified. However, if the population of these sources were better understood, it might be possible to pick out fainter mergers from our population of candidates. When we apply a ranking to search candidates that optimizes search sensitivity for a population of BBH mergers similar to that already detected, we identify a further six such mergers with a probability of astrophysical origin above 50%. These include GW170818 and GW170729, which were reported in Abbott et al. (2019a), along with GW170121, GW170727 and GW170304, which were reported in Venumadhav et al. (2019b). We report one new marginal BBH candidate, GW151205. Our results are broadly consistent with both Venumadhav et al. (2019b) and Abbott et al. (2019a).

LIGO and Virgo Observing Period

We analyze the complete set of public LIGO and Virgo data (Vallisneri et al. 2015). The distribution of multidetector analysis time and the evolution of the observatories' sensitivities over time are shown in Table 1 and Figure 1, respectively. To date, there have been 288 days of Advanced LIGO and Virgo observing time. Two or more instruments were observing during 171 days, and there were only 15.2 days of full LIGO-Hanford, LIGO-Livingston and Virgo joint observing. O2 was the first time that Virgo conducted joint observing with the LIGO interferometers since initial LIGO (Abbott et al. 2016a). The Virgo instrument significantly surpassed the average BNS range of its last VSR2/3 science run (∼10 Mpc; Abadie et al. 2012) to achieve an average of 27 Mpc during the joint observing period of O2. While the amount of triple-detector observing time is limited during the first two observing runs, the ongoing third observing run will considerably improve the availability of three-detector joint observing time, and the methods demonstrated here will be applicable to future analyses of the O3 multidetector data set. Our analysis during triple-detector time remains sensitive to signals that appear in only two of the three detectors, as discussed below in Section 3.3. We note that there are ∼117 days of single-detector observing time.
In this work we do not consider the detection of gravitational-wave mergers during this time; however, methods for assigning meaningful significance to such events have been proposed (Callister et al. 2017) and will be investigated in future work. Single-detector observing time has been used in follow-up analyses where a merger could be confirmed by electromagnetic observations (Abbott et al. 2019c; Nitz et al. 2019c).

Search for Binary Mergers

We use a matched-filtering approach as implemented in the open-source PyCBC library (Allen 2005; Usman et al. 2016; Nitz et al. 2019b). This toolkit has been similarly employed in LIGO/Virgo collaboration and independent analyses (Abbott et al. 2019a; Nitz et al. 2019a). We extend the approach used in the 1-OGC analysis (Nitz et al. 2019a) to handle the analysis of Virgo data, and we incorporate improvements to the ranking of candidates by accounting for time variations in the PSD and in the network sensitivity. The search procedure can be summarized as follows. The data from each detector are correlated against a set of possible merger signals: matched filtering is used to calculate a signal-to-noise (S/N) time series for each potential signal waveform. Our analysis identifies peaks in these time series and follows up the peaks with a set of signal-consistency tests. These single-detector candidates are then combined into multidetector candidates by enforcing astrophysically consistent time delays between detectors, as well as identical component masses and spins. Finally, these candidates are ranked by the ratio of their signal and noise model likelihoods (see Section 3.3).

Search Space

Our analysis targets a wide range of BNS, NSBH and BBH mergers. We matched filter the data with waveform models that span the range of desired detectable sources. Although the space of possible binary component masses and spins is continuous, we must select a discrete set of points in this space as templates to correlate against the data: we use the set of ∼400,000 templates introduced in Dal Canton & Harry (2017), which has been previously used in Nitz et al. (2019a) and Abbott et al. (2019a). This bank of templates is suitable for the detection of mergers up to binary masses of several hundred solar masses, under the conditions that the dominant gravitational-wave emission mode is adequate to describe the signal and that the effects of precession, caused by misalignment of the orbital and component angular momenta, can be neglected (Dal Canton & Harry 2017). Neglecting precession causes a 7% (14%) loss in sensitivity to BBH (NSBH) sources with mass ratio 5 (14) when assuming an isotropic distribution of the components' spins; the loss is negligible for mergers with comparable component masses. Figure 2 shows the distribution of template detector-frame component masses. We use the spinning effective-one-body model (SEOBNRv4) for templates corresponding to mergers with (redshifted, detector-frame) total mass above a fixed threshold (Taracchini et al. 2014; Bohé et al. 2016); the TaylorF2 post-Newtonian model is used in all other cases (Sathyaprakash & Dhurandhar 1991; Droz et al. 1999; Blanchet 2002; Faye et al. 2012). (Note to Table 1: we use the abbreviations H, L and V for the LIGO-Hanford, LIGO-Livingston and Virgo observatories, respectively. Some data (∼0.5%) may not be analyzed due to analysis constraints. Only the indicated combination of observatories was operating in each time period, hence each combination is exclusive of all others.)
Single-detector Candidates

The first stage of our analysis is to identify single-detector candidates. These correspond to peaks in the S/N time series of a particular template waveform; each is assigned a ranking statistic as we discuss below. In this work, we do not explicitly conduct a search for sources that appear in only a single detector; however, the ranking of single-detector candidates forms the first stage of our analysis. For each template waveform and detector data set we calculate an S/N time series $\rho(t)$ using matched filtering. This can be expressed using a frequency-domain convolution as

$$\rho(t) = \left| 4 \int_{f_l}^{f_h} \frac{\tilde{s}(f)\, \tilde{h}^*(f)}{S_n(f)}\, e^{2\pi i f t}\, df \right|,$$

where $\tilde{h}$ is the normalized (Fourier-domain) template waveform and $\tilde{s}$ is the detector data. $S_n$ is the noise PSD of the data, which is estimated using Welch's method. The integration range extends from a template-dependent lower frequency limit $f_l$ (ranging from 20 to 30 Hz in our search) to an upper cutoff $f_h$ given by the Nyquist frequency of the data. Peaks in the S/N time series are collected as single-detector candidates (triggers). To control the rate of single-detector candidates to be examined, our analysis preclusters these triggers: only those that are among the loudest 100 every ∼1 s within a set of predefined chirp-mass bins are kept. The binning ensures that loud triggers from a specific region (which may be caused by non-Gaussian noise artifacts) do not cause quiet signals elsewhere in parameter space to be missed. We remove candidates where the instrument state indicates the data may be adversely affected by instrumental or environmental noise artifacts, as indicated by the Gravitational-Wave Open Science Center (GWOSC; Vallisneri et al. 2015; Abbott et al. 2016b, 2018). This affects ∼0.5% of the observation period. However, there remain classes of transient non-Gaussian noise in the LIGO data which produce triggers with large values of S/N (Nuttall et al. 2015; Abbott et al. 2016b, 2018; Cabero et al. 2019). The surviving single-detector candidates are subjected to the signal-consistency tests introduced in Allen (2005) and Nitz (2018). These tests check that the accumulation of signal power as a function of frequency, and the power outside the expected signal band, respectively, are consistent with an astrophysical explanation. They produce two statistic values which are $\chi^2$ distributed: $\chi^2_r$ and $\chi^2_{r,sg}$, respectively (Nitz 2018). These are used to reweight (Babak et al. 2013) the single-detector signal strength in two stages. This reweighting allows candidates that match an expected astrophysical source well to be assigned a statistic value similar to their matched-filter S/N, while downweighting many classes of non-Gaussian noise transients. For all candidates we apply

$$\hat{\rho} = \begin{cases} \rho, & \chi^2_r \le 1, \\ \rho \left[ \big(1 + (\chi^2_r)^3\big)/2 \right]^{-1/6}, & \chi^2_r > 1, \end{cases}$$

and a further reweighting of $\hat{\rho}$ by $\chi^2_{r,sg}$, yielding $\tilde{\rho}$, is applied to candidates from short-duration, higher-mass templates. This latter test is only applied to these short-duration, higher-mass signals, as it is computationally intensive and has the greatest impact for short-duration signals which may otherwise be confused with some classes of transient non-Gaussian noise (Nitz 2018; Cabero et al. 2019). Otherwise, we set $\tilde{\rho} = \hat{\rho}$. This statistic $\tilde{\rho}$ is the same as used in the 1-OGC analysis (Nitz et al. 2019a) and in the LVC O2 catalog (Abbott et al. 2019a).
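For illustration, a toy discretized version of the S/N time series defined above can be written in a few lines of R; it assumes a white, known PSD and periodic boundaries, and it glosses over one-sided/two-sided spectral conventions, so it is a sketch of the formula rather than of the PyCBC implementation.

```r
# Toy FFT-based matched filter: discrete approximation of rho(t) above.
matched_filter_snr <- function(s, h, psd, dt) {
  n   <- length(s)
  df  <- 1 / (n * dt)
  s_f <- fft(s) * dt                     # discrete approximation of the FT
  h_f <- fft(h) * dt
  z <- 4 * fft(s_f * Conj(h_f) / psd, inverse = TRUE) / n / dt  # <s|h>(t)
  sigma2 <- 4 * sum(abs(h_f)^2 / psd) * df                      # <h|h> norm
  abs(z) / sqrt(sigma2)                  # S/N time series
}

# Example: a toy "chirp" buried in white noise
set.seed(4)
dt <- 1 / 1024; tt <- seq(0, 1 - dt, by = dt)
h  <- sin(2 * pi * 60 * tt) * exp(-((tt - 0.5) / 0.05)^2)
s  <- rnorm(length(tt)) + 5 * h
snr <- matched_filter_snr(s, h, psd = rep(1, length(tt)), dt = dt)
which.max(snr)                           # peak near the injection time
```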
We further improve upon this statistic by accounting for short-term changes in the overall PSD estimate; the issue of PSD variation was also addressed in Venumadhav et al. (2019a). Previously we modeled the PSD for each detector as a function of frequency, $S_n(f)$, estimated on a 512 s timescale. We now introduce a time-dependent factor $v_S(t)$ which accounts for short-term, O(10 s), variations in sensitivity, estimated using the method described in Mozzon et al. (2020). Short-term variation in the PSD introduces variation in $\rho$, because we use the estimated PSD $S(f)$ to calculate it. The PSD estimated over short timescales, $S_s(f)$, can differ from a PSD estimated over a longer duration, $S_l(f)$, if the noise is nonstationary. To track the variation in the PSD we use the variance of the S/N. In the absence of a signal, this is given by

$$\mathrm{Var}[\rho] = \frac{\int_{f_l}^{f_h} |\tilde{h}(f)|^2\, S_s(f) / S_l(f)^2\, df}{\int_{f_l}^{f_h} |\tilde{h}(f)|^2 / S_l(f)\, df},$$

which equals unity when $S_s = S_l$. To estimate the variance of the S/N, we first filter the detector data $\tilde{s}$ with

$$\tilde{w}(f) = \mathcal{N}\, \frac{|\tilde{h}(f)|}{S_l(f)},$$

where $\mathcal{N}$ is a normalization constant and $|\tilde{h}(f)|$ is an approximation to the Fourier-domain amplitude of CBC templates. Using Parseval's theorem, we can then estimate the variation in the PSD at a given time $t_0$ as

$$v_S(t_0) = \frac{1}{\Delta t} \int_{t_0 - \Delta t/2}^{t_0 + \Delta t/2} |(w * s)(t)|^2\, dt,$$

where $(w * s)(t)$ is the convolution between the filter and the data and $\Delta t$ is chosen to match the typical timescale of nonstationarity. After finding $v_S(t)$, we evaluate its correlation with the S/Ns and rates of noise triggers empirically. The rate of noise triggers above a given statistic threshold $\hat{\rho}$ is $R_N(\tilde{\rho} > \hat{\rho})$, where the statistic $\tilde{\rho}$ is (proportional to) the S/N obtained by matched filtering using the long-duration PSD $S_l(f)$. The noise trigger rate varies over time due to the nonstationarity of the PSD and is thus a function of the short-duration PSD variation measure. Since the S/N scales as $1/\sqrt{S(f)}$, we naively expect the noise trigger rate to be a function of a "corrected" S/N $\tilde{\rho}/\sqrt{v_S}$; in practice we allow for a more general dependence, which we write as

$$R_N(\tilde{\rho} > \hat{\rho};\, v_S) = f_N(\hat{\rho},\, v_S).$$

Here $f_N$ is a fitting function for the expected noise distribution. Empirically we find, for data without strong localized non-Gaussian transients (glitches), an approximately exponential falloff of the trigger rate with threshold. Linearizing the PSD variation measure $v_S(t)$ around unity, $v_S = 1 + \epsilon_S$, the logarithm of the trigger rate above threshold will vary as

$$\log R_N(\tilde{\rho} > \hat{\rho}) \simeq -\alpha \hat{\rho} + \alpha \kappa \hat{\rho}\, \epsilon_S + \mathrm{const}.$$

By determining the slope of the log-rate versus $\epsilon_S$ dependence for various thresholds $\hat{\rho}$ we estimate $\kappa \sim 0.33$; thus, if we construct a "corrected" statistic

$$\hat{\rho}_c = \frac{\tilde{\rho}}{v_S(t)^{\kappa}},$$

the rate of noise triggers above a given threshold of the corrected statistic is on average no longer affected by variation in $v_S(t)$. The analysis of Venumadhav et al. (2019a) included a similar correction factor, and Zackay et al. (2019b) indicate a modest improvement in sensitivity for the sources they consider. In our analysis, the greatest improvement is for sources corresponding to long-duration templates (BNS and NSBH), while there is negligible improvement for the shorter-duration BBH sources.
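A rough sketch of this correction, assuming already-whitened data so that $v_S(t)$ can be read off from a short-window mean square; the exponent $\kappa = 0.33$ is the value quoted above, everything else is illustrative.

```r
# Estimate v_S(t) as a rolling mean square of whitened data, normalized to
# average one, then deweight the ranking statistic.
psd_variation <- function(x_white, window) {
  ms <- stats::filter(x_white^2, rep(1 / window, window), sides = 2)
  v  <- as.numeric(ms) / mean(x_white^2)
  v[is.na(v)] <- 1                      # pad the window edges
  v
}
corrected_stat <- function(rho_tilde, v_s, kappa = 0.33) rho_tilde / v_s^kappa

# Example: noise whose variance doubles halfway through the segment
set.seed(5)
x <- c(rnorm(5000), rnorm(5000, sd = sqrt(2)))
v <- psd_variation(x, window = 512)
summary(v[1:4000]); summary(v[6000:10000])   # roughly 0.67 vs 1.33
```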
Multidetector Coincident Candidates

In the previous section, we discussed how we identify single-detector candidates and assign them a ranking statistic. We now combine single-detector candidates from multiple detectors to form multidetector candidates (Davies et al. 2020). We introduce a new ranking statistic formed from models of the relative signal and noise likelihoods for a particular candidate. This ranking statistic is based on the expected rates of signal and noise candidates, and is thus comparable across different combinations of detectors by design. We are then able to search for coincident triggers in all available combinations of detectors (for instance, during HLV time, coincidences can be formed in HL, HV, LV and HLV), which can then be compared to one another, clustering and combining false alarm rates while maintaining near-optimal sensitivity. Our signal model is composed of two parts. First is the overall network sensitivity of the analysis at the time of the candidate. Assuming a spatially homogeneous distribution of sources, the signal rate is directly proportional to the sensitive volume. We approximate this factor using the instantaneous range of the least sensitive instrument contributing to the multidetector candidate for a given template labeled by $i$, $\sigma_{i,\min}$, relative to a representative range over the analysis, defined by the median network sensitivity in the HL detector network, $\sigma_{i,\mathrm{HL}}$, for that template. Note that the detectors that contribute to the candidate are not necessarily all of the detectors available at that time. The second part is the probability, given an isotropic and volumetric population of sources, that an astrophysical signal would be observed to have a particular set of parameters $\vec{\theta}$, including time delays, relative amplitudes and relative phases between the network of observatories. This probability distribution $p(\vec{\theta}\,|\,S)$ is calculated by a Monte Carlo method similar to Nitz et al. (2017). For this work we have extended this technique to three detectors for the first time. Combined, our model for the density of signals recovered with network parameters $\vec{\theta}$ in a combination of instruments characterized by $\sigma_{i,\min}$ can be expressed as

$$r_S(\vec{\theta}) \propto \left( \frac{\sigma_{i,\min}}{\sigma_{i,\mathrm{HL}}} \right)^{3} p(\vec{\theta}\,|\,S).$$

The noise model is calculated in the same manner as in Nitz et al. (2017). We treat the noise from each detector as independent and fit our single-detector ranking statistic to an exponential slope. This fit is performed separately for each template. The fit parameters (such as the slope and overall amplitude of the exponential) are initially noisy due to low-number statistics, so they are smoothed over the template space using a three-dimensional Gaussian kernel in the template duration, effective spin $\chi_{\mathrm{eff}}$ and symmetric mass ratio $\eta$ parameters. The rate density of noise events in the $i$th template, with contributing detectors labeled by $n$ and single-detector rankings $\{\tilde{\rho}_n\}$, can be summarized as

$$r_N(\{\tilde{\rho}_n\}) = A_{\{n\}} \prod_{n} r_{n,i}\, e^{-\alpha_i \tilde{\rho}_n},$$

where $r_{n,i}$ and $\alpha_i$ are the overall amplitude and slope of the exponential noise rate model, respectively. The prefactor $A_{\{n\}}$ is the time window within which coincidences can be formed, which depends on the combination of detectors $\{n\}$ being considered. The three-detector coincidence rate is vastly reduced compared to the two-detector rate; in a representative stretch of O2 data, the HLV coincidence rate is found to be around a factor of $10^4$ lower than that of HL coincidences. Details of both the signal and noise model calculations will be provided in Davies et al. (2020). The ranking statistic for a given candidate in template $i$ is the logarithm of the ratio of these two rate densities:

$$\Lambda = \log \frac{r_S}{r_N},$$

where we drop the dependences on $\vec{\theta}$ and $\{\tilde{\rho}_n\}$ for simplicity of notation. Typically, one signal event (or loud noise event) in the gravitational-wave data stream may give rise to a large number of correlated candidate multidetector events within a short time, in different templates and with different combinations of detectors $\{n\}$. To calculate the significance of such a "cluster" of events, we approximate their arrival as a Poisson process: in order to do this, we keep from each cluster the event with the highest $\Lambda$ (typically the highest-ranked event within a 10 s time window) and discard the rest. Comparing this new statistic to the one employed for the 1-OGC analysis (Nitz et al.
Comparing this new statistic to the one employed for the 1-OGC analysis (Nitz et al. 2019a) using a simulated population of mergers, we find an average 8% increase in the detectable volume during the O1 period at a fixed false alarm rate of 1 per 100 yr. This population is isotropically distributed in sky location and orientation, while the mass distribution is scaled to ensure a constant rate of signals above a fixed S/N across the log-component-mass search space in Figure 2. In addition to this improved sensitivity for events where H1 and L1 contribute, this search will also benefit by analyzing times where Virgo and only one LIGO detector are operating (as in Table 1), and by improved sensitivity in times when all three detectors are operating, due to the ability to form three-detector events. Such sensitivity improvements are detailed in Davies et al. (2020).

Statistical Significance

In the previous section we introduced the ranking statistic used in our analysis. We empirically measure the statistical significance of a particular value of our ranking statistic by comparing it to a set of false (noise) candidate events produced in numerous fictitious analyses. Each analysis is generated by time-shifting the data from one detector by an amount greater than is astrophysically allowed by light travel time considerations (Babak et al. 2013; Usman et al. 2016). Otherwise, each time-shifted analysis is treated in an identical manner to the search itself. By repeating this procedure, upwards of $10^4$ years' worth of false alarms can be produced from just a few days of data. By construction, the results of these analyses cannot contain true multidetector astrophysical candidates, but may contain coincidences between astrophysical sources and instrumental noise. We use a hierarchical procedure as in Abbott et al. (2016c) and Nitz et al. (2019a) to minimize the impact of astrophysical contamination while retaining an unbiased rate of false alarms (Capano et al. 2017): a candidate with large $\mathcal{L}$ is removed from the estimation of background for less significant candidates. This method has been employed to detect significant events in numerous analyses (Abbott et al. 2009, 2019a; Abadie et al. 2012; Nitz et al. 2019a; Venumadhav et al. 2019b). The validity of the resulting background estimate follows from the assumption that the times of occurrence of noise events are statistically independent between different detectors; see Was et al. (2010) and Capano et al. (2017) for further discussion of empirical background estimation and the time shift method. This is a reasonable assumption for detectors separated by thousands of kilometers (Abbott et al. 2016b). The time shift method has the advantage that no other assumptions about the noise need be accurate: the populations and morphology of noise artifacts need not be uncorrelated or different between detectors, only the times at which they occur. In fact the LIGO and Virgo instruments share common components and environmental coupling mechanisms which may produce similar classes of non-Gaussian artifacts.
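The following is a deliberately simplified sketch of the time-shift procedure for a two-detector (H, L) case, using a quadrature-sum statistic in place of the full ranking statistic; the shift length, coincidence window, and names are our own choices:

```python
import numpy as np

def time_slide_far(times_h, stats_h, times_l, stats_l,
                   threshold, shift_s=10.0, n_slides=1000, window_s=0.015):
    """Estimate how often detector noise alone produces a coincidence as
    loud as `threshold`, by sliding one detector's triggers in time."""
    t0 = times_h.min()
    duration = times_h.max() - t0
    louder = 0
    for k in range(1, n_slides + 1):
        # unphysical shift, wrapped so all triggers stay inside the segment
        shifted = t0 + (times_l - t0 + k * shift_s) % duration
        for th, sh in zip(times_h, stats_h):
            near = np.abs(shifted - th) < window_s
            if near.any() and np.sqrt(sh**2 + stats_l[near].max()**2) > threshold:
                louder += 1
    background_time = n_slides * duration
    return louder / background_time  # false alarms per unit time
```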
Targeting Binary Black Hole Mergers

Given a population of individually significant BBH mergers, it is possible to incorporate knowledge about the overall distribution and rate of sources to identify weaker candidates. A similar approach was employed in Nitz et al. (2019a) and is the basis of astrophysical significance statements in Abbott et al. (2019a). In this catalog we improve over the strategy of Nitz et al. (2019a), which considered an excessively conservative parameter space for BBH and did not use an explicit model of the distribution of signals and noise within that space. In addition, we restrict to sources that are consistent with our signal models by imposing a threshold on our primary signal consistency test to reject any single-detector candidate with $\chi^2_r > 2.0$. Simulated signals within our target population, and the individual highly significant candidates previously detected, are consistent with this choice. (The full, non-BBH-specific analysis allows a much greater deviation from our signal models before rejection of a candidate.) As a first step in obtaining the targeted BBH results we restrict the analysis to a subspace of the full search, illustrated in Figure 2. Rather than applying this constraint after obtaining the set of "clustered" candidates via selecting the highest ranked event within 10 s windows, as in Nitz et al. (2019a), here we apply the constraint to candidates prior to the clustering step. This allows us to choose a less extensive BBH region containing fewer templates than employed in Nitz et al. (2019a) without loss of sensitivity. (The previous method used a wider BBH template set to allow for the possibility that a signal inside the intended target region is recovered only by a template lying outside that region, due to clustering.) Our BBH region is specified by bounds on the template masses, including a detector-frame chirp mass $\mathcal{M} < 60\,M_\odot$. The upper boundary is consistent with the redshifted detector-frame masses that would be obtained from the observed highest-mass sources near the detection threshold. Applying a prior over the intrinsic parameters of the distribution of detectable sources was proposed in Dent & Veitch (2014) and tested in Nitz et al. (2017). In this work, we impose an explicit detection prior that is flat over chirp mass. As seen in Figure 2, the distribution of templates is highly nonuniform. The BBH region of the template bank is placed using a stochastic algorithm (Ajith et al. 2014; Dal Canton & Harry 2017), in which the density of templates directly correlates with the density of effectively independent noise events. The template density over $\mathcal{M}$ scales as $\mathcal{M}^{-11/3}$, which we verify empirically for our bank. Our detection statistic aims to follow the relative rate density of signal versus noise events at fixed S/N, and we make the simple choice of assuming a signal density flat over $\mathcal{M}$: thus the ranking statistic receives an extra term describing the ratio of signal-to-noise densities over component masses, which for a noise density $\propto \mathcal{M}^{-11/3}$ and a flat signal density is $(11/3)\log\mathcal{M}$ up to an additive constant. Roughly, any given lower-mass template is less likely to detect a signal than a higher-mass template, given that templates are much sparser at high masses. Our choice of BBH region and detection prior has a similar effect to the highly constrained search space and multiple chirp mass bins used in Venumadhav et al. (2019b), but avoids the multiple boundary effects present there and provides a more clearly implemented and astrophysically motivated prior distribution. Furthermore, our method provides a path forward to more accurate assessment of lower-S/N candidates as our understanding of the overall population evolves.
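For concreteness, the extra term follows in one line from the densities just stated (our derivation, with the additive constant left unspecified):

```latex
% Signal density flat in chirp mass; noise (template) density ~ M^{-11/3}.
% The log rate-density ratio then contributes
\Delta\mathcal{L}
  = \log\frac{p_S(\mathcal{M})}{p_N(\mathcal{M})}
  = \log\frac{\mathrm{const}}{\mathcal{M}^{-11/3}}
  = \tfrac{11}{3}\log\mathcal{M} + \mathrm{const},
% so higher-mass templates receive a positive shift, matching the remark
% that lower-mass templates are individually less likely to host a signal.
```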
To estimate the probability $p_{\rm astro}$ that a given candidate is astrophysical in origin, we combine the background of this targeted BBH analysis with the estimated distribution of observations. We improve upon the analysis in Nitz et al. (2019a), which employed an analytic model of the signal distribution and a fixed conservative rate of mergers, by using the mixture model method developed in Farr et al. (2015), similar to that employed in Abbott et al. (2019a). This method requires the distribution of noise and signals over our ranking statistic, which we take from our time-slide background estimates and a population of simulated signals, respectively. Using a simulated set of mergers that is isotropically distributed in orientation and uniformly distributed over mass to cover the targeted BBH region, we find that the targeted BBH analysis recovers a factor of 1.5-1.6 more sources at a fixed false alarm rate of 1 per 100 yr than the full parameter space analysis. The majority of this change in sensitivity is attributed to the inclusion of only background events consistent with BBH mergers; the choice of ranking statistic to optimize sensitivity to a target BBH signal population has a smaller effect.

Observational Results

We present compact binary merger candidates from the complete set of public LIGO and Virgo data spanning the observing runs from 2015 to 2017. This comprises roughly 171 days of multidetector observing time, which we divide into 31 subanalyses. Except as noted, each analysis contains ∼5 days of observing time, which allows for estimation of the false alarm rate down to <1 per 10,000 yr. This interval allows us to track changes in the detector configuration which may result in time-varying data quality. All data were retrieved from GWOSC (Vallisneri et al. 2015), and we have used the most up-to-date version of the bulk data released. We note that an exceptional data release was produced by GWOSC which contains background data relating to GW170608; we have analyzed this data release separately to preserve consistent data quality. The top candidates sorted by FAR from the complete analysis are given in Table 2. All of the most significant candidates were observed by LIGO-Hanford and LIGO-Livingston, which are the two most sensitive detectors in the network and contribute the bulk of the observing time. There are 8 BBH candidates and 1 BNS candidate at a FAR of less than 1 per 100 yr. These sources are confidently detected in the full analysis without optimizing the search for any specific population of sources. The most significant following candidates correspond to GW170729, GW170121, GW170727, and GW170818, respectively. A similar PyCBC-based analysis was performed in Abbott et al. (2019a), but it used a higher single-detector S/N threshold than employed in our analysis (ρ > 5.5 versus 4.0); as the latter three events were found with ρ ≤ 5.1 in the LIGO-Hanford detector, we would not expect this earlier analysis to identify them.

Binary Black Holes

Using the targeted BBH analysis introduced in Section 3.5, we report results for BBH mergers consistent with the existing set of highly significant merger events in Table 3. The probability that a candidate is astrophysical in origin, $p_{\rm astro}$, is calculated for the most significant candidates. Our analysis identifies 14 BBH candidates with $p_{\rm astro} > 50\%$, meeting the standard detection criteria introduced in Abbott et al. (2019a) and similarly followed in Venumadhav et al. (2019b). Our results are broadly consistent with the union of those two analyses, as our candidate list includes all previously claimed BBH detections. We confirm the observation of GW170121, GW170304, and GW170727 reported in Venumadhav et al. (2019a) as significant. We also report the marginal detection of GW151205.
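A minimal sketch of the two-component mixture idea of Farr et al. (2015), as used conceptually here; the densities and rates below are illustrative placeholders, not the catalog's fitted values:

```python
import numpy as np

def p_astro(stat, signal_density, noise_density, rate_signal, rate_noise):
    """Two-component Poisson mixture: probability that a candidate with
    ranking statistic `stat` belongs to the signal component."""
    fg = rate_signal * signal_density(stat)  # e.g., from simulated signals
    bg = rate_noise * noise_density(stat)    # e.g., from time-slide background
    return fg / (fg + bg)

# Illustrative densities only: an exponentially falling background and a
# broad signal distribution over the ranking statistic.
print(p_astro(9.0,
              lambda x: np.exp(-(x - 8.0) ** 2 / 8.0),
              lambda x: 1e6 * np.exp(-2.0 * x),
              10.0, 1e4))
```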
Several marginal events reported in Venumadhav et al. (2019a, 2019b) are found as top candidates, but do not meet our detection threshold based on the estimated probability of astrophysical origin. Numerous differences between these two analyses (including template bank placement, treatment of data, choice of signal consistency test, and method for assigning astrophysical significance) may be the cause of the reported differences. The consistency of results for less marginal candidates indicates that differences in analysis sensitivity are likely marginal; cross comparison with a common set of simulated signals would be required for a more precise assessment. Future analyses incorporating more sophisticated treatment of the source distribution may yield different results for the probability of astrophysical origin for some subthreshold candidates. For example 151216+09:24:16UTC, which was first identified in Nitz et al. (2019a) and is now assigned $p_{\rm astro} \approx 0.2$, could obtain a higher probability of being astrophysical under a model with a distribution of detected mergers peaked close to its apparent component masses, rather than uniform over $\mathcal{M}$ as taken here. In any case the astrophysical probability we assign assumes that the candidate event, if astrophysical, is drawn from an existing population. The prior applied here to the population distribution over component masses could be extended to the distribution over component-object spins. (Here, we implicitly apply a prior over spins which mirrors the density of templates, which is not far from uniform over $\chi_{\rm eff}$.) As 151216+09:24:16UTC may have high component spins, if the set of highly significant observations does not include any comparable systems, its probability of astrophysical origin could be arbitrarily small, depending on the choice of prior distribution over spins. We infer the properties of our BBH candidates using Bayesian parameter inference implemented by the PyCBC library. We use the IMRPhenomPv2 model, which describes the dominant gravitational-wave mode of the inspiral-merger-ringdown of precessing noneccentric binaries (Hannam et al. 2014; Schmidt et al. 2015). For each candidate, we use a prior isotropic in sky location and binary orientation. As in Abbott et al. (2019a), our prior on each component object's spin is uniform in magnitude and isotropic in orientation. Since many of the candidates are at large (>1 Gpc) distances, we assume a prior which is uniform in comoving volume, and a prior uniform in source-frame component mass. We use standard ΛCDM cosmology (Planck Collaboration et al. 2016) to relate the comoving volume to luminosity distance, and to redshift the masses to the detectors' frame. This choice of prior differs from previous analyses (Abbott et al. 2019a; Venumadhav et al. 2019a, 2019b), which used a prior uniform in volume (ignoring cosmological effects) and detector-frame masses. A prior uniform in comoving volume assigns lower weight to large luminosity distances than a prior uniform in volume. Consequently, the luminosity distances we obtain for some candidates are slightly lower than previously reported values (e.g., we obtain 1400 Mpc). The marginalized parameter estimates of the component masses, effective spin, and luminosity distance for the top 30 BBH candidates are given in Table 3. Plots of the marginalized posteriors for the BBH candidates with $p_{\rm astro} \geq 0.5$ are shown in Figure 3.
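As an illustration of this prior choice (our sketch using astropy's Planck15 cosmology, standing in for the exact cosmology cited above):

```python
import numpy as np
from astropy.cosmology import Planck15
from astropy import units as u

# Tabulate the prior weight over luminosity distance implied by a
# uniform-in-comoving-volume source distribution.
z = np.linspace(1e-3, 1.0, 500)
d_l = Planck15.luminosity_distance(z).to(u.Mpc).value
# dV_c/dz per steradian; uniform in comoving volume => p(z) ~ dV_c/dz
dvc_dz = Planck15.differential_comoving_volume(z).to(u.Mpc**3 / u.sr).value
# Change variables to luminosity distance: p(d_L) = p(z) / (dd_L/dz)
ddl_dz = np.gradient(d_l, z)
prior_dl = dvc_dz / ddl_dz
prior_dl /= np.trapz(prior_dl, d_l)   # normalize over the tabulated range
```

Relative to a Euclidean $d_L^2$ prior, this weighting falls off at large distances, which is why the quoted luminosity distances shift downward.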
For candidates previously reported by the LVC, our results broadly agree with existing parameter estimates (Abbott et al. 2019a). Similarly, we find no clear evidence for precession in our candidates. In contrast to Zackay et al. (2019a), we find that 151216+09:24:16UTC has support at zero effective spin, whereas Venumadhav et al. (2019b) report a measurement that excludes $\chi_{\rm eff} \sim 0$; in addition, our analysis assigns these candidates lower significance. This indicates that the discrepancy in $\chi_{\rm eff}$ between our analysis and that of Venumadhav et al. (2019b) cannot be entirely explained by differences in prior choice; the difference may be due to differing analysis methods. We find three other events with $p_{\rm astro} < 0.3$ that have $\chi_{\rm eff}$ and masses similar to those of 151216+09:24:16UTC: 170201+11:03:12UTC and, to a lesser extent, 151217+03:47:49UTC and 170629+04:13:55UTC. These are illustrated in Figure 4. The four events differ from the other events listed in Table 3 in that the posterior distribution of $\chi_{\rm eff}$ strongly deviates from the prior, with the peak of the posterior between $\chi_{\rm eff} \sim 0.5$ and ∼0.7. All four events also have similar chirp masses. If these events are from a new population of BBHs, then ongoing and future observing runs should yield candidates with similar properties at high astrophysical significance. Alternatively, they may indicate a common noise feature selected by our analysis.

Notes to Table 2. x: also identified in GWTC-1 (Abbott et al. 2019a); y: 1-OGC (Nitz et al. 2019a); z: Venumadhav et al. (2019a). Candidates are sorted by FAR evaluated for the entire bank of templates. Note that the ranking statistic and false alarm rate may not have a strictly monotonic relationship due to varying data quality between subanalyses. The mass and spin parameters listed are associated with the template waveform yielding the highest ranked multidetector event for each candidate, and may differ significantly from full Bayesian parameter estimates. Masses are quoted in the detector frame, and are thus larger than source-frame masses by a factor (1+z), where z is the source redshift. (a) The FAR is limited only by the available background data. A short analysis period is used for the GW170608 data, which was released separately due to an instrument angular control procedure affecting data from the Hanford observatory (Abbott et al. 2017b).

Notes to Table 3. x: also identified in GWTC-1 (Abbott et al. 2019a); y: 1-OGC (Nitz et al. 2019a); z: Venumadhav et al. (2019a). The source-frame masses, $\chi_{\rm eff}$, and luminosity distance, $D_L$, are estimated with Bayesian parameter inference (see Section 4.1) and are given with 90% credible intervals. (a) The false alarm rate is limited by false coincidences arising from the candidate's time-shifted LIGO-Livingston single-detector trigger; if removed from its own background, the FAR is <1 per 10,000 yr. (b) Parameter estimates for this candidate are derived only from the LIGO-Hanford and Virgo detectors. LIGO-Livingston was operating at the time, but did not produce a trigger that contributed to the event (see the discussion in Section 4.1).

GW151205, a BBH merger with $p_{\rm astro} = 0.53$, may challenge standard stellar formation scenarios if astrophysical. Models that account for pulsational pair-instability supernovae or pair-instability supernovae in stellar evolution suggest the maximum mass of the remnant black hole is ∼40-50 $M_\odot$ (Belczynski et al. 2016; Woosley 2017; Marchant et al. 2019; Woosley 2019; Stevenson et al. 2019). We estimate that there is >95% probability that the primary black hole has a source-frame mass >50 $M_\odot$, which may suggest formation through an alternate channel such as a hierarchical merger. Studies have proposed that GW170729 may have a similar origin (Kimball et al. 2020; Khan et al. 2020; Yang et al.
2019). However, Fishbach et al. (2019) showed that when all of the BBHs are analyzed together, GW170729 is consistent with a single population of binaries formed through the standard stellar formation channel. Likewise, GW151205 will need to be analyzed jointly with the other events to determine whether one or more populations are present. The least significant candidate in the targeted BBH analysis, 170818+09:34:45 UTC, was identified in the LIGO-Hanford and Virgo detectors by the search pipeline; the parameter estimates in Table 3 are derived using these observatories alone. However, the LIGO-Livingston detector was operational at the time of the event. Our search does not currently enforce that a candidate observed only in a subset of detectors is consistent with a lack of observation in the others. We find that if LIGO-Livingston is included in the parameter estimation analysis, the log likelihood ratio is significantly reduced. This suggests that the event is not astrophysical in origin.

Neutron Star Binaries

Our analysis identified GW170817 as a highly significant merger; however, no further individually significant BNS or NSBH mergers were identified. As the population of BNS and NSBH sources is not yet well constrained, we cannot reliably employ the methodology used to optimize search sensitivity to an astrophysical BBH merger distribution. However, BNS candidates especially are prime candidates for the observation of electromagnetic counterparts such as GRBs and kilonovae, and it may be possible, by correlating with auxiliary data sets, to determine whether weak candidates are astrophysical in origin. An example is the subthreshold search of Fermi-GBM and 1-OGC triggers (Nitz et al. 2019c), which defined, based on galactic neutron star observations (Ozel et al. 2012), a likely BNS merger region spanning $1.03\,M_\odot < \mathcal{M} < 1.36\,M_\odot$ and effective spin $|\chi_{\rm eff}| < 0.2$. This region is highlighted in Figure 2 and the top candidates are shown in Table 4.

Data Release

We provide supplementary materials online with information on each of ∼$10^6$ subthreshold candidates (Nitz et al. 2020). Reported information includes the candidate event time, the S/N in each observatory, and the results of the signal-consistency tests performed. A separate listing of candidates within the BBH region discussed in Section 3.5 is also provided, including estimates of the probability of astrophysical origin $p_{\rm astro}$ for the most significant of these candidates. To help distinguish between this large number of candidates, our ranking statistic and estimate of the false alarm rate are also provided for every event. Configuration files for the analyses performed and analysis metadata are also provided. For the 30 most significant BBH candidates, we also release the posterior samples from our Bayesian parameter inference.
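Given the released candidate list, selecting the BNS region quoted above might look like the following sketch (the column names are hypothetical; consult the data release for the actual file format):

```python
import numpy as np

def select_bns_region(mchirp, chi_eff):
    """Boolean mask for the BNS region used above:
    1.03 < Mchirp/Msun < 1.36 and |chi_eff| < 0.2."""
    mchirp = np.asarray(mchirp)
    chi_eff = np.asarray(chi_eff)
    return (mchirp > 1.03) & (mchirp < 1.36) & (np.abs(chi_eff) < 0.2)
```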
Conclusions

The 2-OGC catalog of gravitational-wave candidates from compact-binary coalescences, spanning the full range of binary neutron star, NSBH, and BBH mergers, is an analysis of the complete set of LIGO and Virgo public data from the observing runs in 2015-2017. A third observing run (O3) began in 2019 April (Abbott et al. 2016a), and alerts for several dozen merger candidates have been issued to date during this run. The first half of the run (O3a) ended on 2019 October 1, with a planned release of the corresponding data in Spring 2021. As the data are not yet released, the catalog here covers only the first two observing runs. We use a matched-filtering, template-based approach to identify candidates, and we improve over the 1-OGC analysis (Nitz et al. 2019a) by incorporating corrections for time variations in PSD estimates and network sensitivity. Furthermore, we have demonstrated extending a PyCBC-based analysis to handle data from more than two detectors. The 2-OGC catalog contains the most comprehensive set of merger candidates to date, including 14 BBH mergers with $p_{\rm astro} > 50\%$ along with the single BNS merger GW170817. We independently confirm many of the results of Abbott et al. (2019a) and Venumadhav et al. (2019b). We find no additional individually significant BNS or NSBH mergers; however, we provide our full set of subthreshold candidates for further analysis (Nitz et al. 2020).

Note to Table 4. The chirp mass $\mathcal{M}$ of the candidate's associated template waveform is given in the detector frame. All candidates here were found by the LIGO-Hanford and LIGO-Livingston observatories. The table lists the false alarm rate for each candidate in the context of the full search (FAR$_{\rm FULL}$) or of just the selected BNS region (FAR$_{\rm BNS}$).
9,683.8
2019-10-11T00:00:00.000
[ "Physics" ]
Improving CNN-Based Texture Classification by Color Balancing: Texture classification has a long history in computer vision. In the last decade, the strong affirmation of deep learning techniques in general, and of convolutional neural networks (CNNs) in particular, has allowed for a drastic improvement in the accuracy of texture recognition systems. However, their performance may be dampened by the fact that texture images are often characterized by color distributions that are unusual with respect to those seen by the networks during their training. In this paper we will show how suitable color-balancing models allow for a significant improvement in the accuracy in recognizing textures for many CNN architectures. The feasibility of our approach is demonstrated by the experimental results obtained on the RawFooT dataset, which includes texture images acquired under several different lighting conditions.

Introduction

Convolutional neural networks (CNNs) represent the state of the art for many image classification problems [1][2][3]. They are trained for a specific task by exploiting a large set of images representing the application domain. During the training and the test stages, it is common practice to preprocess the input images by centering their color distribution around the mean color computed on the training set. However, when test images have been taken under acquisition conditions unseen during training, or with a different imaging device, this simple preprocessing may not be enough (see the example reported in Figure 1 and the work by Chen et al. [4]). The most common approach to dealing with variable acquisition conditions consists of applying a color constancy algorithm [5], while to obtain a device-independent color description a color characterization procedure is applied [6]. A standard color-balancing model is therefore composed of two modules: the first discounts the illuminant color, while the second maps the image colors from the device-dependent RGB space into a standard device-independent color space. More effective pipelines have been proposed [7,8] that deal with the cross-talk between the two processing modules. In this paper we systematically investigate different color-balancing models in the context of CNN-based texture classification under varying illumination conditions. To this end, we performed our experiments on the RawFooT texture database [9], which includes images of textures acquired under a large number of controlled combinations of illumination color, direction and intensity. Concerning CNNs, when the training set is not big enough, an alternative to the full training procedure consists of adapting an already trained network to a new classification task by retraining only a small subset of parameters [10]. Another possibility is to use a pretrained network as a feature extractor for another classification method (nearest neighbor, for instance). In particular, it is common to use networks trained for the ILSVRC contest [11]. The ILSVRC training set includes over one million images taken from the web to represent 1000 different concepts. The acquisition conditions of the training images are not controlled, but we may safely assume that they have been processed by digital processing pipelines that mapped them into the standard sRGB color space. We will investigate how different color-balancing models permit adapting images from the RawFooT dataset in such a way that they can be more reliably classified by several pretrained networks.
The rest of the paper is organized as follows: Section 2 summarizes the state of the art in both texture classification and color balancing; Section 3 presents the data and the methods used in this work; Section 4 describes the experimental setup and Section 5 reports and discusses the results of the experiments. Finally, Section 6 concludes the paper by highlighting its main outcomes and by outlining some directions for future research on this topic.

Color Texture Classification under Varying Illumination Conditions

Most of the research efforts on the topic of color texture classification have been devoted to the definition of suitable descriptors able to capture the distinctive properties of the texture images while being invariant, or at least robust, with respect to some variations in the acquisition conditions, such as rotations and scalings of the image, changes in brightness, contrast, light color temperature, and so on [12]. Color and texture information can be combined in several ways. Palm categorized them into parallel (i.e., separate color and texture descriptors), sequential (in which color and texture analysis are consecutive steps of the processing pipeline) and integrative (texture descriptors computed on different color planes) approaches [13]. The effectiveness of several combinations of color and texture descriptors has been assessed by Mäenpää and Pietikäinen [14], who showed how the descriptors in the state of the art performed poorly in the case of a variable color of the illuminant. Their findings have been more recently confirmed by Cusano et al. [9]. In order to successfully exploit color in texture classification, the descriptors need to be invariant (or at least robust) with respect to changes in the illumination. For instance, Seifi et al. proposed characterizing color textures by analyzing the rank correlation between pixels located in the same neighborhood, using a correlation measure which is related to the colors of the pixels and is not sensitive to illumination changes [15]. Cusano et al. [16] proposed a descriptor that measures the local contrast: a property that is less sensitive than color itself to variations in the color of the illuminant. The same authors then enhanced their approach by introducing a novel color space where changes in illumination are even easier to deal with [17]. Other strategies for color texture recognition have been proposed by Drimbarean and Whelan, who used Gabor filters and co-occurrence matrices [18], and by Bianconi et al., who used ranklets and the discrete Fourier transform [19]. Recent works suggested that, in several application domains, carefully designed features can be replaced by features automatically learned from a large amount of data with methods based on deep learning [20]. Cimpoi et al., for instance, used Fisher Vectors to pool features computed by a CNN trained for object recognition [21]. Approaches based on CNNs have been compared against combinations of traditional descriptors by Cusano et al. [22], who found that CNN-based features generally outperform the traditional handcrafted ones unless complex combinations are used.
Color Balancing

The aim of color constancy is to make sure that the recorded color of the objects in the scene does not change under different illumination conditions. Several computational color constancy algorithms have been proposed [5], each based on different assumptions. For example, the gray world algorithm [23] is based on the assumption that the average color in the image is gray, so that the illuminant color can be estimated as the shift from gray of the averages of the image color channels. The white point algorithm [24] is based on the assumption that there is always a white patch in the scene and that the maximum values in each color channel are caused by the reflection of the illuminant on the white patch; they can thus be used as the illuminant estimate. The gray edge algorithm [25] is based on the assumption that the average color of the edges is gray and that the illuminant color can be estimated as the shift from gray of the averages of the edges in the image color channels. Gamut mapping assumes that, for a given illuminant, one observes only a limited gamut of colors [26]. Learning-based methods also exist, such as Bayesian [27], CART-based [28], and CNN-based [29,30] approaches, among others. The aim of color characterization of an imaging device is to find a mapping between its device-dependent and a device-independent color representation. The color characterization is performed by recording the sensor responses to a set of colors and the corresponding colorimetric values, and then finding the relationship between them. Numerous techniques have been proposed in the state of the art to find this relationship, ranging from empirical methods requiring the acquisition of a reference color target (e.g., a GretagMacbeth ColorChecker [31]) with known spectral reflectance [8], to methods needing the use of specific equipment such as monochromators [32]. In the following we will focus on empirical methods, which are the most used in practice since they do not need expensive laboratory hardware. Empirical device color characterization directly relates measured colorimetric data from a color target and the corresponding camera raw RGB data obtained by shooting the target itself under one or more controlled illuminants. Empirical methods can be divided into two classes: the methods belonging to the first class rely on model-based approaches, which solve a set of linear equations by means of a pseudo-inverse approach [6], constrained least squares [33], exploiting a non-maximum ignorance assumption [33,34], exploiting optimization to solve more meaningful objective functions [7,35,36], or lifting the problem into a higher dimensional polynomial space [37,38]. The second class instead contains methods that do not explicitly model the relationship between device-dependent and device-independent color representations, such as three-dimensional lookup tables with interpolation and extrapolation [39], and neural networks [40,41].
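To make the first two assumptions concrete, here is a compact sketch of gray-world and white-point illuminant estimation followed by a von Kries correction (our illustration; normalization conventions vary across implementations):

```python
import numpy as np

def estimate_illuminant(img, method="gray_world"):
    """Estimate the illuminant color of an (H, W, 3) image."""
    if method == "gray_world":      # average scene color is assumed gray
        est = img.reshape(-1, 3).mean(axis=0)
    elif method == "white_point":   # per-channel maxima reflect the illuminant
        est = img.reshape(-1, 3).max(axis=0)
    else:
        raise ValueError(method)
    return est / est.sum()          # keep only the chromaticity

def von_kries_correct(img, illuminant):
    """Divide out the estimated illuminant (diagonal/von Kries correction)."""
    gains = illuminant.mean() / illuminant
    return img * gains
```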
RawFooT

The development of texture analysis methods heavily relies on suitably designed databases of texture images. In fact, many of them have been presented in the literature [42,43]. Texture databases are usually collected to emphasize specific properties of textures, such as the sensitivity to the acquisition device, the robustness with respect to the lighting conditions, and the invariance to image rotation or scale. The RawFooT database has been especially designed to investigate the performance of color texture classification methods under varying illumination conditions [9]. The database includes images of 68 different samples of raw foods, each one acquired under 46 different lighting conditions (for a total of 68 × 46 = 3128 acquisitions). Figure 2 shows an example for each class. Images have been acquired with a Canon EOS 40D DSLR camera. The camera was placed 48 cm above the sample to be acquired, with the optical axis perpendicular to the surface of the sample. The lenses used had a focal length of 85 mm, with a camera aperture of f/11.3; each picture has been taken with four seconds of exposure time. From each 3944 × 2622 acquired image a square region of 800 × 800 pixels has been cropped in such a way that it contains only the surface of the texture sample, without any element of the surrounding background. Note that, while the version of the RawFooT database that is publicly available includes a conversion of the images into the sRGB color space, in this work we use the raw format images, which are thus encoded in the device-dependent RGB space. To generate the 46 illumination conditions, two computer monitors have been used as light sources (two 22-inch Samsung SyncMaster LED monitors). The monitors were tilted by 45 degrees, facing down towards the texture sample, as shown in Figure 3. By illuminating different regions of one or both monitors it was possible to set the direction of the light illuminating the sample. By changing the RGB values of the pixels it was also possible to control the intensity and the color of the light sources. To do so, both monitors have been preliminarily calibrated using an X-Rite i1 spectral colorimeter by setting their white point to D65. With this setup it was possible to approximate a set of diverse illuminants. In particular, 12 illuminants have been simulated, corresponding to 12 daylight conditions differing in the color temperature. The CIE-xy chromaticities corresponding to a given temperature T have been obtained by applying the following equations [44]: $x = a_0 + a_1\,\frac{10^3}{T} + a_2\,\frac{10^6}{T^2} + a_3\,\frac{10^9}{T^3}$ and $y = -3.000\,x^2 + 2.870\,x - 0.275$, where a_0 = 0.244063, a_1 = 0.09911, a_2 = 2.9678, a_3 = −4.6070 if 4000 K ≤ T ≤ 7000 K, and a_0 = 0.23704, a_1 = 0.24748, a_2 = 1.9018, a_3 = −2.0064 if 7000 K < T ≤ 25,000 K. The chromaticities were then converted into the monitor RGB space [45], with a scaling of the color channels such that the largest value was 255. The twelve daylight color temperatures that have been considered are: 4000 K, 4500 K, 5000 K, ..., 9500 K (we will refer to these as D40, D45, ..., D95). Similarly, six illuminants corresponding to typical indoor light have been simulated. To do so, the CIE-xy chromaticities of six LED lamps (six variants of SOLERIQ S by Osram) have been obtained from the data sheets provided by the manufacturer. Then, again, the RGB values were computed and scaled so as to reach 255 in at least one of the three channels. These six illuminants are referred to as L27, L30, L40, L50, L57, and L65, in accordance with the corresponding color temperature.
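The daylight chromaticities can be reproduced directly from the coefficients above; the following sketch also uses the standard daylight-locus relation for y (an assumption consistent with [44]):

```python
import numpy as np

def daylight_xy(T):
    """CIE-xy chromaticity of the daylight locus at color temperature T (K)."""
    if 4000 <= T <= 7000:
        a0, a1, a2, a3 = 0.244063, 0.09911, 2.9678, -4.6070
    elif 7000 < T <= 25000:
        a0, a1, a2, a3 = 0.23704, 0.24748, 1.9018, -2.0064
    else:
        raise ValueError("T outside the 4000-25000 K daylight range")
    x = a0 + a1 * 1e3 / T + a2 * 1e6 / T**2 + a3 * 1e9 / T**3
    y = -3.000 * x**2 + 2.870 * x - 0.275
    return x, y

# The 12 simulated daylight illuminants D40 ... D95
for T in range(4000, 10000, 500):
    x, y = daylight_xy(T)
    print(f"D{T // 100}: x={x:.4f}, y={y:.4f}")
```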
Figure 4 shows, for one of the classes, the 46 acquisitions corresponding to the 46 different lighting conditions in the RawFooT database. These include variations in the color, direction, and intensity of the light. In this work we are interested in particular in the effects of changes in the illuminant color. Therefore, we limited our analysis to the 12 illuminants simulating daylight conditions, and to the six simulating indoor illumination. Besides the images of the 68 texture classes, the RawFooT database also includes a set of acquisitions of a color target (the Macbeth color checker [31]). Figure 5 shows these acquisitions for the 18 illuminants considered in this work.

Color Balancing

An image acquired by a digital camera can be represented as a function ρ mainly dependent on three physical factors: the illuminant spectral power distribution I(λ), the surface spectral reflectance S(λ), and the sensor spectral sensitivities C(λ). Using this notation, the sensor responses at the pixel with coordinates (x, y) can be described as $\rho(x, y) = \int_{\omega} I(\lambda)\, S(x, y, \lambda)\, C(\lambda)\, d\lambda$, where ω is the wavelength range of the visible light spectrum, and ρ and C(λ) are three-component vectors. Since the three sensor spectral sensitivities are usually more sensitive, respectively, to the low, medium and high wavelengths, the three-component vector of sensor responses ρ = (ρ1, ρ2, ρ3) is also referred to as the sensor or camera raw RGB triplet. In the following we adopt the convention that ρ triplets are represented by column vectors. As previously said, the aim of color characterization is to derive the relationship between device-dependent and device-independent color representations for a given device. In this work, we employ an empirical, model-based characterization. The characterization model that transforms the i-th input device-dependent triplet ρ_IN into a device-independent triplet ρ_OUT can be compactly written as follows [46]: $\rho_{OUT} = (\alpha\, M\, I\, \rho_{IN})^{\gamma}$, where α is an exposure correction gain, M is the color correction matrix, I is the illuminant correction matrix, and (·)^γ denotes an element-wise operation. Traditionally [46], M is fixed for any illuminant that may occur, while α and I compensate for the illuminant power and color, respectively; i.e., for the j-th illuminant, $\rho_{OUT} = (\alpha_j\, M\, I_j\, \rho_{IN})^{\gamma}$. The model can thus be conceptually split into two parts: the former compensates for the variations of the amount and color of the incoming light, while the latter performs the mapping from the device-dependent to the device-independent representation. In the standard model (Equation (6)), α_j is a single value, I_j is a diagonal matrix that performs the von Kries correction [47], and M is a 3 × 3 matrix.
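A sketch of the characterization model in code (the matrices here are placeholders to be fitted from color-target acquisitions, and the sRGB-like exponent γ = 1/2.2 is our assumption):

```python
import numpy as np

def characterize(rho_in, alpha, M, I_diag, gamma=1 / 2.2):
    """Apply rho_out = (alpha * M @ I @ rho_in)^gamma.
    rho_in: (..., 3) device-dependent RGB; I_diag: length-3 von Kries gains;
    M: 3x3 color correction matrix; gamma applied element-wise."""
    rho = rho_in * I_diag                    # illuminant (von Kries) correction
    rho = rho @ M.T                          # device -> device-independent mapping
    rho = alpha * rho                        # exposure gain
    return np.clip(rho, 0.0, None) ** gamma  # element-wise nonlinearity
```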
In this work, different characterization models have been investigated together with Equation (6) in order to assess how the different color characterization steps influence the texture recognition accuracy. The first tested model does not perform any kind of color characterization, i.e., $\rho_{OUT} = \rho_{IN}$ (Equation (7)). The second model tested performs just the compensation for the illuminant color, i.e., it balances image colors as a color constancy algorithm would do: $\rho_{OUT} = I_j\, \rho_{IN}$ (Equation (8)). The third model tested uses the complete color characterization model but, differently from the standard model given in Equation (6), it estimates a different color correction matrix M_j for each illuminant j. The illuminant is compensated for both its color and its intensity but, differently from the standard model, the illuminant color compensation matrix I_j for the j-th illuminant is estimated by using a different luminance gain α_{i,j} for each patch i: $\rho_{OUT} = (\alpha_{i,j}\, M_j\, I_j\, \rho_{IN})^{\gamma}$ (Equation (9)). The fourth model tested is similar to the model described in Equation (9), but uses a larger color correction matrix M_j by polynomially expanding the device-dependent colors: $\rho_{OUT} = (\alpha_{i,j}\, M_j\, T(I_j\, \rho_{IN}))^{\gamma}$ (Equation (10)), where T(·) is an operator that takes as input the triplet ρ and computes its polynomial expansion. Following [7], in this paper we use $T(\rho) = (\rho(1), \rho(2), \rho(3), \sqrt{\rho(1)\rho(2)}, \sqrt{\rho(1)\rho(3)}, \sqrt{\rho(2)\rho(3)})$, i.e., the rooted second-degree polynomial [38]. Summarizing, we have experimented with five color-balancing models. They all take as input the device-dependent raw values and process them in different ways:

1. device-raw: it does not make any correction to the device-dependent raw values, leaving them unaltered from how they are recorded by the camera sensor;
2. light-raw: it performs the correction of the illuminant color, similarly to what is done by color constancy algorithms [5,30,48] and chromatic adaptation transforms [49,50]. The output color representation is still device-dependent, but with the effect of the illuminant color discounted;
3. dcraw-srgb: it performs a full color characterization according to the standard color correction pipeline. The chosen characterization illuminant is the D65 standard illuminant, while the color mapping is linear and fixed for all illuminants that may occur. The correction is performed using the DCRaw software (available at http://www.cybercom.net/~dcoffin/dcraw/);
4. linear-srgb: it performs a full color characterization according to the standard color correction pipeline, but using a different illuminant color compensation and a different linear color mapping for each illuminant;
5. rooted-srgb: it performs a full color characterization according to the standard color correction pipeline, but using a different illuminant color compensation and a different color mapping for each illuminant. The color mapping is no longer linear, but is performed by polynomially expanding the device-dependent colors with a rooted second-degree polynomial.

The main properties of the tested color-balancing models are summarized in Table 1.

Table 1. Main characteristics of the tested color-balancing models: device-raw (Equation (7)): no illuminant compensation, no color mapping; light-raw (Equation (8)): illuminant color compensation only, no color mapping; dcraw-srgb (Equation (6)): illuminant compensation fixed for D65, one linear mapping for all illuminants; linear-srgb (Equation (9)): per-illuminant compensation, one linear mapping for each illuminant; rooted-srgb (Equation (10)): per-illuminant compensation, one rooted second-degree polynomial mapping for each illuminant.
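The rooted expansion used by the fifth model can be written compactly as follows (a sketch; per [38], each degree-2 term is taken under a square root so the expanded features keep the physical units of intensities):

```python
import numpy as np

def rooted_expansion(rho):
    """Rooted second-degree polynomial expansion of RGB triplets:
    (r, g, b, sqrt(rg), sqrt(rb), sqrt(gb))."""
    r, g, b = rho[..., 0], rho[..., 1], rho[..., 2]
    return np.stack([r, g, b,
                     np.sqrt(r * g), np.sqrt(r * b), np.sqrt(g * b)], axis=-1)
```

Because every expanded term scales linearly with exposure, the mapping commutes with intensity changes, which a plain polynomial expansion would not.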
All the correction matrices for the compensation of the variations of the amount and color of the illuminant, as well as the color mappings, are found using the set of acquisitions of the Macbeth color checker available in RawFooT, using the optimization framework described in [7,36]. An example of the effect of the different color characterization models on a sample texture class of the RawFooT database is reported in Figure 6.

Experimental Setup

Given an image, the experimental pipeline includes the following operations: (1) color balancing; (2) feature extraction; and (3) classification. All the evaluations have been performed on the RawFooT database.

RawFooT Database Setup

For each of the 68 classes we considered 16 patches obtained by dividing the original texture image, which is of size 800 × 800 pixels, into 16 non-overlapping squares of size 200 × 200 pixels. For each class we selected eight patches for training and eight for testing, alternating them in a chessboard pattern. We form subsets of 68 × (8 + 8) = 1088 patches by taking the training and test patches from images taken under different lighting conditions. In this way we defined several subsets, grouped in three texture classification tasks.

1. Daylight temperature: 132 subsets obtained by combining all the 12 daylight temperature variations. Each subset is composed of training and test patches with different light temperatures.
2. LED temperature: 30 subsets obtained by combining all the six LED temperature variations. Each subset is composed of training and test patches with different light temperatures.
3. Daylight vs. LED: 72 subsets obtained by combining the 12 daylight temperatures with the six LED temperatures.

Visual Descriptors

For the evaluation we select a number of descriptors from CNN-based approaches [51,52]. All feature vectors are L2-normalized (each feature vector is divided by its L2-norm). These descriptors are obtained as the intermediate representations of deep convolutional neural networks originally trained for scene and object recognition. The networks are used to generate a visual descriptor by removing the final softmax nonlinearity and the last fully-connected layer. We select the most representative CNN architectures in the state of the art [53] by considering different accuracy/speed trade-offs. All the CNNs are trained on the ILSVRC-2012 dataset using the same protocol as in [1]. In particular we consider the following visual descriptors [10,54]:

• BVLC AlexNet: this is the AlexNet trained on ILSVRC 2012 [1].
• Fast CNN (Vgg F): it is similar to that presented in [1], with a reduced number of convolutional layers and dense connectivity between convolutional layers. The last fully-connected layer is 4096-dimensional [51].
• Medium CNN (Vgg M): it is similar to the one presented in [55], with a reduced number of filters in the fourth convolutional layer. The last fully-connected layer is 4096-dimensional [51].
• Medium CNN (Vgg M-2048-1024-128): three modifications of the Vgg M network with a lower-dimensional last fully-connected layer. In particular we use feature vectors of size 2048, 1024 and 128 [51].
• Slow CNN (Vgg S): it is similar to that presented in [56], with a reduced number of convolutional layers, fewer filters in layer five, and local response normalization. The last fully-connected layer is 4096-dimensional [51].
• Vgg Very Deep 19 and 16 layers (Vgg VeryDeep 16 and 19): the configuration of these networks has been achieved by increasing the depth to 16 and 19 layers, which results in substantially deeper networks than the previous ones [2].
• ResNet 50: a residual network. Residual learning frameworks are designed to ease the training of networks that are substantially deeper than those used previously. This network has 50 layers [52].

Texture Classification

In all the experiments we used the nearest neighbor classification strategy: given a patch in the test set, its distance with respect to all the training patches is computed. The prediction of the classifier is the class of the closest element in the training set. For this purpose, after some preliminary tests with several descriptors in which we evaluated the most common distance measures, we decided to use the L2 distance: $d(x, y) = \sqrt{\sum_{i=1}^{N} (x(i) - y(i))^2}$, where x and y are two feature vectors. All the experiments have been conducted under the maximum ignorance assumption, that is, no information about the lighting conditions of the test patches is available to the classification method or to the descriptors. Performance is reported as the classification rate (i.e., the ratio between the number of correctly classified images and the number of test images). Note that more complex classification schemes (e.g., SVMs) would have been viable. We decided to adopt the simplest one in order to focus the evaluation on the features themselves and not on the classifier.
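The classification step then reduces to a few lines (a sketch of the 1-NN rule with the L2-normalized features described above):

```python
import numpy as np

def nearest_neighbor_predict(train_feats, train_labels, test_feats):
    """1-NN classification with L2 distance on L2-normalized features."""
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    # pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = ((test ** 2).sum(1)[:, None] + (train ** 2).sum(1)[None, :]
          - 2.0 * test @ train.T)
    return np.asarray(train_labels)[np.argmin(d2, axis=1)]
```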
Results and Discussion

The effectiveness of each color-balancing model has been evaluated in terms of texture classification accuracy. Table 2 shows the average accuracy obtained on each classification task (daylight temperature, LED temperature and daylight vs. LED) by each of the visual descriptors combined with each balancing model. Overall, the rooted-srgb and linear-srgb models achieve better performance than the other models, with a minimum improvement of about 1% and a maximum of about 9%. In particular, the rooted-srgb model performs slightly better than linear-srgb. The improvements are more visible in Figure 7, which shows, for each visual descriptor, the comparison between all the balancing models. Each bar represents the mean accuracy over all the classification tasks. ResNet-50 is the best-performing CNN-based visual descriptor, with a classification accuracy of 99.52%, which is about 10% better than the poorest CNN-based visual descriptor. This result confirms the power of deep residual nets compared to sequential network architectures such as AlexNet, VGG, etc. To better show the usefulness of color-balancing models we focused on the daylight temperature classification task, where we have images taken under 12 daylight temperature variations from 4000 K to 9500 K with an increment of 500 K. To this end, Figure 8 shows the accuracy behavior (y-axis) with respect to the difference (ΔT, measured in kelvin) of daylight temperature (x-axis) between the training and the test sets. The value ΔT = 0 corresponds to no variation. Each graph shows, for a given visual descriptor, the comparison between the accuracy behaviors of each single model. There is an evident drop in performance for all the networks when ΔT is large and no color balancing is applied. The use of color balancing is able to make the performance of all the networks uniform, independently of the difference in color temperature. The dcraw-srgb model represents the most similar conditions to those of the ILSVRC training images. This explains why this model obtained the best performance for low values of ΔT. However, since dcraw-srgb does not include any kind of color normalization, for high values of ΔT we observe a severe loss in terms of classification accuracy. Both linear-srgb and rooted-srgb are able, instead, to normalize the images with respect to the color of the illumination, making all the plots in Figure 8 almost flat. The effectiveness of these two models also depends on the fact that they work in a color space similar to those used to train the CNNs. Between the linear and the rooted models, the latter performs slightly better, probably because its additional complexity increases the accuracy in balancing the images.

Conclusions

Recent trends in computer vision seem to suggest that convolutional neural networks are so flexible and powerful that they can substitute in toto traditional image processing/recognition pipelines. However, when it is not possible to train the network from scratch due to the lack of a suitable training set, the achievable results are suboptimal. In this work we have extensively and systematically evaluated the role of color balancing, including color characterization, as a preprocessing step in color texture classification in the presence of variable illumination conditions. Our findings suggest that, to really exploit CNNs, integration with a carefully designed preprocessing procedure is a must. The effectiveness of color balancing, in particular of the color characterization that maps device-dependent RGB values into a device-independent color space, has not been completely proven, since the RawFooT dataset has been acquired using a single camera. As future work we would like to extend the RawFooT dataset and our experiments by acquiring images with cameras having different color transmittance filters. This new dataset will make more evident the need for accurate color characterization of the cameras.

Figure 1. Example of a correctly predicted image and a mis-predicted image after a color cast is applied.
Figure 2. A sample for each of the 68 classes of textures composing the RawFooT database.
Figure 3. Scheme of the acquisition setup used to take the images in the RawFooT database.
Figure 4. Example of the 46 acquisitions included in the RawFooT database for each class (here the images show the acquisitions of the "rice" class).
Figure 5. The Macbeth color target, acquired under the 18 lighting conditions considered in this work.
Figure 7. Classification accuracy obtained by each visual descriptor combined with each model.
Table 2. Classification accuracy obtained by each visual descriptor combined with each model; the best result is reported in bold.
6,213.2
2017-07-27T00:00:00.000
[ "Computer Science" ]
Approximate Solutions of the LRS Bianchi Type-I Cosmological Model

In this current study, we explore the modified homogeneous cosmological model in the background of LRS Bianchi type-I space-time. For this purpose, we employ the Homotopy Perturbation Method (HPM), an analytically based method. We first calculate the main field equations of the cosmological model for LRS Bianchi type-I space-time, and then present the necessary calculations of HPM. We then investigate the analytical solution of our problem by adopting HPM, discussing five different values of the parameter n, together with a brief discussion of the solutions. The main purpose of this study is to present an application of HPM in the cosmological field.

Introduction

It is a fact that the analytical treatment of cosmological models is a challenging task. Finding exact solutions is seldom possible and is not considered easy. This is connected with the strong nonlinearity of the fundamental equations in cosmology, which makes the problem exceptionally difficult. In response, different approximate and analytical schemes are employed to find exact and approximate solutions of different cosmological problems, for example, the weak-field scheme in General Relativity (GR) [1][2][3][4], the slow-roll calculation in inflationary cosmology, etc. Significantly, in the course of such approximations, one needs to neglect some terms in the equations, while losing the universality of the calculated solutions. The fundamental equation used is known as the Friedmann equation; this equation is central to numerous cosmological models [5][6][7][8]. The HPM was introduced by the Chinese mathematician He [9] in 1999 to solve differential and integral equations. The essential idea of this method is to introduce a homotopy parameter p, where p ∈ [0, 1]. When p = 0, the system of differential or integral equations reduces to a simplified form. As p increases to 1, the system undergoes a sequence of deformations, the solution at each stage being close to that at the previous stage of the deformation; the system takes the original form of the equation at p = 1, and the final stage of deformation gives the desired solution. This method has been broadly studied over several years and effectively employed by various researchers [10][11][12][13][14][15][16][17]. It is recognized that HPM is a combination of homotopy from topology and classic perturbation techniques. The HPM has the great advantage of providing an analytical approximate solution for a wide scope of nonlinear problems in applied sciences. The HPM is used to solve fractional differential equations (FDEs), nonlinear differential equations, nonlinear integral equations, and difference-differential equations. It has been shown that HPM makes it possible to solve nonlinear problems easily, effectively, and accurately; it generally provides a solution with one or two iterations with high accuracy, and it exhibits very rapid convergence of the solution series in most cases considered so far in the literature. In this current study, we apply this method in cosmology in the scope of the locally rotationally symmetric Bianchi type-I model [18] to find the solutions. By considering the HPM approach we shall explore the approximate solutions of modified cosmological problems in the background of LRS Bianchi type-I space-time.
The plan of the current study is as follows: In Section 2 we calculate the cosmological problem for LRS Bianchi type-I space-time with the help of Einstein's field equations for the LRS Bianchi type-I model. In Section 3, we present the basic calculations of HPM with all its necessary conditions. In Section 4, we explore some approximate solutions for five different values of the parameter n. At the end, we summarize our main results and achievements.

Cosmological Model

The basic setup for Einstein's field equations [19,20], with an extra term Λg_{αβ}, is defined as $R_{\alpha\beta} - \frac{1}{2} R\, g_{\alpha\beta} + \Lambda g_{\alpha\beta} = 8\pi T_{\alpha\beta}$ (1), where R_{αβ} denotes the Ricci tensor, R the Ricci scalar, g_{αβ} the metric of the space-time, and T_{αβ} the energy-momentum tensor. The geometry of an LRS Bianchi type-I space-time can be given as $ds^2 = dt^2 - A^2(t)\, dx^2 - B^2(t)\,(dy^2 + dz^2)$ (2). For a perfect-fluid matter profile, the energy-momentum tensor is $T_{\mu\nu} = (\rho + p)\, u_\mu u_\nu - p\, g_{\mu\nu}$ (3), where $u^\mu = \sqrt{g_{00}}\,(1, 0, 0, 0)$ is the four-velocity in co-moving coordinates. Using Equations (2) and (3) in Equation (1), we get the field equations (4)-(6) for the LRS Bianchi type-I space-time. Adding Equations (5) and (6) yields the expression in Equation (7), and from Equations (1) and (7) we get the modified continuity equation (8). By substituting A = lB^n into Equation (8) we obtain Equation (9); here, we fix l = 1 for simplicity. Equation (9) can then be expressed in the form of Equation (10), where $w_m = p/\rho$ is the equation of state parameter; when w_m = 0, i.e., p = 0, this is known as the dust case. On integration, we get the relation $\ln(\rho) + (1 + w_m)(n \ln(B) + 2 \ln(B)) = \ln \rho_0$ (11), where ln ρ_0 is a constant of integration; this can be written as $\rho = \rho_0\, B^{-(1+w_m)(n+2)}$ (12). Using the above equation in Equation (1) we obtain Equation (13), which on simplification gives the final expression (14). The latter can be rewritten as Equation (15), where $H_\Lambda^2 = \frac{8\pi\Lambda}{1+n}$. On reshaping Equation (15), we have Equation (16), where $\Omega_{m\Lambda} = \rho_0/\Lambda$. By introducing the dimensionless cosmic time τ = H_Λ t, we can rewrite Equation (16) as Equation (17), where the prime denotes the derivative with respect to the cosmic parameter τ.

Basic Formulation of HPM

For a brief discussion of the HPM, we assume the following nonlinear differential equation: $A(u) - f(r) = 0, \; r \in \Omega$ (18), with boundary conditions $B(u, \partial u/\partial n) = 0, \; r \in \Gamma$ (19), where A(u) denotes the differential operator, B(u, ∂u/∂n) represents the boundary condition, f(r) is an analytical function, and Γ is the boundary of the domain Ω. The operator A can be split into two parts, denoted L and N, where L is the linear part and N is the nonlinear part. The above equation can thus be written as $L(u) + N(u) - f(r) = 0$ (20). According to the HP scheme, we construct a homotopy $v(r, p): \Omega \times [0, 1] \to \mathbb{R}$, which reads $H(v, p) = (1 - p)\,[L(v) - L(u_0)] + p\,[A(v) - f(r)] = 0$ (21), where p ∈ [0, 1] is an embedding parameter and u_0 is an initial approximation of the solution of Equation (20). The above equation implies $H(v, 0) = L(v) - L(u_0) = 0$ and $H(v, 1) = A(v) - f(r) = 0$ (22). The process of taking the values of p from zero to 1 changes the homotopy v(r, p) from u_0(r) to u(r). This is called deformation, and L(v) − L(u_0) and A(v) − f(r) are called homotopic relations in topology. According to the HPM, we first use the embedding parameter p as a small parameter and assume that the solution of Equation (20) can be written as a power series in p: $v = v_0 + p\, v_1 + p^2\, v_2 + \cdots$ (23). Setting p = 1, the approximate solution is obtained as $u = \lim_{p \to 1} v = v_0 + v_1 + v_2 + \cdots$ (24).

Application of HPM

In this section, we discuss the application of HPM to the cosmological model presented in Equation (17), with equation of state parameter w_m.
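To make the recursion of Equations (21)-(24) concrete, the following symbolic sketch applies HPM to the toy problem u'(t) + u(t)² = 1, u(0) = 0 (our illustrative choice, not the model of Equation (17)); its exact solution is u = tanh(t), so the truncated series can be checked term by term:

```python
import sympy as sp

t, p = sp.symbols("t p")
N = 3  # number of correction terms

# Series ansatz v = v0 + p v1 + p^2 v2 + ...  (Equation (23))
v = [sp.Function(f"v{k}")(t) for k in range(N + 1)]
V = sum(p**k * vk for k, vk in enumerate(v))

# Homotopy for u' + u^2 - 1 = 0 with L(u) = u' and u0 = 0:
# H(v, p) = v' + p (v^2 - 1) = 0   (Equation (21))
H = sp.diff(V, t) + p * (V**2 - 1)

sol = []
for k in range(N + 1):
    # Collect the order-p^k equation and substitute lower-order solutions.
    eq = sp.expand(H).coeff(p, k).subs(list(zip(v, sol)))
    s = sp.dsolve(sp.Eq(eq, 0), v[k], ics={v[k].subs(t, 0): 0}).rhs
    sol.append(sp.simplify(s))

u_approx = sum(sol)                     # v0 + v1 + ... at p = 1 (Equation (24))
print(sp.expand(u_approx))              # t - t**3/3
print(sp.series(sp.tanh(t), t, 0, 6))   # t - t**3/3 + 2*t**5/15 + ...
```

The script recovers v_0 = 0, v_1 = t, v_2 = 0, v_3 = −t³/3, matching the Taylor series of tanh(t) through third order.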
Many researchers have devoted considerable effort to studying the universe with different kinds of matter content. The different matter sources can be characterized by the equation-of-state parameter w_m = p/ρ, where p and ρ denote the pressure and the energy density, respectively. The values w_m = 0, 1/3, and 1 describe dust, radiation, and stiff matter, respectively; w_m = −1 represents the vacuum case, w_m ∈ (−1, −1/3) the quintessence era, and w_m < −1 the phantom region. For simplicity we take the dust case, w_m = 0, for which Equation (17) takes the form given in Equation (25). The parameter n in Equation (25) plays an important role. In this study we explore the HPM-based solution of Equation (25) for five different values of n: n = 1, 2, 3, 4, and 5. The main objective of this study is to obtain solutions of the cosmological model; since an exact solution of the present cosmological model is difficult to obtain, we employ the HPM, which provides analytical approximate solutions. Solution for n = 1 Setting n = 1 in Equation (25) gives the nonlinear differential equation (26). Employing the HPM scheme, we construct the corresponding homotopy, with the initial conditions B_0(0) = const = B̄ and B_i(0) = 0 for i ≥ 1. Equating powers of p, Equation (26) splits into the sequence of linear problems (27)-(29). Solving the differential equations (27)-(29) yields the solutions (30)-(32), and adding Equations (30)-(32) gives the final B(t) of Equation (33). This is the approximate solution for n = 1. In this approximate solution the parameter Ω_mΛ plays a crucial role: its effect can be seen in the left panel of Figure 1 for different values of Ω_mΛ. The increasing, positive trend shows that our HPM solutions are physically acceptable. Solution for n = 2 Setting n = 2 in Equation (25) gives the nonlinear differential equation (34). Applying the HPM scheme with the initial conditions B_0(0) = const = B̄ and B_i(0) = 0 for i ≥ 1, and equating powers of p, Equation (34) splits into Equations (35)-(37). Solving Equations (35)-(37) gives the approximate solution for n = 2. The behaviour for different values of Ω_mΛ is shown in the right panel of Figure 1; the positively increasing behaviour shows that our HPM solutions are physically reasonable. The approximate solution for n = 3 is obtained in the same way. In this approximate solution the parameter Ω_mΛ again plays an important role: the behaviour for different values of Ω_mΛ is shown in the left panel of Figure 2, and the increasing trend shows that the HPM solutions are physically acceptable. Solution for n = 4 Setting n = 4 in Equation (25) gives the corresponding nonlinear differential equation. Applying the HPM scheme, solving Equations (51)-(53), and combining them via Equations (54)-(56), we obtain the approximate solution for n = 4; its graphical representation is shown in the right panel of Figure 2. Solution for n = 5 Setting n = 5 in Equation (25) gives the corresponding nonlinear equation, to which we again apply the HPM technique to obtain the approximate solution for n = 5. In this approximate solution the parameter Ω_mΛ also plays an important role.
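Since Equation (25) itself is not reproduced in this extraction, a quick numerical sanity check of the qualitative claims above can still be made with a stand-in Friedmann-type equation. The sketch below assumes the dust-case form B' = B·sqrt(Ω_mΛ·B^(−(n+2)) + 1), which is consistent with the quantities H_Λ and Ω_mΛ defined earlier but should be checked against the original paper; it confirms positive, increasing B(τ) for n = 1, …, 5 and the paper's four Ω_mΛ values.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(tau, B, n, omega):
    # Assumed stand-in for Equation (25); not taken verbatim from the paper.
    return B * np.sqrt(omega * B**(-(n + 2)) + 1.0)

B0 = 1.0                                      # initial condition B(0) = B-bar
taus = np.linspace(0.0, 1.0, 101)
for n in (1, 2, 3, 4, 5):
    for omega in (0.10, 0.20, 0.30, 0.40):    # the Omega_mL values used in the paper
        sol = solve_ivp(rhs, (0.0, 1.0), [B0], t_eval=taus, args=(n, omega))
        # B(tau) should be positive and increasing, as Figures 1-3 report
        assert np.all(sol.y[0] > 0) and np.all(np.diff(sol.y[0]) > 0)
print("all runs positive and increasing")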
The behaviour for the different values of the parameter Ω_mΛ can be seen in Figure 3. Special Case In this special case we carry out a comparative study of the exact solution of Equation (26) and the HPM solution of Equation (33) with Ω_mΛ = 0.4. The exact solution of Equation (26) can be calculated in closed form, where C_1 is a constant of integration. The comparison of the exact solution and the HPM-based solution is shown in the right panel of Figure 3. We have also computed quantitative values of the exact solution for n = 1, the case in which it was possible to calculate it, and have listed these values in Table 1. Conclusions In this article we have studied the LRS Bianchi type-I space-time and derived its field equations, leading to the cosmological model presented in Equation (17). Further, we developed an HPM scheme for the resulting nonlinear differential equation. Furthermore, we sought solutions of the spatially modified cosmological model for LRS Bianchi type-I space-time, for which an exact solution could not in general be found because of the nonlinearity. In this regard we examined five different values of the parameter n, i.e., n = 1, 2, 3, 4, and 5, and we also explored each solution for four different values of the parameter Ω_mΛ, i.e., Ω_mΛ = 0.10, 0.20, 0.30, and 0.40. The solutions obtained are shown in Figures 1-3 for the five cases. The variation of the solutions reveals that the parameter n plays a special and important role in this study, and it is also noticeable from Figures 1-3 that the parameter Ω_mΛ has a crucial role in this modified cosmological problem: increasing the value of Ω_mΛ increases the obtained solutions. In our view, the results of the current study show that the HPM is very effective and simple for obtaining approximate solutions of the modified Friedmann equation in cosmology. The purpose of this study was to demonstrate the application of HPM in the cosmological field, and the physically acceptable solutions of positive nature that we obtained support the use of HPM. Conflicts of Interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
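In the same spirit as the special-case comparison above, a closed form can be verified against direct numerical integration. For the stand-in n = 1 equation of the previous sketch, B' = B·sqrt(Ω·B^(−3) + 1), an exact solution is B(τ) = Ω^(1/3)·sinh^(2/3)(3τ/2 + c), with c playing the role of an integration constant like the paper's C_1; this is our assumed form, not necessarily the paper's Equation (26).

import numpy as np
from scipy.integrate import solve_ivp

omega = 0.4                                   # Omega_mL = 0.4, as in the special case
c = np.arcsinh(np.sqrt(1.0 / omega))          # chosen so that B(0) = 1

def exact(tau):
    # Closed form for the stand-in equation; verified below, not taken from the paper.
    return omega ** (1 / 3) * np.sinh(1.5 * tau + c) ** (2 / 3)

def rhs(tau, B):
    return B * np.sqrt(omega * B ** (-3) + 1.0)

taus = np.linspace(0.0, 2.0, 201)
num = solve_ivp(rhs, (0.0, 2.0), [exact(0.0)], t_eval=taus, rtol=1e-10, atol=1e-12)
print(np.max(np.abs(num.y[0] - exact(taus))))  # tiny residual: closed form matches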
3,147.2
2020-03-04T00:00:00.000
[ "Physics" ]
Retracted: Electrical conductivity of hydrogenated armchair nanoribbon as a gas sensor using non-equilibrium Green's function method This article was mistakenly published twice; for this reason the duplicate article has been retracted. For citation purposes please cite the original: http://www.inljournal.com/?_action=articleInfo&article=16 The nanosensing properties of hydrogenated-edge armchair graphene nanoribbons (HAGNRs) are investigated. Using the non-equilibrium Green's function method in the tight-binding approach, the effects of hydrogen and oxygen adsorption on the current-voltage (I-V) characteristics and on the electrical conductivity of these systems are calculated. We find that the I-V curves of these systems change upon adsorption of hydrogen or oxygen molecules. We also find that the conductivity of these systems increases at low adsorption concentrations, while it decreases at high adsorption concentrations. This can be explained in terms of the semiconducting or metallic properties of the adsorbed system, obtained from the electronic properties of our clean HAGNR system: the local density of states of some sites has a metallic behavior, while that of other sites has a semiconducting behavior. Note that all results are computed at a fixed temperature T = 300 K, i.e., room temperature. By calibrating the conductivity in terms of the number of adsorbed gas molecules, one can build a gas nanosensor. Here we extend our previous work [14] to investigate the gas-sensing properties of these systems. We focus on AGNRs hydrogenated at their edges, called hydrogenated AGNRs (HAGNRs). Note that in the previous work we considered clean AGNRs and ZGNRs without adsorption of hydrogen or oxygen molecules, whereas in this paper we consider a HAGNR with hydrogen or oxygen molecules adsorbed on its surface. Our results show that at low adsorption concentrations, additional adsorption of hydrogen or oxygen molecules increases the conductivity of the system, while at high adsorption concentrations additional adsorption reduces the conductivity. The conductivity can therefore identify the percentage of adsorbed gas molecules. This paper is organized as follows: in the 'Methods' section we describe the model and theoretical tools used to calculate the local density of states, the transmission, and the current-voltage curves of the system; in the 'Results and discussion' section we apply this method to our system and analyze the obtained results; the last section is the conclusion. Method Consider a HAGNR attached to two leads on the left and the right; the HAGNR is called the device. To map this system onto a one-dimensional one, we divide it into identical cells along the transport direction, each cell containing N_0 atoms (each lattice site comprises N_0 sublattice atoms). This is illustrated in Figure 1. In the NEGF formalism, in the absence of an applied voltage, the total transmission is given by the expression of refs. [14,16], in which Im[x] stands for the imaginary part of x and G^r_{ii′} (G^a_{ii′}) is the retarded (advanced) Green-function matrix from unit cell i to unit cell i′. The unit-cell index i = 0 refers to the right edge of the left lead, and i = N + 1 to the left edge of the right lead. Note that the device region contains N unit cells (i = 1, …, N), with N = 5 in our system. The retarded and advanced Green functions, G^r and G^a, are defined in the usual way, where S and I are the overlap matrix and the identity matrix, respectively.
H is the Hamiltonian matrix of the system, and η is a positive infinitesimal energy. Σ_{L/R} is the left/right lead self-energy. These self-energies are obtained by applying the mode-matching method with the appropriate boundary conditions (the source is placed in the left lead); they contain all information about the coupling between the leads and the scattering device region (the HAGNR), as well as about the scattering boundary condition. H_{LD/RD} stands for the coupling matrix between neighboring cells in the left/right lead and the device. Note that the self-energy functions are energy dependent and are expressed in terms of the Bloch matrices F_{L/R}, which depend on the lead modes. In real space, H is a Hamiltonian matrix whose elements are themselves matrices of dimension N_0 × N_0. The matrix elements of H are defined in the tight-binding representation in the nearest-neighbor approximation: we take the hopping between nearest-neighbor atoms in the device equal to 2.75 eV and calculate all other hopping and on-site energies relative to this value. Using the Landauer approach, we apply a weak bias voltage between the left and right regions along the transport direction. In addition, we ignore phonon effects and fix the temperature at room temperature, T = 300 K. The total current in the scattering formalism is given by the Landauer-type expression [15]

I(V) = (2e/h) ∫ T(E) [f_L(E) − f_R(E)] dE,

where f_{L/R} is the Fermi function and μ is the chemical potential. Note that the chemical potentials of the two leads vary with the applied bias voltage V as μ_L − μ_R = eV (the source is at the left lead). We assume that before the bias voltage is applied, the chemical potential of the whole system is −6β, where β is the hopping energy in the device. After the bias is applied, all atoms in the left-lead cells have the same chemical potential μ_L, while the atoms in the right lead have the same chemical potential μ_R; in the device region, the chemical potential varies linearly with the distance from the left edge of the device, i.e., along the transport direction (as shown in Figure 1). Our clean system is a HAGNR containing 90 atoms (70 carbon and 20 hydrogen atoms). We then consider hydrogen and oxygen molecules adsorbed on top of this system. We assume that the leads are two-dimensional semi-infinite periodic copper wires with body-centered square lattices, all regions having the same width. In the next section we present our results. Result and discussion To see the effects of hydrogen/oxygen molecule adsorption on the electronic properties of HAGNRs, we first investigate the electronic properties of our clean HAGNR system. Figure 2a shows that the edge atoms of the clean HAGNR form metallic regions separated by semiconducting regions. Figure 3a shows the metallic and semiconducting regions of the lower edge nanowire; the middle nanowire is constructed from metallic and semiconducting regions of double-site atoms along the length of the HAGNR, as shown in Figure 3b. Hence, at low adsorption, adsorption of a hydrogen/oxygen molecule on a metallic or a semiconducting carbon atom of the HAGNR can change the electronic properties of the HAGNR in different ways: adsorption of H2/O2 on metallic sites leads to a reduction of the current, while adsorption on semiconducting sites leads to an increase in the current. Figure 4a,b shows the current-voltage characteristics of our system when one hydrogen/oxygen molecule is adsorbed at sites 39, 41, 43, and 45 along the width of the HAGNR.
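To make the NEGF pipeline concrete, here is a minimal sketch (ours, not the paper's code) for a toy one-dimensional tight-binding chain: it builds the retarded Green function G^r = [(E + iη)I − H − Σ_L − Σ_R]^(−1), the broadenings Γ_{L/R} = i(Σ − Σ†), the Caroli transmission T = Tr[Γ_L G^r Γ_R G^a], and a Landauer current. Wide-band self-energies stand in for the paper's mode-matching construction, and all parameter values besides the 2.75 eV hopping and the five device cells are illustrative.

import numpy as np

def transmission(E, H, gamma=0.5, eta=1e-9):
    # Wide-band lead self-energies on the first and last site (illustrative).
    n = H.shape[0]
    sigma_L = np.zeros((n, n), complex); sigma_L[0, 0] = -1j * gamma / 2
    sigma_R = np.zeros((n, n), complex); sigma_R[-1, -1] = -1j * gamma / 2
    Gr = np.linalg.inv((E + 1j * eta) * np.eye(n) - H - sigma_L - sigma_R)
    Ga = Gr.conj().T
    Gam_L = 1j * (sigma_L - sigma_L.conj().T)
    Gam_R = 1j * (sigma_R - sigma_R.conj().T)
    return np.trace(Gam_L @ Gr @ Gam_R @ Ga).real      # Caroli formula

def current(V, H, kT=0.0259):                          # kT at T = 300 K, in eV
    muL, muR = 0.5 * V, -0.5 * V                       # mu_L - mu_R = eV (e = 1)
    Es = np.linspace(-3.0, 3.0, 601)
    f = lambda E, mu: 0.5 * (1.0 - np.tanh((E - mu) / (2 * kT)))  # Fermi function
    T = np.array([transmission(E, H) for E in Es])
    dE = Es[1] - Es[0]
    return np.sum(T * (f(Es, muL) - f(Es, muR))) * dE  # in units of 2e/h

beta = 2.75                                            # device hopping (eV), as in the text
N = 5                                                  # five device cells, as in the text
H = -beta * (np.eye(N, k=1) + np.eye(N, k=-1))
for V in (0.1, 0.2, 0.4):
    print(V, current(V, H))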
To explain this result, we examine the LDOS of these sites in the adsorbed system (Figures 5a-d and 6a-d). Figures 7a-c and 8a-c show the LDOS of sites 26, 41, and 45, respectively, when one H2/O2 molecule is adsorbed at site 43: the semiconducting sites 26 and 41 are converted into metallic sites. In the next stage we investigate the effect of the adsorption concentration on the I-V curve of the HAGNR. Figure 9a,b shows the I-V curves of the HAGNR when three H2 or three O2 molecules are adsorbed at (a) sites 21, 39, and 57; (b) sites 23, 41, and 59; (c) sites 25, 43, and 61; and (d) sites 27, 45, and 63 along its length, respectively. We find that at high adsorption concentrations, adsorption at semiconducting sites with a larger energy gap increases the current more than at semiconducting sites with a smaller gap, while adsorption at strongly metallic sites decreases the current more than at weakly metallic sites. Figure 10 shows the corresponding behaviour at higher coverage (Figure 10d): by increasing the adsorption concentration, the current decreases. To complete our results, we calculated the conductivity of the system as a function of the H2/O2 adsorption concentration; Figure 11a,b illustrates the conductivity versus H2 and O2 concentration, respectively. Since at high adsorption these systems acquire metallic features, such a reduction is expected at high concentrations. Conclusions Using the NEGF (non-equilibrium Green's function) method in the tight-binding approach, we investigated the effects of gas adsorption on the electronic properties of hydrogenated armchair graphene nanoribbons. We found that at low adsorption concentration, adsorption at metallic HAGNR sites decreases the current, while adsorption at semiconducting HAGNR sites increases the current in the current-voltage curves of these systems. At high adsorption concentrations the whole system becomes metallic, so increasing the adsorption concentration always decreases the current. We also found that at low adsorption the conductivity of the system increases with increasing adsorption, while at high adsorption the conductivity decreases with increasing adsorption. These results could be used to build a gas nanosensor.
2,081.8
2012-04-20T00:00:00.000
[ "Physics" ]
On a quest for cultural change Surveying research data management practices at Delft University of Technology The Data Stewardship project is a new initiative of Delft University of Technology (TU Delft) in the Netherlands. Its aim is to create mature working practices and policies for research data management across all TU Delft faculties. The novelty of this project lies in having a dedicated person, the so-called 'Data Steward,' embedded in each faculty to approach research data management from a more discipline-specific perspective. It is within this framework that a research data management survey was carried out at the faculties that had a Data Steward in place by July 2018. The goal was to obtain an overview of general data management practices and to use the results as a benchmark for the project. The total response rate was 11 to 37%, depending on the faculty. Overall, the results show similar trends across faculties and indicate a lack of awareness of various data management topics, such as automatic data backups, data ownership, the relevance of data management plans, the FAIR data principles, and the use of research data repositories. The results also show great interest in data management: more than ~80% of the respondents in each faculty claimed to be interested in data management training and wished to see a summary of the survey results. The survey thus helped identify the topics the Data Stewardship project is currently focusing on, by carrying out awareness campaigns and providing training at both university and faculty level. Introduction The importance of effective research data management (RDM) and sharing practices in research is nowadays widely recognised by funding bodies, governments, publishers and research institutions. Commitment to the Findable, Accessible, Interoperable and Re-usable (FAIR) principles (Wilkinson et al., 2016) is not only a requirement for all projects funded by the European Commission's Horizon 2020 funding scheme (European Commission, 2017); the principles are also fundamental to the European Open Science Cloud (European Commission, 2018). In addition, in the Netherlands, the Dutch government has declared that Open Science and Open Access should be the norm (Regeerakkoord, 2017-2021). The two major national funding bodies, the Dutch Research Council (NWO) and the Netherlands Organisation for Health Research and Development (ZonMW), have detailed requirements for data management and data sharing as part of their research grant conditions (NWO, 2016; ZonMW, 2018). In parallel, more and more journals and publishers require that research data supporting research articles be made available (e.g., Nature Research, 2016; PLOS, 2014). Last but not least, research institutions have also recognised the importance and necessity of good data management and transparency in research. In the Netherlands, this has been reflected in the National Plan Open Science (NPOS), signed in 2017 by the Association of Universities in the Netherlands (VSNU), and in the Netherlands Code of Conduct for Research Integrity, published in October 2018.
Consequently, in order to ensure that high-level policies are reflected in day-to-day research practices, research institutions have started offering additional support services for RDM. At TU Delft, central library support for RDM and data sharing has been in place for several years. Furthermore, TU Delft is part of the 4TU consortium of technical universities in the Netherlands and is home to the 4TU.Centre for Research Data archive (4TU.ResearchData), which functions as a certified, trusted repository (Data Seal of Approval) for long-term preservation and sharing of research data. Both the TU Delft Research Data Services and the 4TU.ResearchData services have been evaluated using the Research Infrastructure Self-Evaluation Framework (RISE) (Rans & Whyte, 2017). This framework helped assess the maturity of the RDM services provided, and it made clear that more effort had to be put into policy development and training. In line with the fact that bottom-up, community-driven approaches are favoured at TU Delft, we believe that data management support needs to be discipline-specific in order to be truly relevant to our research communities. Heading in this direction, TU Delft's executive board provided funding for three years (2018-2020) to initiate the Data Stewardship project at TU Delft. A dedicated Data Steward with a subject-specific background (a PhD or equivalent experience in a faculty-related research area) was hired at every TU Delft faculty. All Data Stewards are coordinated by the TU Delft Library and interact constantly with other support staff in order to develop mature working practices for RDM across the campus. How can we approach such a task? We reasoned that we first need to understand what the current practices are and, based on that, develop a system that allows us to improve those practices and regularly assess their progress. Hence, two main strategies were adopted: 1) conducting qualitative, semi-structured interviews with researchers across the faculties; and 2) running quantitative surveys about data management practices at TU Delft on a periodic basis. The semi-structured interviews provide important in-depth insight into researchers' needs and are necessary for building trust and connections with the research community. Additionally, a broader quantitative overview of RDM practices is necessary to provide robust benchmarking of the project. This paper presents the results of the first quantitative RDM survey carried out at TU Delft. The survey is partly based on the Data Asset Framework (DAF) (Johnson, Parsons, & Chiarelli, 2016). The DAF survey is a comprehensive tool that allows institutions to assess researchers' data management practices and identify gaps in service provision. However, since the DAF survey is rather lengthy (consisting of over 60 questions), it was decided to follow the general principle of the DAF framework but to simplify the questionnaire substantially, into a survey containing a total of 22 questions. Method The survey was developed as a web-based questionnaire and distributed via email to all staff members of 6 of the 8 TU Delft faculties. The 2 remaining faculties did not have a Data Steward before July 2018 (Data Stewards were incorporated at different times, and the survey was carried out only at the faculties that had a Data Steward in place).
The survey was sent out in two runs. The first run was carried out in November 2017 at the Faculties of Aerospace Engineering (AE), Civil Engineering and Geosciences (CEG), and Electrical Engineering, Mathematics and Computer Science (EEMCS). The second run was carried out in May-June 2018 at the Faculty of Technology, Policy and Management (TPM), and in May-July 2018 at the Faculties of Mechanical, Maritime and Materials Engineering (3mE) and Applied Sciences (AS). The survey consisted of 22 questions about RDM, alongside questions asking for demographic information (e.g., position, institution, faculty, department). The topics included automatic data backups, time frame and frequency of data loss, use of dedicated RDM tools, data ownership, data stewardship, data management plans (DMPs), awareness of the FAIR data principles, and use of research data repositories. The response scheme was mostly multiple choice with categorical answers (e.g., 'Yes,' 'No' and 'Not sure'). The analysis shown in this article was carried out using the software Tableau Reader v2018.2. To encourage responses, respondents were given the possibility to be included in a draw for vouchers of a well-known commercial house in the Netherlands. Those who wanted to participate in the draw and/or receive information about the results were asked to provide their email addresses. The results of the draw carried out at each faculty were disseminated accordingly by each Data Steward. The data were anonymized by removing identifiable features, and the raw files were destroyed. Data Availability A description of the survey and its questions is publicly available on the Open Science Framework under the name 'Quantitative assessment of research data management practice' (Teperek et al., 2019). The anonymized data are publicly available in Zenodo under the title 'Quantitative assessment of research data management practice' (Krause et al., 2018), and a visualization of the survey is available on Tableau Public under the name 'TU Delft Quantitative Assessment of Research Data Management Practice 2017-2018.' The survey was also carried out at the Ecole Polytechnique Federale de Lausanne (EPFL) at the end of 2017; the report of those results can be found on the EPFL Library website. The results given in this work correspond to TU Delft only. Response Rates The survey was sent to all staff members of each faculty. The total number of respondents was 680, of whom 628 were 'Full Professors,' 'Associate Professors,' 'Assistant Professors,' 'Postdocs/Researchers' or 'PhD candidates.' Table 1 lists the response rates per academic position per faculty. Considering Full Professors, Associate Professors, Assistant Professors, Postdocs/Researchers and PhD candidates, the total response rates per faculty varied from 8% at EEMCS to 37% at AE. The majority of the respondents were PhD candidates, representing 52% of the responses (see Figure 1). The response rate from Full Professors, on the other hand, was 5% (varying from no responses at CEG to 48% at AE). In the following section, the results are presented considering the responses from Full/Associate/Assistant Professors, Postdocs/Researchers and PhD candidates, in order to restrict the answers to data associated with research. 3. Results Data Backup & Data Loss Figure 2 presents the responses regarding automatic backups of research data. About 43% of the respondents do not have their data automatically backed up, while the percentage answering 'Yes' to the question 'Is your research data automatically backed up?'
is 42% on average, ranging from 39 to 47% across faculties (see Table 2; all percentages reported here are rounded to the nearest integer and given relative to the total number of respondents per faculty or per position). Responses from the different faculties appear similar, with the exception of TPM, where the percentage of respondents not doing automatic backups is the lowest of all faculties (28%, compared to 39-52% elsewhere); however, the overall share of those who do not know whether their data is backed up is the highest at TPM. Focusing on the answers per position, the percentage of respondents in higher academic positions (i.e., Full/Associate/Assistant Professors) who do automatic backups is greater than that of the PhD candidates who replied to the survey (see Table 3). Regarding data loss, Figure 3 shows the responses per faculty to the question 'Did you lose any research data in the past year?,' and Table 4 lists the responses per academic position. According to Figure 3, the answers across faculties behave similarly: on average, about 13% of the respondents in each faculty claim to have lost data in the past year. Percentages of data loss are also at a similar level across academic positions (see Table 4); interestingly, PhD candidates and Assistant Professors show the largest percentages of data loss (14 and 15%, respectively). Cross-correlating the responses on automatic backups and data loss, it is interesting to see that in almost all faculties the percentage of data loss (in the past year) reported by respondents who do automatic backups is lower than that reported by respondents who do not (see Table 5); only at TPM did it turn out to be the other way around. As listed in Table 5, the data loss percentage of respondents who do automatic backups is 8% on average, against 17% for respondents who do not. Research Data Repositories When queried about awareness of research data repositories, respondents could choose one of the following answers: 'Yes, I am already using them to find existing datasets or to share my own data;' 'Yes, I am aware of research data repositories, but I have not used them;' 'Not sure;' or 'No, I have no idea what these are.' The results show that respondents appear to be aware of research data repositories, but are not necessarily using them (see Figure 4 for responses per position and Table 6 for responses per faculty). The most common answer in all faculties was 'Aware but not using,' ranging from 42% of the replies at AS to 61% at TPM. Only about 16% of the respondents per faculty claim to be using research data repositories to find existing datasets or to share data. Participants were also asked whether they had heard about 4TU.ResearchData, to which respondents could reply 'Yes,' 'No,' or 'Not sure.'
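The cross-correlation described above (backup behaviour versus data loss, Table 5) is a simple contingency-table computation. A minimal pandas sketch with mock responses (the real anonymized data are published on Zenodo; their exact column names may differ):

import pandas as pd

# Mock responses standing in for the anonymized survey file (Krause et al., 2018).
df = pd.DataFrame({
    "auto_backup": ["Yes", "No", "Yes", "No", "Not sure", "No", "Yes", "No"],
    "lost_data":   ["No",  "Yes", "No", "Yes", "No",      "No", "No",  "Yes"],
})

# Percentage of respondents reporting data loss, split by backup behaviour
# (the comparison reported in Table 5).
tab = pd.crosstab(df["auto_backup"], df["lost_data"], normalize="index") * 100
print(tab.round(0))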
Inspection of those results shows that between 4% (AS) and 31% (TPM) of the respondents who replied 'Not sure' to being aware of research data repositories claim to have heard of the 4TU.ResearchData repository (Table 7). Moreover, among the respondents who have heard of 4TU.ResearchData, an average of 8% replied 'Not aware' (i.e., chose 'No, I have no idea what these are') when asked about research data repositories (Table 7). These contradictions suggest that respondents either do not know what repositories are, or do not know very well what 4TU.ResearchData is (see the Discussion). In general, respondents tend to be aware of research data repositories but claim not to be using them. Data Management Plans & FAIR Data Figure 5 shows that most respondents stated they were not working on a project with a DMP at the time they replied to the survey. Only ~19% of the respondents claim to be working on projects with a DMP, and a similar percentage is not sure whether the project they are working on has a DMP or not. Interestingly, among the respondents who are either aware of or using research data repositories (see Table 6), the percentage of respondents working on projects with DMPs is greater than the percentage of respondents who do not work with DMPs (see Table 8); the same holds among the respondents who are aware of FAIR data (see Table 8). Concerning FAIR data awareness alone, more than 50% of the respondents at each faculty are 'not aware' or 'not sure' of funders' expectations for FAIR data (see Table 9). In general, the percentage of respondents who answered that they are aware of FAIR is at the 20-30% level across faculties (except at the TPM faculty; see Table 9), and most of these answers come from staff members in higher positions on the academic ladder (see Figure 6). The results also show that respondents who are aware of FAIR data tend to be 'aware of or using' research data repositories, as opposed to the respondents who are not aware of what FAIR data is; however, no significant difference is detected when comparing directly with the usage of research data repositories alone (see Table 10). This positive association between FAIR data awareness and repository awareness is also seen when comparing the answers to the question about having heard of the 4TU.ResearchData archive (see Table 10). Data Ownership Overall, researchers, particularly PhD candidates, show little awareness of who owns the data. Participants were specifically asked 'Do you
know who owns the data you are creating?' Only those who responded 'Yes' to that question were asked to specify who the owner(s) of the data was (were). The results show that at least ~50% of all the respondents of each faculty do not know, or are not sure, who the owner(s) of the data is (are) (see Figure 7). Researchers in higher academic positions appear to be more aware of data ownership, particularly Full Professors and Associate Professors (>60%; see Table 11); fewer than 50% of the Postdocs claim to know who owns the data. PhD candidates, on the other hand, appear to be the least aware of data ownership, with a 'Yes' percentage of 33% considering the responses from all faculties (Table 11). Furthermore, between 17% (AE) and 67% (TPM) of the responding PhD candidates who affirm knowing who the owner(s) is (are) claim some degree of ownership over the data they manage (see Table 12). This translates to an average of ~9% of all responding PhD candidates claiming either full or partial ownership of the data (right column of Table 12), where partial ownership appears to be shared with many different stakeholders (e.g., TU Delft, supervisor, research group, company, public, funder, etc.) and combinations thereof. The lack of awareness on this topic is also apparent from the written comments added to the question 'You said you know who owns the research data that you are creating. Who is it?' Examples of such comments are: 'Me! Well the university I guess' (PhD candidate), 'Department and supervisors' (PhD candidate), and 'The regulations are not completely clear on this, but as far as I remember it's the authors' (Associate Professor). Stewardship of Research Data Respondents were also asked 'Who do you think is responsible for the stewardship of research data resulting from your project?' However, confusion about the term 'stewardship' was apparent from the answers, suggesting that not everyone is familiar with this term in the first place. This was clear from the first run of the survey at the AE, CEG and EEMCS faculties. It was therefore decided to modify the question to 'Who do you think is responsible for the management of the research data resulting from your project?' for the surveys carried out later at 3mE and AS. Interestingly, this change in formulation had no significant impact on the results: the term 'management' was found to be just as confusing as the term 'stewardship.' Considering the above, most staff members (84% at AE; 94% at AS; 87% at CEG; 77% at EEMCS; 91% at TPM; and 92% at 3mE) acknowledge their own role in taking care of the data in the projects they are involved in. However, this responsibility is also said to be shared with other university stakeholders. In this regard, PhD candidates indicated that their supervisor is either fully or partially responsible for data stewardship throughout the research projects (e.g., 37% at TPM, 50% at CEG, 40% at EEMCS and 37% at AE).
Participants were also asked whether they had heard about the Data Stewardship project and the data management support at their faculties. Among the answers, respondents from TPM appear to be the most familiar with the Data Stewardship project and dedicated support (45%; see Figure 8), while at the other faculties this answer varied from 15 to 27% (Figure 8). Breaking down the answers by academic position, we find that in general (Full/Associate/Assistant) Professors are more aware of the Data Stewardship project and the dedicated RDM support than the other staff members (see Figure 9); on the other hand, fewer than 20% of the Postdocs/Researchers and PhD candidates, respectively, claim to be aware of them. Interest in Training Regarding training on RDM topics, researchers were asked: 'Please indicate if you (or related staff/students) would be interested in any potential training on research data management.' Figure 10 shows the results considering the total number of answers per academic position. The offered training topics included: 'General introduction to research data management;' 'Data management plan preparation;' 'Data backup and storage solutions;' 'How to use repositories for data sharing and searching for existing datasets;' 'Data ownership and licensing;' 'Using version control software;' 'Funders' requirements for data management and sharing;' 'Working with confidential data (personally identifiable, commercially sensitive etc.);' 'Data Carpentry;' and 'Software Carpentry;' among others. The names of these trainings have been shortened in Figure 10 for the sake of better visualization. Respondents were allowed to choose multiple topics if desired. According to the results, there is great interest among the surveyed researchers: more than 80% of the respondents are interested in RDM training. Interestingly, researchers in different academic positions expressed interest in different topics: Full Professors are mostly interested in a 'General introduction to research data management;' Associate and Assistant Professors expressed more interest in 'Working with confidential data' and 'Data ownership;' while Postdocs/Researchers and PhD candidates appear to be mostly interested in a 'General introduction to research data management,' but also in 'Data backup and storage.' These results appear to be consistent with what each academic position faces at work on a daily basis in terms of RDM. Discussion The questions in this survey aimed to target general RDM practices, not necessarily faculty-specific ones; hence it is not surprising that the results of this survey showed similar trends across the different faculties of the university. In general, we find some concerning practices that suggest researchers are not familiar with what the university has to offer regarding RDM, and/or that there is little education about what data management is and how research can benefit from it. The fact that most respondents do not have their data automatically backed up, or do not know whether it is backed up, indicates that a large fraction of the respondents might be performing manual backups and/or do not know very well what the TU Delft ICT solutions are regarding (at least) data backups (e.g., poor use of the TU Delft network drives).
The possibility that manual backups are common practice among researchers (especially PhD candidates) is of great concern, since such a practice carries a substantially higher risk of data loss than relying on automatic backups. Percentages of data loss registered in the last year are at the 10% level; however, such data loss occurrences have caused delays of up to 6 months of work. In addition, the percentage of data loss reported by respondents who do automatic backups is lower than that reported by respondents who do not. Hence, the Data Stewardship project has the mission of encouraging researchers not to rely only (much less mainly) on manual backups. Along with that, researchers should be encouraged to make use of TU Delft ICT resources and RDM services. The lack of use of the TU Delft network drives, and/or the limited understanding of these solutions, is quite apparent from the free-text comments written by participants who claim to do automatic backups. When asked how those automatic backups are done, examples of typical answers are: 'Managed by the ICT department at our faculty. The frequency I don't know. I put the data on the project drive (U);' 'Once a day, usually backed up in a harddisk or a usb disk, myself manages the backup;' 'Twice a week, my data is backed up in my mobile hard disk;' and 'On USB hard drives separate from the systems I work on, or remotely.' Moreover, only 34% of the respondents doing automatic backups mention the university network drives (most of the time using them together with other backup solutions); about 28% mention Surfdrive (most of them mentioning Surfdrive alone); 16% mention Dropbox (either alone or together with other platforms); and 7% mention Google Drive (either alone or together with other platforms). On a more concerning note, the free-text comments about how automatic backups are done show that some respondents who 'have' their research data automatically backed up are doing the backups themselves; hence it is not clear what definition of 'automatic backup' the respondents considered when answering this question (only respondents who claimed to do automatic backups were asked how the backups are done). It is therefore the aim of the Data Stewards to increase awareness regarding the sensitivity and security of data, and regarding which data storage, backup and processing solutions are most suitable for each data type. Finally, this lack of knowledge about TU Delft RDM services is also apparent in the answers about awareness of the Data Stewardship project and of ICT support for RDM (Figures 8 and 9): only 15 to 27% of the respondents claimed to have heard about them (Figure 8). On the one hand, such unfamiliarity with the Data Stewardship project is not surprising, since the Data Stewards had only recently been introduced at their respective faculties when the survey was sent out. On the other hand, the question also mentioned the university's ICT support, and the replies, especially from early career researchers, were still rather poor. This reveals another challenge for the Data Stewards: bringing RDM into the day-to-day practice of (especially) early career researchers. The issue mentioned above also points to a lack of education regarding RDM. This is also clear from the confusion about the terms 'stewardship' and 'management,' from the contradictions regarding research data repositories, and from the comments on how automatic backups are done. In addition, when asked which 'data management tools' they use, some of the tools mentioned (as free-text responses) included 'Mendeley,' 'hard-drives,' 'Google files,' 'Google drive,' 'MyBrain,' 'Dropbox,' 'OneDrive,' and 'Onenote,' alongside 'Git,' 'Github,' 'Gitlab,' 'Subversion,' 'Bitbucket' and 'Mercurial.' Interestingly, 'papers,' 'Digital computer,' 'slack,' and 'plain simple ASCII text files' were also mentioned as 'data management tools.' From the results of this survey, we see the need for further awareness raising and education with respect to RDM topics. This should be addressed both at an early career stage (e.g., PhD candidates) and among established researchers (i.e., Professors). Senior researchers are clearly more familiar with policies and regulations; however, they are not necessarily aware of the daily RDM practices these policies imply.
In addition, the survey results pose a new question for us: do researchers value proper RDM practices, or are these only seen as new funder/institutional mandates? This question is prompted by the relation found between the responses about 'FAIR data awareness' and 'awareness or use of research data repositories,' while no relation with 'use of research data repositories' alone was observed (Table 10). In addition, only 19% of the respondents claimed to be working on a project with a DMP, and a similar percentage was 'not sure' whether they were working on a project with a DMP or not (Figure 5). Hence, it is not clear whether researchers see the benefits of following the FAIR principles and DMPs, or whether these are viewed only as regulatory requirements from (mainly public) funders. Regardless, the results show that DMPs are indeed great tools for increasing awareness of adequate RDM practices. Based on this, the Data Stewardship project is currently focusing on turning awareness into actual practice: encouraging researchers to recognize tools such as DMPs not only as funder deliverables, but also as useful instruments for taking good care of the data. A relevant aspect of data management that also raises concerns is data ownership. As seen in Section 3.4, over 50% of the respondents 'do not know' or 'are not sure' who the owner of the data is. Researchers in higher academic positions appear to be more aware of data ownership than early career researchers, which might be related to the fact that established researchers are the ones directly involved in the contractual phase of research projects. From the survey results it is not clear whether such information is then disseminated to the early career researchers, who manage the relevant research data on a daily basis. We find this an important subject: once data ownership is clearly established, and well communicated to all team members from the beginning of a project, it becomes much clearer how the data should be managed throughout the project and what the restrictions are (e.g., data encryption, data sharing, protected storage). Clarifying responsibilities regarding data is also relevant. In this respect, most staff members do recognize their role in being (either fully or partially) responsible for the data in the projects they are involved in (Section 3.5). Among PhD candidates, between 37 and 50% claim that their supervisor is either fully or partially responsible for data management. Respondents who claim full or partial ownership of the data also tend to recognize responsibility for it, assumed either alone or shared with other university stakeholders (mostly the supervisor and the ICT manager). However, the same holds for respondents who 'do not know' or 'are not sure' who the owner of the data is; in other words, respondents acknowledge responsibility regardless of ownership. This, in addition to the great interest respondents show in RDM training (Section 3.6), definitely helps set up the proper environment for the Data Stewardship project to work on improving RDM at the different faculties of TU Delft.
Conclusions In a machine-readable, data-driven era, RDM is becoming an increasingly important topic for researchers. Proper data management practices are not only beneficial for research, as they facilitate research and promote verifiability and transparency in the field; they are also useful for researchers themselves, as they promote effective research throughout their careers and make it far easier to share data with others. In that sense, proper data management practices pave the way for Open Science and responsible data sharing. All these benefits are becoming quite clear to the community, to the point that researchers and research institutions/universities are becoming more aware of the need for further RDM support, in terms of both infrastructure and guidance. The survey results presented in this work have shown two main things: 1) a lack of awareness (and quite likely understanding) of some RDM topics, such as data ownership and what 'FAIR data' implies; and 2) great interest in RDM among researchers. More experienced researchers appear to be more aware of funders' requirements, such as DMPs and the FAIR data principles, than early career researchers. This can be explained by the fact that senior researchers are the ones dealing with policies, regulations and mandates; however, it is not clear whether 'awareness' in this case directly implies 'understanding,' or moreover actual adoption of such practices. The results also suggest that such high-level topics are not necessarily communicated/disseminated to the research groups (more specifically, to the early career researchers). Based on the findings of this survey, the Data Stewardship project at TU Delft has focused on understanding researchers' needs concerning data management, and on spreading awareness of adequate RDM practices and of the RDM services available to TU Delft researchers. We expect to carry out the survey on a periodic basis in order to benchmark the evolution of the Data Stewardship project at university level, and we encourage other institutions to reuse this survey and/or build upon it to help evaluate RDM awareness at their own institutions/universities. Acknowledgement Special thanks to Munire van der Kruyk and all support staff at the TU Delft faculties and Library, who provided advice and helped disseminate the survey at the faculties. We thank the Editorial Board of LIBER Quarterly for their critical remarks.
Fig. 3: Responses regarding research data loss in the past year. On average, 13% of the respondents claim to have lost research data in the past year.
Fig. 4: Responses regarding awareness of research data repositories. The answer options have been shortened to 'Using' ('Yes, I am already using them to find existing datasets or to share my own data'); 'Aware but not using' ('Yes, I am aware of research data repositories, but I have not used them'); and 'Not aware' ('No, I have no idea what these are'). The results are given in percentages considering all faculties. In general, respondents tend to be aware of research data repositories, but claim not to be using them.
Fig. 5: Responses to the question 'Does your project have a data management plan?' Responses are given as percentages with respect to the total number of respondents per faculty.
Fig. 6: Awareness of FAIR data. The percentages are given with respect to the total number of respondents per academic position (across all faculties).
Fig. 7: Results regarding data ownership awareness. Responses are given as percentages considering the total number of responses per faculty.
Fig. 8: Responses regarding the Data Stewardship project and dedicated RDM support at the faculties. The results are given as percentages relative to the total number of responses per faculty.
Fig. 9: Responses regarding the Data Stewardship project and dedicated RDM support at the faculties. The results are given as percentages considering the total number of respondents per academic position (from all faculties).
Fig. 10: Interest in potential training on research data management, per academic position (continued over two panels).
Table 2: Results for the question 'Is your research data automatically backed up?'
Table 3: Percentage of respondents who do automatic backups, per position.
Table 4: Percentage of respondents who lost data in the past year, per position.
Table 5: Comparison of data loss percentages between respondents who do automatic backups and those who claim not to have their research data automatically backed up.
Table 6: Results for the question 'Are you aware of research data repositories?'
Table 7: Comparison of answers regarding awareness of research data repositories and awareness of 4TU.ResearchData.
Table 8: Comparison of responses between researchers who work on projects with a DMP and those who do not.
Table 9: Results regarding awareness of FAIR data.
Table 10: Comparison between respondents who are aware of FAIR data and those who are not.
Table 11: Responses regarding data ownership.
Table 12: Data ownership responses among PhD candidates. Only respondents who answered 'Yes' to the question 'Do you know who owns the research data that you are creating?' were asked to specify who the owner(s) of the data was (were). The rightmost column lists the percentage of PhD respondents who claimed full or partial ownership, relative to the total number of PhD responses per faculty. All percentages have been rounded to the nearest integer.
8,055.6
2019-05-30T00:00:00.000
[ "Computer Science" ]
Prime factorization algorithm based on parameter optimization of Ising model This paper provides a new (second) way, completely different from Shor's algorithm, to show the optimistic potential of a D-Wave quantum computer for deciphering RSA, successfully factoring all integers within 10000. Our method significantly reduces the local field coefficient h and the coupling term coefficient J of the Ising model, by more than 33% and 26% respectively, which can further improve the stability of qubit chains and raise the upper bound of integer factorization. In addition, our results obtained the best index to date (a 20-bit integer, 1028171) for quantum computing attacks on RSA via the quantum computing software environment provided by D-Wave. Furthermore, Shor's algorithm requires approximately 40 qubits to factor the integer 1028171, which is far beyond the capacity of current universal quantum computers. Thus, post-quantum cryptography should further consider the potential of the D-Wave quantum computer for deciphering the RSA cryptosystem in the future. Quantum annealing. Quantum annealing, the core algorithm of a D-Wave quantum computer, has the potential to approach or even achieve the global optimum in an exponential solution space, corresponding to quantum evolution towards the ground state of the problem Hamiltonian [24]. The quantum processing units (QPUs), the core components for performing quantum annealing, are designed to solve quadratic unconstrained binary optimization (QUBO) problems [25,26], where each qubit represents a variable and the couplers between qubits represent the costs associated with qubit pairs. The objective that the QPU is designed to minimize has the QUBO form

Obj(x) = xᵀ Q x,

where Obj is the QUBO objective function, x is a vector of binary variables of size N, and Q is an N × N real-valued matrix characterizing the relationships between the variables. Thus, any problem given in this form can be solved by the D-Wave quantum annealer. Multiplication table for factorization. Quantum annealing uses the quantum effects generated by quantum fluctuations to find the global optimum of an objective function. The integer factorization problem can be transformed into a combinatorial optimization problem that can be handled by the quantum annealing algorithm, with the minimum energy output of the annealer corresponding to a successful factorization. To clarify the integer factorization method via quantum annealing, we introduce a multiplication table to illustrate how the factorization problem maps onto the Ising model (the model processed by a D-Wave quantum computer). We illustrate the factorization N = p × q, where p and q are prime numbers; Table 1 shows the factorization 143 = 11 × 13.
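A small sketch (ours) of the multiplication-table idea for 143 = 11 × 13: since both factors are odd 4-bit numbers, only four bits are unknown, and minimizing the squared residual over binary assignments recovers the factors. The paper instead builds column-wise equations with carry bits z_ij and sums their squares, which keeps the polynomial degree lower, but the ground state is the same.

import itertools
import sympy as sp

p1, p2, q1, q2 = sp.symbols('p1 p2 q1 q2')
p = 1 + 2*p1 + 4*p2 + 8          # odd 4-bit factor: leading and trailing bits fixed to 1
q = 1 + 2*q1 + 4*q2 + 8
obj = sp.expand((p*q - 143)**2)  # zero exactly at a valid factorization

# Apply the binary idempotency x**2 = x used throughout the paper; since p*q is
# linear in each bit, squaring produces at most power 2 of any single variable.
for v in (p1, p2, q1, q2):
    obj = sp.expand(obj.subs(v**2, v))

best = min(itertools.product((0, 1), repeat=4),
           key=lambda b: obj.subs(dict(zip((p1, p2, q1, q2), b))))
vals = dict(zip((p1, p2, q1, q2), best))
print(p.subs(vals), '*', q.subs(vals), '= 143')   # -> 13 * 11 (or the symmetric assignment)

The reduced objective still contains quartic terms such as p1*p2*q1*q2, which is exactly why the reduction to 2-local terms described next is needed before the problem fits the Ising form.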
In Table 1, p_i and q_i represent the bits of the multipliers, and z_ij is the carry bit from the ith column to the jth column. All the variables p_i, q_i, and z_ij are binary; every variable in Table 1 can only take values in {0, 1}. Adding each column leads to a set of equations in the unknown bits and carries (Equations (9)-(11) for 143). The objective function is defined as the sum of the squares of these three equations (Equation (12)). The minimum value of Equation (12) is 0, attained exactly when (p_1, p_2, q_1, q_2) solve Equations (9)-(11); the minimizing values of (p_1, p_2, q_1, q_2) therefore represent the solution of the factorization problem. Equations (13)-(15) are further simplified, and we define the objective function as the sum of the squares of all the columns (Equation (19)). Since the Ising model can only deal with interactions between two variables, polynomials beyond 2-local terms must be processed. Using the binary properties p² = p, q² = q, and c² = c (the values of p, q and c are 0 or 1), Equation (19) is expanded and simplified, and the terms of more than 2-local order are replaced following Equation (20) of ref. 30 (for more information about the factorization, refer to ref. 30). We replace p_1q_1, p_1q_2, p_2q_2, and p_2q_1 with t_1, t_2, t_3, and t_4, respectively. In Equation (20), the variables x_i encode the rule by which a cubic term is reduced to a 2-local term; for example, the expansion term p_1q_1q_2 in Equation (19) is replaced by t_1q_2. The auxiliary variables are then relabelled (e.g., x_11 = t_3 and x_12 = t_4), and finally the Boolean variables are mapped to spin variables via a correspondence of the form p = (s + 1)/2. The local field h collects the coefficients of the single-variable terms s_i, and the coupling J collects the coefficients of the 2-local terms s_is_j. The final model is given in Equations (22)-(23); its explicit numerical coefficients for the 143 instance are given in the original displayed equations and are omitted here. The model of Equations (22)-(23) can then be solved directly on the D-Wave machine, or the qbsolv software environment can be used to run the quantum annealing algorithm. In this way, the factorization model can be generalized to any integer; it is a scalable model for arbitrarily large integers in theory, and a real potential application for D-Wave. As the integers to be factored grow, however, the increasing number of qubits and the huge coupler strengths in the theoretical quantum model of Shuxian Jiang et al. [30] have a nontrivial impact on the QA precision of a real D-Wave machine. Especially for limited-connectivity hardware, the high qubit cost greatly limits the generalization and scalability of factorization in large cases. In addition, the reduction from 3-local to 2-local terms increases the coupler strengths and local field coefficients, especially for large integers. This paper proposes a new model that addresses two aspects: saving qubit resources, and simplifying the quantum model so as to factor larger integers with fewer qubits. In this way, we can reduce the number of qubits involved and the range of coupler strengths between qubits without any loss of generality.
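The Boolean-to-spin step described above is mechanical: substituting x_i = (s_i + 1)/2 into xᵀQx yields an Ising energy with local fields h and couplings J plus a constant offset. A minimal sketch of that standard conversion (ours; the paper's specific h and J values for 143 come from its own reduced polynomial):

import numpy as np

def qubo_to_ising(Q):
    # x^T Q x with x in {0,1}  ->  h.s + sum_{i<j} J_ij s_i s_j + offset, s in {-1,+1}
    Q = (np.asarray(Q, float) + np.asarray(Q, float).T) / 2.0   # symmetrize
    h = Q.sum(axis=1) / 2.0
    J = np.triu(Q, k=1) / 2.0
    offset = Q.sum() / 4.0 + np.trace(Q) / 4.0
    return h, J, offset

def ising_energy(s, h, J, offset):
    return float(h @ s + s @ J @ s + offset)

# Tiny check on obj = x0*x1 + x0: energies must agree in both pictures.
Q = np.array([[1.0, 0.5], [0.5, 0.0]])
h, J, off = qubo_to_ising(Q)
for bits in ((0, 0), (0, 1), (1, 0), (1, 1)):
    s = np.array([2*b - 1 for b in bits], float)
    x = np.array(bits, float)
    assert abs(x @ Q @ x - ising_energy(s, h, J, off)) < 1e-12
print(h, J, off)

Because h and J are what the hardware must realize physically, shrinking their ranges (the 33% and 26% reductions claimed in the abstract) directly eases the precision requirements on the annealer.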
It is expected that larger integers can be solved with fewer qubits, so that D-Wave can provide a more powerful capacity to factor large integers in the future. Optimization of model parameters. The Ising model of ref. 30 does not consider the restrictions on the final model derived from the target values, which may cause too many carries to be involved in the model. Here we introduce constraints derived from the difference between the target values and the maximal output of each column. The carries involved can then be directly removed in some cases, as shown in the improved multiplication table of Table 2. Actually, the method of ref. 32 is designed to reduce the number of qubits, and thus its improvement to the complexity of the model is limited. The main reason is that there is a "2" in Eq. (20), which leads to many high coupler strengths and local field coefficients in the final Hamiltonian, resulting in fragile quantum states. Therefore, another optimization should be proposed to solve the above problem without loss of generality and scalability. As mentioned above, we mainly focus on the optimization of the model parameters. Jiang et al. 30 proposed a way to reduce the 3-local terms to 2-local terms, which increases the local field coefficient and coupler strength parameters, especially for large integers. In the integer factorization problem based on quantum annealing, the reduction of the model parameters is beneficial to lowering the hardware requirements and improving the precision of quantum annealing. To reduce the 3-local terms to 2-local terms in the integer factorization process, we are inspired by ref. 35; the negative terms are handled the same way as in ref. 30. We mainly prove our optimization of the positive term, that is, why the positive 3-local term x1x2x3 can be replaced by a quadratic expression whose minimum over the auxiliary variable x4 coincides with it. Table 3 lists the 16 combinations of values of x1, x2, x3, and x4; the values of x1, x2, x3, and x4 are 0 or 1. The output of the reduced expression is given in the last column, followed by √ or × to indicate whether x1x2x3 equals the reduced expression at that assignment of x4. As mentioned earlier, the integer factorization problem is the problem of finding the minimum value of a function. In other words, solving for the minimum of x1x2x3 is the same as solving for the minimum of the reduced quadratic expression; see the verification sketch after this paragraph. Take the first two rows of Table 3 as an example: there x1 = x2 = x3 = 0, so x1x2x3 = 0, and the reduced expression also attains 0 at its minimizing choice of x4; at that point x1x2x3 is equivalent to the reduced form. The dimension-reduction method in this paper is not only applicable to the integer 143; it also applies whenever the polynomial objective function of any integer contains terms beyond the quadratic ones, such as in the factorization of the 20-bit integer 1028171. A detailed analysis of that factorization is shown in the supplemental material. The method is universal and extensible. We give the following analysis. Assume that the objective function of the integer factorization is as follows: S(x) = g(x) + Σ f(x_i, x_j, x_k), where g(x) and f(x_i, x_j, x_k) are polynomials composed of 2-local terms and 3-local terms, respectively. Then each 3-local term can be transformed, based on Eq. (27), into a quadratic form whose minimum over an auxiliary variable x_n reproduces f(x_i, x_j, x_k). Therefore, minimizing the objective function S(x) is equivalent to minimizing the resulting 2-local form. Similarly, we analyze 4-local terms in the function. Here f(x_i, x_j, x_k, x_l) is a polynomial composed of 4-local terms. We treat x_k and x_l as a whole and apply the same reduction, obtaining a 3-local expression with a new auxiliary variable x_n.
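The paper's own optimized substitution is not fully recoverable from the garbled source text. As an illustration of the principle that Table 3 verifies, the following Python sketch checks, over all assignments, the standard Rosenberg penalty reduction of a cubic term to a quadratic one (the kind of substitution used in ref. 30, whose coefficient "2" the text mentions); minimizing over the ancilla t reproduces the 3-local term exactly.

```python
import itertools

def rosenberg_penalty(x1, x2, t):
    """Standard penalty forcing t = x1*x2 at any minimum:
    P = x1*x2 - 2*(x1 + x2)*t + 3*t  (>= 0, and == 0 iff t == x1*x2)."""
    return x1 * x2 - 2 * (x1 + x2) * t + 3 * t

def reduced_cubic(x1, x2, x3, t):
    """2-local replacement of the cubic term x1*x2*x3 via t ~ x1*x2."""
    return t * x3 + rosenberg_penalty(x1, x2, t)

# Verify, in the spirit of Table 3, that minimizing over the ancilla t
# reproduces the cubic term for all 8 assignments of (x1, x2, x3).
for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    reduced_min = min(reduced_cubic(x1, x2, x3, t) for t in (0, 1))
    assert reduced_min == x1 * x2 * x3
print("Reduction verified for all assignments.")
```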
For the 3-local term x_n x_k x_l in Eq. (30), the dimensionality-reduction formula of Eq. (27) is applied again, introducing a further auxiliary variable x_m, to obtain a 2-local expression. In this way, the 4-local term is finally reduced to a 2-local form. Table S1 of the supplemental material shows the factorization of the integer 1028171. The qbsolv software environment is a decomposition solver that finds the minimum value of a QUBO problem by splitting it into pieces that are solved either via a D-Wave system or a classical tabu solver. For more information about the tool, please refer to http://github.com/dwavesystems/qbsolv. The simulations are based on the combination of the two optimizations, which can be divided into the following steps. • Step 1. Build the improved multiplication table of Jiang et al. 30 , divided into several columns. Its complexity is less than O(log₂(N)). • Step 2. Build the original model based on the optimization in ref. 32 . The complexity of this step is likewise polynomial in log₂(N). In the above simulations, Steps 1-4 are classical calculations, and their complexity is less than O((log₂(N))³). Step 5 performs a quantum annealing calculation. Its complexity increases as the integer to be factored becomes larger, and the overall complexity is less than O((log₂(N))²). This algorithm realizes a hybrid quantum-classical computing structure and exploits the complementary strengths of quantum and classical processing of the problem. Taking the factorization of 143 as an example, the final input passed to the solver is the list of h and J coefficients of Eq. (33). Results Due to the limited accuracy of error-correction and quantum-manipulation techniques, short decoherence times, susceptibility to various noises, etc., progress on universal quantum devices is slow, which limits the development and practical application of Shor's algorithm. The maximum factorization achieved with Shor's algorithm is currently the integer 85. However, D-Wave quantum computers have developed rapidly, and their number of qubits has been doubling every other year. Based on the quantum annealing method, we factor the integer 1028171. Although our method requires more qubits than Shor's algorithm to factor the same integer, Shor's algorithm is highly dependent on high-precision hardware. Actually, Science, Nature, and the National Academies of Sciences (NAS) agree that it will be years before code-cracking by a universal quantum computer is achieved. The existing works based on NMR utilize special properties of certain primes to perform proof-of-principle experiments. The maximum integer factored on an NMR platform is 291311. The integer factorization method based on the NMR platform is not applicable to all integers and is neither universal nor scalable. Actually, our method is general and can factor integers of up to 20 bits (1028171), making it superior to the results obtained by any other physical implementation, including general-purpose quantum platforms (the Hua-Wei quantum computing platform), and far beyond the theoretical value (up to 10-bit integers) that could be obtained by the latest IBM Q System One™ if it ran Shor's algorithm. Table 4 shows the parameter values of Jiang et al.'s method 30 for integer factorization (please note that all the data of ref. 30 are given via our simulations, just for reference). Table 5 shows the factorization results of our method for the integers 143, 59989, 376289, 1005973 and 1028171.
It can be seen from Table 5 that our method successfully factors the integers 1005973 and 1028171. Jiang et al.'s method can factor integers up to 376289, whereas our method achieves the factorization of the integer 1028171, making it superior to the results obtained by any other physical implementation. The reduction of the number of qubits lowers the hardware requirements of the quantum annealing machine and further boosts the accuracy of quantum annealing, which has great practical significance. Given the hardware restrictions of the quantum machine, our goal is to achieve the factorization of the larger-scale integer 1028171 with fewer qubits, which is the best integer factorization result obtained by a quantum algorithm. Tables 4 and 5 show that the optimized model can further reduce the weight of the qubits and the range of the coupler strengths involved in the problem model, which helps advance large-scale integer factorization on the D-Wave machine. Table 6 shows a comparison of the different algorithms when factoring the integer 7781 = 31 × 251. Note: the values of the local field coefficient h and coupler strength J are the absolute values of the parameter ranges. Table 6 takes the integer 7781, the maximum factored by Warren, R.H. 34 , as an example and compares the coefficients of the Ising model and the numbers of qubits. In an actual quantum annealing experiment, excessive coupling strength between the qubits reduces the possibility of reaching the ground state and ultimately reduces the success rate of integer factorization. It can be seen from Table 6 that the proposed method achieves the lowest local field coefficient h and coupling coefficient J, reduces the ranges of the coefficients of the Ising model, and uses far fewer qubits than Warren, R.H. 34 . The reduction of the parameter value ranges lowers the demand for qubit coupling strength, makes the physical qubit flips more uniform, effectively increases the possibility of quantum annealing reaching the global optimum, and improves the success rate of integer factorization. Given the insufficient precision and immature development of existing quantum devices, the proposed method can effectively reduce the hardware requirements and improve the success rate of deciphering RSA via quantum annealing. In addition, our method successfully factors all integers within 10000, whereas Warren, R.H. 34 traversed and factored all integers within 1000.
3,931.2
2020-04-28T00:00:00.000
[ "Computer Science", "Physics" ]
A Lloyd-model generalization: Conductance fluctuations in one-dimensional disordered systems We perform a detailed numerical study of the conductance $G$ through one-dimensional (1D) tight-binding wires with on-site disorder. The random configurations of the on-site energies $\epsilon$ of the tight-binding Hamiltonian are characterized by long-tailed distributions: For large $\epsilon$, $P(\epsilon)\sim 1/\epsilon^{1+\alpha}$ with $\alpha\in(0,2)$. Our model serves as a generalization of 1D Lloyd's model, which corresponds to $\alpha=1$. First, we verify that the ensemble average $\left\langle -\ln G\right\rangle$ is proportional to the length of the wire $L$ for all values of $\alpha$, providing the localization length $\xi$ from $\left\langle-\ln G\right\rangle=2L/\xi$. Then, we show that the probability distribution function $P(G)$ is fully determined by the exponent $\alpha$ and $\left\langle-\ln G\right\rangle$. In contrast to 1D wires with standard white-noise disorder, our wire model exhibits bimodal distributions of the conductance with peaks at $G=0$ and $1$. In addition, we show that $P(\ln G)$ is proportional to $G^\beta$, for $G\to 0$, with $\beta\le\alpha/2$, in agreement with previous studies. Of particular interest is the comparison between the one-dimensional (1D) Anderson model (1DAM) [44] and the 1D Lloyd's model, since the former represents the most prominent model of disordered wires [45]. Indeed, both models are described by the 1D tight-binding Hamiltonian: H = Σ_{n=1}^{L} [ ǫ_n |n⟩⟨n| − ν_{n,n+1} |n⟩⟨n+1| − ν_{n,n−1} |n⟩⟨n−1| ] ; (2) where L is the length of the wire given as the total number of sites n, ǫ_n are random on-site potentials, and ν_{n,m} are the hopping integrals between nearest neighbors (which are set to a constant value ν_{n,n±1} = ν). However, while for the standard 1DAM (with white-noise on-site disorder, ⟨ǫ_n ǫ_m⟩ = σ² δ_{nm} and ⟨ǫ_n⟩ = 0) the on-site potentials are characterized by a finite variance σ² = ⟨ǫ_n²⟩ (in most cases the corresponding probability distribution function P(ǫ) is chosen as a box or a Gaussian distribution), in the Lloyd's model the variance σ² of the random on-site energies ǫ_n diverges since they follow a Cauchy distribution. It is also known that the eigenstates Ψ of the infinite 1DAM are exponentially localized around a site position n₀ [45]: |Ψ(n)| ∼ exp(−|n − n₀|/ξ) , (3) where ξ is the eigenfunction localization length. Moreover, for weak disorder (σ² ≪ 1), the only relevant parameter describing the statistical properties of the transmission of the finite 1DAM is the ratio L/ξ [46], a fact known as single-parameter scaling. The above exponential localization of eigenfunctions makes the transmission or dimensionless conductance G exponentially small, i.e., [47] ⟨− ln G⟩ = 2L/ξ ; (4) thus, this relation can be used to obtain the localization length. Remarkably, it has been shown that Eq. (4) is also valid for the 1D Lloyd's model [41], implying single-parameter scaling; see also [38]. It is also relevant to mention that studies of transport quantities through 1D wires with Lévy-type disorder, different from the 1D Lloyd's model, have been reported. For example, wires with scatterers randomly spaced along the wire according to a Lévy-type distribution were studied in Refs. [3,4,48,49].
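As a minimal sketch of the model of Eq. (2), the following Python snippet, not from the paper, builds the tight-binding Hamiltonian with constant hopping ν = 1 and on-site energies drawn from the one-sided long-tailed density ρ₂(ǫ) = α/(1 + ǫ)^{1+α} quoted at the end of this article, via inverse-transform sampling; the choice of this particular density (and of the sampling route) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_onsite(L, alpha):
    """Draw L on-site energies from rho_2(e) = alpha / (1 + e)^(1 + alpha),
    e > 0, via inverse-transform sampling: P(e > x) = (1 + x)^(-alpha)."""
    u = rng.uniform(size=L)
    return u ** (-1.0 / alpha) - 1.0

def hamiltonian(L, alpha, nu=1.0):
    """Dense tight-binding Hamiltonian of Eq. (2) with constant hopping nu."""
    H = np.diag(levy_onsite(L, alpha))
    H -= nu * (np.eye(L, k=1) + np.eye(L, k=-1))
    return H

H = hamiltonian(L=200, alpha=0.5)
```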
Concerning the conductance of such wires, a prominent result reads that the corresponding probability distribution function P(G) is fully determined by the exponent α of the power-law decay of the Lévy-type distribution and the average (over disorder realizations) ⟨− ln G⟩ [48,49]; i.e., all other details of the disorder configuration are irrelevant. In this sense, P(G) shows universality. Moreover, this fact was already verified experimentally in microwave random waveguides [2] and tested numerically using the tight-binding model of Eq. (2) with ǫ_n = 0 and off-diagonal Lévy-type disorder [50] (i.e., with ν_{n,m} in Eq. (2) distributed according to a Lévy-type distribution). It is important to point out that 1D tight-binding wires with power-law distributed random on-site potentials, characterized by power laws different from α = 1 (which corresponds to the 1D Lloyd's model), have been scarcely studied; for a prominent exception see [41]. Thus, in this paper we undertake this task and study numerically the conductance through disordered wires defined as a generalization of the 1D Lloyd's model as follows. We shall study 1D wires described by the Hamiltonian of Eq. (2) having constant hopping integrals, ν_{n,n±1} = ν = 1, and random on-site potentials ǫ_n which follow a Lévy-type distribution with a long tail, as in Eq. (1) with 0 < α < 2. We name this setup the 1DAM with Lévy-type on-site disorder. We note that when α = 1 we recover the 1D Lloyd's model. Therefore, in the following section we shall show that (i) the conductance distribution P(G) is fully determined by the power-law exponent α and the ensemble average ⟨− ln G⟩; (ii) for α ≤ 1 and ⟨− ln G⟩ ∼ 1, bimodal distributions for P(G) with peaks at G ∼ 0 and G ∼ 1 are obtained, revealing the coexistence of insulating and ballistic regimes; and (iii) the probability distribution P(ln G) is proportional to G^β, for vanishing G, with β ≤ α/2. II. RESULTS AND DISCUSSION Since we are interested in the conductance statistics of the 1DAM with Lévy-type on-site disorder, we first have to define the scattering setup we shall use: We open the isolated samples described above by attaching two semi-infinite single-channel leads to the border sites at opposite sides of the 1D wires. Each lead is also described by a 1D semi-infinite tight-binding Hamiltonian. Using the Heidelberg approach [51] we can write the transmission amplitude through the disordered wires as t = −2i sin(k) Wᵀ (E − H_eff)⁻¹ W, where k = arccos(E/2) is the wave vector supported in the leads and H_eff is an effective non-hermitian Hamiltonian given by H_eff = H − e^{ik} WWᵀ. Here, W is a vector that specifies the positions of the leads attached to the wire. In our setup, all elements of W are equal to zero except W₁₁ and W_{L1}, which we set to unity (i.e., the leads are attached to the wire with a strength equal to the inter-site hopping amplitude: ν = 1). Also, we have fixed the energy at E = 0 in all our calculations, although the same conclusions are obtained for E ≠ 0. Then, within a scattering approach to electronic transport, we compute the dimensionless conductance as G = |t|² [52]. First, we present in Fig. 1(a) the ensemble average ⟨− ln G⟩ as a function of L for the 1DAM with Lévy-type disorder for several values of α. It is clear from this figure that ⟨− ln G⟩ ∝ L for all the values of α we consider here. Therefore, we can extract the localization length ξ by fitting the curves ⟨− ln G⟩ vs. L with Eq. (4); see dashed lines in Fig. 1(a).
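A hedged sketch of the conductance computation just described follows; it reuses the hamiltonian() helper from the previous sketch and writes the lead coupling as an L×2 matrix with one column per lead (a standard reading of the single-vector W notation above, adopted here as an assumption).

```python
import numpy as np

def conductance(H, E=0.0):
    """Dimensionless conductance G = |t|^2 with the transmission amplitude
    t = -2i sin(k) W^T (E - H_eff)^{-1} W of the Heidelberg approach,
    where H_eff = H - e^{ik} W W^T and k = arccos(E / 2)."""
    L = H.shape[0]
    k = np.arccos(E / 2.0)
    W = np.zeros((L, 2))
    W[0, 0] = W[-1, 1] = 1.0          # leads attached to the border sites
    H_eff = H - np.exp(1j * k) * (W @ W.T)
    Gr = np.linalg.inv(E * np.eye(L) - H_eff)
    t = -2j * np.sin(k) * (W[:, 0] @ Gr @ W[:, 1])
    return abs(t) ** 2
```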
This behavior should be contrasted with the case of 1D wires with off-diagonal Lévy-type disorder [53], which shows the dependence ⟨− ln G⟩ ∝ L^{1/2} when α = 1/2 at E = 0 [50]. Also, we have confirmed that the cumulants ⟨(− ln G)^k⟩ obey a linear relation with the wire length [41,54], i.e., ⟨(− ln G)^k⟩ = 2 c_k L , (5) where the coefficients c_k, with c₁ ≡ ξ⁻¹, characterize the Lyapunov exponent of a generic 1D tight-binding wire with on-site disorder. We have verified the above relation, Eq. (5), for k = 1, 2, and 3; as an example, in Fig. 1(b) we present the results for ⟨(− ln G)²⟩ as a function of L together with fits to Eq. (5), which can be used to extract the higher-order coefficient c₂. Now, in Fig. 2 we show different conductance distributions P(G) for the 1DAM with Lévy-type on-site disorder for fixed values of ⟨− ln G⟩; note that fixed ⟨− ln G⟩ means fixed ratio L/ξ. Several values of α are reported in each panel. We can observe that for fixed ⟨− ln G⟩, by increasing α the conductance distribution evolves towards the P(G) corresponding to the 1DAM with white-noise disorder, P_WN(G), as expected. The curves for P_WN(G) are included as a reference in all panels of Fig. 2 as red dashed lines [55]. In fact, P(G) already corresponds to P_WN(G) once α = 2. We recall that for 1D tight-binding wires with off-diagonal Lévy-type disorder P(G) is fully determined by the exponent α and the average ⟨− ln G⟩ [50]. It is therefore pertinent to ask whether this property also holds for diagonal Lévy-type disorder. Thus, in Fig. 3 we show P(G) for the 1DAM with Lévy-type on-site disorder for several values of α, where each panel corresponds to a fixed value of ⟨− ln G⟩. For each combination of ⟨− ln G⟩ and α we present two histograms (in red and black) corresponding to wires with on-site random potentials {ǫ_n} characterized by two different density distributions [57], but with the same exponent α of their corresponding power-law tails. We can see from Fig. 3 that for each value of α the histograms (in red and black) fall on top of each other, which is evidence that the conductance distribution P(G) for the 1DAM with Lévy-type on-site disorder is invariant once α and ⟨− ln G⟩ are fixed; i.e., P(G) displays universal statistics. Moreover, we want to emphasize the coexistence of insulating and ballistic regimes characterized, respectively, by the two prominent peaks of P(G) at G = 0 and G = 1. This behavior, which is more evident for ⟨− ln G⟩ ∼ 1 and α ≤ 1 (see Figs. 2 and 3), is not observed in 1D wires with white-noise disorder (see for example the red dashed curves in Fig. 2). This coexistence of opposite transport regimes has already been reported in systems with anomalously localized states: 1D wires with obstacles randomly spaced according to a Lévy-type density distribution [48,50] as well as in the so-called random-mass Dirac model [58]. Finally, we study the behavior of the tail of the distribution P(ln G). Using the same data as in Fig. 3, in Fig. 4 we plot P(ln G). As expected, since P(G) is determined by α and ⟨− ln G⟩, we can see that P(ln G) is invariant once those two quantities (α and ⟨− ln G⟩) are fixed (red and black histograms fall on top of each other). Moreover, from Fig. 4 we can deduce a power-law behavior: P(ln G) ∝ G^β (6) for G → 0 when α < 2. For α = 2, P(ln G) displays a lognormal tail (not shown here), as expected for 1D systems in the presence of Anderson localization.
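The localization-length extraction via Eq. (4) can be sketched as follows; this Python snippet, an illustration rather than the paper's code, reuses hamiltonian() and conductance() from the previous sketches and fits ⟨− ln G⟩ vs. L with a straight line whose slope is 2/ξ (the ensemble sizes below are illustrative).

```python
import numpy as np

def localization_length(alpha, lengths, n_realizations=200):
    """Estimate xi from <-ln G> = 2 L / xi via a linear fit in L."""
    avg_mlng = []
    for L in lengths:
        vals = [-np.log(conductance(hamiltonian(L, alpha)))
                for _ in range(n_realizations)]
        avg_mlng.append(np.mean(vals))
    slope = np.polyfit(lengths, avg_mlng, 1)[0]   # slope = 2 / xi
    return 2.0 / slope

xi = localization_length(alpha=1.0, lengths=[50, 100, 150, 200])
```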
Actually, the behavior of Eq. (6) was already anticipated in [41] as P(G) ∼ G^{−(2−λ)/2} for G → 0 with λ < α, which in our study translates into P(ln G) ∝ G^{λ/2} (since P(ln G) = G P(G)) with λ/2 ≡ β ≤ α/2. Indeed, we have validated the last inequality in Fig. 5, where we report the exponent β obtained from power-law fittings of the tails of the histograms of P(ln G). In addition, we have observed that the value of β depends on the particular value of ⟨− ln G⟩ characterizing the corresponding histogram of P(ln G). Also, from Fig. 5 we note that β ≈ α/2 as the value of ⟨− ln G⟩ decreases. III. CONCLUSIONS In this work we have studied the conductance G through a generalization of Lloyd's model in one dimension: We consider one-dimensional (1D) tight-binding wires with on-site disorder following a Lévy-type distribution, see Eq. (1), characterized by the exponent α of the power-law decay. We have verified that different cumulants of the variable ln G decrease linearly with the wire length L. In particular, we were able to extract the eigenfunction localization length ξ from ⟨− ln G⟩ = 2L/ξ. Then, we have shown evidence that the probability distribution function P(G) is invariant, i.e., fully determined, once α and ⟨− ln G⟩ are fixed, in agreement with other Lévy-disordered wire models [2,[48][49][50]. We have also reported the coexistence of insulating and ballistic regimes, evidenced by peaks in P(G) at G = 0 and G = 1; these peaks are most prominent, and of comparable height, for ⟨− ln G⟩ ∼ 1 and α ≤ 1. Additionally, we have shown that P(ln G) develops power-law tails for G → 0, characterized by the exponent β (also invariant for fixed α and ⟨− ln G⟩) which, in turn, is bounded from above by α/2. This upper bound on β implies that the smaller the value of α, the larger the probability of finding vanishing conductance values in our Lévy-disordered wires. The two long-tailed densities used to generate the on-site energies are ρ₁(ǫ), whose normalization involves the Euler gamma function Γ, and ρ₂(ǫ) = α/(1 + ǫ)^{1+α}.
3,031
2016-01-20T00:00:00.000
[ "Physics" ]
Petroleum Reservoir Engineering by Non-linear Singular Integral Equations For the determination of the properties of several reservoir materials, when oil reserves are moving through porous media, a new mathematical approach is proposed. Such a problem is very important for petroleum reservoir engineering. The above-mentioned problem is reduced to the solution of a non-linear singular integral equation, which is numerically evaluated by using the Singular Integral Operators Method (S.I.O.M.). Beyond the above, several properties are analyzed and investigated for the porous medium equation, defined as a Helmholtz differential equation. Finally, an application is given for a well test to be checked when a heterogeneous oil reservoir is moving in a porous medium. Hence, by using the S.I.O.M., the pressure response from the well test conducted in the above heterogeneous oil reservoir is numerically calculated and investigated. Introduction The study of the movement of oil reserves through porous media is a very important problem in petroleum reservoir engineering. Therefore, by applying well test analysis, a history-matching process takes place for the determination of the properties of the reservoir materials. The movement of oil reserves through porous media produces both single-phase and multiphase flows. Furthermore, if a well test is conducted, then the well is subjected to a change of the flow rate and the pressure response can be measured. For the determination of several petroleum reservoir parameters, such as permeability, numerical calculations should be used, as analytical solutions are in most cases not possible to derive. During the past years several variants of the Boundary Element Method were used for the solution of petroleum reservoir engineering problems. At the end of the eighties, Lafe and Cheng (1987) proposed a BEM for the solution of steady flows in heterogeneous solids. During the same period, Masukawa and Horne (1988) and Numbere and Tiab (1988) applied boundary elements to steady-state problems of streamline tracking. Furthermore, Kikani and Horne (1992) solved transient problems by using a Laplace-space boundary element model for the analysis of well tests in arbitrarily shaped reservoirs. Beyond the above, Koh and Tiab (1993) used boundary elements to describe the flow around tortuous horizontal wells, for homogeneous, or piecewise homogeneous, reservoirs. Sato and Horne (1993, 306-314; 1993, 315-322) applied perturbation boundary elements to the study of heterogeneous reservoirs. Also, El Harrouni, Quazar, Wrobel and Cheng (1996) proposed the use of a transformed form of Darcy's law combined with the dual reciprocity boundary element method to handle heterogeneity. On the other hand, Onyejekwe (1997) applied a Green element method to isothermal flows with second-order reactions. The same author (Onyejekwe O.O., 1998, 293-312; Onyejekwe O.O., 1998, 313-330) used a combined method of boundary elements together with finite elements for the study of heterogeneous reservoirs. Beyond the above, Taigbenu and Onyejekwe (1997) solved a transient one-dimensional transport equation by using a mixed Green element method.
During the last years several non-linear singular integral equation methods were used successfully by Ladopoulos (1991)-(2000, Springer Verlag) for the solution of applied problems of solid mechanics, elastodynamics, structural analysis, fluid mechanics and aerodynamics. Thus, in the present research, the non-linear singular integral equations will be used in order to determine the properties of the reservoir materials when oil reserves are moving through porous solids. By using, therefore, the Singular Integral Operators Method (S.I.O.M.), the pressure response from the well test conducted in a heterogeneous reservoir will be computed. Also, some properties of the porous medium equation, which is a Helmholtz differential equation, are proposed and investigated. Thus, basic properties of the fundamental solution will be analyzed and investigated. Finally, an application is given for a well test to be investigated when a heterogeneous oil reservoir is moving in a porous medium. This problem will be solved by using the Singular Integral Operators Method, and so the pressure response from the well test conducted in this heterogeneous oil reservoir will be computed. Hence, the non-linear singular integral equation methods, which were used with big success for the solution of several engineering problems of fluid mechanics, hydraulics, aerodynamics, solid mechanics, elastodynamics, and structural analysis, are further extended in the present study to the solution of oil reservoir engineering problems. In such a case the non-linear singular integral equations are used for the solution of one of the most important and interesting problems for petroleum engineers. Well Test Analysis for Oil Reservoir Oil well test analysis is an important history-matching process for the determination of the properties of reservoir materials. Thus, during the movement of an oil reservoir through porous media, both single-phase and multiphase flow occur. Also, when a petroleum well test is conducted, the well is subjected to a change of its flow rate and the resulting pressure response can be measured. Moreover, this pressure is compared to analytical or numerical models in order to estimate reservoir parameters such as permeability. In general, an oil reservoir well test in a single-phase reservoir is calculated by using the porous medium (diffusivity) equation: ∇·[(λ/ξ) ∇p] = φ c_t ∂p/∂t (2.1) in which λ denotes the permeability, φ the porosity, ξ the viscosity, p the pressure of the reservoir, t the time and c_t the compressibility. Beyond the above, consider u*(x,y), the fundamental solution at any point y due to the source point x. Then the fundamental solution is given by the following equation: (2.5a) which may be further written as: (2.5b) Thus, eq. (2.5) is the Helmholtz potential equation governing the fundamental solution. Consider further the fundamental solution u* chosen so as to enforce the Helmholtz equation in terms of the function u, in a weak form. Then the weak form of the Helmholtz equation is written as follows: (2.6) Also, by applying the divergence theorem once in (2.6), one obtains a symmetric weak form: (2.7) in which n denotes the outward normal vector of the surface S.
Therefore, in the symmetric weak form the function u and the fundamental solution u* are only required to be first-order differentiable. By applying the divergence theorem twice in (2.6) we have: (2.8) Hence, (2.8) is the asymmetric weak form, and the fundamental solution u* is required to be second-order differentiable. On the other hand, u is not required to be differentiable in the domain Ω. By combining eqs (2.5) and (2.8), one obtains: (2.9) which can be further written as: (2.10) where q(y) denotes the potential gradient along the outward normal direction of the boundary surface: (2.11) and the kernel function: (2.12) By differentiating (2.10) with respect to x_k, we obtain the integral equation for the potential gradients u,_k(x) in the following form: (2.13) Fundamental Solution's Basic Properties Beyond the above, we rewrite the weak form of (2.5) governing the fundamental solution as follows: (3.1) where c denotes a constant, considered as the test function. Also, eq. (3.1) can be written as: (3.2) Furthermore, (3.2) takes the form: (3.3) By considering further an arbitrary function u(x) in Ω as the test function, the weak form of (2.5) is written in a corresponding manner. For the understanding of the physical meaning of (3.7), we rewrite (3.3) and (3.6) as: (3.8) and (3.9). From (3.8) it follows that only half of the source function at point x is applied to the domain Ω when the point x approaches a smooth boundary. Also, consider another weak form of eqn (2.5) by supposing the vector functions to be the gradients of an arbitrary function u(y) in Ω, chosen in such a way that they have constant values: u,_k(y) = const., for k = 1, 2, 3 (3.10) Then the weak form of eqn (2.5) is written as: (3.11) By applying the divergence theorem, eqn (3.11) takes the form: (3.12) Furthermore, the following property holds: (3.13) By adding eqs (3.12) and (3.13), one obtains: (3.14) which finally takes the form: (3.15) Analysis by Non-linear Singular Integral Equations Furthermore, the porous medium equation (2.1) will be written in another form (eqn (4.1)), so that a singular integral equation representation becomes applicable. By applying the Green Element Method, eqn (4.1) reduces to the solution of a non-linear singular integral equation (4.2), which involves a Cauchy Principal Value (CPV) integral. In order for the non-linear singular integral equation (4.2) to be numerically evaluated, the Singular Integral Operators Method (S.I.O.M.) will be used. Thus, the non-linear singular integral equation (4.2) is approximated by a discretized formula, where M denotes the total number of elements. Beyond the above, we introduce the following functions describing the pressure at any point in an element, in terms of the nodal pressures. Well Testing Applications in Heterogeneous Reservoirs The previously mentioned theory will be applied to the determination of a well test, to be checked in a heterogeneous reservoir with a permeability varying from 10 mD to 300 mD (1 Darcy ≈ 10⁻¹² m² = 1 (μm)²). Hence, by using the Singular Integral Operators Method (S.I.O.M.) as described in the previous paragraphs, the pressure response from the well test conducted in the above heterogeneous reservoir has been computed. First of all, the pressures were computed as a function of time; Table 1 shows the pressure response with respect to time. Beyond the above, the pressure derivatives with respect to time were computed, as shown in Table 2. Such derivatives are very important for well test interpretation, as they exhibit distinct shapes that characterize certain reservoir features.
The computational results for the pressures and the pressure derivatives are compared to the analytical solutions of the same well test problem for the case in which the reservoir is homogeneous with permeability equal to 50 mD. The analytical results are shown in Table 1 for the pressures and in Table 2 for the pressure derivatives, correspondingly. From the above Tables it can be seen that the difference between the computational results and the analytical solutions is very small, for both the pressures and the pressure derivatives. This small difference can be explained by the diffusive nature of the pressure transport mechanism. Finally, the same results are shown, correspondingly, in Figures 1 and 2, and in three-dimensional form in Figures 1a and 2a. Conclusions In the present investigation a mathematical model has been presented as an attempt to determine the properties of reservoir materials. The study of the movement of oil reserves through porous media is very important for petroleum reservoir engineers. The above-mentioned problem was reduced to the solution of a non-linear singular integral equation, which was numerically evaluated by using the Singular Integral Operators Method (S.I.O.M.). Furthermore, several important properties of the porous medium equation, which is a Helmholtz differential equation, were analyzed and investigated. Thus, the fundamental solution of the porous medium equation was proposed and studied, and some of its basic properties were further investigated. These are very important so that the behavior of the non-linear singular integral equation can be well understood. An application was finally given for a well test to be checked when a heterogeneous oil reservoir is moving in a porous solid. The above problem was solved by using the Singular Integral Operators Method, and thus the pressure response from the well test conducted in the above heterogeneous oil reservoir was computed. Both the pressures and the pressure derivatives were computed, and these values were compared to the analytical solutions of the same well test problem for a homogeneous reservoir with a mean permeability. Over the last years, non-linear singular integral equation methods have been used with big success for the solution of several important engineering problems of structural analysis, elastodynamics, hydraulics, fluid mechanics and aerodynamics. For the numerical evaluation of the non-linear singular integral equations of the above problems, several aspects of the Singular Integral Operators Method (S.I.O.M.) were used. Thus, in the present research such methods were extended to the solution of oil reserves problems in petroleum reservoir engineering. Figure 1. Pressure Response for Well Test in Heterogeneous Reservoir
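The homogeneous 50 mD analytical benchmark referenced above is typically the classical line-source (Theis) solution; since the paper does not reproduce its formula, the following Python sketch is a hedged illustration of that standard benchmark, not of the paper's S.I.O.M. computation, and all parameter values are illustrative rather than the paper's data.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1

def line_source_drawdown(r, t, q, k, h, mu, phi, ct):
    """Pressure drawdown of the classical line-source (Theis) solution,
    the usual analytical benchmark for a homogeneous reservoir:
        dp = (q * mu / (4 * pi * k * h)) * E1(phi * mu * ct * r**2 / (4 * k * t)).
    All quantities in consistent SI units."""
    u = phi * mu * ct * r ** 2 / (4.0 * k * t)
    return q * mu / (4.0 * np.pi * k * h) * exp1(u)

# Illustrative values (not the paper's data): k = 50 mD, r = wellbore radius
k = 50e-15          # 50 mD expressed in m^2
dp = line_source_drawdown(r=0.1, t=3600.0, q=1e-3, k=k, h=10.0,
                          mu=1e-3, phi=0.2, ct=1e-9)
```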
2,746.8
2011-12-29T00:00:00.000
[ "Mathematics", "Engineering" ]
A pan-cancer analysis on the carcinogenic effect of human adenomatous polyposis coli Adenomatous polyposis coli (APC) is the most commonly mutated gene in colon cancer and can cause familial adenomatous polyposis (FAP). Hypermethylation of the APC promoter can also promote the development of breast cancer, indicating that APC is not limited to association with colorectal neoplasms. However, no pan-cancer analysis has been conducted. We studied the location and structure of APC and the expression and potential role of APC in a variety of tumors by using The Cancer Genome Atlas and Gene Expression Omnibus databases and online bioinformatics analysis tools. APC is located at 5q22.2, and its protein structure is conserved among H. sapiens, M. musculus, and C. elaphus hippelaphus. The APC sequence identity between Homo sapiens and Mus musculus reaches 90.1%. Moreover, APC is expressed with high specificity in brain tissues and bipolar cells but has low expression in most cancers. APC is expressed mainly on the cell membrane and is not detected in plasma by mass spectrometry. APC is expressed at low levels in most tumor tissues, and there is a significant correlation between the expression level of APC and the main pathological stages as well as the survival and prognosis of tumor patients. In most tumors, the APC gene shows mutation and methylation, together with an enhanced phosphorylation level at some phosphorylation sites, such as T1438 and S2260. The expression level of APC is also associated with the levels of CD8+ T-cell infiltration, Treg infiltration, and cancer-associated fibroblast infiltration. We conducted a gene correlation study, but the findings seemed to contradict the previous analysis results of low expression of the APC gene in most cancers. Our research provides a comparatively comprehensive understanding of the carcinogenic effects of APC in various cancers, which will help anti-cancer research. Next, we analyzed overall survival (OS), distant metastasis-free survival (DMFS), relapse-free survival (RFS), post-progression survival (PPS), first progression (FP), disease-specific survival (DSS), and progression-free survival (PFS) across the GEO datasets with the Kaplan-Meier plotter. We set "auto select best cutoff" to separate breast, ovarian, lung, gastric, and liver cancers into two groups, and Kaplan-Meier survival plots were generated. Genetic alteration analysis We referred to previous research methods [15] to check the genetic alteration characteristics of APC and the alteration frequency across all TCGA tumors, mutation types, and copy number changes. We also obtained Kaplan-Meier plots for survival prognosis analysis. Analysis of the correlation between APC and TMB/MSI We examined whether APC expression is correlated with tumor mutational burden (TMB) or microsatellite instability (MSI) in cancers by logging into the website "http://sangerbox.com/Tool" [16] with the query "APC". The P-value and partial correlation value obtained with Spearman's rank correlation test were recorded. Immune infiltration analysis The "immune gene" module of TIMER2 was applied to analyze the correlation between the immune infiltration level and the APC gene expression level. We then obtained a visual heat map containing the purity-adjusted Spearman's partial correlation values and P-values. A scatter plot was generated by clicking on a cell of the heat map to display the relationship between the estimated infiltration level and the gene expression.
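The TMB/MSI analysis above is a Spearman rank correlation; a minimal Python sketch of that statistic follows, using hypothetical per-sample arrays (the numbers are illustrative only, not the paper's data).

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-sample values for one tumor type (illustrative only):
# APC expression (e.g., log2 TPM) and tumor mutational burden (TMB).
apc_expr = np.array([5.2, 4.8, 6.1, 3.9, 5.5, 4.2, 6.3, 5.0])
tmb      = np.array([12., 15., 8., 21., 10., 18., 7., 13.])

rho, p_value = spearmanr(apc_expr, tmb)
print(f"Spearman rho = {rho:.3f}, P = {p_value:.3g}")
```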
APC-targeted gene correlation analysis We logged into the STRING website, selected APC (adenomatous polyposis coli protein), and set the following main parameters in the "Settings" module: network type (full STRING network), meaning of network edges (evidence), active interaction sources (Experiments), minimum required interaction score (low confidence (0.150)), max number of interactors to show (no more than 20 interactors) and network display mode (interactive svg) to obtain the APC-binding proteins. By applying GEPIA2, we obtained the 100 genes most strongly correlated with APC and selected the 6 genes with the strongest correlation (QKI, CLASP2, RP11-566E18.1, FAM168A, TMOD2 and KIF1B) from these 100 genes. We then assessed the potential correlation between APC and the selected genes (QKI, CLASP2, RP11-566E18.1, FAM168A, TMOD2, and KIF1B) by applying the "correlation analysis" module of GEPIA2. Moreover, we obtained the heat map data of the selected genes (QKI, CLASP2, FAM168A, TMOD2 and KIF1B) by using the "Gene_Corr" module of TIMER2. Gene ontology analysis The genome of human APC (NM_000038.6) is on chromosome 5 (q22.2) (Fig 1A). As shown in Fig 1B, the evolutionary process of the APC protein is displayed. The sequence similarity of APC between human and mouse is 90.1% (Fig 1C). The APC protein structure is conserved among Homo sapiens, Mus musculus, and C. elaphus hippelaphus, and its domain architecture includes an EB1_binding (pfam05937) domain (Fig 1D). Gene expression analysis Gene expression analysis in tissues and cells. As shown in Fig 2A, the expression of APC in tissues is relatively high in the brain. However, APC can be expressed in all tissues, with low RNA tissue specificity, and is expressed in nearly all cancer cells (Fig 2B). As illustrated in Fig 2C, all cancers displayed moderate to strong cytoplasmic or membranous APC positivity in varying fractions of cells, although lymphomas were mainly APC negative. Based on the HPA datasets, the expression of APC in cells is relatively high in bipolar cells. Similarly, APC can be detected in all cancer cells but with low RNA cell-type specificity (Fig 2D). We determined the expression level of APC in various blood cells and human brain regional tissues and examined the location of APC in cells. Fig 2E illustrates the low regional specificity in the human brain based on the HPA/GTEx/FANTOM5 datasets. A low RNA immune blood cell type specificity is illustrated in Fig 2F. The APC protein is located mainly on the plasma membrane but is also present in the nucleoplasm and the Golgi apparatus (Fig 2G). APC protein was not identified in plasma by mass spectrometry, which may be evidence that its physiological activity occurs mainly within cells (Fig 2H). We next examined the difference of APC expression in adrenocortical carcinoma (ACC), lymphoid neoplasm diffuse large B-cell lymphoma (DLBC), head and neck squamous cell carcinoma (HNSC), acute myeloid leukemia (LAML), brain lower grade glioma (LGG), ovarian serous cystadenocarcinoma (OV), sarcoma (SARC), skin cutaneous melanoma (SKCM), testicular germ cell tumors (TGCT), thymoma (THYM) and uterine carcinosarcoma (UCS).
No significant expression difference of APC in these tumors was found (Fig 3B), and the expression of APC total protein was not significantly different between normal tissues and the primary tissues of all detected tumors (Fig 3C). Correlation between APC expression and cancer pathological stage Since genes often have different expression levels in different pathological stages, we used the GEPIA2 online tool to analyze the correlation between APC gene expression and the pathological stages of cancer. The results show that the expression level of APC correlated with the progression of kidney renal cell carcinoma, testicular germ cell tumor, thyroid carcinoma, and lung squamous cell carcinoma (Fig 4A, P < 0.05), but not the others (Fig 4B). Survival analysis Analyses of the outcome of events over time are common in medical research because they provide information not only about whether the event occurred, but also about when it occurred. To deal with such outcomes, and to account for events unobserved during follow-up, survival analysis methods are used. Among them, the Kaplan-Meier estimator can be used to create a plot of the observed survival curve, and the log-rank test can be used to compare the curves of groups; a sketch of this workflow follows this paragraph. Fig 5A illustrates that low APC expression correlated with poor OS prognosis for pheochromocytoma and paraganglioma (P = 9e-06), whereas APC expression was highly correlated with disease-free survival (DFS) for BLCA (P = 0.0016) in the TCGA project. Also, downregulation of APC was correlated with poor DFS prognosis for TGCT (P = 0.018). High APC (203525_s_at) expression was highly correlated with poor OS (P = 0.049), DMFS (P = 0.0048), RFS (P = 8.8e-06) and PPS (P = 0.00086) prognosis for BRCA (Fig 5B). In contrast, a low expression level of APC (203525_s_at) was highly correlated with poor OS (P = 8.2e-16), FP (P = 9.2e-07) and PPS (P = 0.00023) for LUAD, with poor RFS (P = 5.9e-08) for ovarian cancer, and with poor FP (P = 1.3e-06) and PPS (P = 1.6e-08) for gastric cancer. Moreover, a high APC (203525_s_at) expression level was associated with poor OS (P = 0.00019) for gastric cancer and poor OS (P = 0.00053) and PPS (P = 0.00038) for lung cancer. However, we found no correlation between the expression of APC (324) and OS (P = 0.11), PFS (P = 0.34), RFS (P = 0.064), or DSS (P = 0.08) for liver cancer. Genetic alteration analysis We analyzed the mutations of 396 patients with colorectal tumors; 66.67% of them had mutations in the APC gene (Fig 6A). Copy number deletion of APC was present in all thyroid cases with genetic alteration (Fig 6A). The type, location, number of cases, and mutation frequency of APC gene alterations are presented in Fig 6B. Truncating mutation was the main type of APC genetic alteration, and R1450 truncating changes were present in 1 case of cervical squamous cell carcinoma, 8 cases of rectal adenocarcinoma, 20 cases of COAD, 6 cases of mucinous adenocarcinoma of the colon and rectum, 1 case of tubular stomach adenocarcinoma, 1 case of diffuse-type stomach adenocarcinoma and 7 cases of uterine endometrioid carcinoma (Fig 6B), which is evidence of APC protein truncation. Moreover, as shown in Fig 6B, a somatic mutation frequency of 7.3% was revealed. The R1450 site is also displayed in the 3D structure of the APC protein (Fig 6B).
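A minimal Python sketch of the Kaplan-Meier / log-rank workflow described above, using the lifelines library on hypothetical follow-up data (group labels, sample sizes, and time scales are all illustrative assumptions, not the paper's data):

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Hypothetical follow-up data (months) for two APC-expression groups.
t_high = rng.exponential(60, size=80)     # survival times, high-APC group
t_low  = rng.exponential(40, size=80)     # survival times, low-APC group
e_high = rng.integers(0, 2, size=80)      # 1 = event observed, 0 = censored
e_low  = rng.integers(0, 2, size=80)

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="APC high")
ax = kmf.plot_survival_function()
kmf.fit(t_low, event_observed=e_low, label="APC low")
kmf.plot_survival_function(ax=ax)

res = logrank_test(t_high, t_low,
                   event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank P = {res.p_value:.3g}")
```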
DNA methylation and protein phosphorylation analysis As shown in Fig 7A, for the READ case we observed that APC DNA methylation was significantly negatively correlated with gene expression at multiple probes in the non-promoter region, but the opposite result was obtained in the SKCM case. As shown in Fig 7B, by using the CPTAC dataset, the phosphorylation sites and the numbers of normal and primary tumor tissues were obtained, and the significant differences (P-values) for each cancer were highlighted. We also used the PhosphoNET database to analyze the CPTAC-identified phosphorylation of APC and found that APC phosphorylation of S780, T1438, S2260 and S2270 in the cell cycle, APC phosphorylation of S3674 in activity-dependent processes for complex brain functions, and APC phosphorylation of S2772 in the carcinogenic effects of rapamycin were experimentally supported by several publications [17][18][19] (S1 Table). The above results indicate that further in vivo and in vitro assays could be performed to explore the latent role of S780, T1438, S2260, S2270, S3674 and S2772 phosphorylation in tumorigenesis and biological activities. Immune infiltration analysis As an indispensable part of the tumor microenvironment, tumor-infiltrating immune cells can promote or inhibit tumor growth under the drive of certain genes [20], and the removal of Treg cells can induce and enhance anti-tumor immune responses [21]. In addition, in various types of human cancers, increases in the number of Tregs and tumor-infiltrating lymphocytes, and especially a decrease in the ratio of CD8+ T-cells to Tregs, are associated with poor prognosis [22]. Cancer-associated fibroblasts in the tumor microenvironment play a key role in tumor progression and may create an immune barrier to the anti-tumor immune response mediated by CD8+ T-cells [23]. Cancer-associated fibroblasts directly block the function of cytotoxic lymphocytes, thereby inhibiting the killing of tumor cells [24]. One of the most important physiological functions of cancer-associated fibroblasts is driving tumor-infiltrating immune cells to be recruited and to exercise immune functions in the surrounding immunosuppressive microenvironment [25]. In this study, we investigated the relationship between the estimated quantity of immune infiltrates and the expression level of APC in various tumors of TCGA and displayed them in heat maps and scatter plots. According to all or most algorithms, low APC expression enhanced the immune infiltration capacity of CD8+ T-cells in ACC, UCEC, pancreatic adenocarcinoma, and uveal melanoma (Fig 8A). Similarly, we found that low APC expression in pheochromocytoma and paraganglioma can enhance the immune infiltration capacity of cancer-associated fibroblasts (Fig 8A). We also noted a positive correlation for CD8+ T-cells in LIHC and TGCT, and a positive correlation for cancer-associated fibroblasts in COAD, HNSC, HNSC [HPV (human papillomavirus)-negative], MESO and STAD (Fig 8A). The scatter-plot data of the cancers with the highest cor values are illustrated in Fig 8B. The above data indicate that APC is a tumor suppressor gene for many cancers, and its overexpression helps inhibit tumor progression. However, the mechanisms of APC in these tumors remain unclear. Therefore, further study of the APC-targeted binding proteins and APC-related genes is needed.
Discussion APC participates in the occurrence and development of tumors by regulating cell proliferation, invasion, angiogenesis, and cell-cycle processes [26,27]. To clarify the mechanism of APC in cancers from clinical data, we performed, for the first time that we know of, a pan-cancer analysis of APC by using the TCGA, CPTAC, and GEO databases. First, our phylogenetic tree, human-mouse gene similarity, and homologous gene analyses revealed conservation of the APC protein in humans and mice; this finding indicates that the normal physiological effects of APC may operate through similar mechanisms in the two species, and it may be feasible to use mice for further APC-related human disease research. Potential links between APC and clinical diseases, especially tumors, have been described [1][2][3]. Whether APC can promote the occurrence and development of various tumors through common molecular mechanisms is unknown, however. Therefore, we comprehensively examined the APC gene in various tumors from the aspects of gene expression, survival analysis, genetic alteration, DNA methylation, protein phosphorylation, and APC target gene correlation. Comprehensive analysis of the HPA, GTEx, and FANTOM5 datasets revealed that APC expression is increased in human brain tissue, whereas there is no increased expression in other tissues. At the same time, analysis based on a consensus human brain dataset showed that APC gene expression within human brain tissue has low regional specificity. In addition, analysis of the TCGA database showed that the APC gene has low cancer specificity and cell-type specificity, but it is enhanced in neuronal cells, especially bipolar cells. Therefore, we suspect that the APC gene in human brain tissue plays a decisive role in regulating the occurrence and development of tumors, and drugs that target the APC gene in brain tissue may be useful in tumor intervention. Of course, the expression of APC in cancer is not equivalent to playing a pathophysiologic role in cancer, and more clinical data are needed to clarify the activity of APC in brain cancer. Our results also revealed that APC is present mainly in the plasma membrane of cells, where it plays an important role in cell activities. This observation suggests that cytoplasmic membrane proteomics could be used to help define the role and mechanism of APC in disease. Mass spectrometry did not detect the APC protein in plasma; thus, it does not have secretory properties, which is in line with the characteristics of a large-molecule protein. Compared with its expression in normal tissues, APC has low expression in most tumors. However, APC gene and protein expression in the TCGA and CPTAC data are not consistent; this difference could be due to differences in data collection and analysis in the databases or to a lack of APC mRNA translation. Further analysis of our data found that the correlation of APC expression with the pathological stages of most cancers is low, a finding that suggests that APC has persistently low expression throughout cancer progression. This observation prompts the consideration that promoting APC overexpression could be a means of inhibiting tumor progression. Additionally, for tumors with different APC gene expression in the various pathological stages, gene-targeted therapy might be implemented early in the course of the disease or individualized according to the pathological stage. In all, our results provide reference value for clinical gene therapy.
We also studied the relationship between the expression level of APC and overall survival, disease-free survival, distant metastasis-free survival, first progression, relapse-free survival, and disease-specific survival by using the GEPIA2 tool and the Kaplan-Meier plotter method [28]. The results showed that the survival prognostic analyses of the APC gene yield completely different conclusions for different tumors. Thus, further collection and analysis of clinical data are indicated. The overall results show that there is a correlation between the expression level of APC and the survival markers. However, the present evidence based on clinical results cannot establish the effect of APC activity in different cancers. Therefore, a larger sample size is needed to verify the effect of APC in the progression of various tumors. In short, the change in survival is related to only a part of the tumor cases in our research, suggesting that the effect of the APC gene on the survival and prognosis of patients is tumor-type dependent; this can provide a reference for basic and clinical research. Gene mutation is related to DNA replication, DNA damage repair, cancer, and aging [1][2][3][29][30][31]. Gene mutation is also one of the most noteworthy factors in the process of biological evolution [32], and APC gene mutations play an important role in many diseases, especially tumors [1][2][3]. In this study, we first found that APC mutations occur mainly in colorectal cancer, which is consistent with previous experimental and clinical data [3,33]. Among the various types of APC mutations, missense mutations account for the majority, but the single most frequent alteration is the truncating mutation at R1450. This finding has reference value for studying APC mutations. APC plays a central role in predicting overall survival; there may be 0, 1, or 2 truncating mutations in APC, and each mutation has a significantly different effect on survival [34]. To clarify the relationship between APC mutations and survival prognosis, we once again analyzed the GEO database, using the Kaplan-Meier plotter method. The results showed that APC mutations have no correlation with the survival prognosis of colorectal adenocarcinoma, but they are correlated with the survival prognosis of uterine corpus endometrial carcinoma. Thus, APC mutations appear to have variable effects on the occurrence of tumors and on survival. According to reports, APC methylation regulates the occurrence and development of various tumors [35][36][37]. Recent discoveries provide convincing evidence that the methylation pattern is profoundly changed in cancer cells, helping to regulate tumor phenotype through changes in expression [38]. For rectal adenocarcinoma, we observed that APC DNA methylation was negatively correlated with gene expression at multiple probes in the non-promoter region, but the opposite result was obtained in the skin cutaneous melanoma case. Thus, additional exploration of the latent effect of APC DNA methylation in tumorigenesis seems needed. Some studies have reported that APC activation promotes the rapid degradation of CTNNB1 and participates in Wnt signaling as a negative regulator, and its active state also plays an important role in cell migration induced by hepatocyte growth factor [39,40]. The function of APC is closely related to its phosphorylation state.
We found that APC phosphorylation of S780, T1438, S2260, and S2270 in the cell cycle, APC phosphorylation of S3674 in activity-dependent processes for complex brain functions, and APC phosphorylation of S2772 in the carcinogenic effects of rapamycin were supported by several publications [17][18][19]. We also found that APC phosphorylation at T1438 and S2449 has a higher differential expression ratio in a variety of tumors, suggesting that the function of APC is correlated with APC phosphorylation of T1438 and S2449. The phosphorylation levels of T1438 and S2449 of APC are differentially expressed in opposite directions in various tumor cells. Additional experiments will evidently be required to clarify the potential role of phosphorylation of APC at S780, T1438, S2260, S2270, S3674, S2772 and S2449 in tumorigenesis, development, and biological activities. Many studies have documented a link between the immune infiltration of several human cancers and the prognosis and response to treatment [41,42]. Our results suggest that APC expression is correlated with immune infiltration and participates in tumor regulation, but it has different regulatory effects among tumors. This observation provides new ideas for tumor immunotherapy, which could jointly regulate the expression of APC and immune infiltration. Studies on the APC-targeted binding proteins and the correlation between APC and multiple genes have shown that genes highly related to APC are positively correlated with the occurrence of a variety of tumors. APC, as a tumor suppressor gene, is expressed at low levels among tumors, and we believe that the six genes that are highly related to APC in our study promote the occurrence of multiple tumors. This notion is consistent with the results of previous studies [43][44][45][46][47]. In summary, our first pan-cancer analysis of APC shows increased APC expression in the brain and on cell membranes, and shows that APC expression is statistically correlated with clinical prognosis, cancer pathological staging, DNA methylation, protein phosphorylation, immune cell infiltration, and genetic alteration in various tumors, which is helpful for understanding the role of APC in tumorigenesis based on clinical tumor samples combined with clinical parameters. Supporting information: S1 Table.
4,858
2022-03-18T00:00:00.000
[ "Medicine", "Biology" ]
Volatility Forecast in Crises and Expansions

We build a discrete-time non-linear model for volatility forecasting purposes. The model belongs to the class of threshold-autoregressive models, where changes in regimes are governed by past returns. Its ability to capture changes in volatility regimes, combined with the use of more accurate volatility measures, allows it to outperform benchmark models such as the linear heterogeneous autoregressive model and GARCH specifications. Finally, we show how to derive a closed-form expression for multiple-step-ahead forecasting by exploiting information about the conditional distribution of returns.

Introduction

Volatility plays an important role in financial econometrics. Measuring, modelling and forecasting financial volatility are essential for risk management, portfolio allocation and option pricing. Although returns remain unpredictable, their second moment can be forecasted quite accurately, which has generated a large body of research over the last thirty years, motivated by Engle's seminal paper [1]. The existing literature aiming to model and forecast financial volatility can be divided into two distinct groups: parametric and non-parametric models. The former assumes a specific functional form for volatility and models it either as a function of observable variables, as in ARCH or GARCH models [1][2][3], or as a known function of latent variables, resulting in stochastic volatility models [4,5].

The second class defines financial volatility without imposing any parametric assumptions, and its members are hence called realized volatility models [6]. The main idea of the latter models is to construct consistent estimators of the unobserved integrated volatility by summing the squared returns over very short periods within a fixed time span, typically one day. The availability of high-frequency data allows high-precision estimation of continuous-time pure diffusion processes given the large datasets of discrete observations. As a result, volatility essentially becomes observable and, in the absence of microstructure noise, can be consistently estimated by a realized volatility measure. This approach has two main benefits compared with GARCH and stochastic volatility models. First, researchers can treat volatility as observable and model it by applying time series techniques, for example ARFIMA (autoregressive fractionally integrated moving average) models [6]. Second, realized volatility models significantly outperform models based on lower frequency (daily) data in terms of forecasting power; see, e.g., [7][8][9]. Indeed, the latter models adapt to new information and update the volatility forecast at a slower daily frequency, while the former can incorporate changes in volatility faster due to the more frequent arrival of intraday information.
Although the literature proposes many different approaches for modelling volatility, there is still no unique model that explains all of the stylized facts simultaneously. In particular, there is no consensus on how to model long memory, since there are at least four approaches: the non-linear model with regime switching [9]; the linear fractionally-integrated process [10]; the mixture of heterogeneous-run information arrivals [11]; and the aggregation of short memory stationary series [12]. Numerous methods have been developed, since it is hard to distinguish between unit root and structural break data generating processes [13,14]. [15] show that structural break models can outperform the long memory model if the timing and sizes of future breaks are known. Although few academics and practitioners accurately predicted the timing of the recent financial crises and the European sovereign debt turmoil, a model with structural breaks seems more economically plausible than a fractionally-integrated long memory model. In addition, [15] recommend relying on economic intuition to choose between smooth transition autoregressive (STAR) models and abrupt structural break models.

In this paper, we extend the heterogeneous autoregressive model proposed by [16] to take into account different regimes of volatility. The resulting model is called a non-linear threshold autoregression model, where regimes are governed by an exogenous trigger variable. This model provides a better fit of the robust measure of realized volatility both in-sample and in out-of-sample forecasting. In addition to an improved performance in particular samples, the non-linear model also produces superior multiple-step-ahead forecasts in population according to the Giacomini and White test [17]. We also show that the superior performance of the non-linear model is achieved during periods of high volatility. This is especially important during times of financial crisis, when investors are in particular need of more accurate forecasts. Finally, we derive a closed form expression for the multiple-step-ahead forecast, where past returns govern changes in volatility regimes.

Our paper finds that changes in the volatility regimes occur when the return falls below a −1% threshold, which is in line with previous findings [9,18]. However, our model differs in terms of the estimation procedure and the most recent dataset, which includes the financial crises. In fact, the superior performance of the non-linear model becomes particularly significant during periods of elevated volatility, such as the recent financial crises. More importantly, we derive a closed-form expression for multiple-step-ahead forecasts, whereas other authors either focus on one-step-ahead forecasts [9] or use conditional simulations [18].

The remainder of this paper is organized as follows. The non-linear threshold model for realized volatility is defined in Section 2. Section 3 describes the preliminary data analysis and estimation results for the S&P 500 index. Section 4 describes one- and multiple-step-ahead forecasts. Finally, Section 5 concludes and provides directions for future work.

Model

In this section, we introduce two building blocks: the heterogeneous autoregressive model and the regime switching model. Then, we describe the econometric framework designed for the estimation and inference of our threshold autoregressive model. Finally, we discuss the forecasting of our model and how to derive a closed form expression for its multiple-days-ahead forecasts.
HAR-RV Model with Regime Switching

In this section, we discuss extensions of the heterogeneous autoregressive model (HAR) of realized volatility proposed in [16]. First, let us assume that returns follow a continuous diffusion process:

$dp(t) = \mu(t)\,dt + \sigma(t)\,dW(t)$, (1)

where $p(t)$ is the logarithm of the instantaneous price, $\mu(t)$ is a continuous mean process with finite variation, $\sigma(t)$ is the instantaneous volatility and $W(t)$ is standard Brownian motion. Given the process in (1), the integrated variance corresponding to day t is defined as:

$IV^d_t = \int_{t-1}^{t} \sigma^2(s)\,ds$. (2)

Several authors show that, as the sampling frequency increases, the integrated volatility $IV^d_t$ can be approximated by the realized variance defined as a sum of the intraday squared returns [6,19,20]. In essence, volatility becomes observable and can be forecasted using time series techniques.

The presence of market microstructure noise makes realized variance an inconsistent and biased estimator of true volatility. Therefore, we use the realized kernel estimator developed in [21], which remains consistent under the presence of market microstructure noise. The realized kernel $RK_{K,\delta}$ is an estimator of the latent realized variance and is defined as follows:

$RK_{K,\delta} = \sum_{h=-H}^{H} K\!\left(\frac{h}{H+1}\right)\gamma_h, \qquad \gamma_h = \sum_{i=|h|+1}^{n(\delta)} r_{i,t}\, r_{i-|h|,t}$, (3)

where $K(\cdot)$ is a weight function, $r_{i,t} = p_{i,t} - p_{i-1,t}$ and $p_{i,t}$ is the i-th intra-daily log price sampled at frequency δ and recorded on day t. In other words, $i = 1, ..., n(\delta)$ and $n(\delta) = n_{seconds}/\delta$, where $n_{seconds}$ is the number of seconds during the trading day. Thus, the realized kernel is similar to the HAC (heteroskedasticity and autocorrelation consistent) estimator of the variance-covariance matrix for a stationary time series. Throughout this paper, realized variance will equal the realized kernel measure defined in Equation (3).

The realized kernel has several advantages over other high-frequency proxies of latent volatility. First, [22] show that the realized kernel performs better (in terms of forecasting value-at-risk) than other high-frequency measures, including realized volatility, bi-power realized volatility, two-scales realized volatility and the daily range. Second, the realized kernel is a consistent estimator of the latent variance, which is robust to market microstructure noise.

The heterogeneous autoregressive model is able to replicate the majority of stylized facts observed in the data: fat tails, volatility clustering and long memory. In particular, HAR is able to generate hyperbolic decay in the autocorrelation function in a parsimonious way due to the volatility cascade property, despite the fact that this model does not belong to the class of long memory models. The model is based on the heterogeneous market hypothesis [23], which implies that lower frequency (weekly) volatility affects higher frequency (daily) volatility, but not vice versa:

$RV^d_{t+1} = \beta_0 + \beta_d RV^d_t + \beta_w RV^w_t + \beta_m RV^m_t + \epsilon_{t+1}$, (4)

where $RV^d_t$, $RV^w_t$ and $RV^m_t$ are the daily, weekly and monthly realized variance, respectively, at period t. The lower frequency, for example weekly, realized variance is computed as:

$RV^w_t = \frac{1}{5}\sum_{i=0}^{4} RV^d_{t-i}$. (5)

Similarly, the monthly realized variance is computed as the average of daily variances over 22 days. Although the HAR model is able to capture long memory and volatility clustering, it cannot explain abrupt changes in regimes. Indeed, the recent subprime mortgage crisis, the European debt turmoil and a number of other financial calamities led to significantly different behaviour in the dynamics of realized volatility during "good" and "bad" times, as we will discuss in Section 3.
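To make the construction above concrete, the sketch below computes a realized-kernel estimate from one day of intraday log prices and builds the HAR regressors of Equations (4)–(5). It is illustrative only: the paper's exact kernel weight K and bandwidth H follow [21] and are not reproduced here, so the Parzen weight and H = 10 are assumptions, and `parzen`, `realized_kernel` and `har_regressors` are hypothetical helper names.

```python
import numpy as np

def parzen(x):
    """Parzen weight, a common choice for the kernel K in realized-kernel estimators."""
    x = abs(x)
    if x <= 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    if x <= 1.0:
        return 2 * (1 - x)**3
    return 0.0

def realized_kernel(intraday_log_prices, H=10):
    """Realized kernel for one day: weighted sum of return autocovariances (Eq. 3)."""
    r = np.diff(intraday_log_prices)                       # intraday returns r_{i,t}
    gamma = lambda h: np.sum(r[h:] * r[:len(r) - h])       # h-th autocovariance
    rk = gamma(0)
    for h in range(1, H + 1):
        rk += parzen((h - 1) / H) * 2 * gamma(h)           # gamma(h) + gamma(-h)
    return rk

def har_regressors(rv_daily):
    """Daily, weekly (5-day) and monthly (22-day) averages for the HAR model."""
    rv = np.asarray(rv_daily)
    rv_w = np.convolve(rv, np.ones(5) / 5, mode="valid")
    rv_m = np.convolve(rv, np.ones(22) / 22, mode="valid")
    n = len(rv_m)                                          # align all series on day t
    return rv[-n:], rv_w[-n:], rv_m[-n:]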
Therefore, we propose to extend the benchmark HAR model and allow the possibility of multiple regimes, governed by either endogenous or exogenous variables. We define the threshold HAR model with two regimes as follows:

$RV^d_{t+1} = \left(\beta^1_0 + \beta^1_d RV^d_t + \beta^1_w RV^w_t + \beta^1_m RV^m_t\right) 1(T_{t-l} \le \tau) + \left(\beta^2_0 + \beta^2_d RV^d_t + \beta^2_w RV^w_t + \beta^2_m RV^m_t\right) 1(T_{t-l} > \tau) + \epsilon_{t+1}$, (6)

where $T_{t-l}$ is a trigger variable with some lag l and τ is the value of a threshold. In this paper, we consider only observable triggers, including returns and the realized kernel.

Econometric Framework for the Non-Linear Model Estimation

Next, we present the econometric techniques designed to model non-linear dynamics of time series: the self-exciting threshold autoregressive (SETAR) model and the threshold autoregressive (TAR) model introduced by [24] and [25]. The main difference between these models is that the trigger variable can be either exogenous (TAR model) or endogenous (SETAR model). The TAR(m) model, where m denotes the number of regimes, is defined as follows:

$Y_{t+1} = \sum_{j=1}^{m} \theta_j' X_t\, 1_{j,t}(\tau, l) + \epsilon_{t+1}$, (7)

where $Y_{t+1}$ is a univariate time series, $X_t = (1, Y_t, ..., Y_{t-p})'$ is a $(p+1) \times 1$ vector, $\tau = (\tau_1, ..., \tau_{m-1})'$ with $\tau_1 < \tau_2 < ... < \tau_{m-1}$, $1_{j,t}(\tau, l) = 1(\tau_{j-1} \le T_{t-l} < \tau_j)$, $1(\cdot)$ is an indicator function and $T_{t-l}$ is a threshold variable. Let us assume that $\tau_0 = -\infty$ and $\tau_m = \infty$, while the error term $\epsilon_{t+1}$ is conditionally independent of the information set $I_t$ and has a finite second moment:

$E[\epsilon_{t+1} \mid I_t] = 0, \qquad E[\epsilon^2_{t+1}] = \sigma^2 < \infty$. (8)

In particular, if the variable $Y_{t+1}$ follows the TAR(2) process, then Model (7) becomes:

$Y_{t+1} = \theta_1' X_t\, 1(T_{t-l} \le \tau) + \theta_2' X_t\, 1(T_{t-l} > \tau) + \epsilon_{t+1}$. (9)

Recall that Model (9) nests the non-linear HAR specification (6) if we put constraints on the corresponding AR(22) model in each regime. Now, define the vector of all parameters of Model (9) as $\theta = (\theta_1, \theta_2, ..., \theta_m, \tau, l)'$. Under Assumption (8), the estimation of the TAR(m) model is performed using a non-linear least squares approach:

$\hat{\theta} = \arg\min_{\theta}\ \sum_t \Big(Y_{t+1} - \sum_{j=1}^{m} \theta_j' X_t\, 1_{j,t}(\tau, l)\Big)^2$. (10)

Here, the minimization can be done sequentially. In particular, $\theta = (\theta_1, ..., \theta_m)'$ can be computed through an OLS regression of Y on X(τ, l) for fixed parameters l and τ:

$\hat{\theta}(\tau, l) = \big(X(\tau, l)' X(\tau, l)\big)^{-1} X(\tau, l)' Y$, (11)

where Y is the T × 1 vector consisting of observations of $Y_{t+1}$, while X(τ, l) is the T × 4m matrix with t-th row $X_t(\tau, l) = \big(X_t'\,1_{1,t}(\tau, l), ..., X_t'\,1_{m,t}(\tau, l)\big)$.

Now, let us assume for simplicity that the non-linear model has only two regimes, or m = 2. Thus, the two parameters τ and l can be estimated through minimization of the residual sum of squared errors S(τ, l):

$(\hat{\tau}, \hat{l}) = \arg\min_{\tau, l}\ S(\tau, l)$,

where $S(\tau, l) = \big(Y - X(\tau, l)\hat{\theta}(\tau, l)\big)'\big(Y - X(\tau, l)\hat{\theta}(\tau, l)\big)$. The minimization can be performed through a grid search, noting that l is discrete. We follow the approach of [26], which allows speeding up the minimization algorithm. In particular, he recommends eliminating the smallest and largest quantiles of the threshold variable in the grid search. This elimination not only reduces the computational time, but also serves as a necessary condition for having enough observations in each regime. Indeed, asymptotic theory places additional constraints on the threshold, requiring that the fraction of observations in each regime remains bounded away from zero: $n_j/T \ge \bar{\pi} > 0$ as $T \to \infty$. Although there is no clear procedure for how to optimally choose $\bar{\pi}$, [26] recommends using a 10% quantile for the cut-off procedure.
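A minimal sketch of the sequential estimation just described, concentrated OLS inside a grid search over (τ, l) with Hansen-style 10% trimming, is given below. The data alignment and the helper name `fit_tar2` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fit_tar2(Y, X, trigger, max_lag=10, trim=0.10, n_grid=50):
    """Grid-search NLS for a two-regime TAR model: for each candidate (tau, l),
    run OLS regime by regime and keep the pair minimizing the total SSR."""
    best = None
    for l in range(1, max_lag + 1):
        y, x, trg = Y[l:], X[l:], trigger[:-l]        # align T_{t-l} with Y_{t+1}
        # trim extreme quantiles so each regime keeps enough observations
        taus = np.quantile(trg, np.linspace(trim, 1 - trim, n_grid))
        for tau in taus:
            low = trg <= tau
            if min(low.sum(), (~low).sum()) <= x.shape[1]:
                continue                              # too few obs in a regime
            ssr = 0.0
            for mask in (low, ~low):
                beta, *_ = np.linalg.lstsq(x[mask], y[mask], rcond=None)
                ssr += np.sum((y[mask] - x[mask] @ beta) ** 2)
            if best is None or ssr < best[0]:
                best = (ssr, tau, l)
    return best  # (S(tau_hat, l_hat), tau_hat, l_hat)
```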
Testing for Non-Linearity

We start by discussing the testing of the linear model, or TAR(1), against the non-linear model, or TAR(m), where m > 1. Under the null hypothesis, all parameters $\theta_1, ..., \theta_m$ should be the same:

$H_0: \theta_1 = \theta_2 = ... = \theta_m$. (13)

Since the threshold parameter is not identified under the null hypothesis, the classical tests have a non-standard distribution. This problem is called "Davies' problem" due to [27,28]. [26,29] overcome this problem by using empirical process theory and derive the limiting distribution of the main statistic of interest, $F_{jk}$:

$F_{jk} = T\,\frac{S_j - S_k}{S_k}$, (14)

where $S_j$ and $S_k$ are the sums of squared residuals of the TAR(j) and TAR(k) models and k > j. Computation of the asymptotic distribution is not straightforward, but might be faster than a bootstrap calculation. Although the literature does not assess the performance of the asymptotic against the bootstrap distribution in the context of SETAR models, [30] show that the bootstrap technique performs better in the AR(1) context with the Andrews structural change test [31]. Thus, we use the following bootstrap algorithm for testing the linear model against the non-linear TAR(2) model:

1. Draw residuals with replacement from the linear TAR(1) model.
2. Generate a "fake" dataset under the null hypothesis from the estimated TAR(1) model and the resampled residuals.
3. Estimate the TAR(1) and TAR(2) models on the "fake" dataset.
4. Compute $S^b_1$ and $S^b_2$ on the fake dataset, where b refers to a specific bootstrap replication.
5. Compute the statistic $F^b_{12}$ from (14).
6. Repeat Steps (1)-(5) a large number of times.
7. The bootstrap p-value ($p_{bootstrap}$) equals the percentage of times that $F^b_{12}$ exceeds the actual statistic $F_{12}$.

The algorithm in (1)-(7) can be used to evaluate the distribution of $F_{12}$ under the assumption of either homoscedastic or heteroscedastic errors. We compute the bootstrap p-value under the latter assumption, since the residuals of Model (4) are heteroscedastic. This is in line with the literature [32]. These diagnostic tests are available upon request.

Testing for Remaining Non-Linearity

Testing for remaining non-linearity is an important diagnostic check for the TAR(m) model. One way to address this question is to test whether the presence of an additional regime is statistically significant. This test relies on the aforementioned algorithm, while the bootstrap p-value is computed for the statistic $F_{jj+1}$, where j > 1.

Asymptotic Distribution of the Threshold Parameter

The existing literature documents that the distribution of the parameter τ is non-standard if the threshold effect is significant [26,33]. [29,34] derive an asymptotic distribution of the likelihood ratio statistic:

$LR_1(\tau) = \frac{S_1(\tau) - S_1(\hat{\tau})}{\hat{\sigma}^2}$, (15)

where $S_1(\tau)$ is the residual sum of squares given parameter τ and $\hat{\sigma}^2$ is the variance of the residuals of the TAR(2) model, which equals $S_1(\hat{\tau})/(T-4)$. Moreover, [29,34] show that the confidence interval for the threshold parameter is obtained by inverting the distribution function of a limiting random variable. In other words, the null hypothesis $H_0: \tau = \tau_0$ is rejected at level α when $LR_1(\tau_0)$ exceeds the critical value c(α). Alternatively, the confidence interval for the threshold parameter is formed as the area where $LR_1(\tau) \le c(\alpha)$ and is called the "no-rejection region". We have to interpret the confidence interval for the threshold parameter τ with caution, since it is typically conservative [26,29]. However, the ultimate test of our non-linear model is its ability to produce superior out-of-sample forecasts, which requires a tight confidence interval for the threshold parameter. We provide more discussion on page 15.

Although the estimates $\hat{\theta}_1, ..., \hat{\theta}_m$ depend on the threshold parameter τ, their asymptotic distribution remains the same as in the linear model case, since the estimate $\hat{\tau}$ is super-consistent [35]. [33] and [26] prove that the dependency on the threshold parameter is not of first-order asymptotic importance; thus the confidence interval for θ can be constructed as if τ were a known parameter.
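The bootstrap algorithm above can be sketched as follows. A wild (Rademacher) bootstrap is used here as one common way to preserve conditional heteroscedasticity; the paper's exact heteroscedastic scheme may differ. `fit_tar1` is a hypothetical helper returning the SSR and coefficients of the linear model, and `fit_tar2` is the grid-search estimator sketched earlier.

```python
import numpy as np

def bootstrap_f12(Y, X, trigger, n_boot=500, seed=None):
    """Bootstrap p-value for TAR(1) vs. TAR(2) based on F12 = T*(S1 - S2)/S2."""
    rng = np.random.default_rng(seed)
    S1, beta1 = fit_tar1(Y, X)                 # linear null model (assumed helper)
    S2 = fit_tar2(Y, X, trigger)[0]
    T = len(Y)
    F12 = T * (S1 - S2) / S2
    resid = Y - X @ beta1
    count = 0
    for _ in range(n_boot):
        # wild bootstrap: flip residual signs to keep conditional heteroscedasticity
        y_b = X @ beta1 + resid * rng.choice([-1.0, 1.0], size=T)
        S1_b, _ = fit_tar1(y_b, X)
        S2_b = fit_tar2(y_b, X, trigger)[0]
        count += (T * (S1_b - S2_b) / S2_b) >= F12
    return count / n_boot                      # p_bootstrap
```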
Stationarity

The stationarity conditions for our TAR(2) model are not easily derived, and in general, not much is known about this property for non-linear models with heteroskedastic errors; see the discussion in [35] (pp. 79-80). The literature does propose sufficient conditions for a restricted class of non-linear models, typically for models with homoscedastic errors. In particular, [36] consider a SETAR(2) specification with the AR(1) model in both regimes, while [37] establish necessary and sufficient conditions for the existence of a stationary distribution for TAR(2) and SETAR(2) models with the AR(1) process.

In contrast, our model has a richer structure within each regime, since the HAR model is a restricted version of the AR(22) process. Because of this richer structure, and because neither purely self-exciting nor purely exogenous thresholds are used, it is not possible to use the results from [36] and [37] to prove stationarity. In addition, our residuals exhibit volatility clustering; because of the heteroscedastic errors, it is not possible to exploit the necessary and sufficient conditions for strict stationarity derived by [9], even for the simple HAR model, since the diagnostic checks show that the homoscedasticity assumption does not hold.

In conclusion, as is the case in much empirical work, we have to make a trade-off between the flexibility of the model and the analytical tractability of stationarity conditions. In this paper, we choose to design a model aimed at providing more accurate volatility forecasts, and we leave the question of stationarity for future work.

One-Step-Ahead Forecast

We assess the forecasting performance of various models by computing the one-step-ahead forecast of the realized volatility measured by the square root of the realized kernel. These forecasts are computed through rolling window estimation. First, the parameters of the model are estimated using an in-sample set, and then the one-step-ahead forecast is computed. Second, the rolling window is moved one period ahead; the most distant observation is dropped, and the parameters of the model are re-estimated, while the threshold parameter τ and the optimal lag l are kept time invariant. Finally, the one-step-ahead forecast is computed again.

We use the root mean square error (RMSE) and the mean absolute error (MAE) to compare the forecast performance of the four models:

$RMSE = \sqrt{\frac{1}{n}\sum_t \big(\hat{Y}_{t+1|t} - Y_{t+1}\big)^2}$, (16)

$MAE = \frac{1}{n}\sum_t \big|\hat{Y}_{t+1|t} - Y_{t+1}\big|$, (17)

where $\hat{Y}_{t+1|t}$ is the one-step-ahead conditional forecast of the daily realized volatility computed based on the rolling window for one of the four models and $Y_{t+1}$ is the daily realized volatility at period t + 1. In addition, we compute the $R^2$ of the following Mincer-Zarnowitz regression:

$Y_{t+1} = a + b\,\hat{Y}_{t+1|t} + u_{t+1}$. (18)

Finally, we investigate the forecasting performance of different models in population using the Giacomini and White (GW) test [17]. The GW test fits nicely into our framework for the following reasons. First, it does not favour models that overfit in-sample but have high estimation errors. Second, this test is designed to compare not only unconditional but also conditional forecasts. Finally, the GW test works with rolling window forecasts, where the in-sample size is fixed, while the out-of-sample size is growing.
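A small helper for Equations (16)–(18) might look as follows; the function name and interface are illustrative assumptions, not the paper's code.

```python
import numpy as np

def evaluate_forecasts(y_true, y_fcst):
    """RMSE, MAE and the Mincer-Zarnowitz R^2 for a sequence of one-step forecasts."""
    y_true, y_fcst = np.asarray(y_true), np.asarray(y_fcst)
    err = y_true - y_fcst
    rmse = np.sqrt(np.mean(err ** 2))                    # Eq. (16)
    mae = np.mean(np.abs(err))                           # Eq. (17)
    # Mincer-Zarnowitz regression: y_true = a + b * y_fcst + u  (Eq. 18)
    A = np.column_stack([np.ones_like(y_fcst), y_fcst])
    coef, *_ = np.linalg.lstsq(A, y_true, rcond=None)
    resid = y_true - A @ coef
    r2 = 1 - resid.var() / y_true.var()
    return rmse, mae, r2
```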
Conditional Distribution of Returns

In this section, we discuss multiple-step-ahead forecasts for aggregate volatility over periods of five and 10 days. The extension of the multiple-step-ahead forecast to the linear model is straightforward, while the non-linear model presents one important problem. We describe the formulas used to compute the multiple-step-ahead forecast for the HAR, GARCH(1,1) and GJR-GARCH(1,1) (proposed by [38]) models in Appendix A. In particular, the one-step-ahead forecast remains the same for both non-linear and linear cases, while the two-step-ahead one is different:

$\hat{Y}_{t+2|t} = E\big[F(Y_{t+1}; \theta) \mid I_t\big] \neq F\big(E[Y_{t+1} \mid I_t]; \theta\big)$, (19)

where $I_t$ is the information set available at period t, F is a non-linear function, θ is a vector of estimates and $Y_t$ is the realized volatility at period t. Equation (19) illustrates the main problem related to the non-linear model: the expected value of a non-linear function differs from the value of the non-linear function evaluated at the expected value. In the literature, several methods have been proposed for the computation of the multiple-step-ahead forecast, including conditional simulations in [18]. However, we choose a different strategy and derive a closed form solution for the multiple-step forecast. Specifically, we follow an approach similar to [39] and [40] to derive the conditional distribution of returns. Given the diffusion process (1), the standardized returns should follow a normal distribution:

$\frac{r_{t+1}}{\sqrt{RV_{t+1}}}\,\Big|\, I_t \sim N(\mu_N, \sigma^2_N)$, (20)

where $I_t = F(r_t, r_{t-1}, ...)$ is the information set at period t generated by the history of returns and $\mu_N$ is the mean of the standardized returns; $\mu_N$ and $\sigma^2_N$ should be close to zero and one, respectively. See Table B1 in Appendix B for details. Meanwhile, the conditional distribution of realized volatility is closely approximated by the inverse Gaussian distribution with the following density function:

$f_{IG}(x \mid I_t) = \sqrt{\frac{\lambda_{IG}}{2\pi x^3}}\,\exp\!\left(-\frac{\lambda_{IG}(x - \sigma_{t+1})^2}{2\sigma^2_{t+1} x}\right)$, (21)

where $\sigma_{t+1}$ is the conditional mean and $\lambda_{IG}$ is the shape parameter of the inverse Gaussian distribution. The conditional mean is assumed to be filtered from the non-linear TAR(2) model:

$\sigma_{t+1} = \theta_1' X_t\, 1(T_{t-l} \le \tau) + \theta_2' X_t\, 1(T_{t-l} > \tau)$. (22)

Combining Equations (20) and (21), the conditional distribution of returns becomes a normal-inverse Gaussian (NIG) distribution, with the probability density function computed as the normal mixture over the inverse Gaussian mixing density:

$f_{NIG}(r_{t+1} \mid I_t) = \int_0^{\infty} f_N(r_{t+1} \mid x)\, f_{IG}(x \mid I_t)\, dx$. (23)

The NIG distribution provides a relatively accurate fit of the unconditional distribution of returns (see Appendix B for details). Given this distributional assumption for returns, Theorem 1 demonstrates how to obtain the closed-form expression for the multiple-step-ahead forecast of the realized volatility.

In essence, Formula (24) is similar to the multiple-step-ahead forecast of the GJR-GARCH(1,1) model; see Appendix A for details. However, the TAR model has additional flexibility, since the probability $\pi_t$ is time varying, while GJR-GARCH assumes that the corresponding probability equals 0.5. To facilitate the comparison between these two models, we compute the unconditional probability of the high volatility regime occurring both from the NIG distribution (23) and from the returns data. In the latter case, the probability equals the frequency of returns falling below the threshold value. The results show a close match between the two methods: 11.3% (NIG) vs. 13.2% (historical returns) for the in-sample data.
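The regime probability comparison reported above (11.3% vs. 13.2%) can be reproduced in spirit with the sketch below, which fits a NIG distribution and evaluates P(r < τ). The mapping of scipy's `norminvgauss` parameterization onto the paper's NIG parameters is an assumption; only the general shape of the calculation is intended.

```python
import numpy as np
from scipy import stats

def regime_probability(returns, tau, use_nig=True):
    """Unconditional probability of the high-volatility regime, P(r < tau):
    either from a fitted normal-inverse Gaussian law or as the empirical
    frequency of returns below the threshold."""
    if use_nig:
        a, b, loc, scale = stats.norminvgauss.fit(returns)
        return stats.norminvgauss.cdf(tau, a, b, loc=loc, scale=scale)
    return float(np.mean(np.asarray(returns) < tau))

# e.g. regime_probability(r, -0.013) vs. regime_probability(r, -0.013, use_nig=False)
# should roughly reproduce the 11.3% vs. 13.2% comparison reported above.
```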
Finally, we describe the multiple-step-ahead forecast using the rolling window approach. First, the parameters of the model are estimated using in-sample data, and the probability $\pi_t$ is computed. Second, multiple-step-ahead forecasts for the TAR model are calculated based on Expression (24), while $\pi_t$ remains constant. The probability $\pi_t$ could be computed for each step of the forecast as well, but this would add computational burden, while the results should change only marginally. In other words, we assume that $\pi_{t+h|t} = \pi_t\ \forall h$, where $\pi_{t+h|t} = Pr[r_{t+h} < \tau \mid I_t]$. We compute h-step-ahead forecasts for the HAR, GARCH(1,1) and GJR-GARCH(1,1) models based on the formulas presented in Appendix A. Finally, the rolling window is moved one period ahead; the first observation is dropped, and the parameters of the model, including $\pi_{t+1}$, are re-estimated.

Data

The empirical analysis is based on high-frequency data for the S&P 500 index obtained through the Realized Library of the Oxford-Man Institute of Quantitative Finance (Library Version 0.2), which is freely available: "Researchers may use this library freely without restrictions so long as they quote in any work which uses it: Heber, Gerd, Asger Lunde, Neil Shephard and Kevin Sheppard (2009) 'Oxford-Man Institute's realized library', Oxford-Man Institute, University of Oxford." The sample covers the period from 3 January 2000 to 12 June 2014, 3603 trading days in total. We exclude from the sample all days when the market was closed. [41] created the Realized Library database, which provides daily data for about 11 realized measures for 21 assets. The authors clean the raw data obtained through Reuters DataScope Tick History and compute high-frequency estimators from the cleaned data. We use the realized kernel [21] as a proxy for integrated variance.

Preliminary Data Analysis

We start with data analysis of the five main time series of interest: standardized returns, returns, realized variance, realized volatility and the logarithm of realized variance. Table 1 presents the descriptive statistics, while Figure 1 illustrates the time series dynamics of these variables. The diagnostic p-values are (columns: standardized returns, returns, realized variance, realized volatility, log realized variance):

ADF test              p = 0.00   p = 0.00   p = 0.00   p = 0.00   p = 0.06
Normality (J-B test)  p = 0.00   p = 0.00   p = 0.00   p = 0.00   p = 0.00
L-B test, 5 lags      p = 0.01   p = 0.00   p = 0.00   p = 0.00   p = 0.00
L-B test, 10 lags     p = 0.08   p = 0.00   p = 0.00   p = 0.00   p = 0.00
L-B test, 15 lags     p = 0.07   p = 0.00   p = 0.00   p = 0.00   p = 0.00
ARCH effect           p = 0.00   p = 0.00   p = 0.00   p = 0.00   p = 0.00

Four of the variables are stationary at the 5% level according to the augmented Dickey-Fuller test, while log(√RV_t) is stationary at 6%. The recent financial crises and the European sovereign debt turmoil affected the volatility pattern and led to several spikes in the realized variance series. Although these spikes look less pronounced in the logarithm of realized variance, they remain very distinct from the volatility behaviour observed during calm times. This observation motivates the introduction of a regime switching model for the volatility process.
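The constant-π recursion described here can be sketched as below. This is not the paper's closed-form Expression (24); it only illustrates the mixture idea, and the rolling updates of the weekly and monthly regressors are a crude approximation.

```python
import numpy as np

def tar2_multistep(theta_hi, theta_lo, pi, x0, h):
    """h-step TAR(2) forecast with a constant regime probability pi, mixing
    the two regime projections at every step: E[Y] = pi*th_hi'x + (1-pi)*th_lo'x."""
    x = np.asarray(x0, dtype=float)        # regressor vector (1, RV_d, RV_w, RV_m)
    out = []
    for _ in range(h):
        y_next = pi * (theta_hi @ x) + (1 - pi) * (theta_lo @ x)
        out.append(y_next)
        # roll the HAR regressors forward; the running averages are updated
        # approximately (a simplification of the exact recursion)
        x = np.array([1.0,
                      y_next,
                      x[2] + (y_next - x[2]) / 5,
                      x[3] + (y_next - x[3]) / 22])
    return out
```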
Daily returns are weakly correlated and follow a leptokurtic and negatively skewed distribution. By contrast, the distribution of the standardized returns is much closer to Gaussian, which is in line with previous empirical findings [10,42]. Figure 2 documents the long memory observed in realized volatility, as the autocorrelation function decays at a hyperbolic rate. This result is also consistent with the literature [6,15,32].

Benchmark HAR Model

We start with the estimation of the benchmark linear Model (4) for the three specifications of the dependent variable: RV, √RV and log(√RV), respectively. Table 2 presents the estimation results with standard errors computed based on the HAC variance-covariance matrix. The benchmark model fails to capture spikes in volatility during turbulent times on financial markets; Figure 3 illustrates this point and depicts a comparison between the in-sample forecast and the actual realized kernel. In particular, benchmark Model (4) underestimates volatility by around 40% during the financial crises in 2007-2009. A similar pattern is observed during the spikes in volatility in 2010 and 2011. One explanation of the poor performance of the HAR model during turbulent volatility periods is that it fails to take into account changes in volatility regimes. Indeed, if volatility reacts to negative returns more than to positive returns, then the arrival of consequent negative shocks together with volatility persistence can substantially increase the future volatility level. On the other hand, different economic regimes might affect volatility differently. We choose the TAR over the SETAR model based on the higher value of the $F_{12}$ statistic or, alternatively, the lower value of $p_{bootstrap}$ defined in Subsection 2.2.2. These results are available upon request.

The TAR(2) Model

Next, we estimate the TAR(2) model (Tables 3 and 4), where past returns govern changes in the volatility regimes.
Table 3 shows that the regression $R^2$ improves substantially if regimes are driven by past returns. As a result, high values of the $F_{12}$ statistic lead to the rejection of the null hypothesis (13) for all specifications at the 5% significance level. In addition, the optimal value of the threshold parameter remains the same for two specifications: $RV_t$ and $\sqrt{RV_t}$. The τ that corresponds to the logarithmic specification is closely related to the second threshold of the TAR(3) model. However, the confidence interval for this parameter is very wide, which leads to an imprecise estimate of the threshold parameter. Not surprisingly, this model produces a less accurate one-step forecast than TAR(2). In particular, [43] document that an imprecise estimate of the threshold parameter leads to poor forecasting performance of the simple switching model compared to the random walk model. In both cases, changes in regimes are driven not merely by negative returns (the leverage effect), but by significantly negative returns: −1.3% on a daily scale. [9] also show that the transition between volatility regimes is governed not by negative past returns, but by "very bad news", or very negative past returns. The fact that changes in regimes are triggered by "very negative returns" can be explained by volatility persistence and the higher intensity of shocks during bad times. Although the absolute value of the threshold is not very large (it corresponds to the 11th percentile of the returns distribution), an increasing number of negative returns can generate a spike in volatility. This explanation is similar to the option pricing literature, where researchers model volatility by adding infinite-activity jumps to the return process [44]. Even though the appearance of one small or medium jump is not enough to generate a significant surge in volatility, high volatility persistence can lead to pronounced spikes in future volatility. Indeed, Figure 4 shows that the frequency of returns lower than the threshold (red line) increased dramatically during the recent financial crises. By contrast, returns that exceed the threshold (blue line) completely dominated "very negative returns" during the period of low volatility in 2003-2007.

Table 4 shows that the parameters $\beta_d$, $\beta_w$ and $\beta_m$ are very different in the high- and low-volatility regimes. In particular, $\beta^1_w$ is twice as large as the corresponding estimate in the low-volatility regime for the $\sqrt{RV_t}$ specification. Although some estimates have negative signs, they are not statistically significant at 10% for both the realized volatility and realized variance models. By contrast, the intercepts in both regimes are statistically negative for the logarithmic specification. Overall, the corresponding estimates differ substantially across regimes, which highlights the importance of using a regime switching model. Next, Figure 5 shows that the 95% confidence interval for the threshold parameter is quite narrow ($\tau_{opt} \in [-0.014, -0.012]$), although it includes two disjoint sets. Finally, we compare the in-sample performance of the SETAR(2) and TAR(2) models for different indices, covering both developing and developed countries: Bovespa (Brazil), DAX (Germany) and IPC Mexico (Mexico). The main findings remain robust across the different sets of indices: the non-linear model with an exogenous trigger is preferred over the corresponding specification with the endogenous variable. These results are available upon request.
Forecast

In this section, we discuss one- and multiple-step-ahead forecasts of realized volatility based on the TAR(2) model and several competing benchmarks. We assess their forecasting performance over low- and high-volatility periods.

One-Day-Ahead Forecast

We start with the one-day-ahead forecast of the realized volatility, which is measured as the square root of the realized kernel. The in-sample period covers 1968 days from January 2000 to January 2008. In addition to the HAR model, we choose several GARCH specifications as benchmarks, including the symmetric GARCH(1,1) and the asymmetric GJR-GARCH(1,1). [45] show that it is extremely hard to outperform a simple GARCH(1,1) model in terms of forecasting ability. Meanwhile, TAR(2) is a non-linear model; therefore, we add an asymmetric GARCH specification to guarantee a "fair" model comparison. Figure 6 and Table 5 assess the one-day-ahead out-of-sample forecasting performance of the high- and low-frequency models (although realized volatility ignores overnight returns, the superior performance of the high-frequency models is unlikely to be affected). Next, we investigate whether the TAR forecast remains superior in population using the Giacomini and White test. Recall that the GW test is designed for the situation where the in-sample size is fixed, while the out-of-sample size is growing. Thus, we assess the forecasting performance of different models using the GW test only for the period from January 2008 to June 2014 and not for the U.S. and Eurozone financial crises separately. In the latter cases, the GW test is likely to perform poorly, since we have relatively short sample periods: 247 and 123 observations, respectively.

The main results of this comparison are the following. First, high-frequency models significantly outperform the lower frequency symmetric (GARCH) and asymmetric (GJR-GARCH) daily models. This result highlights the importance of more accurate volatility measurement based on intra-daily data. Second, the non-linear TAR(2) specification dominates the linear HAR model, thanks to its additional flexibility in capturing changes in regimes, according to the first three metrics. Surprisingly, TAR(2) does not outperform the HAR model according to the GW test.

Finally, we assess the performance of volatility forecasts during times of financial turmoil: the U.S. financial crisis in 2008 and the Eurozone crisis in 2011. Although the high-frequency models continue to dominate the GARCH specifications, the benefits of using the non-linear TAR(2) model become substantial compared to the linear specification: the latter's MAE is higher by 3% (U.S. crisis) and 6% (Eurozone crisis). By contrast, the MAE of the HAR model is only 1% higher over the whole out-of-sample period. Figure 7 shows that TAR(2) captures spikes in volatility better than the linear specification during the recent U.S. financial crisis. Finally, both RMSE and MAE are lower for the Eurozone crisis and the whole out-of-sample period compared with the recent U.S. financial crisis, which reflects the learning process of the model, whereby recent volatility spikes help to improve the models' performance.
To sum up, the benefits of using the non-linear TAR(2) model are most evident during periods of elevated volatility. In addition, the model is able to predict spikes in volatility even when a relatively calm period is used for in-sample estimation, since changes in regimes are driven by moderately low returns. As a result, we do not rely on extreme market events to forecast volatility. Our non-linear model outperforms its competitors thanks to its ability to capture different regimes in volatility and to measure volatility much more accurately than daily models. In addition, our model achieves approximately the same rate of improvement over the HAR model as much more complicated non-linear models, but with lower computational costs, since the TAR(2) model has only two regimes. For example, [18] modelled realized volatility with five regimes and achieved an improvement in forecasting performance over the HAR model of around 3%. This feature is essential for practical applications.

Conclusions

This paper develops a non-linear threshold model for RV (realized volatility), allowing us to obtain a more accurate volatility forecast, especially during periods of financial crisis. The changes in volatility regimes are driven by negative past returns, where the threshold equals approximately −1%. This finding remains robust to different functional forms of volatility and different sets of indices from both developing and developed countries. The additional flexibility of the model allows one to produce a more accurate one-day-ahead forecast compared to the linear HAR specification and GARCH family models. More importantly, the superior multiple-step-ahead forecasting performance of TAR is achieved not only in particular samples, but also in population according to the GW test for the out-of-sample period from 2008 to 2014. Finally, we derive a closed form solution for the multiple-step-ahead forecast, which is based on the NIG conditional distribution of returns. The non-linear threshold model primarily outperforms its competitors during periods of financial crisis.

Appendix fragment: with $\theta = (\theta_1, \theta_2)'$, the one-step-ahead forecast is obtained directly from Model (9); next, consider the two-step-ahead forecast from Equation (9). Simplifying the first summand $S_1$ yields Result (C5), where $\hat{Y}_t(s) = Y_{t+s}$ if s < 0. Finally, the formula for the multiple-step-ahead forecast $\hat{Y}_t(h)$ with h > 2 is extended recursively from Result (C5).

Figure 1. Daily standardized returns, returns, realized variance, realized volatility and the logarithm of the realized variance of the S&P 500 index. The sample period goes from January 2000 till June 2014 (3603 observations).

Figure 2. Sample autocorrelations and partial autocorrelations of returns and realized volatility.

Figure 3. In-sample comparison of actual realized volatility (blue line) and volatility recovered from the HAR model (red line). The in-sample covers the period from February 2000 to June 2014 (3582 observations).

Figure 4. Daily returns in high (red line) and low (blue line) volatility regimes. The high (low) volatility regime occurs when the return is lower (higher) than the threshold. The sample period goes from February 2000 till June 2014 (3603 observations).
Figure 5. Ninety-five percent confidence interval for the threshold parameter of the TAR(2) model with the √RV_t specification. The red line corresponds to c(0.05) ≈ 7, while the blue points represent LR.

Figure 6. Comparison of actual and one-day-ahead forecasts based on the TAR(2), HAR, GARCH(1,1) and GJR-GARCH(1,1) models from January 2008 to June 2014 (1614 observations). The red line indicates the one-step forecast, while the blue line indicates the actual data.

Figure 7. Comparison of actual and one-day-ahead forecasts based on the TAR(2) and HAR models during the U.S. financial crisis from January 2008 to January 2009 (247 observations). Red and green lines indicate one-step forecasts based on the TAR(2) and HAR models, respectively, while the blue line indicates the actual data.

Figure 8. Comparison of aggregate volatility over five days and corresponding forecasts based on the TAR(2), HAR, GARCH(1,1) and GJR-GARCH(1,1) models from January 2008 to June 2014 (1604 observations). The red line indicates the aggregate five-step forecast, while the blue line indicates the actual data.

Table 2. Heterogeneous autoregressive model (HAR) estimation. Reported are in-sample estimation results of the linear HAR model and corresponding standard errors computed based on the HAC variance-covariance matrix. The in-sample covers the period from February 2000 to June 2014 (3582 observations). Here, *** means that the corresponding p-value is lower than 0.01.

Table 3. Comparison of the TAR(1) (or HAR) and TAR(2) models. Reported are in-sample estimation results of the linear HAR model and the non-linear TAR(2) model. The in-sample covers the period from February 2000 to June 2014 (3582 observations). p_bootstrap is computed based on 500 replications using the heteroscedastic bootstrap method. We set the maximum number of lags equal to 10 in the TAR estimation.

Table 4. TAR(2) estimation. Reported are in-sample estimation results of the non-linear TAR(2) model and corresponding standard errors computed based on the HAC variance-covariance matrix. The in-sample covers the period from February 2000 to June 2014 (3582 observations). The first four rows correspond to the high-volatility regime, while the last four rows correspond to the low-volatility regime. Here, *** and * mean that the corresponding p-values are lower than 0.01 and 0.1, respectively.

Table 5. One-day-ahead out-of-sample forecast. The first four columns correspond to the period of the recent financial crisis in the U.S. from January 2008 to January 2009 (247 observations). The next four columns correspond to the Eurozone crisis from July 2011 to December 2011 (123 observations). The last four columns correspond to the period from January 2008 to June 2014 (1614 observations). The performance metrics are root mean square error (RMSE), mean absolute error (MAE), the R² of the Mincer-Zarnowitz regression and the p-value of the Giacomini and White test based on the MAE metric. Two forecasts are identical in population under the null hypothesis, while TAR beats its competitors under the alternative. We compare TAR against all other models, while NA corresponds to the TAR vs.
TAR case. The TAR column reports the actual values of the RMSE and MAE errors, while the HAR, GARCH and GJR columns, in the RMSE and MAE rows, equal the ratio of the TAR model's error to that of the corresponding benchmark. Thus, a number below one indicates an improvement of the TAR model over its competitor. The RMSE and MAE values reported for the TAR model are scaled by 1000.
8,804.6
2015-08-05T00:00:00.000
[ "Economics" ]
Selection of CVD Diamond Crystals for X-ray Monochromator Applications Using X-ray Diffraction Imaging

A set of 20 single crystal diamond plates synthesized using chemical vapor deposition (CVD) was studied using X-ray diffraction imaging to determine their applicability as side-bounce (single-reflection) Laue monochromators for synchrotron radiation. The crystal plates were of optical grade (as provided by the supplier) with (001) nominal surface orientation. High dislocation density was found for all samples. Distortions in the crystal lattice were quantified for the low-index Laue reflections of interest using rocking curve topography. Maps of the effective radius of curvature in the scattering plane were calculated using spline interpolation of the rocking curve peak position across the studied plates. For several selected plates, nearly flat regions with a large effective radius of curvature were found (R₀ ≳ 30–70 m, some regions as large as 1 × 4 mm²). The average width of the rocking curve for these regions was found to be about 150 µrad (r.m.s.). These observations suggest that the selected CVD diamond plates could be used as intermediate-bandwidth monochromators refocusing the radiation source to a specific location downstream with a close to 1:1 distance ratio.

Introduction

Chemically vapor-deposited single crystal (sc-CVD) diamond is a synthetic material with many emerging applications in modern technology. sc-CVD diamond retains the spectacular thermal and mechanical properties of single crystal diamond, whether natural or the more demanding (grown in equilibrium conditions) high-pressure high-temperature (HPHT) material [1,2]. It has been shown that regions of nearly perfect crystal lattice exist in HPHT diamond, which can be used for next-generation high-resolution X-ray optics [3,4] (including high radiation heat load applications). In contrast, CVD crystals show increased dislocation densities, which results in broadening of X-ray rocking curves and reduction in X-ray reflectivity per unit spectral interval (not to be confused with the overall increase in integrated reflectivity frequently observed for imperfect crystals). Successful attempts to reduce dislocation density in CVD diamond have been reported (e.g., [5]); yet, availability of high-quality CVD and HPHT crystals remains limited. At the same time, many X-ray studies performed at present using conventional X-ray sources and synchrotrons do not rely on the narrow radiation bandwidths provided by reflections in perfect crystals (ΔE/E ∼ 10⁻⁴), but instead benefit from the increased photon flux due to the use of a monochromator element with an increased acceptance radiation bandwidth. Traditionally, the desired increase (ΔE/E ≈ 10⁻²–10⁻³) is accomplished using multilayer monochromators (e.g., [6,7]) or refractive lenses (e.g., [8]). An imperfect/mosaic CVD diamond crystal can be considered as a cost-efficient alternative for the high-heat-load monochromator element. The benefits are a relatively low cost of the element and a reduction in the monochromator operational expenses (possible use of water cooling as opposed to cryogenic cooling under conditions of high incident X-ray power density). In a previous study we described substantial broadening of X-ray rocking curves in sc-CVD diamond plates while retaining high reflectivity in the Laue geometry for hard X-rays [9].
The challenge for the application of sc-CVD diamond crystals as X-ray monochromator elements at synchrotrons originates from distortion of the radiation wavefront of the reflected X-rays due to imperfections of the crystal lattice. The distortion can be conditionally ascribed to two major effects: the intrinsic effective curvature of the crystal lattice across the incident beam footprint (total curvature) and the increase in radiation divergence due to local interaction with misoriented crystal blocks. In general, the two effects are coupled by the stress-strain elasticity relationships: bending of the crystal lattice results in a local change of the lattice parameter. Experiments show that the most dramatic effect on the reflected radiation wavefront (e.g., increased beam size and the resulting loss in reflected radiation flux density) is due to substantial total lattice curvature [10]. This parameter is typically not controlled during CVD growth. Therefore, quantitative characterization of the total curvature is an important step in the selection of CVD diamond crystals for X-ray monochromator applications. In this work we describe evaluation of the total lattice curvature using X-ray diffraction imaging (double-crystal X-ray topography), and present quantitative characterization results and the related statistics for a collection of sc-CVD diamond plates of the nominal "optical" grade featuring high dislocation densities. Distortions of the crystal lattice were quantified using rocking curve topography [11] in the double-crystal configuration. Rocking curve peak position topographs visualize and quantify the effective misorientation of different crystal regions (fulfillment of the Bragg condition locally at different orientation angles), revealing regions with larger or smaller variations. To further characterize the crystals as X-ray reflectors, maps of the effective radius of curvature in the scattering plane were generated using spline interpolation of the rocking curve peak position profiles. Substantial variations in the radius of curvature were found among all samples as well as across each individual crystal plate. For several selected plates, regions with a large radius of curvature (R₀ ≈ 30–70 m) were found (some regions as large as 1 × 4 mm²). For a given radius of curvature and reflection geometry, focusing distances can be predicted, which presents an opportunity to use the CVD diamond as a refocusing monochromator. The standard deviation of the rocking curve peak position across the nearly flat regions of interest (ROIs) can be as small as 20 µrad. The average rocking curve width for the ROIs was found to be about 150 µrad (r.m.s.). By decoupling the shear (lattice rotation) and lattice spacing (dilation-compression) contributions to the rocking curve peak position topographs, it was found that the local lattice rotation is the dominant contribution (analysis performed for one of the plates). The variation due to the lattice spacing was found to be 24 µrad (r.m.s.) across the ROI.

Samples

A set of 20 sc-CVD diamond plates of the optical grade was obtained from Applied Diamond Inc. (DE, USA). The plates were of square shape with 7 × 7 mm² area and 1 mm thickness. The nominal crystallographic orientation of the 7 × 7 mm² surface was (001). Twelve of the studied plates had (100) edge orientation and the remaining eight plates had (110) edge orientation.
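As a simplified illustration of how an effective radius of curvature can be extracted from a rocking-curve peak-position profile (the paper uses spline interpolation of the peak-position maps; a linear fit is used here for brevity, and the function name is hypothetical):

```python
import numpy as np

def effective_radius(x_mm, peak_urad):
    """Effective radius of curvature R0 (in metres) from a peak-position
    profile along the plate: R0 ~ 1 / (d(delta_theta)/dx)."""
    x = np.asarray(x_mm) * 1e-3            # mm -> m
    th = np.asarray(peak_urad) * 1e-6      # urad -> rad
    slope = np.polyfit(x, th, 1)[0]        # mean slope error, rad per metre
    # the sign of the slope indicates the bending direction
    return np.inf if slope == 0 else 1.0 / slope
```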
Crystallographic orientation of the plates was measured using a Multiwire X-ray back-reflection instrument at the Cornell Center for Materials Research. Deviations from the nominal surface orientation were measured for each sample with ±0.5° precision. These deviations did not exceed 4°. White-beam X-ray topography in the transmission (Laue) geometry was performed for selected samples at the 1-BM Optics beamline of the Advanced Photon Source (Argonne National Laboratory, IL, USA). A dense network of dislocations was found for all studied samples (dislocation densities above 10⁴ cm⁻²). A representative topograph for one of the samples is shown in Figure A1.

Experiments

Rocking curve imaging of the CVD diamond crystal plates aligned for either the 111, 220 or 400 Laue reflection was performed in the double-crystal nearly-nondispersive configurations shown in Figure 1 (in the figure, the incidence angles on the strongly asymmetric beam conditioner crystals are exaggerated for clarity). The experiments were conducted at the 1-BM Optics beamline using the sequential X-ray topography setup [12]. The bending magnet synchrotron radiation is monochromatized using a Si 111 double-crystal monochromator (DCM). An asymmetric Si crystal serving as beam conditioner is in the dispersive arrangement with respect to the DCM's second crystal, which further reduces the radiation bandwidth. The asymmetry is described by the angle between the lattice planes of the working reflection and the entrance crystal surface (the asymmetry angle η). The beam conditioner reflection is selected with an approximate match in d-spacing to the studied diamond reflection. The beam conditioner crystal and the diamond crystal reflections were set in the nearly non-dispersive configuration (e.g., [13]). The profile of the X-ray beam reflected from the diamond crystal is imaged using a digital area detector (AD). The configuration of the experimental setup depicted in Figure 1a was used to study (110)-edge oriented diamond plates using the 220 Laue reflection, while the configuration shown in Figure 1b was used to study (100)-edge oriented plates using the 400 Laue reflection. To complement the synchrotron experiments, some of the (110)-edge oriented plates were studied using a Cu Kα rotating anode source (Rigaku), in the configuration shown in Figure 1c. These measurements required increased data collection time (about 2 h per plate). A photon counting area detector was used to facilitate data collection. The resulting topographs were found to be noisy. Nevertheless, the angular characteristics were determined and found satisfactory upon cross-check with synchrotron measurements for one of the plates. The parameters of the configurations are summarized in Table 1.

Table 1. Parameters of the experimental configurations (see text for more details). E—photon energy (selected by the DCM); θSi—Bragg angle of the Si beam conditioner crystal; ηSi—asymmetry angle of the Si reflection; θC—Bragg angle of the studied diamond crystal; ηC—asymmetry angle of the diamond reflection (nominal).

Measurements were performed while scanning the angle of the diamond crystal plate in the scattering plane. Images of the beam profile were taken at each angular setting of the crystal over its reflection curve. The scattering plane was vertical (σ-polarization of the X-ray wave) in the synchrotron experiments. In the configuration with the rotating anode source the scattering plane was horizontal. The sequences of collected images were sorted to calculate local rocking curves for each detector pixel.
Rocking curve topographs were computed using the rctopo code of the DTXRD package [14]. The parameters of the local rocking curves were obtained using Gaussian profile fitting. These topographs in the Laue geometry represent projections of the crystal volume across its entire thickness. A geometric representation of the Laue diffraction geometry for a collimated monochromatic incident beam shows that each ray in the reflected beam emanating from the crystal at a given point originates from a finite crystal volume defined by the Borrmann triangle, as shown in Figure 2. The base of the triangle on the entrance surface of the crystal is

$l_0 = t_0\left(\tan\varphi_0 + \tan\varphi_h\right)$, (1)

where $t_0$ is the thickness of the plate, $\theta_C$ is the Bragg angle, and $G_0 = \cos\varphi_0$ and $G_h = \cos\varphi_h$ are the direction cosines of the incident and diffracted beams with respect to the surface normal z. In the case of a symmetric Laue reflection ($G_0 = G_h = \cos\theta_C$), Equation (1) reduces to $l_0 = 2 t_0 \tan\theta_C$. Thus, good lateral resolution can be achieved only for thin specimens at shallow Bragg angles. Spatial restriction of the incident radiation to a narrow "pencil" beam permits depth resolution in the Laue geometry, which is known as section topography. The principle of section topography can be easily understood by reversing the direction of propagation of the X-rays in Figure 2. Unlike in traditional X-ray topography (using either white-beam or monochromatic X-rays), where quantitative analysis is focused on studies of defect-induced diffraction contrast (e.g., [15,16]), our goal is to quantitatively map macroscopic characteristics (reflectivity, peak position and curve width) using rocking curve imaging. In our study, the incident monochromatic radiation illuminates the entire volume of the crystal. This approach is particularly useful for visualizing and quantifying the regions (contours) of equal effective orientation of the distorted crystal lattice.

Table 2. Summary of the rocking curve topography for the regions of interest; in addition, the width of the total (integrated across the region) rocking curve is given. D(δθ_m)—standard deviation of the peak position from its average value; ⟨Δθ_σ⟩—average curve width (r.m.s.), with the standard deviation from the average shown in parentheses; Δθ_σ^tot—width of the total rocking curve (integrated across the region); R₀—effective radius of curvature.

Analysis of Rocking Curve Topographs

Similar analysis for another diamond plate, CVD-N, is summarized in Figure 4. This plate had (110) edge orientation, and the rocking curve topographs were collected in the experimental setup configuration shown in Figure 1b. The overall distortion of the crystal lattice is substantially greater compared to that of plate CVD-B (note the increased range of the δθ_m and Δθ_σ colorbars). The effective radius of curvature was found to be R₀ ≈ 50 m in the ROI of size 1 × 4 mm² shown by the dashed rectangle in Figure 4a. The I_R^peak, δθ_m and Δθ_σ topographs for this region are shown in Figure 4b. The colorbar ranges are the same as those of Figure 3b. The ROI is significantly more distorted compared to the region of interest for CVD-B. This is quantified by the increased standard deviation of the rocking curve peak position and the increased average width of the rocking curve compared to those for the ROI on plate CVD-B (see Table 2). The I_R^peak topograph shows a few occasional peaks of increased reflectivity. These peaks correspond to regions of substantially reduced curve width at the corresponding locations (Δθ_σ topograph). The origin of these features is not understood at present.
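A schematic version of the per-pixel Gaussian fitting used to build such topographs is shown below. The actual analysis used the rctopo code of the DTXRD package [14]; the function here is a simplified, hypothetical stand-in for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(theta, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((theta - mu) / sigma) ** 2)

def rocking_curve_maps(angles, stack):
    """Fit a Gaussian to the local rocking curve of every pixel.
    `stack` has shape (n_angles, ny, nx); returns peak reflectivity,
    peak position and width (r.m.s.) maps."""
    n, ny, nx = stack.shape
    peak = np.full((ny, nx), np.nan)
    pos = np.full((ny, nx), np.nan)
    width = np.full((ny, nx), np.nan)
    for i in range(ny):
        for j in range(nx):
            y = stack[:, i, j]
            p0 = (y.max(), angles[np.argmax(y)], (angles[-1] - angles[0]) / 10)
            try:
                (amp, mu, sig), _ = curve_fit(gaussian, angles, y, p0=p0)
                peak[i, j], pos[i, j], width[i, j] = amp, mu, abs(sig)
            except RuntimeError:
                pass                      # leave unfittable pixels as NaN
    return peak, pos, width
```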
In addition, Table 2 shows results for crystal plate CVD-I of (110) edge orientation, which was studied in the experimental setup configuration shown in Figure 1c. The data were found to be somewhat noisy (the topographs for plate CVD-I are shown in Figure A2). Nevertheless, the angular topographs showed valid statistical results. For an ROI of similar size, 1 × 3 mm², the radius of curvature was found to be R₀ ≈ 30 m. The standard deviation of the peak position, the average and the total curve widths are similar to those found for the ROI of plate CVD-N. The total rocking curves for the ROIs of the three crystals are shown in Figure A3. Their shape is well approximated by a Gaussian function.

Dilational and Rotational Components of the Lattice Distortion

To further study the origin of the angular variations (effective tilt of the crystal lattice) δθ_m, separation of the local dilational and rotational components was performed (first proposed by Bonse [17]). The differential form of Bragg's law shows that the Bragg condition for a monochromatic wave (Δλ = 0) can be satisfied by either variations in the d-spacing (Δd/d) or rotation of the lattice planes Δθ. The rocking curve shift relative to a reference position is (e.g., [18,19]):

$\delta\theta_m = \frac{\Delta d}{d}\tan\theta_C + \Delta\psi\,(\mathbf{n}_r \cdot \mathbf{n}_m)$, (3)

where Δψ is the local misorientation angle, and $\mathbf{n}_r$ and $\mathbf{n}_m$ are the unit vectors representing the directions of the rocking curve rotation axis and the misorientation rotation axis, respectively. The dilation/compression (Δd/d) and the shear/rotation (Δψ) components of the lattice distortion can be decoupled by altering the sign of the second term in Equation (3) via the choice of the crystal lattice orientation with respect to the rotation axis. The components can be obtained from sums and differences of the data taken at azimuthal rotations around the reciprocal vector that are 180° apart. Additional data collection and analysis were performed for one of the samples (CVD-N). An extra sequence of images on the rocking curve of the 220 reflection was collected upon rotation of the plate by 180° around the reflection's reciprocal vector. To extract the local rotations around the perpendicular y axis (Δψ_y), another pair of sequences at 0° and 180° was collected in a similar manner for the 220 reflection (plate rotated 90° about its surface normal and remounted). The resulting decoupled maps of the dilational and rotational contributions to the effective tilt δθ_m are shown in Figure 5. Figure 5a shows these maps for the entire crystal. The dominant contribution to the overall effective tilt of the crystal lattice (or slope error, from which the effective radius of curvature was deduced) originates from the local rotations Δψ_{x,y}. The dilational component shows a more localized texture (except for some features at the edges of the crystal, where the lattice is severely distorted and the subtraction procedure fails). The dashed rectangular region corresponds to the previously identified ROI with R₀ ≈ 50 m. Remarkably, the angular variation in the perpendicular direction Δψ_y is also optimal (minimized) for this region. The dilational and rotational components for the ROI are shown in Figure 5b with colorbars rescaled to reveal more details. The peak-to-valley variations in the rotational components are about two times greater than that of the dilational component. The standard deviations across the ROI are 24 µrad for the dilational component, and 49 and 64 µrad for Δψ_x and Δψ_y, respectively.
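The sum/difference decoupling can be sketched in a few lines; the sign convention follows Equation (3) as reconstructed above and should be checked against the actual geometry.

```python
import numpy as np

def decouple(d_theta_0, d_theta_180, theta_c):
    """Separate dilational and rotational contributions from peak-position maps
    taken at azimuths 0 and 180 deg about the reciprocal vector (Bonse method):
    the rotation term flips sign under the 180-deg rotation, the d-spacing term
    does not."""
    dilation = 0.5 * (d_theta_0 + d_theta_180)   # (Delta d / d) * tan(theta_C)
    rotation = 0.5 * (d_theta_0 - d_theta_180)   # Delta psi component
    dd_over_d = dilation / np.tan(theta_c)
    return dd_over_d, rotation
```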
These results suggest that the local lattice rotations make the dominant contribution to the effective tilt δθm, even for the relatively "flat" regions of the crystal. Nevertheless, the dilational components, which seem to be more localized around the defects (dislocations) in the crystal lattice, cause non-negligible variations in the studied high-dislocation-density CVD diamond.

Conclusions

Among the studied 20 sc-CVD diamond plates of nominal optical grade and (001) surface orientation, it was found that about 50% had substantial total curvature (the effective radius of curvature was < 10 m over the entire crystal, which is impractical for refocusing of synchrotron radiation). These were rejected in our selection procedure, which was aimed at finding nearly "flat" regions with large radii of curvature to realize close-to-1:1 polychromatic pseudo-focusing in the Laue geometry [20]. The 1:1 focusing geometry refers to the ratio of the distance from the source to the optical element to the distance from the optical element to the desired observation plane. Among the remaining 50%, several plates were identified with relatively "flat" regions (of size ≈ 1 × 4 mm2) having effective radii of curvature of 30-70 m. The size of these regions is sufficient to accommodate the footprints of synchrotron beams at practical distances from the synchrotron radiation source (20-30 m). In particular, one of the dimensions (4 mm) being greater than the other is required to intercept the larger beam footprint in the horizontal direction (this asymmetry is common for third-generation synchrotrons such as the Cornell High Energy Synchrotron Source (CHESS)). The most prominent effect on beam propagation is expected to be caused by the effective curvature of the crystal in the scattering plane, which is the horizontal plane at CHESS for several newly constructed side-bounce beamlines. Our crystal selection methodology was developed to mitigate this effect, and to explore the increase in the reflected radiation bandwidth (and thus the potential increase in the reflected photon flux) due to the use of imperfect reflectors such as high-dislocation-density CVD diamond crystals. The analysis of the rocking curve topographs is summarized as follows.

1. The standard deviation of the effective lattice misorientation across the nearly flat regions of interest is in the range 20-70 µrad.

2. The averaged rocking curve width for these regions is about 130-165 µrad (r.m.s.), which was found to be close to the Δθσtot = 134-181 µrad (r.m.s.) widths of the total rocking curves (integrated across the regions). The effective intrinsic bandwidth of the reflector (FWHM) can be estimated as ΔE/E ≈ 2.355 Δθσtot / tan θC.

3. The effective lattice misorientation observed in the rocking curve topographs was dominated by the shear/rotational components of the lattice distortion, which exceed the dilation-compression component by about a factor of 2 (peak-to-valley variation) in the studied nearly flat region of interest of a representative crystal plate. The standard deviation of the dilation-compression component across the region was found to be 24 µrad.
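The bandwidth estimate in item 2 is a one-line calculation; the sketch below evaluates it for assumed, illustrative values of the total width and Bragg angle (neither number is taken from the paper). The factor 2.355 converts an r.m.s. width to FWHM.

```python
import numpy as np

def intrinsic_bandwidth(width_rms_urad, theta_bragg_deg):
    """Estimate the effective reflector bandwidth dE/E ~ 2.355 * sigma / tan(theta_B)."""
    sigma = width_rms_urad * 1e-6                      # r.m.s. width in rad
    return 2.355 * sigma / np.tan(np.radians(theta_bragg_deg))

# Illustrative only: a 150 urad r.m.s. total width at a 10-degree Bragg
# angle gives dE/E of roughly 2e-3.
print(f"{intrinsic_bandwidth(150, 10):.2e}")
```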
4,499.6
2019-07-31T00:00:00.000
[ "Materials Science", "Physics" ]
An Insight into Amorphous Shear Band in Magnetorheological Solid by Atomic Force Microscope Micro mechanism consideration is critical for gaining a thorough understanding of amorphous shear band behavior in magnetorheological (MR) solids, particularly those with viscoelastic matrices. Heretofore, the characteristics of shear bands in terms of formation, physical evolution, and response to stress distribution at the localized region have gone largely unnoticed and unexplored. Notwithstanding these limitations, atomic force microscopy (AFM) has been used to explore the nature of shear band deformation in MR materials during stress relaxation. Stress relaxation at a constant low strain of 0.01% and an oscillatory shear of defined test duration played a major role in the creation of the shear band. In this analysis, the localized area of the study defined shear bands as varying in size and dominantly deformed in the matrix with no evidence of inhibition by embedded carbonyl iron particles (CIPs). The association between the shear band and the adjacent zone was further studied using in-phase imaging of AFM tapping mode and demonstrated the presence of localized affected zone around the shear band. Taken together, the results provide important insights into the proposed shear band deformation zone (SBDZ). This study sheds a contemporary light on the contentious issue of amorphous shear band deformation behavior and makes several contributions to the current literature. Introduction The development of magnetorheological elastomer (MRE) materials and their advancement over the years of breakthrough in materials science has had a significant influence on the material revolution. Categorized as intelligent and receptive solid materials, MRE has properties that can be substantially modified by external magnetic stimuli. However, MRE is still not as common as other viscoelastic materials despite its smart characteristics, primarily due to substantial limitations in the current experimental and theoretical research on these materials [1]. It is only recently that evidence of a qualitative link between the theoretical mixture model and magnetorheological fluids (MRF) experiments has been obtained [2]. MRE consists of a mixture of two materials of different kinds. It consists of magnetizable particles, such as iron powders or carbonyl iron particles, which are immersed in an elastomeric material, and during the curing process, the material will transform into the desired shape [3]. The initial microstructural condition of the cured mixtures plays important role in determining the performance of the MRE. Microscopic-scale analysis has successfully observed this initial microstructure configuration and alignment of the particle and how it contributed to the enhancement of the field dependence of the mechanical properties of MRE when subjected to the applied force and magnetic field [4][5][6][7]. Moreover, the microscopic analysis also is the art and science of examining the mechanism of failed components to determine the cause of failure. It is one of the major steps in the process of post-failure analysis [8][9][10][11][12]. Therefore, few methods have been used to observe the microscopic analysis including optical microscopy. To date, optical microscopy was a popular method of identifying the microstructure of materials. However, more advanced and promising tools have recently been introduced for highly detailed microscopic images, including electron and scanning probe microscopy [13][14][15][16][17]. 
The use of microscopy analysis has been extensively used to explore in the study of post-failure rheological features of the MRE due to the precision of the measurement and the highest image resolution quality [18]. At the microscopic scale, forces in MRE are defined using micro stress and strain, and the distribution of stress applied during the test is the most commonly studied parameter for both elastic and plastic deformation of MRE. There has been an increasing number of publications [8][9][10]19] concentrating on a microscopic scale in MRE, with micro stresses ranging from one part of a molecular chain to the other. Additionally, it is very important to demonstrate the rheological behavior of MREs, particularly the time-dependent rheological nature of MRE and stress releases under constant strain, known as stress relaxation properties in detail microscopic analysis, in order to design a material that has good durability. Stress relaxation tests are, therefore, important to demonstrate the rheological viscoelastic behavior of MRE durability and to provide guidance on their application [20]. Previously, studies related to stress relaxation have attracted significant critical interest in natural rubber, modified rubber, nanocomposites, and amorphous solid [21][22][23][24][25][26][27]. However, in the stress relaxation investigation based on MRE, only a few have been published [20,28]. Generally, the molecular structure was in the strained condition for some finite duration during the stress relaxation process, resulting in some amount of strained microplastic [26]. Micro plasticity deformation then took place through a series of local reorganizations. However, due to their localization at a significantly low strain value, the region only deformed plastic in a very narrow area, resulting in the development of the shear band. Much attention has been paid to the shear bands phenomena, especially in metallic glasses, amorphous solids, and granular materials, because of their key feature, which controls the process of plastic deformation [8,9,11,26,[29][30][31][32][33][34]. The susceptibility of shear band propagation in MRE limits the extent of exploitation of their use in the application of engineering and limits their scope of application to non-primary structural components. By decreasing the mechanical properties and limiting the component's longevity, the shear band can precede failure. Therefore, several studies [8][9][10][11]19,29,31,[35][36][37][38] have examined the formation of shear bands in solid amorphous materials. Analytically, Dasgupta et al. [9] demonstrated the mechanism for shear localization under shear stresses in metallic glasses, which was the appearance of the structured plastic flow of the amorphous solid into a shear band. The research was also successfully evaluated and demonstrated that the shear banding mechanism could only occur at stress values that surpass the yield stress. Meanwhile, another investigation by Cao et al. [35] on understanding the nature of plasticity has shown that the shear band that occurred in the granular system (container roughened by glued glass particles) consisted of significant plastic behaviors and consequences of the shear band regime. Evidence from the experimental analysis of Hamm et al. [31] on randomly packed granular medium with brass beads, identified the form of the shear bands in the early stages and the evolution of the shear bands was found to be discontinuous. 
At a longer duration, Shen et al.'s [11] study on ferromagnetic metallic glass found that the shear-band-affected zone consisted of a nanoscale shear band, a micrometer-scale severely deformed zone in the vicinity of the shear band, and an extended strain gradient region of tens of micrometers. Surprisingly, no numerical, modeling, or experimental analyses have yet been conducted on the shear band mechanism and deformation in MRE. The debate continues about the best strategies for the characterization of the shear band, and to date there has been little agreement on its precise nature. The available literature reveals that no previous study has investigated shear band formation under stress relaxation; this makes the present shear band study all the more distinctive. In the literature on shear band deformation, the relative importance of morphological observation has been subject to considerable discussion. A few published studies [8,10,37,38] have attempted to describe the nature and development of the shear bands and their structural evolution utilizing microscopy instruments. A recent study by He et al. [8] used transmission electron microscopy (TEM) to examine the local conditions in shear bands. Nevertheless, TEM has its downsides, regardless of its advantages. For electrons to pass through the sample, it is necessary to cut the sample thin enough, while it must still withstand the analysis process. Thus, only partial information on the spatial distribution of shear bands within the matrix can be obtained, and a single TEM image has minimal sensitivity to depth. On the other hand, a few studies [11,12,18,23,39-41] have proven the use of the atomic force microscope (AFM), which offers additional capabilities and advantages over other microscopy instruments. The study of Shen et al. [11] may have been the first recorded evaluation of the shear band using AFM. A comprehensive review by De Sousa et al. [12] summarized that AFM can also evaluate the fundamental properties of sample surfaces, including elastic properties, and effectively distinguish between phases of the scanned region. Another advantage of AFM is that it offers access to material morphology without the need for rigorous sample preparation. However, no AFM application has been implemented to date to better understand the shear band phenomenon in MRE. Therefore, motivated by the distinctive characteristics of the shear band that has undergone stress relaxation, AFM is used here to evaluate the additional features of shear bands after deformation in MRE. Moreover, a unique finding of an exclusive shear band deformation zone (SBDZ) is thoroughly examined using AFM. The aim of this study was to determine the shear band characteristics in a system caused by localized microplasticity. The deformation characteristic of the shear band within the elastic region of the material was the most obvious finding to emerge from the analysis. Since the effects of the shear band on the neighboring matrix were not well understood, AFM images may provide information that is hidden in a topographic image.

Materials and Methods

An MRE with a silicone rubber matrix and carbonyl iron particle (CIP) filler was prepared for the stress relaxation investigation and for evaluating the topography of the failure mechanism region. The sample was manufactured using the traditional method by means of a cylindrical closed mold.
At a controlled speed of 200 rpm, the soft CIP (d50 = 3.8-5.3 µm, CC grade, supplied by BASF, Ludwigshafen, Germany) was mechanically stirred into silicone rubber (NS625tds), supplied by Nippon Steel Co., Tokyo, Japan, at a room temperature of 25 °C. With the application of a 0.1 wt% curing agent, the ready mixture of the 30 wt% matrix and 70 wt% CIP was left to cure. The curing took 2 h to generate a sample for the experiments. Finally, a circular disc-shaped MRE sample was cut out of the prepared MRE disc sheet with a hollow hole punch tool, to a diameter of 20 mm and a nominal thickness of 1.2 mm, for dynamic oscillatory shear testing. Importantly, the sharp, sturdy punching process has no effect on the sample, because the sample is soft and easy to cut; as a result, there was no chance of internal stresses affecting the sample as a consequence of the very low load applied during punching. The characteristic of resilience was obtained as a measure of resistance to microstructural shear deformation under fluctuating stress. The stress, strain, normal force, frequency, and test duration parameters were confirmed by preliminary rheological experiments. The storage modulus was determined for the specified sample test geometry from the measured relationships of established torsional oscillatory shear theory. Stress relaxation is usually defined in terms of stress decay over time under constant strain conditions, and recent advances in stress relaxation methods have facilitated the investigation of the behavior of the MRE. The MRE sample was tested in torsional shear mode and measured with an oscillating parallel-plate rheometer (Physica MCR 302, Anton Paar Company, Graz, Austria). The rheometer was set to the desired test conditions (temperature, force, and gap) and a rotary parallel-plate disc (pp20 rod) was mounted before the investigation. Once the sample was placed on the stationary base mount of the rheometer, it was preloaded to prevent it from slipping. Within the scope of this study, the strain level of 0.01% was decided on the basis of our earlier rheological study of the sample; the value was obtained from the determination of the linear viscoelastic (LVE) limit. Based on the rheological results of the previous investigation, the loss modulus has no significant impact on the behavior of the MRE in the LVE region. The consequence of this elastic condition is that the storage capacity is dominant, with an extremely low value of the loss modulus and of the loss factor for energy dissipation. Throughout the test, the shear deformation was kept constant at 0.01%; this is highlighted as a first attempt at a stress relaxation test for MRE closest to its state of rest, following Newton's first law, regarded as a condition of equilibrium. A constant test frequency of 1 Hz was set, preferably to replicate the actual working condition in applications and to take into account the shear velocity gradient during the test. To allow a broader range of behavior to be observed, the time interval of each test condition was set at every 4000 cycles, and the total length of the test reached up to 115,000 cycles. Morphological observation of the MRE sample was further investigated by tapping-mode AFM.
Using a NanoSensors tapping-mode monolithic-silicon AFM probe with a single-beam cantilever, supplied by BudgetSensors, Sofia, Bulgaria, this open-loop mode was operated on a NanoWizard 3 NanoOptics AFM (JPK Instruments, Berlin, Germany). The cantilever had a nominal length of 125 µm, a nominal force constant of 40 N/m, and a resonance frequency of 300 kHz. The cantilever's uncoated tip has a rotated shape with a height of 17 µm and a radius of less than 10 nm. The initial scan area was set at 100 µm, and the comprehensive evaluation of the sheared sample with a specific failure mechanism was performed at a scan area of approximately 20 µm. Bundled analysis software was used for the calculations and phase images, while AFM images were processed using the JPK Instruments data processing software (Version SPM-5.1.8).

Results

The consistency of the MRE sample was first assessed by its ability to elastically store deformation energy, through characterization of the storage modulus. As shown in Figure 1, the continuous stress applied at an identical constant strain steadily diminished the proportionality activity of the MRE, and the covalent cross-linkages within the matrix molecular structure undermined the ability of the stretched chains to return. As shown by the negative slope of the linear correlation over the overall test cycles, the storage modulus displays a decreasing pattern. These results were somewhat surprising, considering that the energy storage capacity within the elastic region could still be impaired even though the test was performed under constant strain. The results showed that the ability of the MRE to store deformation energy decreased marginally. There are several possible explanations for this result. The most important is related to the relaxation of stress in the matrix's amorphous molecular structure. On the molecular scale, stress relaxation could have occurred and been involved in the alteration of the molecular chain structure. In addition, stress relaxation phenomena in MRE theoretically occur through a number of mechanisms, including cross-link disengagement, elastic stretching, inelastic deformation, structural shift by phase transformation, structural rearrangement due to rupture, separation of microphases, microplasticity, and finally the nucleation of shear bands by localized strain. The graph plotted in Figure 1 simultaneously indicates that the comparability evaluation of this activity corresponds to the durability of the early phase and of the final interval range. For the start of the evaluation, a smaller interval was chosen to ensure that the onset of the shear bands was captured as early as possible; choosing a broader interval during the initiation of stress relaxation is risky and may miss the occurrence of shear band nucleation. An extended comprehensive test period for the MRE stress relaxation evaluation could be introduced once the development of shear bands had become adequately consistent. In order to understand the complete cycle of MRE resilience to stress relaxation, while the shear bands developed consistently, the test was carried out continuously over a longer test-cycle duration. The continuing concern within the LVE region of MRE is the impact of the persistent low strain of 0.01%.
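To make the slope reading of Figure 1 concrete, the sketch below fits the linear correlation G' = a·N + b to storage-modulus readings sampled at the test intervals. The numbers are hypothetical stand-ins, not the measured data; they are chosen only to reproduce a drop of roughly the magnitude reported next.

```python
import numpy as np

# Hypothetical storage-modulus readings (kPa) sampled every 4000 cycles;
# the real values would come from the rheometer log of the 0.01% strain test.
cycles = np.arange(0, 120_000, 4_000)
G_storage = (100.0 - 8.5e-5 * cycles
             + np.random.default_rng(0).normal(0, 0.3, cycles.size))

# Linear correlation G' = a*N + b; the negative slope a quantifies the
# gradual loss of elastically stored deformation energy during relaxation.
slope, intercept = np.polyfit(cycles, G_storage, 1)
loss_pct = -slope * cycles[-1] / intercept * 100
print(f"slope = {slope:.3e} kPa/cycle, modulus drop ~ {loss_pct:.1f}% over the test")
```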
These findings indicated that stress relaxation plays an important role in the MRE life cycle and revealed that the storage modulus decreased by approximately 10% over the total specified test duration. In MRE stress relaxation analysis, this cycle count was the longest ever recorded compared to the previous stress relaxation study [28]. The investigation by AFM in-phase imaging allowed us to assess the dispersion of the micro-scale CIP within the elastomeric matrix. The AFM images of the sheared sample at 115,000 cycles are shown in Figure 2. After stress relaxation of the stated duration, the particles dispersed through the matrix can be seen protruding. The particle topography image in Figure 2a was obtained with a drive amplitude of 0.075 V at a frequency of 263.916 kHz and a tip velocity of 24.68 µm/s over a scanned area of 100 µm × 100 µm. At this stage, a tiny line of shear band formation can be seen in the matrix within the area with fewer protruding particles on the left side of the image in Figure 2a. Further, the in-phase channel simultaneously reflects the modulus of the individual domains within the multi-domain (soft and hard) structure of the MRE resulting from the stress relaxation shear durability test. The lower modulus of the softer domain is presented in a darker color, contrasting with the lighter appearance of the harder domain. The phase image in Figure 2b was obtained at a phase shift angle of 165°. The variations throughout the phase image also indicate some kind of boundary between the soft and harder domains. In Figure 2c, the image has been processed with the edge detection feature to reveal the edges of the particles. CIP appears to be evenly distributed over the matrix and varies in size. Using the measurement tools in the software, it was found that the average diameter was 1.5 µm for the smallest particles, 3.4 µm for the medium particles, and 5.8 µm for the largest particles. The measurements were taken in the phase image at more than ten locations and validated against the specification of the CIP manufacturer. The present study was designed to determine the characteristics of the shear band resulting from localized microplasticity. The most obvious finding to emerge from the analysis is the deformation of the shear band within the elastic region of the material. This is also consistent with the observations through FESEM, as shown in Figure 3, which showed that shear band deformation involves several stages and presents different physical characteristics from nucleation, Figure 3a, to a longer test duration, Figure 3b, where the formation of shear bands can be observed distinctly. This result, however, has not previously been described using AFM. AFM images may reveal information that is hidden in a topographic image, which may be attributed to the fact that the effects of the shear band on the neighboring matrix are not well understood. The elastic behavior of the matrix itself originates from the cross-linking process in the amorphous molecular chain. This event happens in a very localized region. The continuous shear load has repeatedly smoothed the chains, causing the elastic limit in this region to be exceeded. A region that cannot easily be reconfigured, with fewer cross-linkages and an amorphous structure within the matrix domain, gives rise to a micro-sized shear band.
A consideration of the micro-mechanism is important for a deep understanding of the behavior of the shear band in the amorphous solid. This stage of the research reveals the system or mechanism that influences the progression of the expanding shear band. It can thus be suggested that the ability to store deformation energy elastically, identified and measured through the storage modulus in the experiment, could be related to the shear band mechanism. It is possible to hypothesize that these conditions are likely to occur in the shear band deformation zone (SBDZ), as illustrated in Figure 4. In this zone, there is the formation of the sheared surface of the main shear band, micro-plastic dissipation due to micro-plastic zone formation in the matrix around the shear band edges, translation of the micro-plastic zone, and a secondary shear band that may develop parallel to the main shear band. Based on the schematic in Figure 4a, the CIP embodied in the silicone matrix in the region closest to the shear band has produced a secondary interaction. This interaction, developed between the CIP surface and the localized enclosed matrix region, provides stronger bonding than the matrix deformation zone itself. As a result, the CIP is restricted from any movement or dislocation. The distributed micro-stress under continuous shear was then concentrated in a less effective localized area of the SBDZ. Hypothetically, the SBDZ consists of regions of varying cohesion. These variants can refer to the cohesive band surrounding the permanently deformed shear band. Oscillatory shear in the direction shown in Figure 4b created stronger cohesive strength in the zone and drastically increased the shear band width. Coherent intensity at the higher-contrast SBDZ repeats the process indefinitely, and the shear band thickness eventually depends heavily on the degree of initial inadequacy [42]. The lightest-contrast zone of the SBDZ is then replaced as the shear band thickness increases. Areas with several nearly parallel shear bands are created by micro-plastic deformation. The micro-plastic flow in the SBDZ and the permanent micro-plastic formation of the inner portion of localized molecular chains depend on the state of the molecular structure and the cross-linking ability. Molecular structures with a similar elastic limit would form shear bands in a similar pattern, parallel to each other; however, localized regions with different elastic limits promote compounded interrelations of shear bands. Both conditions can be observed in the AFM topographic image in Figure 5. The scanned-area images (100 µm × 100 µm) have a resolution of 512 × 512 pixels at a line rate of 0.157 Hz. Another potential cause for this difference was the stress relaxation mechanism, which softened the molecular chains over a wide range of time. Unlike a process of increasing strain, stress relaxation is a very slow process of modifying the atomic arrangement. Shear plasticity deformation under this condition was believed to occur through a series of local reorganizations in a smooth and consistent manner; the material had enough time to react to this very slow process. As the strained positions deformed plastically at a very slow rate and only in a very narrow area, non-homogeneous deformation was observed. Consequently, the generally viscoelastic character of the matrix limits the formation of the microplastic area.
A detailed observation of the shear band shows that it expresses itself within an outer region belonging to the elastic matrix domain, which is elastically strained since the stress in that domain falls below the elastic limit. Splitting of the softening chains, tolerated by the harder elastic matrix domains, resulted in microphase separation between elastic and microplastic deformation. The microphase represents the micro-sized domain of the matrix, and the shear band falls within this phase separation. The separation spacing was measured, as shown in Figure 6, using the AFM cross-section image-measuring features. The deformed shear bands were measured with thicknesses ranging from 600 nm to less than 1.2 µm, with the smaller range identified for populations within the lower-amplitude scale area. In addition to measuring the drive amplitude, the in-phase images were mapped out to present the individual qualitative domain moduli systematically. Figure 7 shows the phase images of an approximately 30 µm × 30 µm scanned area of the shear band deformation. The images have fewer pixels and lower resolution for the smaller scanned area (30 µm × 30 µm); at a similar line rate (0.157 Hz), the average value was approximately a 160 × 160-pixel image. The phase images are presented in both two- and three-dimensional views to provide different viewpoints for more accurate characterization. Phase-imaging AFM allows us to differentiate and identify the hard and soft domains of the scanned area. For instance, Figure 7a,b display shear bands of nano- to micrometer thickness, and each shear band is designated as a soft domain, whereas the nearby matrix in the SBDZ appears to be a harder domain. The spacings between shear bands were approximately similar. The lighter color implies a higher-modulus domain of the matrix, whereas the lower-modulus domain of the shear bands appears darker. The mechanism in the SBDZ was then confirmed by a thorough observation of the shear band phase image, suggesting a decreasing contrast in the shear profile of each band. In Figure 8a, one of the peculiar characteristics of the shear band can be identified from the edges formed by the nano-scale phase separation during this stress relaxation process. The edges were formed during the test, where the fluctuating stress broke the chains along the cross-linked, soft-dominated domain. The nano-sized and variable-thickness spacings between neighboring bands, as shown in Figure 8b, were other distinct characteristics due to stress relaxation. The strain was constant and optimally sheared throughout the test; however, in this region, the localization of strain was not persistent and contributed to the uncertainty of the plastic strain process, thus promoting a similar level of uncertainty in the shear band shape. The localization of the strain had become apparent by the end of the test, with the sighting of a thicker shear band, as shown in Figure 8c. Nevertheless, due to the heterogeneity of the localized stiffness of the cross-linked molecular structure of the matrix, a few isolated shear bands of small thickness can be observed within the region. Meanwhile, shear band deformation was not solely observed to follow the maximum localized shear stress, but was also affected by the maximum normal stress over the shear yielding or shear band formation. Therefore, as shown in Figure 8d, a more complex shear band population was disclosed by the AFM morphological observation.
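As a schematic counterpart to the cross-section measurement in Figure 6, the sketch below estimates a band width from a single AFM line profile. The thresholding rule and array names are assumptions standing in for the manual cursor measurement performed in the JPK software.

```python
import numpy as np

def band_thickness(x_um, height_nm, threshold=0.5):
    """Estimate shear-band width from an AFM cross-section line profile.

    x_um      : lateral positions along the section line (um)
    height_nm : height (or phase) profile crossing a single shear band
    The band is taken as the region where the background-subtracted dip
    exceeds `threshold` of its maximum depth.
    """
    dip = np.median(height_nm) - height_nm      # depth below the background level
    mask = dip > threshold * dip.max()          # points inside the band
    edges = np.flatnonzero(mask)                # assumes the profile crosses one band
    return x_um[edges[-1]] - x_um[edges[0]]     # band width in um
```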
Conclusions

This study set out to gain a better understanding of shear band deformation in an MRE amorphous matrix that had undergone stress relaxation. It investigated the nature of the shear band by employing AFM as a novel method of measuring the features of the deformed shear band and the surrounding region. Under stress relaxation, shear banding in the MRE is primarily formed in the matrix, with no evidence of CIP inside the microphase shear band separation. AFM cross-section features were used to measure shear bands ranging in thickness from 300 nm to less than 1.2 µm. The majority of shear bands are nearly parallel, with only a few having complex interrelationships, which may be due to the effect of maximum normal stress during the stress relaxation process. The evidence has demonstrated that AFM in-phase images indicate the individual qualitative domain moduli within the shear band deformation region. In particular, the insights gained contribute to the enhancement of fundamental knowledge on the shear band in a number of ways and serve as a foundation for the proposed SBDZ mechanism. One of the study's key strengths was the use of AFM in the morphological analysis of single shear band characteristics and of the deformation zone formed by stress relaxation. The method offers valuable insights into the nano-scale shear band deformation region in an effectively three-dimensional view. The proposed novel mechanism is useful for developing a framework to explore the complex relationship between the evolution of the MRE shear band and durability performance under stress relaxation. In the current study, the MRE used was a solid and the strain was applied purely in the elastic region, so the model only included the Hookean elastic spring together with the Maxwell model, as the sample was subjected to stress relaxation. Physical-mathematical modeling of the MRE will be considered as a possible next step in our investigation of this phenomenon, adding new knowledge to this field.
6,665.4
2021-08-01T00:00:00.000
[ "Materials Science", "Physics" ]
Scrum: An Agile Software Development Process and Metrics A traditional software development process such as the Waterfall Model works best in a stable environment, but it is not flexible when it comes to change. There is a gap in the interaction between the users and the development team, which leads to incomplete and misunderstood specifications. Because of this, the end product is sometimes a surprise to users, and this gap accelerates incorrect development of the software product. Once requirements are frozen, there is no scope for accepting changes. There is a need for a framework which holds the solution for all these situations. With this premise, the agile development methodology came into existence. Scrum, an agile approach, supports continuous collaboration among the customer, team members, and other stakeholders. Its time-boxed approach and continuous feedback from the product owner ensure the development of a working product with essential features at all times. This paper explains the agile software development approach, its proclamation, and the different frameworks of the agile approach, and further illustrates the most widely used framework: Scrum. This research paper covers the implementation and application of Scrum. It focuses on why Scrum is preferred over the Waterfall Model with the help of some survey results, and later discusses some Scrum metrics which are helpful in, and account for, the best Scrum practices in achieving the goals set by the software development team, the product owner, and the customers. The outcome of this study shows that Scrum metrics are critical and highly valuable for successful product development. The quantitative insight that these metrics provide for the Scrum Team, Product Owner, and stakeholders is necessary for achieving strong project dynamics and optimal results.

Introduction

The software development industry works with many methodologies to deliver its best results. Earlier development methodologies, such as the Waterfall Model and the Sequential Model, delivered the product to the client or stakeholder only at the final stage [1][2][3]; by then the technology could already be outdated, and the client would be seeing the product for the first time. It might be very different from their expectations. This is the Watermelon Situation, where the project seems green (all right) from the outside but is actually red (problematic) inside [4]. The Waterfall Model is the first process model used for software development. It is divided into separate phases with no overlapping: one phase acts as input for the next, and any phase begins only when the previous phase is completed. It is a linear sequential model, like a waterfall. Its stages are Requirements, System Design, Implementation, Integration and Testing, Deployment, and Maintenance [5]. In this model, the output becomes available only towards the end phase. If the client is not satisfied, it is a total waste of time, money, and effort, as no changes are possible at this stage. It is not suitable for projects in which requirements keep changing with the market scenario [3]. It is a well-known fact that technology keeps changing, and with the changing requirements of technology, using the traditional model for the software development process is not apt. The past few years have witnessed huge growth in app development. Within a period of six months, companies launch new mobile handsets with different techniques and features [6]. Thus, if the Waterfall Model is followed, it will end up with an outdated product in the market.
An approach or framework which can be a solution to all these problems during the software development process is Agile Software Development [7,8].

Agile approach of software development

In general terms, Agile can be defined as the ability to move quickly and easily. This approach was discussed in February 2001 in Utah, when a team of software developers met to find a solution for a situation that occurs every day in the fields of marketing and management and among external and internal customers and developers who don't want to make hard trade-off decisions, so irrational demands are imposed through corporate power structures [9]. They published the Manifesto for Agile Software Development to bring up better ways of developing software by doing it and helping others to do it. They value Individuals and Interactions over Processes and Tools, Working Software over Comprehensive Documentation, Customer Collaboration over Contract Negotiation, and Responding to Change over Following a Plan [9]. The Manifesto [9] consists of 12 principles. Scrum is thought to be the most popular framework for implementing the agile approach [11]. It is a lightweight, iterative process used to manage complex software development. Fixed-length iterations, called sprints, last for one to two weeks and allow the team to ship software or product on a regular pace. Scrum follows a set of roles, responsibilities, and meetings that never change, and organizes four ceremonies that give structure to each sprint: Sprint Planning, Daily Stand-Up, Sprint Demo, and Sprint Retrospective. During each sprint, the team uses visual artifacts like task boards or burndown charts to show progress and receive incremental feedback. There are three roles in Scrum: Product Owner, Scrum Master, and Scrum Team [14]. Ken Schwaber defined "Scrum as a framework for developing complex products and systems. It is grounded in the empirical process and control theory. Scrum employs an iterative and incremental approach to optimize predictability and control risk". In an iterative approach, the outline of a sketch is drawn first and then improved step by step until it reaches the perfect picture. In an incremental approach, the picture is drawn piece by piece over a period of time. Fig. 1 displays an iterative approach, where the idea of designing a red Jaguar car is conceived and moved towards the finished product, and an incremental approach, where the finished product is conceived and built up a bit at a time until it is reached. The combined iterative and incremental approach designs a product with a gradual increase in feature additions, in a cyclical release and upgrade pattern. Scrum follows both the iterative and incremental approaches: in the first go it has one part of the iteration and the increment, then perfects this piece, and then adds another. The product vision starts with the stakeholders, the customers, the users, the senior managers, and the management team. The other people involved are the Product Owner, the Scrum Master, and the Scrum Team. The Scrum process starts with the Product Owner. He gets input from the end users, customers, and the other stakeholders and comes up with the Product Backlog, the list of features prioritized by the Product Owner in the sequence of business requirements.
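As a toy illustration of a prioritized Product Backlog and the sprint "pull" described next, the sketch below uses hypothetical stories and point values; none of the names or numbers come from the paper.

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    title: str
    business_value: int      # priority assigned by the Product Owner
    story_points: int        # team's effort estimate

# The Product Owner keeps the backlog ordered by business value; the
# entries below are illustrative, not from any real project.
backlog = [
    UserStory("User registration", business_value=90, story_points=5),
    UserStory("Checkout flow", business_value=80, story_points=8),
    UserStory("Profile page", business_value=40, story_points=3),
]
backlog.sort(key=lambda s: s.business_value, reverse=True)

def pull_sprint_backlog(backlog, capacity_points):
    """Pull the highest-priority stories into the sprint, up to team capacity."""
    sprint, used = [], 0
    for story in backlog:
        if used + story.story_points <= capacity_points:
            sprint.append(story)
            used += story.story_points
    return sprint

print([s.title for s in pull_sprint_backlog(backlog, capacity_points=10)])
```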
So these features, which are also called User Stories, are prioritized at the start of the project. The Product Backlog is always live, and the stakeholders keep adding things to it in the form of User Stories. The Product Owner is the only person who maintains the Product Backlog, and he makes whatever changes the users want in it. A Scrum Team typically consists of 7 ± 2 people. If the project is too big, the entire team is broken into multiple Scrum Teams. The teams have to be cross-functional, comprising requirements people, designers, coders, and testers. The team is self-organized and self-managed, and there is no project manager role in a Scrum project. The team organizes everything within itself and commits to how much it will produce within the sprint, which is the first level of commitment; team members also commit every day in the Daily Scrum meeting to what they will do the next day, and report what they did the previous day. So the commitment happens continuously, through self-organization, self-management, and commitment to the product and to their responsibilities. A sprint is the fixed period of time that the team commits to working for in the course of developing the product. The sprint duration is typically 1 to 4 weeks. Once it is fixed for a project, say at 4 weeks, it is not changed from sprint to sprint until the final deliverable of that project. Thus it is the fixed duration of time for which the team commits to work and deliver working software at a sustainable pace. From the Product Backlog, the Scrum Team pulls a small, manageable chunk of the highest-priority items into the Sprint Backlog, and this happens for every sprint. For example, if the sprint is 4 weeks, this pull happens once every 4 weeks. In the Sprint Backlog, the tasks are broken down to the activity level, all the activities are scheduled, and the team then carries out the requirements, design, coding, and testing of those features. The Scrum Meeting is primarily for the Scrum Team, the Scrum Master, and the Product Owner. The team has a short meeting to update each other. As this is a self-organizing team, nobody distributes the work; the team members know what work has been done and update each other every day on the progress of the project as well as on any impediments or blocks. They usually stand up so that the meeting finishes faster. The team members stand in a circle looking at each other, and everyone can hear whatever anyone says. They quickly state what is completed, what they plan to do next, and what the impediments are. The Scrum Master notes the blocks, and that is the primary responsibility of the Scrum Master. After the Scrum meeting is over, the Scrum Master's responsibility is to do away with all those blocks. The Scrum Master cannot allocate work like a traditional project manager; the Scrum Master's role is to protect the team from any external or internal disturbance. He also teaches or guides the team in using Scrum. It is the responsibility of the Scrum Master to train the Product Owner and the Scrum Team in the values, principles, and practices of agile. It is not only about training: if something goes wrong and people do not follow the process properly, it is the responsibility of the Scrum Master to make sure people follow the processes properly, not only getting trained but also practicing it. Every day the team uses a Burndown Chart (Fig. 3). This chart is visible to everyone.
It is displayed in the working area of the Scrum Team. In the burndown chart, there is one planned line and one actual line. From the Sprint Backlog list of all the tasks, it is determined primarily how much time is left to complete the sprint; how much time is required to complete the project is even more important. It estimates whether the project is on time or behind schedule. At the end of every sprint, two reviews are done. The first one is the Sprint Review Meeting, in which the Product Owner, the Scrum Team, the Scrum Master, and the customer and stakeholders of the product all come together and see a demo of the working software. The actual working software, or the product functionality of this sprint, is shown to the users. This is primarily a product review. The users of the product give feedback, and usually the Product Owner captures all the feedback and updates the Product Backlog. The second meeting, after the Product Review Meeting, is the Retrospective Meeting. The Retrospective Meeting is only for the Scrum Team, the Scrum Master, and the Product Owner. These people meet at the end of each sprint to review the way of working, primarily as a process review: what went wrong from a process perspective, what was right, and how to improve. The Sprint Review is the product review and the Sprint Retrospective is the process review. In the Retrospective Meeting, the entire Scrum process is reviewed, re-evaluating what was right and what went wrong. The aim of the team is to complete 100% of what it has committed to, ideally as an increment which should be potentially shippable for the product. The word potentially is very important: the final deliverable is working software, and if required it should always be possible to ship it to the customer, who should be in a position to use it. It should be potentially shippable to the client, and it needs to be working software delivered at the end of every sprint. Every Potentially Shippable Product Increment has to be functionally designed, implemented, and fully tested, with no major bugs. In the end, the team comes out with a product increment. These are not just documents but product increments, working products, built piece by piece and then integrated in line with the vision.

Survey results

The report by Sam Swapn Sinha [15], CEO, Strategism Inc., on the Forbes Technology Council analyses "Does Scrum Live Up to Its Hype?". Steve Dunning [15] details how a 100 mpg car was developed from scratch in three months by using Agile in manufacturing. It is also shared that Fortune 100 companies in the United States use Scrum and Agile in software projects. The 2015 State of Scrum Report shows that 95% of the 4,452 people surveyed confirmed they will continue using Scrum in the future [15]. Agile projects are successful three times more often than non-agile projects, according to the 2011 CHAOS report [16] of the Standish Group. The report states: "The agile process is the universal remedy for software development project failure. Software applications developed through the agile process have three times the success rate of the traditional waterfall method and a much lower percentage of time and cost overruns." [16] The Standish Group defines project success as on time, on budget, and with all planned features. They do not report how many projects are in their database but say that the results are from projects conducted during 2002-2010. The following graph (Fig. 4) shows the reported results.
Sutherland [12] mentioned that traditional Waterfall project management is a predictive process control system which leads to a large failure rate of 89% in traditional project management. It is a totally inappropriate process control mechanism in an environment where 65% of requirements change during development. In the book "Scrum: The Art of Doing Twice the Work in Half the Time" [12], he explained how the FBI solved the 9/11 tracking problem by moving from Waterfall to Scrum. In a project to integrate all the data needed to track terrorists, after many years with a hundred people and $400M, the US General Accounting Office closed the project because nothing worked and no end was in sight. Then the FBI hired an agile CIO and CTO. They put about 10% of the original staff on Scrum and completed the project for less than $50M. This project is used to run all FBI operations. There are many real-life examples which show that Scrum is the solution in all those areas where the requirements keep changing along with the market. A growing body of literature has shown that traditional methods are not efficient for changing business needs. A long time between project start and go-live causes a gap between the initial solution blueprint and the actual user requirements at the end of the project. A case study was done in a large telecommunications company (350 BI users), and the results of pilot research were provided for three large companies in media, digital, and insurance. Both studies prove that agile methods may be more effective in BI projects from an end-user perspective and give first results and added value in a much shorter time compared to a traditional approach [17]. Reports on large-scale agile transformations reveal that many large organizations are adopting agile software development as part of their continued push towards higher flexibility and shorter lead times [18]. A systematic study was carried out on how Ericsson introduced agile in a new R&D product development program developing a XaaS platform and a related set of services, while simultaneously scaling it up aggressively [18]. A recent review of the literature shows that agile software development is now evident in different industrial sectors. Financial institutions need to act faster in response to quick changes in their business environment. This is driven by the new generation of financial technology companies, which have exhibited substantial improvements in time to market and accelerated software development. To keep pace with such fintech (financial technology) companies, financial institutions are concentrating on implementing agile practices in order to advance their software development processes [19]. In recent years, the rapid development of banks has become more and more dependent on the functioning of bank software systems. The demand for software development is constantly changing, which leaves some banks' software development teams unable to adapt to frequent requirement changes. In order to adapt to frequently changing needs, more and more software development teams use agile software development methods [20]. In Production System Engineering (PSE), many projects conceptually follow the plan of traditional waterfall processes, with sequential process steps and limited security activities, while engineers actually work in parallel, distributed groups following a Round-Trip-Engineering (RTE) process.
Unfortunately, the RTE process applied in PSE is coarse-grained: data are often exchanged via e-mail and integrated seldom and inefficiently, as the RTE process is not well supported by methods and tools that facilitate efficient and secure data exchange. Thus, there is a need for frequent synchronization in a secure way to enable engineers to build on a stable baseline of engineering data. The system is built on Scrum, as an established agile engineering process, together with security best practices, to support flexible and secure RTE processes. Further results show that the augmented RTE process can provide strong benefits from agile practices for the collaboration of engineers in PSE environments [21].

Scrum metrics

The main objective of Scrum metrics lies in predictable software delivery and maximum value to the customer. The goals of Scrum metrics can be divided into three categories based on key performance indexes: (i) to measure the deliverables of the Scrum Team and understand how much value is being delivered; (ii) to measure the effectiveness of the Scrum Team, i.e., its contribution to the business in terms of ROI, time to market, etc.; (iii) to measure the Scrum Team itself, in order to gauge its health and catch problems like team turnover, attrition, and dissatisfied developers.

Scrum metrics - Measuring deliverables

The following metrics help in measuring the work done by the Scrum Team and the value delivered to the customer.

Sprint goal success

A Sprint Goal is the objective that is set to be achieved by the completion of each sprint. Sprint Goals are discussed between the Product Owner and the Scrum Team, and a Sprint Goal must be specific and measurable. A sprint goal process is shown in Fig. 5. Examples are delivering feature X, checking whether the architecture enables the desired performance (addressing a risk), and testing whether a user is willing to register before using the product features (testing an assumption). Selecting a Sprint Goal is possible if the team can answer the following three questions: a. Why do we carry out the sprint? b. How do we reach its goal? c. How do we know that the goal has been met?

Escaped defects and defect density

Escaped defects are the total number of bugs faced by the users. A Scrum Team tries to avoid escaped defects through complete test cases, and a low trend of escaped defects indicates good product quality. The count is calculated as shown in Equation (1):

Escaped defects = number of defects found by users after release. (1)

Defect density is measured as the number of defects per unit of software size, for example per lines of code (LOC). It is calculated as shown in Equation (2):

Defect density = total number of defects / size of the codebase (LOC). (2)

It is more important for fast-moving projects to check whether the growth in defects is "normal" given the growth of the underlying codebase.

Team velocity

Velocity is measured by calculating the number of units of software developed by a team in a sprint. It can be used for planning and for estimating the number of sprints. As the name suggests, it is a metric for Scrum Teams to leverage for internal purposes of continuous improvement. For example, as shown in Fig. 6, a team planned to complete 20 story points in their first sprint. They completed 15 story points and rolled 5 story points over to the next sprint. In the second sprint they planned to complete 10 story points and completed 15 story points (including the 5 story points from the first sprint). In the third sprint they planned to complete 25 story points but completed 20 story points. In the fourth sprint they planned to complete 30 story points and completed 32 story points (including 2 story points from the third sprint).
In the fifth sprint, they planned to complete 25 story points and completed 25 story points. So 21.4 is their average velocity. This velocity is used to make predictions: by knowing the velocity, team members can estimate how long the project will take to complete. It should not be used for any other purpose; otherwise, the benefits of Scrum will be lost by the team and the organization.

The sprint burndown chart

The Burndown Chart represents the progress within a sprint. It depicts the number of hours remaining to complete the stories planned for the current sprint, for each day during the sprint. It shows whether the team is on schedule to complete the sprint objective or not. To create this graph, calculate how much work remains by summing the Sprint Backlog estimates every day of the sprint. The amount of work remaining for a sprint is the sum of the work remaining for the whole Sprint Backlog. These sums are monitored day by day and used to create a graph that shows the work remaining over time. To create the graph shown in Figure 7, the duration is taken as 5 days, the Sprint Backlog as 8 tasks, and the velocity as 80 available hours; that is, 80 hours over 5 days, equating to 16 hours a day. To create the project burndown chart, the data need to be collected as a daily running total, starting with 80 hours, then 64 hours left at the end of day 1, 48 hours left at the end of day 2, etc., as shown in Table 1. The daily progress (Table 2) is then collected in the table against each task. The value collected for each day is the estimated effort remaining to complete the task, not the actual effort spent. The total remaining effort is collected at the end of each day; this is the total (sum) of all the estimated time remaining at the end of each day, as shown in Table 3. When the data are ready, the project burndown chart can be created using the line chart option of Excel (Fig. 7).

Scrum metrics - Measuring effectiveness

The following metrics measure the effectiveness of Scrum Teams in terms of meeting business goals.

Time to market

Depending on the project, this is the time a project takes to start providing value to the customer, which can be calculated as the combined length of the sprints before the Scrum Team releases to production, or the time it takes to start generating revenue, calculated as the combined length of the sprints before release plus additional time depending on the organization's alpha and beta testing strategy. It can be evaluated with the help of the Schedule Performance Index (SPI), shown in Equation (3), which is the ratio of the total original authorized duration to the total final project duration. The ability to accurately forecast the schedule helps to meet time to market and shows the accuracy of schedule estimating.

SPI = total original authorized duration / total final project duration. (3)

Return on investment (ROI)

Return on investment for a Scrum project compares the net benefits generated from a product with the cost of the sprints required to develop it. Multiplying by 100 gives the percentage return for every unit of currency invested, as shown in Equation (4):

ROI (%) = (net benefits / cost of sprints) × 100. (4)

To measure net benefits, a currency value is placed on each unit of benefit, drawing on a variety of measures such as contribution to profit, savings in costs, increase in quantity of output, and improvements in quality. Cost includes the costs to design, develop, and maintain the product, the cost of the product management initiative, the cost of resources, the cost of travel and expenses, the cost of training, other overhead costs, etc.
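A minimal sketch of the ROI calculation in Equation (4), using illustrative figures only (the cost and benefit values are invented for the example):

```python
def scrum_roi(net_benefits, sprint_costs):
    """Equation (4): percentage return on the sprints invested in the product."""
    return net_benefits / sum(sprint_costs) * 100

# Illustrative numbers: three sprints at $20k each producing $90k of
# measured net benefits give a 150% return by this measure.
print(scrum_roi(90_000, [20_000, 20_000, 20_000]))
```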
In Scrum, ROI starts being generated very fast compared with traditional development methods, as working software can be delivered to the customer very early, and the features added with each sprint increase the growth in revenue. Capital redeployment Capital redeployment is measured to check whether the Scrum project is still worthwhile; if it is not, the team should be redeployed to other, more profitable projects. It can be calculated as follows: let V be the revenue value of the remaining items in the project backlog, AC the actual cost of the sprints needed to complete these items, and OC the opportunity cost of alternative product work the team could do. When V < AC + OC, the project should end and the team be redeployed to other projects. Customer satisfaction Customer satisfaction means that customer expectations are met, which requires conformance to requirements and fitness for use. It can be calculated as shown in equation 5. (5) The Customer Satisfaction Index is an index comprising hard measures of customer buying/use behavior and soft measures of customer opinions and feelings. The index is weighted based on how important each value is in determining overall customer satisfaction and buying/use behavior. It includes measures such as repeat and lost customers (30%), revenue from existing customers (15%), market share (15%), customer satisfaction survey results (20%), complaints/returns (10%) and project-specific surveys (10%). Scrum metrics - Monitoring the Scrum Team These metrics help the Scrum Team monitor its activity and identify problems before they impact development. Daily scrum and sprint retrospective The Daily Scrum improves communication within the team, identifies impediments so that early solutions can be found, highlights and promotes quick decision-making, and improves the team's level of knowledge. The sprint retrospective is an opportunity for the Scrum Team to introspect and improve within the Scrum process so as to make the next sprint's outcome more effective. These two events, if carried out regularly with well-documented conclusions, can provide an important qualitative measurement of team progress and process health. Team satisfaction Team satisfaction is an important metric: surveyed periodically, it shows how satisfied the team is with its work and can provide early warning signals about culture issues, team conflicts, or process issues. It can be measured through a survey containing questions about working culture, team coordination, process integration, environment, etc., on a scale from 1 to 100. It is calculated as shown in equation 6. (6) Team member turnover Team member turnover means the replacement of team members in a Scrum Team. A low turnover percentage in a Scrum Team indicates a healthy environment and benefits overall company turnover, while a high percentage indicates the opposite. It can be calculated as shown in equation 7, for instance as Turnover (%) = (Number of members who left / Average team size) x 100. (7) Team productivity Team productivity is also one of the most important metrics to evaluate: it indicates whether the cost of the people involved is worthwhile. The direct method of measurement is to use revenue per employee as the key metric and then divide revenue per employee by the average fully burdened salary per employee, yielding a ratio. This ratio is the average-per-employee "Product Ratio" for the organization as a whole. It is calculated as shown in equation 8: Product Ratio = Revenue per employee / Average fully burdened salary per employee. (8) These effectiveness and team metrics are illustrated in the sketch below.
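A hedged Python sketch of the calculations above follows; the function names and the assumed form of equation 7 are illustrative, not taken from the source.

def roi_percent(net_benefits, cost):
    # Equation 4: percentage return per unit of currency invested
    return net_benefits / cost * 100

def should_redeploy(v, ac, oc):
    # Capital redeployment rule: end the project when the remaining
    # backlog value V falls below actual cost AC plus opportunity cost OC
    return v < ac + oc

def turnover_percent(members_left, average_team_size):
    # Assumed form of equation 7: share of the team replaced in a period
    return members_left / average_team_size * 100

def product_ratio(revenue_per_employee, avg_burdened_salary):
    # Equation 8: the average-per-employee "Product Ratio"
    return revenue_per_employee / avg_burdened_salary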
Scrum reporting - Metrics which report to stakeholders A stakeholder is interested in knowing the progress of a Scrum project and whether it is on track. The following metrics help to communicate this and to explain deviations from the expected project path. 1. Sprint and release burndown: gives a view of the progress at a glance. 2. Sprint velocity: a historical review of how much value has been delivered. 3. Scope change: the number of stories added to the project during the release, which is often a cause of delays. 4. Team capacity: the number of developers in a team working full time, and whether work capacity has been affected by vacations or sick leave, or by developers being pulled off to side projects. 5. Escaped defects: provides a picture of how the software performs in production. Conclusion This paper has explained that Scrum has the power to transform project management across every industry and business. The evidence from the survey results implies that by using Scrum, teams become more agile and succeed in reacting more quickly and responding more accurately to the inevitable change that comes their way. This paper has highlighted the importance of Scrum metrics. Scrum metrics can have a specific purpose and importance in an organization and its teams. A metric is not merely a number; it provides a trend which has to be observed to make it impactful. With the appropriate use of Scrum metrics, organizations can link each measure to a well-articulated goal that the team understands. Scrum metrics provide a powerful tool which can be used to make improvements and help businesses focus their human and other resources. Organizations using Scrum metrics can understand the value of watching the trends and of monitoring over smaller durations in order to understand individual, management and organizational influences, and this helps in making decisions to accelerate or decelerate these influences. Taken together, the present findings on Scrum and Scrum metrics suggest that Scrum could be applied in any field, even across life in general. It is recommended to plan with Scrum and to use Scrum metrics in order to stay focused, collaborating, and communicating, and to accomplish, successfully, what truly needs to be done.
7,042
2019-06-28T00:00:00.000
[ "Computer Science" ]
Estimates of direct and indirect effects for early juvenile survival in captive populations maintained for conservation purposes: the case of Cuvier's gazelle Together with the avoidance of any negative impact of inbreeding, the preservation of genetic variability for life-history traits that could undergo future selective pressure is a major issue in endangered species management programmes. However, most of these programmes ignore that, apart from the direct action of genes on such traits, parents, as contributors to the offspring environment, can influence offspring performance through indirect parental effects (when parental genotype and phenotype exert environmental influences on offspring phenotype independently of additive genetic effects). Using quantitative genetic models, we estimated the additive genetic variance for juvenile survival in a population of the endangered Cuvier's gazelle kept in captivity since 1975. The dataset analyzed included performance records for 700 calves and a total pedigree of 740 individuals. Results indicated that in this population juvenile survival harbors significant additive genetic variance. The estimates of heritability obtained were in general moderate (0.115-0.457) and not affected by the inclusion of inbreeding in the models. The maternal genetic contribution to juvenile survival seems to be of major importance in this gazelle population as well. Indirect genetic and indirect environmental effects assigned to mothers (i.e., maternal genetic and maternal permanent environmental effects) roughly explain a quarter of the total variance estimated for the trait analyzed. These findings have major evolutionary consequences for the species, as they show that offspring phenotypes can evolve strictly through changes in the environment provided by mothers. They are also relevant for the captive breeding programme of the species: taking into account the contribution that mothers make to offspring phenotype through indirect genetic effects when designing pairing strategies might serve to identify those females with a better ability to recruit and, additionally, to predict reliable responses to selection in the captive population. Introduction Juvenile survival is a critical component of population dynamics. In endangered species managed through captive breeding programmes, the survival of juveniles is crucial for population viability. These conservation programmes focus mainly on the preservation of genetic variability to avoid any negative impact of inbreeding. The genetic effect of inbreeding is inbreeding depression: the decrease of individual fitness through reduced fecundity, offspring viability, and individual survivorship (Charlesworth and Charlesworth 1987; Falconer and Mackay 1996). Thus, the management of endangered species in captivity tends to minimize mating between relatives to maximize individual fitness and maintain population viability in the long term. This procedure assumes that the improvement of fitness, or the threats to fitness, are only determined by the probability of individuals carrying alleles identical by descent at a given gene. As neutral markers are assumed to be good indicators of homozygosity, most genetic surveys of endangered populations have been carried out using such molecular tools (Ruiz-López et al. 2009; Godinho et al. 2012), even though they could be poor predictors of genetic diversity in many population scenarios (Hansson and Westerberg 2002).
Undoubtedly, the traits of greatest concern in the conservation of evolutionary potential show quantitative variation among individuals (Garcia-Gonzalez et al. 2012). Components of quantitative genetic variation determine the ability to undergo adaptive evolution and the effects of inbreeding on reproductive fitness. Approaches based on the resemblance of relatives can be used to determine whether endangered populations still show significant additive genetic variation (Falconer and Mackay 1996). Narrow-sense heritability (h²), defined as the proportion of total phenotypic variance that can be ascribed to additive genetic variance (Falconer and Mackay 1996), is the most common within-population measure of genetic diversity used for complex traits (see Charmantier and Garant 2005; Boulding 2008 for reviews). Theory predicts a reduction of heritability after several generations of inbreeding (Falconer and Mackay 1996). Heritability, which determines the evolutionary potential of a quantitative trait (Charmantier and Garant 2005), has been estimated for several life-history traits in wild populations (e.g., Kruuk et al. 2000; Réale and Festa-Bianchet 2000; Wilson et al. 2005; Johnston et al. 2011). However, reports in the literature including estimates of heritability for life-history traits in captive populations of endangered mammals are scant (Pelletier et al. 2009), particularly in ungulates (Ricklefs and Cadena 2008). Juvenile survival, an obvious key life-history trait, has been studied in polygynous mammals, including ungulates. This trait is affected by different factors such as birth weight (Singer et al. 1997), sex (Clutton-Brock et al. 1985), litter composition (Burfening 1972; Ibáñez et al. 2013), maternal characteristics (Pluháček et al. 2007; Ibáñez et al. 2013), demographic parameters (Gaillard et al. 1998), and environmental factors (Singer et al. 1997). In most breeding programmes for endangered species, approaches for the preservation of genetic variability ignore that, apart from heredity, parents, as part of the environment that offspring perceive, can influence their progeny through parental effects. Following Wolf and Wade (2009), parental effects represent the influence of the parents' genotype and phenotype on their offspring's phenotype, independent of additive genetic effects (Kruuk and Hadfield 2007). When there is variation in the quality of the environment provided by the parents, and if that variation reflects genetic differences among individuals, then the environment is partially heritable through the action of these parental effects. These 'indirect genetic effects' (sensu Wolf et al. 1998) are named indirect because the genes producing the effects are expressed in the parent, not in the individual whose phenotype is being measured (Garcia-Gonzalez and Simmons 2007). 'Indirect environmental effects' (sensu Wolf et al. 1998) may also occur when nongenetic (i.e., environmental) influences on the phenotype of one individual (the parents) have indirect effects on the phenotype of another individual (the offspring; Rossiter 1996). The assessment of both genetic and environmental indirect effects has major evolutionary implications and is relevant to captive breeding, as maternal effects include the genetic ability and the nongenetic abilities and strategies available to mothers to influence offspring phenotype, with potentially large-scale demographic results (Mousseau and Fox 1998; Jones 2005; Marshall and Uller 2007; Räsänen and Kruuk 2007).
Information on captive animals is recorded in species-specific databases (called studbooks), representing a wealth of invaluable untapped data for quantitative genetic approaches, as they contain detailed pedigree information rarely available for wild populations (Pelletier et al. 2009). In this study, we used the information recorded in the International Cuvier's Gazelle Studbook to analyze calf survival in the largest captive population of this species, which has been maintained at La Hoya Experimental Field Station (Almería, Spain) for over 35 years. We ran genetic models on this long-term dataset which, while adjusting for systematic environmental effects, took into account the major components of phenotypic variance: the additive genetic component and parental effects. Understanding them and ascertaining their importance to individual fitness requires the implementation of a variance-components approach that can separate additive genetic and environmental effects on the phenotype of focal individuals, as they might have evolutionary consequences for the long-term sustainability of the captive population. Gazella cuvieri (Ogilby 1841), a Sahelo-Saharan species, has declined dramatically since the 1950s (Beudels et al. 2005), and only a few small isolated populations seem to remain in its range (Morocco, Tunisia, Algeria), apparently due to excessive hunting, anthropogenic barriers, and habitat degradation (Beudels et al. 2005). Its captive breeding program began at 'La Hoya' Experimental Field Station (EEZA-CSIC) in Almería in 1975 from four founders (one male and three females; Moreno and Espeso 2008). For this extremely bottlenecked population, one would expect small additive genetic variation for a life-history trait such as juvenile survival (Price and Schluter 1991) and, consequently, (1) a decrease in the response to selection (natural or artificial) for this trait after several generations of inbreeding (Falconer and Mackay 1996) and (2) inbreeding depression, as found by several authors for this fitness trait in this population (Alados and Escós 1991; Cassinello 2005). In this study, we test these expectations. Moreover, the effect of additive genetic variance on phenotypic variation is compared with the contribution of indirect genetic and environmental effects. We also discuss the relative importance of these two drivers of phenotypic variance for the viability of this captive population of endangered Cuvier's gazelles. Study population Cuvier's gazelle (Fig. 1) is a medium-sized, sexually dimorphic gazelle. The average body mass of adult females is over 26 kg, while that of adult males is about 34 kg. Females are fertile at about 8-9 months and males at 12-13 months. The gestation period is about 5.5 months. Twins represent up to 39% of births in this polygynous species (Moreno and Espeso 2008). At the European level, its population is managed through an Endangered Species Programme (EEP) that currently maintains a self-sustaining population. Six institutions (Espeso and Moreno 2012) participate in this EEP, with La Hoya Experimental Field Station (EEZA-CSIC) housing the largest population (currently over 140 individuals). As a general rule, animals at 'La Hoya' are maintained in breeding groups formed by one adult male and five to eight adult females. The adult male is removed from its breeding herd when the first calf is born in the herd.
This is the recommended procedure in the Cuvier's gazelle EEP husbandry guidelines (Moreno and Espeso 2008) to avoid the same male mating the same females in two consecutive breeding seasons. Data for the analyses were extracted from the studbook (Espeso and Moreno 2012). The inbreeding coefficient (F_i), defined as the probability that an individual carries two alleles identical by descent (Wright 1922; Malécot 1948), and the individual increase in inbreeding (ΔF_i; Gutiérrez et al. 2008, 2009), defined as the rate at which inbreeding is accumulated in a given individual due to its own pedigree, were calculated from the pedigree in the studbook using the program ENDOG (Gutiérrez and Goyache 2005), which implements the algorithm described by Meuwissen and Luo (1992). We focus on a critical life-history trait, juvenile survival. In captive populations, as well as in natural ones, the highest mortality occurs among juveniles (Ralls et al. 1979; Kirkwood et al. 1987; Debyser 1995), and in our species mostly up to one month of age (Ibáñez et al. 2013). The trait characterizes the ability of a calf to survive during the period of strict lactation and takes a dichotomous form: live calf (1) and dead calf (0). The available data were edited to remove records in which calf death was due to management (approximately 0.05% of the total deaths), including traumatisms and injuries due to intraspecific agonistic behavior with adults in the herd. The final dataset analyzed consisted of 700 Cuvier's gazelle calf studbook records (Espeso and Moreno 2012). These included all births at 'La Hoya' Experimental Field Station from 1977 to 2012 (an average of 20 offspring per year was recorded). A total of 40 animals without records were included in the pedigree. Terminology The present analysis involves the following main effects: 1. Direct genetic effects (u), that is, the variation of a quantitative trait explained by the genotype of the individual on which performance is recorded. Here, the direct genetic effect refers to the calf. The ratio of the variance explained by the direct genetic effect to the total phenotypic variance will be referred to as 'heritability' (h²). 2. Maternal genetic effects (m), defined as any phenotypic influence from a dam on her offspring (excluding the effects of directly transmitted genes) that affects offspring performance (Willham 1963). Biological mechanisms proposed to explain maternal effects include cytoplasmic (mitochondrial) inheritance, intrauterine and postpartum nutrition provided by the dam, antibodies and pathogens transmitted from dam to offspring, and maternal behavior. Due to their genetic nature for the dam and their environmental influence on the calf, maternal genetic effects are indirect genetic effects. The ratio of the variance explained by the maternal genetic effect to the total phenotypic variance will be referred to as the 'heritability of the maternal effect' (m²). 3. Permanent maternal environmental effects (c), that is, those effects on offspring phenotype shared by offspring of the same mother, independent of additive genetic effects. These are a particular case of environmental effects shared by groups of individuals, for instance effects shared by groups of relatives or by individuals belonging to the same cohort. The ratio of the estimates of this effect to the total phenotypic variance will be termed c².
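For clarity, these variance ratios can be written explicitly. The following decomposition is a sketch assuming the fullest model actually solved below (the Calf-dam-permanent model) and ignoring any direct-maternal covariance term: with σ²_u, σ²_d, σ²_p and σ²_e denoting the direct additive genetic, maternal genetic, maternal permanent environmental and residual variances, the total phenotypic variance is σ²_P = σ²_u + σ²_d + σ²_p + σ²_e, so that h² = σ²_u / σ²_P, m² = σ²_d / σ²_P, and c² = σ²_p / σ²_P.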
Throughout the text, we use the term 'systematic' instead of the term 'fixed' to refer to some of the effects included in the fitted models. Although systematic effects are equivalent to those considered fixed in frequentist statistics, in a Bayesian context, where all effects are 'random' effects, they are not. The difference between 'systematic' and 'random' effects in a Bayesian context is that the a priori function of the former (that from which the effects of the marginal posterior distribution are sampled) is a flat, uniform function, while the a priori function for random effects is Gaussian. Main models Juvenile survival is a discrete, dichotomous trait. The estimates of genetic parameters for dichotomous traits may depend on the population mean for the trait and, theoretically, threshold models would account for the probabilistic structure of categorical data better than linear models do (Gianola and Foulley 1983; Weller and Gianola 1989). But according to several studies in livestock (Goyache et al. 2003; Cervantes et al. 2010), when databases are small there is little incentive for the use of threshold models over linear models, especially with respect to prediction ability. So in this study, genetic parameters were estimated using a Bayesian procedure applied to linear mixed models (Altarriba et al. 1998), with these models classified according to the statistical assumptions on the trait as: 1. Continuous (C) model, assuming that the analyzed trait is a continuous variable with a normal distribution. 2. Threshold (T) model, also called probit (Gianola 1982; Gianola and Foulley 1983; Sorensen and Gianola 2002), which theoretically fits the discrete probabilistic nature of the data better. Under this model, it is assumed that a non-observable underlying variable exists, with the categories of the categorical trait defined by whether this underlying variable exceeds a particular threshold value. We first analyzed juvenile survival by running a complete reference model (equation 1) where offspring survival is treated as a trait of the calf as well as of the mother and of the father; that is, we ran a model including all the possible random effects. This model is, however, irresolvable because the relationship coefficients involved are fewer than the number of parameters to be estimated (Hill and Keightley 1988). Its form is given by: y = Xb + Zu + Md + Ps + Wp + e, where y is the vector of phenotypic measurements of offspring survival; X is an incidence matrix relating the values of y to the systematic-effect parameters given in the vector b; Z is an incidence matrix relating each additive genetic effect to an individual's phenotype, and u is a vector describing the additive genetic effects; M is the incidence matrix of the maternal genetic effects (m), with d as their vector; P is the incidence matrix of the paternal genetic effects (s), with s as their vector; W is the incidence matrix of the maternal permanent environmental effects (c), with p as their vector; and e is a vector of residual effects. The associated (co)variance components are σ²_u, the additive genetic variance; σ²_d, the variance due to m; σ²_s, the variance due to s; σ_ud, the covariance between the direct (additive) genes and the additive genes underlying m; σ_us, the covariance between the direct (additive) genes and the additive genes underlying s; σ_ds, the covariance between the additive genes underlying m and s; and σ²_p, the variance associated with the maternal permanent environmental effects (c). I is an identity matrix, and A is the numerator relationship matrix. Due to the dichotomous nature of the analyzed trait, in threshold models a restriction was imposed so that the residual variance was set to 1 and the threshold to 0.
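As a concrete aside, the pedigree-based quantities used above (the numerator relationship matrix A and, through its diagonal, Wright's inbreeding coefficients F_i = A_ii - 1) can be computed with the classic tabular method. The minimal Python sketch below illustrates that method only; it is not the ENDOG/Meuwissen-Luo implementation used in the study.

import numpy as np

def relationship_matrix(sires, dams):
    # sires[i] and dams[i] give the parent indices of animal i, or -1 if
    # unknown; animals must be ordered so that parents precede offspring.
    n = len(sires)
    A = np.zeros((n, n))
    for i in range(n):
        s, d = sires[i], dams[i]
        # Diagonal: A_ii = 1 + F_i, with F_i = 0.5 * A_sd when both parents are known
        A[i, i] = 1.0 + (0.5 * A[s, d] if s >= 0 and d >= 0 else 0.0)
        for j in range(i):
            a = 0.0
            if s >= 0:
                a += 0.5 * A[j, s]
            if d >= 0:
                a += 0.5 * A[j, d]
            A[i, j] = A[j, i] = a
    return A

# Toy pedigree: animals 0 and 1 are founders, 2 = 0 x 1, 3 = 0 x 2
A = relationship_matrix([-1, -1, 0, 0], [-1, -1, 1, 2])
F = np.diag(A) - 1.0    # inbreeding coefficients; here F[3] = 0.25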
The model includes the following systematic effects in b: year of calving (33 levels, from 1977 to 2012; no records are available for 1996 because no mating took place in that year, and the years 2011 and 2012 were pooled since only 4 individuals were born in 2011), mother parity (2 levels: primiparous or multiparous), age of the dam at calving in days, as a linear and quadratic covariate, and litter composition [6 levels: F, M, F(F), F(M), M(F), M(M), where M and F mean male and female, respectively, and sibling sex is given in parentheses for twins]. As fitted, litter composition accounts for the different probability of survival of a male or female twin depending on whether or not the co-twin is of the same sex. In mammals (livestock and wild), the magnitude of maternal effects is generally larger than the magnitude of paternal effects (Cheverud 1984; Goyache et al. 2003; Wilson and Réale 2006; Blomquist 2012). Recall that the above-mentioned model is mathematically irresolvable; we therefore ran the following alternative models (including fewer random components), in which calf survival was treated either as a calf trait or as a combination of calf and mother traits, and whose reduced algebraic forms are spelled out at the end of this subsection: 1. Calf model: offspring survival is treated as a trait of the calf. In this model, only the direct additive genetic effect of the calf is fitted as a random effect besides the residual. 2. Calf-dam model: offspring survival is treated as a trait determined by calf and maternal genetic effects. 3. Calf-permanent model: offspring survival is treated as a trait determined by calf and maternal permanent environmental effects. 4. Calf-dam-permanent model: offspring survival is treated as a trait determined by calf, maternal genetic, and maternal permanent environmental effects. These models included 700 calves producing data and a relationship matrix of 740 individuals (Table 1). In the studied population, there is no clear evidence for an influence of inbreeding on performance across different life-history traits, as some studies have found support for this influence (Alados and Escós 1991; Cassinello 2005) but others have not (Ruiz-López et al. 2010; Ibáñez et al. 2013). As the influence of inbreeding is theoretically defined on non-additive genetic effects, it is expected that, when fitted as a systematic effect, it would remove part of the residual variance while keeping the additive genetic component unchanged; an increase in heritability would therefore be expected in that scenario. Taking this into account, different models were fitted to ascertain the possible influence of inbreeding on the Gazella cuvieri genetic background. The models described above were therefore also classified according to the assessment made regarding the influence of inbreeding on the trait as: Model I, run without fitting the inbreeding coefficient of the individual producing data; Model II, run fitting the individual's inbreeding coefficient (F_i); and Model III, run fitting the individual increase in inbreeding (ΔF_i). Complementary models To acquire further insight into the definitive genetic nature of juvenile survival, the possibility that the trait depends only on either the influence of the mother (juvenile survival treated as a mother trait) or the influence of the father (juvenile survival treated as a father trait) should also be explored. Therefore, a number of complementary models were fitted as well to find out the likely influence of the mother, the father, or both parents on this phenotypic trait of their offspring. A full description of the complementary models fitted and their results is given in the Supplementary Material and in Tables S1 and S2.
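In the notation of the reference model above, the four alternative calf/mother models reduce to the following forms, spelled out here for clarity (they follow directly from dropping the corresponding random terms): Calf model, y = Xb + Zu + e; Calf-dam model, y = Xb + Zu + Md + e; Calf-permanent model, y = Xb + Zu + Wp + e; Calf-dam-permanent model, y = Xb + Zu + Md + Wp + e.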
Statistics All estimations were carried out in a Bayesian framework using the TM program (Legarra 2008). The marginal posterior distributions of all parameters were estimated using the Gibbs sampling algorithm programmed in TM. In addition, this software enables setting up threshold animal models besides continuous models, allowing comparisons between these different models. Prior distributions for the vector b were assigned as bounded uniform distributions, and the variance components σ²_u, σ²_m, σ²_s, σ²_c and σ²_e were given scaled inverted chi-squared distributions (ν = 2 and S = 0). A total Gibbs chain length of 1,000,000 samples was defined for each analysis, with a burn-in period of 100,000 and a thinning interval of 100. Models were tested and examined to choose the one that best predicted performance rather than the one with the best goodness of fit, as the models with the best fit are not always those that provide the best prediction. At present, cross-validation (Efron and Tibshirani 1993) is considered the best method for checking model prediction ability (Arlot and Celisse 2010). As results found using quantitative models are known to be model dependent as well as database dependent, changes in both the effects included in the fitted model and the size (or structure) of the database analyzed affect predictive power. When the same database is analyzed, a given model may fit the data better; however, when the goal is to predict performance, it must be ensured that the prediction ability of that model does not drop when the database changes. The most common approach to maximizing predictive power is to: (1) create different random subsets from a given database, (2) carry out the analyses excluding one of the subsets created, and then (3) predict the performance of the excluded subset using the results of the analyses. When this 'cross-validation' procedure is repeated a number of times for each model, the correlation between the predicted and real performance data can be used straightforwardly to compare models for their prediction ability. The use of cross-validation as the selection criterion has an additional benefit: as the procedure is simply based on the correlation between real (removed) data and the corresponding predicted data, the criterion is free of parametric assumptions. This approach can be applied directly to a wide variety of models, allowing the predictive power of continuous vs. threshold models to be compared. To carry out cross-validation, we randomly removed half of the records from the last 5 years of birth (the reference population), re-estimated the genetic parameters by running the models without them, and estimated the removed records according to the obtained solutions. The solutions obtained for the removed records were compared to the real performance data via classical correlation to assess the predictive ability of each model. Then, the correlation (r) between the real removed records and the continuous solutions (the non-rounded estimated record in the continuous models and the underlying variable in the threshold models) was computed. To avoid sampling bias, each model was rerun for 20 random samples and the correlations averaged; this procedure is sketched schematically below. Once the best model was chosen, additive genetic values were averaged within year of birth to explore signs of a genetic trend in the trait. When the best model had been selected by cross-validation, inferences about systematic effects were carried out in a Bayesian context.
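The following Python sketch outlines the cross-validation loop just described; 'fit' and 'predict' are hypothetical placeholders for the mixed-model machinery (the study used the TM program), so the sketch illustrates the scheme rather than the actual software.

import numpy as np

def cv_predictive_ability(y, candidate_ids, fit, predict, n_reps=20, frac=0.5):
    # y: vector of recorded phenotypes (0/1 survival)
    # candidate_ids: record indices of the reference population
    #                (the last five birth cohorts)
    rng = np.random.default_rng(0)
    rs = []
    for _ in range(n_reps):
        held = rng.choice(candidate_ids, size=int(frac * len(candidate_ids)),
                          replace=False)
        model = fit(exclude=held)        # re-solve the model without the held-out records
        r = np.corrcoef(y[held], predict(model, held))[0, 1]
        rs.append(r)
    return float(np.mean(rs))            # the averaged r reported in Table 2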
As marginal posterior distributions are available, inferences can be performed in terms of the probability of a parameter being located between arbitrary values. In this case, inferences are provided in terms of the probability of the parameters of interest being higher than 0. Figure 2 gives information on the solutions found for the major systematic effects included in the linear Calf-dam model. The calf of a multiparous gazelle had a four-point higher probability of survival than the calves of primiparous gazelles (Fig. 2A), with a 79% probability of being truly higher. Male calves had a lower probability of survival than female calves (71% vs. 82%), with a 99% probability of being truly lower. When twin females (FF) were compared with twin males (MM), a female still had a nine-point higher probability of survival (with a 95% probability of being higher). Considering mixed-sex twins, a female with a male co-twin (F(M)) had a 13-point lower probability of survival than with a female co-twin (FF), with a 99% probability of being lower; conversely, a male with a female co-twin (M(F)) had a 12-point higher probability of survival than with a male co-twin (MM), with a 99% probability of being higher (Fig. 2B). The age in days of the mother at calving had a positive regression coefficient (0.10 × 10^-3; 87% probability of being positive) for the linear adjustment and a negative one (-0.03 × 10^-6; 87% probability of being negative) for the quadratic adjustment, which means that offspring born to young and to old mothers are less likely to survive than those born to middle-aged mothers (Fig. 2C), with the optimum reached in mothers from 8 to 10 years old. Table 2 gives the mean and standard deviation of the marginal posterior distribution of the parameters estimated for juvenile survival in Cuvier's gazelle using Model I. Under threshold models, the parameters shown are those obtained on the continuous underlying scale. Neither the coefficients of inbreeding (Model II) nor the individual increase in inbreeding (Model III) had a relevant effect on the analyzed trait (Appendix S1). When Models II and III were used, the estimates of the effects included in the models changed by less than 3%. Furthermore, the posterior distributions of the differences between the estimates obtained using these models and Model I always included 0 and, therefore, these differences could not be considered statistically significant. We therefore only give and discuss below the results obtained for Model I. In most cases, the continuous models predicted the data better than their threshold counterparts, tending to show higher predictive power (higher r values; Table 2). Heritability estimates of the additive genetic effect found when assuming juvenile survival to be only a calf trait (Calf model) were higher in the continuous than in the threshold models (h² = 0.457 ± 0.173 vs. h² = 0.245 ± 0.085). These estimates decreased with the inclusion of maternally related random effects in the fitted models (Table 2). In threshold models, the estimates of maternal effects (both m and c) were even higher than the estimates of the direct additive genetic effects; in continuous models, however, such maternal effects were always lower than the direct genetic effects (Table 2).
As most estimated correlations between the direct and maternal effects were negative (all but that of the Calf-dam continuous model), they can be considered non-significant, taking into account that in all cases the standard deviation of the marginal posterior distribution was very high. The worst predictive power was found for the model considering the influence of the mother solely as environmental (Calf-permanent model; r = 0.008 for the continuous and r = 0.015 for the threshold model). Of all these models, the best prediction ability was shown by the Calf-dam continuous model, with r = 0.103 (Table 2). The importance of the genetic background of the mother for the trait was confirmed when the complementary models were run (see Tables S1 and S2). Figure 3 shows the phenotypic trend for juvenile survival and the genetic trends for the direct genetic effect, estimated using the Calf-dam Model I (which showed the highest r value), by year of birth of the individuals. A positive phenotypic trend for juvenile survival over time was found, and the genetic ability for juvenile survival has increased over the years. The probability of the genetic response being higher than zero increased across years (from 81% to 89% for the calves and from 71% to 82% for the mothers since 2000). The increases in calf and mother genetic ability for the trait were noticeably congruent. As the genetic trends were assessed in a Bayesian context, they are not affected by correlated prediction error among cohorts or by genetic drift, as they would have been had we used best linear unbiased prediction (BLUP) to predict breeding values (Hadfield et al. 2010). Discussion In this study, we quantified the genetic basis of juvenile survival in a captive population of the endangered Cuvier's gazelle. An understanding of the relative influence of direct (additive genetic) versus indirect (parental) effects underlying this fundamental life-history trait is essential to predict the strength and direction of the evolution of this captive population. In this extremely bottlenecked population, the heritability of juvenile survival is 0.36 (with a 98% probability of being higher than 0.05), which suggests that a non-negligible part of the phenotypic variation observed in this fitness trait is ascribed to additive genetic variance. (Figure 2. Probability of calf survival considering major systematic effects: mother parity [Plot A; primiparous vs. multiparous], litter composition [Plot B; this factor captures sex and litter size; M and F mean male and female, respectively, and sibling sex is given in parentheses] and mother age [as quadratic covariate] in years [Plot C].) There are also indirect parental (mainly maternal) effects on this trait, which may produce phenotypic resemblance between relatives equivalent to, or even greater than, that due to the additive genetic variance. Thus, the genes influencing juvenile survival are not only those expressed in the individual (directly inherited from the calf's parents), but also those of an interacting phenotype, its mother. This means that a calf's phenotype may also evolve through changes in the environment provided by its mother. Systematic effects and permanent maternal environmental effects on juvenile survival Juvenile survival in Cuvier's gazelle is highly influenced by both mother parity and mother age (Fig. 2), which is consistent with results from other, non-genetic studies carried out in this (Ibáñez et al. 2013) and other mammal species (Côté and Festa-Bianchet 2001; Pluháček et al. 2007).
Offspring survival was relatively low when mothers were young and primiparous (62% at 1 year old), substantially increased when mothers were mid-aged (up to 87% at 8.5 years old) and decreased again in senescent mothers. The optimal age of mothers for calf survival was from 7.5 to 9.5 years old. (Table 2. Mean and standard deviations [in brackets] of the posterior marginal distribution of the genetic parameters for juvenile survival obtained with the four models run under the assumption of either a continuous [continuous model] or a categorical [threshold model] nature of the studied trait. Abbreviations: h², proportion of total phenotypic variance ascribed to the additive genetic variance of the individual [calf] producing data [heritability]; m², proportion of total phenotypic variance ascribed to maternal genetic effects; c², proportion of total phenotypic variance attributed to maternal permanent environmental effects; r_g, correlation between the genetic components of the effects included in the model fitted; r, mean correlation [20 replicates] between the real removed records and their prediction. The models fitted did not include the inbreeding coefficient of the individual producing data. Residual variance was arbitrarily set to 1 in threshold models.) Breeding before reaching adult body size represents a cost in terms of calf survival, added to the inexperience of primiparous mothers, and the decline in offspring survival found in the oldest mothers might be the consequence of decreased body condition due to reproductive senescence (Bérubé et al. 1999; Côté and Festa-Bianchet 2001; Ericsson et al. 2001). Litter composition (a factor that captures sex and litter size) influences infant survival in Cuvier's gazelles as well. The highest mortality was found for single male offspring (M) and for offspring with a male co-twin [F(M); M(M); see also Ibáñez et al. 2013]. Our results in a captive Cuvier's gazelle population support findings by other authors that female calves are less costly to produce and rear than males, even if they are twins. The maternal permanent environmental effect also explains a proportion of the variance of juvenile survival. The data fit of the Calf-dam-permanent models was slightly lower than that of the Calf-dam models; the small size of the available dataset led to poorer performance of the fitted models as the number of effects included increased. Although these maternal effects do not contribute directly to the evolutionary response to selection (Wolf et al. 1998), they might have important management consequences in the captive breeding of threatened species, as they might help the EEP manager to identify those dams providing a better environment to their offspring, offering complementary criteria when arranging breeding herds. For example, the manager might detect those mothers more successful at preventing offspring death because they provide more care, and mate them preferentially over others that tend to lose offspring. Genetic nature of juvenile survival The heritability (h²) of juvenile survival in Cuvier's gazelle was moderate (Table 2), but much higher than estimates of h² in captive rhesus macaques (Gagliardi et al. 2010). It was also higher than estimates of h² for other life-history traits in wild red deer (Kruuk et al. 2000) and other mammals (Holt et al. 2005).
Contrary to expectations, our results suggest that a significant amount of additive genetic variance is maintained within this captive population for a character closely related to fitness, revealing that this quantitative trait can potentially still evolve (Charmantier and Garant 2005). Moreover, we found that heritability estimates (h²) were higher when the trait was considered only a calf trait than when maternally related random effects were included in the fitted models, suggesting that the additive genetic variances were otherwise overestimated due to previously unaccounted-for genetic and environmental maternal effects. In our analyses, the maternal variance components indicated that mothers vary in their influence on the survival of their offspring. The fitted models allowed us to separate maternal variance from offspring additive variance. As maternal effects were consistent across models, we infer that indirect maternal effects operate on juvenile survival through maternal selection. When maternal genetic effects are not negligible, the response to selection depends not only on the direct genes, but also on the additive genes underlying the maternal genetic effect (m), which can result in an accelerated or dampened response to selection (Wolf et al. 1998). Here, judging by the standard deviations of its posterior marginal distribution, the genetic correlation estimated between u and m was clearly non-significant regardless of the model used. Hence, the use of individual additive genetic values for survival as criteria to form breeding herds in this captive population will make sense only if the maternal genetic effects are considered. By doing this, juvenile mortality will tend to decrease in the population, thereby increasing its long-term viability. A positive change in genetic trend was thus observed in calves and mothers, which shows selection for juvenile survival over time. These results indicate (1) that the Cuvier's gazelle captive breeding program is effective in achieving genetic improvement in this fitness trait despite increased inbreeding since it began in 1975 and (2) that genetic changes have occurred in response to natural selection, attesting to the evolutionary potential of this captive population. Influence of inbreeding The inclusion of inbreeding in the estimation models (Appendix S1) did not affect the estimates of heritability, suggesting the maintenance of genetic variability in our population. Although a potential dependence of the variance components on inbreeding has not been modelled, if such a relationship existed, the residual variance would have decreased and the heritability would have increased. Even when inbreeding increased, there was no depression, as juvenile survival progressively increased over the 35-year study period. The low impact of inbreeding depression observed in our study (see also Ibáñez et al. 2011, 2013) could be a consequence of a slow rate of inbreeding in the Cuvier's gazelle population in the past, which may have allowed natural selection to progressively purge some of the negative consequences of inbreeding (Ballou 1997), or it could simply be a specific feature of the species, in which the consequences of inbreeding seem to be less striking than in others (Ballou 1994). Improvements in husbandry may also lead to higher average survival in captive populations in spite of an increase in inbreeding (Kalinowski et al. 1999).
Although we cannot exclude this possibility, the importance of maternal effects suggests that the increase in calf survival is not solely due to husbandry improvements. Insights for conservation For threatened and endangered species, coordinated captive breeding programs such as the European Endangered Species Programme (EEP) represent the only way to rear and maintain the self-sustaining populations that ensure their survival (Magin et al. 1994; Russello and Amato 2004). However, captive breeding populations are also often observed to be in serious demographic decline. Although their managers have a variety of breeding schemes for maintaining genetic diversity and alleviating inbreeding depression if necessary, achieving sustainable population sizes in these generally low-founder populations is usually difficult (Kleiman et al. 2010). In this study, we have focused on a key fitness trait, juvenile survival, which makes the greatest contribution to fitness in both captive and natural populations (Houde et al. 2013). Our results underscore that, apart from direct genetic transmission, parents (mainly mothers) contribute to their offspring through indirect (genetic and environmental) effects, these maternal effects increasing the potential of this population to respond to selection on offspring survival. Thus, taking the maternal contribution into account in the pairing strategies of captive-bred endangered species might be of great importance for predicting a reliable response to selection, as well as for identifying those individuals with a better ability to recruit. Moreover, if traits expressed during social interactions (e.g., the mother-offspring interaction) evolve more rapidly than other types of traits (Moore et al. 1997), considering their likely effects is crucial when arranging pairing strategies, as they might be at least partially responsible for the rapid adaptation to captivity described for some species (Frankham and Loebel 1992; Woodworth et al. 2002; Heath et al. 2003; Kraaijeveld-Smit et al. 2006).
8,590.4
2014-10-10T00:00:00.000
[ "Biology", "Environmental Science" ]
Towards Patient-Specific Computational Modelling of Articular Cartilage on the Basis of Advanced Multiparametric MRI Techniques Cartilage degeneration is associated with tissue softening and represents the hallmark change of osteoarthritis. Advanced quantitative Magnetic Resonance Imaging (qMRI) techniques allow the assessment of subtle tissue changes, not only of structure and morphology but also of composition. Yet, the relations between qMRI parameters on the one hand and microstructure, composition and the resulting functional tissue properties on the other remain to be defined. To this end, a Finite-Element framework was developed based on an anisotropic constitutive model of cartilage informed by sample-specific multiparametric qMRI maps, obtained for eight osteochondral samples on a clinical 3.0 T MRI scanner. For reference, the same samples were subjected to confined compression tests to evaluate stiffness and compressibility. Moreover, the Mankin score was determined as an indicator of histological tissue degeneration. The constitutive model was optimized against the resulting stress responses and informed solely by the sample-specific qMRI parameter maps. Thereby, the biomechanical properties of individual samples could be captured with good-to-excellent accuracy (mean R² [square of Pearson's correlation coefficient]: 0.966, range [min, max]: 0.904, 0.993; mean Ω [relative approximated error]: 33%, range [min, max]: 20%, 47%). Thus, advanced qMRI techniques may be complemented by the developed computational model of cartilage to comprehensively evaluate the functional dimension of non-invasively obtained imaging biomarkers. Thereby, cartilage degeneration can prospectively be evaluated in the context of imaging and biomechanics. In the context of personalized medicine, biomechanical computational modelling becomes ever more relevant to patient-specific care 1 . Modelling-based predictions of biomechanical tissue properties hold great potential for the non-invasive and non-destructive characterization of the tissue status in health and disease; however, these go along with high requirements for simulation and measurement techniques. In particular, the constitutive model has to address significant variations in mechanical properties associated with age, gender, lifestyle and disease 2 . In the field of cartilage modelling, patient-specific predictions could significantly facilitate the detection of cartilage degeneration, which is the hallmark change of osteoarthritis (OA). Detection of early changes is of particular relevance, as the degenerative cascade may at least be slowed (if not halted) in its progression if early preventive action, such as modification of activity level, weight loss, pharmacological chondroprotection or axis-modifying surgery, is taken in a timely manner 3 . However, up to now it has not been possible to detect the early, potentially reversible stages of cartilage degeneration using clinical standard imaging modalities 4,5 . Against this background, the non-invasive imaging of cartilage by advanced MRI techniques has made considerable progress over the last decades. Functional MRI techniques (synonymous with quantitative MRI [qMRI]) such as T2, T2* and T1ρ mapping have been developed and validated in a variety of scientific and clinical contexts to characterize the extracellular matrix properties of cartilage [6][7][8] and provide measures related to tissue composition and structure 4,9 .
T2 relaxation describes the decay of transverse magnetization and is reflective of the tissue's collagen structure and water content. In addition to the tissue-characteristic T2 relaxation, T2* relaxation is governed by additional decay secondary to static magnetic field non-uniformity. In contrast, T1ρ relaxation is determined by measuring the decay of spin-locked transverse magnetization and is commonly considered indicative of low-frequency interactions between the tissue's macromolecules and extracellular water. Meanwhile, the exact sensitivity and specificity profiles of both T2* and T1ρ need additional clarification 10,11 . To date, a solid body of evidence has been accumulated indicating the diagnostic potential of such qMRI techniques (excellently reviewed in 6,8,12 ). Even though previous studies have reported correlations between qMRI parameters on the one hand and structural as well as compositional features of the tissue on the other 13,14 , there is clearly no consensus on any specific relations between these parameters. Previously, our group studied correlations between measured qMRI parameter maps and modelled volume fractions to better refine each qMRI parameter's sensitivity and specificity profile 15 . Additionally, quantitative T2 maps have been referenced to loading-induced changes in cartilage composition and structure based on a constitutive cartilage model 16 . However, to the best of our knowledge, multiparametric qMRI maps have not been modelled as a function of the functional properties of the tissue in general and its biomechanical measures in particular. Thus, this study aimed to establish a framework to integrate sample-specific multiparametric qMRI maps into the proposed constitutive material model, while subsequently optimizing the model in terms of weighted structural and compositional tissue features as derived from the multiparametric qMRI maps. The working hypotheses of the study were therefore that (1) the complex relation between the stress response of the tissue and its qMRI appearance, in terms of the respective T1, T1ρ, T2 and T2* maps, may be translated into a refined constitutive model of cartilage tissue and (2) the functional properties of the tissue can be described by this model following its comprehensive optimization. Results Upon histological assessment, all samples were found to be grossly intact (Mankin Grade 0, mean sum score 3.2 ± 0.8). MRI measurements. Spatially resolved quantitative T1, T2, T1ρ and T2* maps were obtained for all osteochondral samples. Qualitatively, samples were relatively homogeneous, while uniform changes in signal intensities were found as a function of tissue depth (Fig. 1). Whenever present, focal signal alterations were only slight and, in any case, adjacent cartilage areas were not affected. Quantitatively, the qMRI parameter values were characterized by considerable standard deviations, even though the mean parameter values were comparable. Table 1 presents a detailed overview of the qMRI parameter values. Confined compression tests. The nominal stress response is illustrated in Fig. 2 as a function of time for two representative samples. Considerable variability in the stress responses of the samples can be seen. When keeping the sample strained, larger stress relaxation was observed with larger initial stresses. In Fig. 3a, nominal stresses are plotted versus the intra-tissue volume changes; similar to the stress-time response, the stress-induced volume changes were highly variable. A sketch of how stiffness can be extracted from these equilibrium data follows below.
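The paper reports a mean stiffness at ε = 15% (next paragraph) without specifying the exact estimator; the following minimal Python sketch therefore assumes a local tangent slope of the relaxed (equilibrium) stress-strain curve, with a secant modulus being an equally plausible alternative.

import numpy as np

def stiffness_at_strain(strain, relaxed_stress, eps=0.15):
    # strain: nominal strains of the confined compression steps (dimensionless)
    # relaxed_stress: equilibrium stresses at those strains (MPa)
    # Returns the finite-difference slope E = d(sigma)/d(epsilon) at eps, in MPa.
    strain = np.asarray(strain, dtype=float)
    relaxed_stress = np.asarray(relaxed_stress, dtype=float)
    slopes = np.gradient(relaxed_stress, strain)
    return slopes[np.argmin(np.abs(strain - eps))]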
Correspondingly, large standard deviations were found when calculating the mean stiffness at ε = 15%: E = 1.53 ± 0.68 MPa (mean ± SD). Sample-specific biomechanical testing details are displayed in Table 1. Model evaluation. The relaxed stress calculated by the model is plotted in Fig. 3b as stress versus intra-tissue volume changes. The predictions of the computational model (Fig. 3b) demonstrated grossly similar characteristics (in terms of trend and overall curve shape) compared to the experimental measurements (Fig. 3a). (Table 1. Quantitative characterization of human articular cartilage samples [n = 8] by qMRI parameters and the stiffness at a tissue strain of ε = 15%. The entire sample cross-sectional area was the region-of-interest. Data are given as mean ± standard deviation, while the goodness-of-fit measures R² and Ω, detailing the correspondence between experimentally measured and theoretically modelled data, are shown in the last two columns [from right].) The global set of material parameters obtained by the inverse Finite-Element optimization is given in Table 2. By assuming linear relations between the qMRI parameter-derived specific volume fractions φ_ξ(Tx) (x ∈ {1, 1ρ, 2, 2*}) and scalar coefficients w_ξ(Tx), sample-specific volume fractions Φ_ξ for the fluid and collagen contents (ξ ∈ {f, co}) were obtained as a function of the normalized sample depth z (see Fig. 4); a sketch of this weighted combination is given at the end of this section. Towards the sample surface, the fluid content was high, while the collagen and proteoglycan contents were low. Towards the sample bottom (i.e. the cartilage-bone transition), the differences in volume fractions were less pronounced, even though fluid was still the dominant tissue component. Following optimization of the spatial volume fractions φ_ξ(Tx)(z) (Fig. 5a-d), the φ_ξ(Tx) values of the qMRI parameters showed close correspondence to the idealized model of the depth-related volume fractions as proposed by Wilson et al. 13 (Fig. 5e). Accordingly, the w_ξ(Tx) represent weighting parameters which define each qMRI parameter's contribution to the collagen and fluid volume fractions (please see the MRI-based model input section below for more details). Additionally, six constitutive model parameters were obtained by the optimization procedure. Here, k1, k2 and k3 describe the material non-linearity of the collagenous part, while a0 and a1 are associated with the stiffness and compressibility of the non-collagenous matrix. w ∈ [0, 1] denotes a weighting parameter for the alignment of the collagen fibrils towards their preferred direction, with the lower limit representing ideal fibre alignment and the upper limit expressing an isotropic fibre distribution. (Table 2. Details of the global material parameter set obtained by the inverse Finite-Element optimization of the spatial volume fractions in the framework of the computational model of cartilage. The weighting parameters [w_ξ(Tx)] detail each qMRI parameter's contribution to the collagen and fluid volume fractions and are to be read as follows: the fluid content of the tissue is primarily represented by T1 [41%] and T1ρ [42%], while the collagen content is primarily represented by T1 [31%] and T2* [46%]. Six material model parameters complement the proposed constitutive model: k1, k2 and k3 are related to the biomechanical behaviour of the collagen fibrils under tension, while a0 and a1 are associated with the stiffness and compressibility of the non-collagenous matrix; w describes the concentration of the collagen fibrils in their preferred direction.)
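A brief Python sketch of the assumed linear qMRI-to-composition mapping follows; the depth profiles and the T2/T2* weights used in the example are invented for illustration (only the fluid-content weights for T1 and T1ρ are taken from Table 2).

import numpy as np

def composite_volume_fraction(profiles, weights):
    # Linear mapping assumed in the text:
    # Phi_xi(z) = sum over x of w_xi(Tx) * phi_xi(Tx)(z), xi in {fluid, collagen}
    # profiles: dict of depth-resolved profiles keyed by "T1", "T1rho", "T2", "T2star"
    # weights:  matching dict of scalar weighting parameters
    return sum(weights[k] * profiles[k] for k in profiles)

z = np.linspace(0.0, 1.0, 50)                       # normalized sample depth
profiles = {"T1": 0.90 - 0.20 * z, "T1rho": 0.85 - 0.15 * z,
            "T2": 0.80 - 0.10 * z, "T2star": 0.80 - 0.10 * z}     # synthetic examples
weights = {"T1": 0.41, "T1rho": 0.42, "T2": 0.09, "T2star": 0.08}  # T2/T2* assumed
fluid_fraction = composite_volume_fraction(profiles, weights)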
Discussion The most important finding of this study is that sample-specific and spatially resolved qMRI data may be integrated into a computational model of cartilage to (1) emulate structural and compositional tissue parameters and (2) reliably capture the functional properties of cartilage. The computational model of cartilage has been further refined and optimized to integrate imaging (i.e. the qMRI parameter maps) and biomechanical (i.e. the stress response to confined loading) information. To the best of our knowledge, this is the first study to bring together imaging and functional tissue parameters within the framework of a computational model. QMRI data were used as input variables in a sample-specific manner, while the remaining parameters were kept constant in an effort to keep the model complexity manageable. In practical terms, the qMRI parameter maps were used to derive measures of structural and compositional tissue properties as a function of sample depth. To this end, the qMRI parameters were weighted in their respective contributions to the tissue properties on the basis of an idealized cartilage model as proposed by Wilson et al. 17 . Since the specific relations between the exact properties of the tissue and the qMRI parameters remain disputed, and considerable overlap in specificity has been reported 10,15,16,18-20 , the first step was to identify weighting factors for the optimized qMRI-based tissue assessment. In spite of the thorough and user-independent optimization against functional properties of the tissue, this framework can only provide a starting point for further study of the correlations between imaging, compositional and biomechanical parameters. In particular, refined biochemical techniques providing spatially resolved measures of tissue features, such as microspectroscopy or polarized light microscopy (e.g. 21,22 ), should be included in future studies to integrate sample-specific data on the exact tissue properties rather than an idealized tissue model that gives tissue constituents as a function of depth. Nonetheless, the Wilson model of cartilage is validated and commonly used to convey depth-related information on the tissue properties of cartilage 17,23-25 . Correspondingly, higher contents of extracellular fluid and lower contents of the solid constituents (i.e. collagen and proteoglycans) were found in the superficial sample zones, while the opposite was observed for the deeper cartilage zones. These compositional features are well in line with published reference data, e.g. by Wilson et al. 23 . The computational model of cartilage was efficient enough to successfully describe the biomechanical properties (e.g. stiffness) obtained in the subsequent confined compression tests. Herein, both the non-linearity of the stress-relaxation tests and the biomechanical quantities determined at equilibrium (both measured and modelled) are reflective of earlier literature findings 26 .
Even though the sample size of the present study is limited and the computational model of cartilage is not yet optimized in terms of refined compositional input variables (as outlined above), our results substantiate the fact that advanced qMRI techniques are adequate to determine relevant tissue features (in structure and composition) that determine the biomechanical properties and stress resilience of the tissue, and may be used to non-invasively study tissue functionality. Once further optimized, the computational model might be used to complement clinical-standard MRI examinations to provide a detailed representation of the functional properties of the tissue and to identify tissue regions at risk of incipient degenerative changes. This is of particular relevance as articular cartilage is exquisitely sensitive to the mechanical environment, and cartilage degeneration in OA is considered as the key pathophysiological result of abnormal mechanics 27,28 . In the clinical context, such patient-specific modelling approaches aim to obtain image-based surrogate parameters of tissue functionality to identify tissue areas at risk. Nonetheless, these approaches need further refinement, in particular when translating the findings to the in-vivo and entire-joint configuration. Although predictions of the proposed model were confirmed to a large extent by the experimental data, there was some discrepancy between modelled and measured datasets. Possible reasons involve a variety of as yet ill-controlled variables: particular care was taken to standardize storage conditions, yet systematic error may have been introduced by the prolonged storage of samples in a non-physiological environment. This may have led to alterations in the extracellular fluid content and biomechanical properties. Here, comparative longitudinal imaging-based assessment of cartilage functionality as a function of loading may be integrated into the model to further refine the input data. Additionally, the distinct collagen network properties (in terms of orientation and integrity) have not been considered as additional input variables. In view of the ongoing scientific controversy on the imaging correlates of the collagen network 15,29 , future studies should take into account its complex features beyond its mere content, especially when the mechanical behaviour of the entire tissue is of interest. Here, the combination of mechanical stimuli and advanced qMRI techniques may help, since mechanical loading may be applied during imaging to study loading-induced intra-tissue changes and thereby provide additional functional information. This can improve the computational model and its descriptive capacities 11,13,14,30 . Histology was used to confirm gross structural integrity of the cartilage tissue. However, samples may have been affected (in a functionally relevant context) beyond the histologically assessable scope, because OA is a disease that affects the entire joint by triggering catabolic and inflammatory processes in all compartments. As samples were harvested from total knee replacements only, our results remain to be confirmed in truly healthy cartilage tissue, e.g. from organ donor networks or tumor endoprostheses. Moreover, additional research activities have to be aimed at the inclusion of larger sample numbers of variable degeneration (as controlled by histology) to corroborate the potential of the model in predicting functional properties of cartilage in health and disease.
Further limitations involve sample size and study setup. In this study, all measured data were used for thorough model parameter optimization. Larger sample sizes, including new samples, need to be included in future studies to assess the model's predictive capabilities. To this end, the sample-specific qMRI data will be the only input data to the model, while the measured mechanical responses will be used as benchmark features. By applying k-fold cross-validation schemes as in machine learning techniques (e.g. 31 ), the model's predictive capabilities can be assessed. Additionally, the confined compression tests used for biomechanical reference evaluation are unlike the actual weight bearing in vivo, thereby limiting our study's transferability to the in-vivo setting. Against this background, more physiological forms of loading, quite possibly under simultaneous imaging 14,32 , should be applied in future studies. In conclusion, this study introduces a computational model of cartilage that integrates qMRI parameter maps as spatially resolved measures of the structural and compositional properties of the tissue to describe its biomechanical properties. On the basis of this model, advanced qMRI techniques may be complemented to comprehensively evaluate the functional dimension of non-invasively obtained imaging biomarkers. Thereby, cartilage degeneration (as the hallmark change of OA) may be appreciated in the context of abnormal mechanics and used as a potential target in diagnosing early (and potentially reversible) OA. Materials and Methods Study design. This study was designed as a prospective, comparative, intra-individual ex-vivo study that aimed to integrate the functional biomechanical properties of cartilage tissue and their qMRI correlates as input variables into the framework of a computational model of the tissue. Prior to this study, local Institutional Review Board approval from the Ethical Committee of RWTH Aachen University, Germany (AZ-EK157/13) was obtained. Only after individual oral and written informed patient consent was the material that had been collected intraoperatively included in the present study. Moreover, all consecutive experiments were performed in accordance with relevant guidelines and regulations. Cartilage sample preparation. Human articular osteochondral samples were prepared as in earlier studies 11,14,20 . Briefly, macroscopically intact osteochondral samples were obtained from eight consecutive patients undergoing total knee replacement at our institution from 11/2017 to 03/2018 (3 male, 5 female; age 64.5 ± 12.3 years [range, 40-77 years]). By obtaining one sample per patient, sample pooling was avoided. The osteochondral samples included in this study were harvested from patients with primary osteoarthritis; all forms of secondary OA or other bone and joint disorders were thus excluded. After its sterile excision during surgery, the material was collected in Dulbecco's modified Eagle's medium (DMEM, Gibco-BRL, Gaithersburg, MD, USA) with a set of standard antibiotics added (i.e. 100 U/ml penicillin [Gibco], 100 μg/ml gentamycin [Gibco] and 1.25 U/ml amphotericin B [Gibco]). The osteochondral samples were then prepared according to the standard procedure as follows: First, for the sake of topoanatomic consistency, samples were harvested from the lateral femoral condyles only.
Second, samples were evaluated macroscopically according to the Outerbridge classification 33 and only grossly intact samples were included (i.e. Outerbridge grades 0 and 1). Structural integrity was subsequently confirmed by means of histology (see below). Third, samples were cut to a standard square shape (20 × 20 mm [width × length]); the subchondral lamella was preserved, while all cancellous bone was removed. Samples had a mean thickness of 4.201 ± 1.01 mm (mean ± standard deviation). Using a rongeur, two notches were created at opposing sample sides to define the mid-sagittal plane for future reference. MRI measurements, data acquisition and analysis. After sample preparation, MR imaging examinations were performed on a per-sample basis using a clinical standard 3.0 T MRI scanner (Achieva, Philips, Best, The Netherlands). As before 11,30 , samples were placed in a transparent container fully immersed in DMEM solution with additives. Samples were imaged using a modified single-channel prostate coil (BPX-30 disposable endorectal coil, Medrad/Bayer, Leverkusen, Germany) 14,20,34 . Particular attention was paid to positioning the samples at the iso-center of the coil while aligning the mid-sagittal plane along, and the sample surface parallel to, the main magnetic field B0. Prior to scanning, B0 inhomogeneities were excluded using B0 mapping. After scout views, proton-density weighted sequences were acquired in the axial, coronal and sagittal planes oriented perpendicular to each other (Table 3). On the basis of the axial views, the sagittal imaging section was guided along the mid-sagittal plane to generate a centrally bisecting plane through the sample. Afterwards, T1, T1ρ, T2, and T2* sequences were acquired with the sequence parameters detailed in Table 3. MR imaging was performed at room temperature, monitored before and after the measurements (19.5 ± 0.7 °C). Once the data acquisition was completed, the MR raw data including time constants for each pixel were loaded into Matlab R2016a software (Natick, MA, USA). Then, spatially resolved parameter maps were generated by means of predefined and customized fitting routines on a per-pixel basis. Individual pixel values were determined from the acquired signal series: T2, T2* and T1ρ were obtained from mono-exponential models of the form S(t) = A·exp(−t/Tx) + B (with t = TE for T2 and T2*, and t = TSL for T1ρ), while T1 was obtained from an inversion-recovery signal model based on TI and TR; T1, T1ρ, T2 and T2* were the target relaxation times to be quantified on a per-pixel basis. Here, TE is the echo time, TSL the duration of the spin-lock pulses, TR the repetition time and TI the inversion recovery time (i.e. the time delay between the initial inversion recovery pulse and the read out), while A and B are the signal pre-factor and offset, respectively, accounting for proton density and background noise. R2 statistics adjusted to the number of degrees of freedom were used to check the quality of the fits. For T2 and T2*, only pixel values of expected echo times (TE ≤ 60 ms) were included to reduce the potential of mis-fitting. Sample segmentation was performed manually on the proton density-weighted morphologic images by choosing pixels that safely lay within the tissue. Boundary pixels were excluded to avoid partial volume effects. Segmentation of sample outlines was subsequently validated against the parameter overlays.
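A minimal Python sketch of such a per-pixel mono-exponential fit (here for T2) is given below; the echo times, signal values and starting guesses are illustrative, not taken from the study, and the paper's own routines were implemented in Matlab.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, A, tx, B):
    """S(t) = A*exp(-t/Tx) + B, with t = TE for T2/T2* and t = TSL for T1rho."""
    return A * np.exp(-t / tx) + B

def fit_pixel(t_ms, signal):
    """Per-pixel fit returning the relaxation time and adjusted R^2."""
    p0 = [signal.max() - signal.min(), 40.0, signal.min()]  # rough starting guess
    popt, _ = curve_fit(mono_exp, t_ms, signal, p0=p0, maxfev=5000)
    res = signal - mono_exp(t_ms, *popt)
    r2 = 1.0 - np.sum(res**2) / np.sum((signal - signal.mean())**2)
    n, p = len(signal), len(popt)
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)  # adjusted for degrees of freedom
    return popt[1], r2_adj

# Echo times restricted to TE <= 60 ms, as in the text
te = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
sig = 1200.0 * np.exp(-te / 35.0) + 60.0 \
      + np.random.default_rng(1).normal(0.0, 5.0, te.size)
t2, r2_adj = fit_pixel(te, sig)
print(f"T2 = {t2:.1f} ms, adjusted R^2 = {r2_adj:.3f}")
```

The degrees-of-freedom adjustment matters here because only a handful of echo times enter each pixel-wise fit.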
Confined compression tests. Within 24 h after the MR measurements, confined compression tests were conducted using a custom-made compression device. In line with literature data 26,35 , the osteochondral samples were placed in an impermeable confining chamber matching the dimensions of the samples (with a diameter of d = 8 mm). Porous stainless steel filters were placed above and underneath the samples to allow for fluid outflow. The samples were compressed by a hollow piston which axially moved down the upper filter (Fig. 6). The entire set-up was placed in a container filled with phosphate-buffered saline (PBS, Gibco) and mounted on a universal testing machine (Zwick Roell Z010, Zwick GmbH, Ulm, Germany). Prior to the tests, a tare load of 1 N was applied to the osteochondral sample by lowering the piston at a rate of 0.16 mm/min to maintain standardized interaction of the interfaces throughout the measurements 26 . The resulting piston position was used to determine sample thickness, while equilibration was achieved by holding the piston in place for 15 min. Then, the osteochondral samples were subjected to a sequence of 20 ramped compressions with a strain rate of 1%/min to a total strain of 20%. After each applied loading step, relaxation phases variable in duration (as depicted in Fig. 2) were applied to allow for sufficient equilibration 26 . Histological analyses. After the biomechanical tests, samples underwent histological processing by simultaneous decalcification and fixation (Ossa fixona, Diagonal, Münster, Germany), sectioning along the mid-sagittal plane and embedding in paraffin. From the mid-sagittal plane, 5-μm-thick sections were cut, stained with hematoxylin/eosin and Safranin O according to standard protocols and visualized using a Leica light microscope (model DM LM/P, Wetzlar, Germany). Histological grading was performed in a blinded manner using the Mankin classification 36 . Based on the quantitative assessment of tissue structure, cellularity, proteoglycan staining intensity and tidemark integrity, the Mankin sum score is a representative measure of tissue degeneration. Ranging from 0 to 14, a Mankin sum score of 0 is indicative of no histological signs of degeneration. Mankin sum scores may be grouped into distinct Mankin grades; i.e. Mankin grade 0 indicates structurally grossly intact cartilage (Mankin sum scores 0-4) 37 . MRI-based model input. The qMRI parameter maps were used as exclusive sample-specific input values within the finite element (FE) code. Material constants as well as the weighted qMRI parameters (see Table 2) were obtained in an inverse FE manner with an optimization algorithm updating the parameters for all samples simultaneously, thereby enforcing a global set of parameters. Cartilage is classically considered as a bi-phasic material with a solid phase hydrated by interstitial fluid. The fluid-filled solid extracellular matrix primarily consists of the collagen (CO) fibril and proteoglycan (PG) fractions, in the following represented with respect to volume by φ_co and φ_pg, respectively 17 . The interstitial fluid and solid volume fractions are denoted by φ_f and φ_s, respectively. They are related by φ_s = 1 − φ_f. The volume fractions are expressed as a function of the qMRI parameters Tx and satisfy the saturation condition φ_f + φ_co + φ_pg = 1 (condition (5)). Functional dependencies between qMRI parameters and idealized cartilage volume fractions as characterized in previous studies 15,16 provided the basis for the qMRI-informed computational model of cartilage.
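The bookkeeping implied by these relations can be sketched as follows; the clamped inverse-exponential mapping anticipates the regression form detailed next. All coefficients, bounds and the collagen share used here are illustrative assumptions, and the tolerance mirrors the minimally positive PG fraction enforced in the FE code (see below).

```python
import numpy as np

TOL = 1e-6  # minimally positive PG fraction, cf. the FE tolerance given below

def phi_from_tx(tx, a, b, c, lower, upper):
    """Clamped inverse of an exponential dependency Tx = a*exp(b*phi) + c
    (the generalized inverted form phi_xi(Tx) described in the text);
    coefficients and bounds here are illustrative placeholders."""
    phi = np.log(np.clip((tx - c) / a, 1e-12, None)) / b
    return np.clip(phi, lower, upper)

def close_mixture(phi_f, phi_co):
    """Saturation closure phi_f + phi_co + phi_pg = 1 with phi_pg >= TOL."""
    phi_pg = np.maximum(1.0 - phi_f - phi_co, TOL)
    return phi_pg, 1.0 - phi_f  # PG fraction and solid fraction phi_s

t1 = np.array([815.0, 825.0, 835.0])                  # synthetic T1-like values, ms
phi_f = phi_from_tx(t1, a=0.5, b=5.0, c=800.0, lower=0.60, upper=0.90)
phi_co = 0.6 * (1.0 - phi_f)                          # assumed collagen share of solid
phi_pg, phi_s = close_mixture(phi_f, phi_co)
print(phi_f, phi_pg, phi_s)
```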
More specifically, relations between the qMRI parameters and the fractional composition of an idealized cartilage model (as defined by Wilson et al. 23 ) were created in a pixel-wise and depth-related manner for the fluid and CO fractions (φ_f and φ_co, respectively). Note that this model of cartilage composition defines the mean content of each cartilage constituent as a function of tissue depth. These datasets served as the basis for the non-linear regression analysis with an exponential dependency of the form Tx = a·exp(b·φ_ξ) + c, where a, b and c are fitting coefficients. Based on earlier data 15,16 , these coefficients were determined by optimizing the contribution of each cartilage constituent to the qMRI parameter maps. Accordingly, the inverted generalized form is denoted by φ_ξ(Tx), where the function is continuously extended by the constant parameters l_Tx,ξ and u_Tx,ξ at the lower and upper limits. Thus, one arrives at four distinct qMRI-parameter-specific volume fractions for CO and fluid, which were fitted against the pixel-wise resolved qMRI parameters. To this end, a one-dimensional phenomenological representation φ_ξ^Tx(z) over the normalized depth coordinate z was employed. As all cartilage tissue phases are considered incompressible, any volume decrease is assumed to be solely due to fluid outflow 39,40 . Hence, after complete fluid loss the fluid volume fraction is approximately 0 and the tissue volume approaches its solid constituent fraction (also referred to as the compaction point, at which J → Φ_s) 41,42 . For a bi-phasic material the Helmholtz free energy (cf. 43 ) can be given as Ψ = Ψ_C(C̄) + Ψ_π(J, μ) + μC, where the first term on the right-hand side (Ψ_C(C̄)) reflects the isochoric free energy resulting from the deformation of the solid parts, while the second term (Ψ_π(J, μ)) describes the chemo-mechanical interactions. The third term (μC) is determined by the chemical potential of the solvent μ and the molar solvent concentration C = J·Φ_f/V_m with the molar volume V_m (cf. 43,44 ). Hence, C governs the fluid flux and is consequently neglected in the following, since the modelling is restricted to the relaxed (i.e. time-independent) configuration. Accordingly, the Cauchy stress tensor is obtained by differentiation of the free energy, σ = 2J⁻¹·F·(∂Ψ/∂C)·Fᵀ, decomposing into an isochoric contribution J^(−2/3)·J⁻¹·F·S̄·Fᵀ and a volumetric contribution π·1, where S̄ represents the isochoric contribution of the second Piola-Kirchhoff stress tensor and 1 the identity tensor. Furthermore, π denotes the osmotic multiplier and can be given by an empiric expression (cf. 43 ); this relation ((11)2) is phenomenological and enforces a stronger correlation with the sample-specific qMRI data. a0 and a1 are material parameters, while π0 ensures a stress-free reference configuration. As the extracellular matrix is composed of the solid cartilage phase (i.e. CO and PG) 23,39,40 , its free energy can be given in terms of the first isochoric strain invariant Ī1 = tr C̄, with the parameter a0 enforcing a stress-free reference configuration, see (11)1. The fibril network is modelled by a polyconvex expression (cf. 46 ): its former term captures the J-shape response to tension typical for CO fibres, while the latter term describes the fibre contribution to the tissue response to compression due to the tube contraction effect 48,49 . k1, k2 and k3 denote material constants. The local CO fibril architecture is largely responsible for cartilage anisotropy.
Hence, in the constitutive model, cartilage anisotropy is captured by weighted structural tensors L_i = (w_i/3)·I + (1 − w_i)·m_i ⊗ m_i and the associated structural invariants Ī4,i = tr(C̄·L_i), where w_i (i = 1, 2, ..., m) denote scalar weighting parameters associated with the preferred fibre families whose directions are specified by unit vectors m_i; I denotes the identity tensor. Finally, in view of (10)3 and (12), the isochoric part of the second Piola-Kirchhoff stress tensor follows as S̄ = 2·∂Ψ_C/∂C̄. Numerical implementation and parameter estimation. For the parameter identification, the computational model of cartilage proposed in the previous section was implemented as a user subroutine UMAT (user material routine) in the commercially available software package Abaqus FEA (v6.17, Simulia Corp., Providence, USA). To emulate the idealized Arcade model of Benninghoff 51 , the preferred fibril directions θ_fib(z) were implemented in a rotationally symmetric manner with eight equiangular fibre families as before 16 . The weighted fractional relations (7) were mapped onto the sample-specific geometry. To exclude the possibility of negative volume fractions as a result of the saturation condition (5), the condition of minimally positive PG fraction values was enforced by the tolerance value Δtol = 1 × 10−6. Algorithm 1 presents a detailed overview of the code used for the FE implementation of the computational model of cartilage. In the confined compression test the sample-specific geometry was implemented on the basis of its mean thickness, while the piston was modeled as a rigid body. The parameter fitting was performed using a constrained optimization in Matlab while satisfying condition (8). The minimal error between the FE simulations and the actual experiments for the complete set of samples (n = 8) served as the objective function. An overview of the optimization framework can be found in Algorithm 2. The accuracy of the simulations as compared to the experimental measurements was quantified by determining the square of Pearson's correlation coefficient (i.e. R2 as in 52 ) and the relative approximated error defined by Ω = 100·‖y − f‖/‖y‖, where y and f denote vectors of experimental and computational values, respectively.
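Both goodness-of-fit measures are straightforward to reproduce; a minimal sketch, with made-up stress vectors standing in for the measured and simulated responses:

```python
import numpy as np

def goodness_of_fit(y_exp, y_sim):
    """R^2 as the squared Pearson correlation and relative approximated error
    Omega = 100*||y - f|| / ||y||, as used to compare simulations with experiments."""
    r = np.corrcoef(y_exp, y_sim)[0, 1]
    omega = 100.0 * np.linalg.norm(y_exp - y_sim) / np.linalg.norm(y_exp)
    return r**2, omega

y = np.array([0.10, 0.22, 0.35, 0.41, 0.52])   # illustrative measured stresses (MPa)
f = np.array([0.12, 0.20, 0.33, 0.44, 0.50])   # illustrative modelled counterparts
print(goodness_of_fit(y, f))
```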
Cuckoo Search with Lévy Flights for Weighted Bayesian Energy Functional Optimization in Global-Support Curve Data Fitting The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. Introduction The problem of data fitting is very important in many theoretical and applied fields [1][2][3][4]. For instance, in computer design and manufacturing (CAD/CAM), data points are usually obtained from real measurements of an existing geometric entity, as typically happens in the construction of car bodies, ship hulls, airplane fuselages, and other freeform objects [5][6][7][8][9][10][11][12][13][14][15]. This problem also appears in the shoe industry, archeology (reconstruction of archeological assets), medicine (computed tomography), computer graphics and animation, and many other fields. In all these cases, the primary goal is to convert the real data from a physical object into a fully usable digital model, a process commonly called reverse engineering. This allows significant savings in terms of storage capacity and processing and manufacturing time. Furthermore, the digital models are easier and cheaper to modify than their real counterparts and are usually available anytime and anywhere. Depending on the nature of these data points, two different approaches can be employed: interpolation and approximation. In the former, a parametric curve or surface is constrained to pass through all input data points. This approach is typically employed for sets of data points that come from smooth shapes and that are sufficiently accurate. On the contrary, approximation does not require the fitting curve or surface to pass through all input data points, but just close to them, according to some prescribed distance criteria. The approximation scheme is particularly well suited for the cases of highly irregular sampling and when data points are not exact, but subjected to measurement errors. In real-world problems the data points are usually acquired through laser scanning and other digitizing devices and are, therefore, subjected to some measurement noise, irregular sampling, and other artifacts [12,13].
Consequently, a good fitting of data should generally be based on approximation schemes rather than on interpolation [16][17][18][19][20]. There are two key components for a good approximation of data points with curves: a proper choice of the approximating function and a suitable parameter tuning. Due to their good mathematical properties regarding evaluation, continuity, and differentiability (among many others), the use of polynomial functions (especially splines) is a classical choice for the approximation function [16,17,[23][24][25][26][27]. In general, the approximating curves can be classified as global-support and local-support. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem. As a consequence, these curves exhibit global control, in the sense that any modification of the shape of the curve in a particular location is propagated throughout the whole curve. This is in clear contrast to the local-support approaches that have become prevalent in CAD/CAM and computer graphics, usually driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve [23,28]. In this work we are particularly interested to explore the performance of the global-support approach by using different global-support basis functions for our approximating curves. Main Contributions and Structure of the Paper. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. In particular, our goal is to obtain the global-support approximating curve that fits the data points better while keeping the number of free parameters of the model as low as possible. To this aim, we formulate this problem as a minimization problem by using a weighted Bayesian energy functional for global-support curves. This is one of the major contributions of this paper. Our functional is comprised of two competing terms aimed at minimizing the fitting error between the original and the reconstructed data points while simultaneously minimizing the degrees of freedom of the problem. Furthermore, the functional can be modified and extended to include various additional constraints, such as the fairness and smoothness constraints typically required in many industrial operations in computer-aided manufacturing, such as CNC (computer numerically controlled) milling, drilling, and machining [4,5,12]. Unfortunately, our formulation in the previous paragraph leads to a nonlinear continuous optimization problem that cannot be properly addressed by conventional mathematical optimization techniques. To overcome this limitation, in this paper we apply a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced in 2009 by Yang and Deb to solve optimization problems [21]. The algorithm is inspired by the obligate interspecific brood parasitism of some cuckoo species that lay their eggs in the nests of host birds of other species. Since its inception, the cuckoo search (especially its variant that uses Lévy flights) has been successfully applied in several papers reported recently in the literature to difficult optimization problems from different domains. However, to the best of our knowledge, the method has never been used so far in the context of geometric modeling and data fitting. This is also one of the major contributions of this paper.
A critical problem when using metaheuristic approaches concerns the parameter tuning, which is well known to be time-consuming and problem-dependent. In this regard, a major advantage of the cuckoo search with Lévy flights is its simplicity: it only requires two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully applied to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. The structure of this paper is as follows: in Section 2 some previous work in the field is briefly reported. Then, Section 3 introduces the basic concepts and definitions along with the description of the problem to be solved. The fundamentals and main features of the cuckoo search algorithm are discussed in Section 4. The proposed method for the optimization of our weighted Bayesian energy functional for data fitting with global-support curves is explained in Section 5. Some other issues such as the parameter tuning and some implementation details are also reported in that section. As the reader will see, the method requires a minimal number of control parameters. As a consequence, it is very simple to understand, easy to implement and can be applied to a broad variety of global-support basis functions. To check the performance of our approach, it has been applied to five illustrative examples for the cases of open and closed 2D and 3D curves exhibiting challenging features, such as cusps and self-intersections, as described in Section 6. The paper closes with the main conclusions of this contribution and our plans for future work in the field. Previous Works The problem of curve data fitting has been the subject of research for many years. First approaches in the field were mostly based on numerical procedures [1,29,30]. More recent approaches in this line use error bounds [31], curvature-based squared distance minimization [26], or dominant points [18]. A very interesting approach to this problem consists in exploiting minimization of the energy of the curve [32][33][34][35][36]. This leads to different functionals expressing the conditions of the problem, such as fairness, smoothness, and mixed conditions [37][38][39][40]. Generally, research in this area is based on the use of nonlinear optimization techniques that minimize an energy functional (often based on the variation of curvature and other high-order geometric constraints). Then, the problem is formulated as a multivariate nonlinear optimization problem in which the desired form will be the one that satisfies various geometric constraints while minimizing (or maximizing) a measure of form quality. A variation of this formulation consists in optimizing an energy functional while simultaneously minimizing the number of free parameters of the problem and satisfying some additional constraints on the underlying model function. This is the approach we follow in this paper. Unfortunately, the optimization problems given by those energy functionals and their constraints are very difficult and cannot be generally solved by conventional mathematical optimization techniques.
On the other hand, some interesting research carried out during the last two decades has shown that the application of artificial intelligence techniques can achieve remarkable results regarding such optimization problems [6,8,10,11,14]. Most of these methods rely on some kind of neural network, such as standard neural networks [8] and Kohonen's SOM (self-organizing maps) nets [10]. In some other cases, the neural network approach is combined with partial differential equations [41] or other approaches [42]. The generalization of these methods to functional networks is also analyzed in [6,11,14]. Other approaches are based on the application of nature-inspired metaheuristic techniques, which have been intensively applied to solve difficult optimization problems that cannot be tackled through traditional optimization algorithms. Examples include artificial immune systems [43], bacterial foraging [44], honey bee algorithm [45], artificial bee colony [46], firefly algorithm [47,48], and bat algorithm [49,50]. A previous paper in [51] describes the application of genetic algorithms and functional networks yielding pretty good results. Genetic algorithms have also been applied to this problem in both the discrete version [52,53] and the continuous version [7,54]. Other metaheuristic approaches applied to this problem include the use of the popular particle swarm optimization technique [9,24], artificial immune systems [55,56], firefly algorithm [57,58], estimation of distribution algorithms [59], memetic algorithms [60], and hybrid techniques [61]. Mathematical Preliminaries In this paper we assume that the solution to our fitting problem is given by a model function Φ(t) defined on a finite interval domain. Note that in this paper vectors are denoted in bold. We also assume that Φ(t) can be mathematically represented as a linear combination of the so-called blending functions; typical examples are (1) the canonical polynomial basis and (2) the Bernstein basis. Other examples include the Hermite polynomial basis, the trigonometric basis, the hyperbolic basis, the radial basis, and the polyharmonic basis. Let us suppose now that we are given a finite set of data points {Δ_i} (i = 1, ..., n) in a d-dimensional space (usually d = 2 or d = 3). Our goal is to obtain a global-support approximating curve that best fits these data points while keeping the number of degrees of freedom as low as possible. This leads to a difficult minimization problem involving two different (and competing) factors: the fitting error at the data points and the number of free parameters of the model function. In this paper, we consider the RMSE (root mean square error) as the fitting error criterion. The number of free parameters is computed by following a Bayesian approach (see [62] for further details). This is a very effective procedure to penalize fitting models with too many parameters, thus preventing data overfitting [63]. Therefore, our optimization problem consists in minimizing a weighted Bayesian energy functional L, referred to as (2) in the following, in which a parameter value t_i is associated with each data point Δ_i. The functional (2) is comprised of two terms: the first one computes the fitting error to the data points, while the second one plays the role of a penalty term in order to reduce the degrees of freedom of the model. The penalty term also includes a real positive multiplicative factor used to modulate how much this term affects the whole energy functional.
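Since the error term of (2) requires evaluating the fitted curve at each parameter value, a minimal sketch of evaluating a global-support curve Φ(t) = Σ_j c_j·B_j(t) with the Bernstein basis is given below; the coefficients and evaluation points are arbitrary illustrations, not taken from the paper.

```python
import numpy as np
from math import comb

def bernstein_basis(n, t):
    """All Bernstein polynomials B_{j,n}(t), j = 0..n, evaluated at t in [0, 1]."""
    t = np.asarray(t)
    return np.stack([comb(n, j) * t**j * (1 - t)**(n - j) for j in range(n + 1)],
                    axis=-1)

def curve_point(coeffs, t):
    """Global-support curve Phi(t) = sum_j coeffs[j] * B_{j,n}(t); coeffs is (n+1, d)."""
    n = coeffs.shape[0] - 1
    return bernstein_basis(n, t) @ coeffs

coeffs = np.array([[0.0, 0.0], [0.3, 1.0], [0.7, -1.0], [1.0, 0.0]])  # d = 2, degree 3
print(curve_point(coeffs, np.array([0.0, 0.5, 1.0])))
```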
This functional L can be modified or expanded to include any additional constraint in our model. For instance, it is very common in many engineering domains such as computer-aided ship-hull design, car-body styling, and turbine-blade design to request conditions such as fairness or smoothness. In our approach, these conditions can readily be imposed by adding different energy functionals adapted to the particular needs. Suppose that, instead of reducing the degrees of freedom of our problem, the smoothness of the fitting curve is required. This condition is simply incorporated into our model by replacing the penalty term in (2) by the strain energy functional, referred to as (3) in the following. Collecting the basis function values at the parameters t_i and the curve coefficients into suitable vectors and matrices, (2) can be written in matrix form, referred to as (4). Minimization of L requires differentiating (4) with respect to Θ and equating to zero to satisfy the first-order conditions, leading to a system of equations called the normal equations. In general, the blending functions are nonlinear in the parameters t_i, leading to a strongly nonlinear optimization problem, with a high number of unknowns for large sets of data points, a case that happens very often in practice. Our strategy for solving the problem consists in applying the cuckoo search method to determine suitable parameter values for the minimization of the functional L according to (2). The process is performed iteratively for a given number of iterations. Such a number is another parameter of the method that has to be chosen in order to run the algorithm until convergence of the minimization of the error is achieved.
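This two-level structure is easy to sketch: once the metaheuristic proposes the parameters t_i, the coefficient matrix Θ follows from the normal equations by linear least squares. The basis choice and data below are illustrative assumptions.

```python
import numpy as np

# With the parameters t_i fixed, minimizing the fitting-error term reduces to
# linear least squares for Theta: solve the normal equations
# (B^T B) Theta = B^T D, where B[i, j] = basis_j(t_i) and D holds the data points.
def solve_coefficients(basis_matrix, data_points):
    BtB = basis_matrix.T @ basis_matrix
    BtD = basis_matrix.T @ data_points
    return np.linalg.solve(BtB, BtD)

# Example with the canonical polynomial basis {1, t, t^2, t^3}
t = np.linspace(0.0, 1.0, 25)
B = np.vander(t, 4, increasing=True)
D = np.column_stack([np.sin(2 * t), np.cos(2 * t)])   # 2D data points
Theta = solve_coefficients(B, D)
print(np.linalg.norm(B @ Theta - D))                  # fitting residual
```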
The Cuckoo Search Algorithm Cuckoo search (CS) is a nature-inspired population-based metaheuristic algorithm originally proposed by Yang and Deb in 2009 to solve optimization problems [21]. The algorithm is inspired by the obligate interspecific brood parasitism of some cuckoo species that lay their eggs in the nests of host birds of other species with the aim of escaping from the parental investment in raising their offspring. This strategy is also useful to minimize the risk of egg loss to other species, as the cuckoos can distribute their eggs amongst a number of different nests. Of course, sometimes it happens that the host birds discover the alien eggs in their nests. In such cases, the host bird can take different responsive actions, varying from throwing such eggs away to simply leaving the nest and building a new one elsewhere. However, the brood parasites have in turn developed sophisticated strategies (such as shorter egg incubation periods, rapid nestling growth, and egg coloration or patterns mimicking their hosts) to ensure that the host birds will care for the nestlings of their parasites. This interesting and surprising breeding behavioral pattern is the metaphor of the cuckoo search metaheuristic approach for solving optimization problems. In the cuckoo search algorithm, the eggs in the nest are interpreted as a pool of candidate solutions of an optimization problem, while the cuckoo egg represents a new coming solution. The ultimate goal of the method is to use these new (and potentially better) solutions associated with the parasitic cuckoo eggs to replace the current solution associated with the eggs in the nest. This replacement, carried out iteratively, will eventually lead to a very good solution of the problem. In addition to this representation scheme, the CS algorithm is also based on three idealized rules [21,22]. (1) Each cuckoo lays one egg at a time and dumps it in a randomly chosen nest. (2) The best nests with high quality of eggs (solutions) will be carried over to the next generations. (3) The number of available host nests is fixed, and a host can discover an alien egg with a probability p_a ∈ [0, 1]. In this case, the host bird can either throw the egg away or abandon the nest so as to build a completely new nest in a new location. For simplicity, the third assumption can be approximated by a fraction p_a of the nests being replaced by new nests (with new random solutions at new locations). For a maximization problem, the quality or fitness of a solution can simply be proportional to the objective function. However, other (more sophisticated) expressions for the fitness function can also be defined. Based on these three rules, the basic steps of the CS algorithm can be summarized as shown in the pseudocode reported in Algorithm 1. Basically, the CS algorithm starts with an initial population of host nests and is performed iteratively. In the original proposal, the initial values of the j-th component of the i-th nest are determined by the expression x_ij(0) = rand·(up_j − low_j) + low_j, where up_j and low_j represent the upper and lower bounds of that j-th component, respectively, and rand represents a standard uniform random number on the open interval (0, 1). Note that this choice ensures that the initial values of the variables are within the search space domain. These boundary conditions are also controlled in each iteration step. For each iteration g, a cuckoo egg i is selected randomly and new solutions x_i(g + 1) are generated by using the Lévy flight, a kind of random walk in which the steps are defined in terms of the step lengths, which have a certain probability distribution, with the directions of the steps being isotropic and random. According to the original creators of the method, the strategy of using Lévy flights is preferred over other simple random walks because it leads to better overall performance of the CS. The general equation for the Lévy flight is given by x_i(g + 1) = x_i(g) + α ⊕ Lévy(λ), (6) where g indicates the number of the current generation and α > 0 indicates the step size, which should be related to the scale of the particular problem under study. The symbol ⊕ is used in (6) to indicate the entrywise multiplication. Note that (6) is essentially a Markov chain, since the next location at generation g + 1 only depends on the current location at generation g and a transition probability, given by the first and second terms of (6), respectively. This transition probability is modulated by the Lévy distribution, Lévy(λ) ~ g^(−λ), (1 < λ ≤ 3), which has an infinite variance with an infinite mean. Here the steps essentially form a random walk process with a power-law step-length distribution with a heavy tail. From the computational standpoint, the generation of random numbers with Lévy flights is comprised of two steps: firstly, a random direction according to a uniform distribution is chosen; then, the generation of steps following the chosen Lévy distribution is carried out. The authors suggested using the so-called Mantegna's algorithm for symmetric distributions, where "symmetric" means that both positive and negative steps are considered (see [64] for details). Their approach computes the factor σ̂ = [Γ(1 + β̂)·sin(π·β̂/2) / (Γ((1 + β̂)/2)·β̂·2^((β̂−1)/2))]^(1/β̂), where Γ denotes the Gamma function and β̂ = 3/2 in the original implementation by Yang and Deb [22].
This factor is used in Mantegna's algorithm to compute the step length as s = u/|v|^(1/β̂), where u and v follow normal distributions of zero mean and standard deviations σ_u and σ_v, respectively; here σ_u = σ̂, so that s obeys the Lévy distribution given by (8), and σ_v = 1. Then, the stepsize η is computed as η = 0.01·s·(x − x_best), (10), where s is computed according to (9). Finally, x is modified as x ← x + η·Υ, where Υ is a random vector of the dimension of the solution x that follows the normal distribution N(0, 1). The CS method then evaluates the fitness of the new solution and compares it with the current one. In case the new solution brings better fitness, it replaces the current one. On the other hand, a fraction of the worse nests (according to the fitness) are abandoned and replaced by new solutions so as to increase the exploration of the search space looking for more promising solutions. The rate of replacement is given by the probability p_a, a parameter of the model that has to be tuned for better performance. Moreover, for each iteration step, all current solutions are ranked according to their fitness and the best solution reached so far is stored as the vector x_best (used, e.g., in (10)). This algorithm is applied in an iterative fashion until a stopping criterion is met. Common terminating criteria are that a solution is found that satisfies a lower threshold value, that a fixed number of generations has been reached, or that successive iterations no longer produce better results. The Method We have applied the cuckoo search algorithm discussed in the previous section to our optimization problem described in Section 3. The problem consists in minimizing the weighted Bayesian energy functional given by (2) for a given family of global-support blending functions. To this aim, we firstly need a suitable representation of the variables of the problem. We consider an initial population of nests, representing the potential solutions of the problem. Each solution consists of a real-valued vector containing the parameters t_i, the vector coefficients Θ, and the weights; its dimension is determined by the numbers of data points and basis functions and by the curve dimension. The structure of this vector is also highly constrained. On one hand, the set of parameters {t_i} is constrained to lie within the unit interval [0, 1]. In computational terms, this means that different controls are to be set up in order to check that this condition holds. On the other hand, the ordered structure of the data points means that those parameters must also be sorted. Finally, the weights are assumed to be strictly positive real numbers. Regarding the fitness function, it is given by either the weighted Bayesian energy functional in (2) or by the weighted strain energy functional in (3), where the former penalizes any unnecessarily large number of free parameters for the model, while the latter imposes additional constraints regarding the smoothness of the fitting curve. Note also that the strength of the functionals can be modulated by the multiplicative penalty factor to satisfy additional constraints.
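A compact, self-contained sketch of this scheme (initialization within bounds, Mantegna's Lévy steps, greedy replacement and p_a-abandonment) is given below on a toy objective. It is a simplification of Algorithm 1: the problem-specific constraint handling for the data-fitting vector (sorted parameters in [0, 1], positive weights) is omitted, and the paper's own implementation was in MATLAB rather than Python.

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(42)
BETA = 1.5  # Levy exponent (beta-hat) used in Mantegna's algorithm

SIGMA_U = (gamma(1 + BETA) * np.sin(np.pi * BETA / 2)
           / (gamma((1 + BETA) / 2) * BETA * 2 ** ((BETA - 1) / 2))) ** (1 / BETA)

def levy_step(dim):
    """Mantegna's algorithm: s = u/|v|^(1/beta), u ~ N(0, sigma_u^2), v ~ N(0, 1)."""
    u = rng.normal(0.0, SIGMA_U, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1.0 / BETA)

def cuckoo_search(fitness, low, up, n=100, p_a=0.25, iters=500):
    """Minimize `fitness` over the box [low, up]; n and p_a as chosen in the paper."""
    dim = low.size
    nests = low + rng.random((n, dim)) * (up - low)   # x_ij(0) within bounds
    fit = np.apply_along_axis(fitness, 1, nests)
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        i = rng.integers(n)                           # random cuckoo
        eta = 0.01 * levy_step(dim) * (nests[i] - best)
        x_new = np.clip(nests[i] + eta * rng.normal(0.0, 1.0, dim), low, up)
        f_new = fitness(x_new)
        if f_new < fit[i]:                            # greedy replacement
            nests[i], fit[i] = x_new, f_new
        worst = np.argsort(fit)[-int(np.ceil(p_a * n)):]  # abandon worst fraction
        nests[worst] = low + rng.random((len(worst), dim)) * (up - low)
        fit[worst] = np.apply_along_axis(fitness, 1, nests[worst])
    return nests[np.argmin(fit)], float(fit.min())

sphere = lambda x: float(np.sum(x**2))
x_best, f_best = cuckoo_search(sphere, np.full(4, -1.0), np.full(4, 1.0))
print(x_best, f_best)
```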
Parameter Tuning. A critical issue when working with metaheuristic approaches concerns the choice of suitable parameter values for the method. This issue is of paramount importance since the proper choice of such values will largely determine the performance of the method. Unfortunately, it is also a very difficult task. On one hand, the field still lacks sufficient theoretical results to answer this question on a general basis. On the other hand, the choice of parameter values is strongly problem-dependent, meaning that good parameter values for a particular problem might be completely useless (even counterproductive) for any other problem. These facts explain why the choice of adequate parameter values is so troublesome and very often a bottleneck in the development and application of metaheuristic techniques. The previous limitations have been traditionally overcome by following different strategies. Perhaps the most common one is to obtain good parameter values empirically. In this approach, several runs or executions of the method are carried out for different parameter values and a statistical analysis is performed to derive the values leading to the best performance. However, this approach is very time-consuming, especially when different parameters influence each other. This problem is aggravated when the metaheuristic depends on many different parameters, leading to an exponential growth in the number of executions. The cuckoo search method is particularly adequate in this regard because of its simplicity. In contrast to other methods that typically require a large number of parameters, CS only requires two parameters, namely, the population size n and the probability p_a. This makes the parameter tuning much easier for CS than for other metaheuristic approaches. Some previous works have addressed the issue of parameter tuning for CS. They showed that the method is relatively robust to the variation of parameters. For instance, the authors in [21] tried different values for n = 5, 10, 15, 20, 50, 100, 150, 250, and 500 and p_a = 0, 0.01, 0.05, 0.1, 0.15, 0.2, 0.25, 0.4, and 0.5. They found that the convergence rate of the method is not very sensitive to the parameters used, implying that no fine adjustment is needed for the method to perform well. Our experimental results are in good agreement with these empirical observations. We performed several trials for the parameter values indicated above and found that our results do not differ significantly in any case. We noticed, however, that some parameter values are more adequate in terms of the number of iterations required to reach convergence. In this paper, we set the parameters n and p_a to 100 and 0.25, respectively. Implementation Issues. Regarding the implementation, all computations in this paper have been performed on a 2.6 GHz Intel Core i7 processor with 8 GB RAM. The source code has been implemented by the authors in the native programming language of the popular scientific program MATLAB, version 2012a. We remark that an implementation of the CS method has been described in [21]. Similarly, a vectorized implementation of CS in MATLAB is freely available in [65]. Our implementation is strongly based on (although not exactly identical to) that efficient open-source version of CS. Experimental Results We have applied the CS method described in the previous sections to different examples of curve data fitting. To keep the paper at a manageable size, in this section we describe only five of them, corresponding to different families of global-support basis functions and also to open and closed 2D and 3D curves. In order to replicate the conditions of real-world applications, we assume that our data are irregularly sampled and subjected to noise. Consequently, we consider a nonuniform sampling of data in all our examples.
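The test data of this section are straightforward to reproduce; below is a sketch generating two of the benchmark curves with non-uniform sampling and additive Gaussian noise. The curve constants, the sampling ranges, and the reading of SNR = 60 as a linear power ratio are assumptions, since the paper does not spell these out.

```python
import numpy as np

rng = np.random.default_rng(7)

def agnesi(n=100, a=0.5):
    """Witch of Agnesi, y = 8a^3/(x^2 + 4a^2), non-uniformly sampled."""
    x = np.sort(rng.uniform(-3.0, 3.0, n))
    return np.column_stack([x, 8 * a**3 / (x**2 + 4 * a**2)])

def hypocycloid(n=100, R=5.0, r=1.0):
    """Hypocycloid with k = R/r cusps (k = 5 here), non-uniformly sampled."""
    t = np.sort(rng.uniform(0.0, 2 * np.pi, n))
    x = (R - r) * np.cos(t) + r * np.cos((R - r) / r * t)
    y = (R - r) * np.sin(t) - r * np.sin((R - r) / r * t)
    return np.column_stack([x, y])

def add_noise(points, snr=60.0):
    """Additive Gaussian white noise; SNR read as a linear power ratio."""
    sigma = np.sqrt(np.mean(points**2) / snr)
    return points + rng.normal(0.0, sigma, points.shape)

noisy_agnesi = add_noise(agnesi())
noisy_hypocycloid = add_noise(hypocycloid())
```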
Data points are also perturbed by an additive Gaussian white noise of small intensity given by an SNR (signal-to-noise ratio) of 60 in all reported examples. The first example corresponds to a set of 100 noisy data points obtained by nonuniform sampling from the Agnesi curve. The curve is obtained by drawing a line from the origin through the circle of radius a and center (0, a) and then picking the point with the y coordinate of the intersection with the circle and the x coordinate of the intersection of the extension of the line with the line y = 2a. The points are then fitted by using the Bernstein basis functions. Our results are depicted in Figure 1(a), where the original data points are displayed as red empty circles, whereas the reconstructed points appear as blue plus symbols. Note the good matching between the original and the reconstructed data points. In fact, we got a fitness value of 1.98646 × 10−3, indicating that the reconstructed curve fits the noisy data points with high accuracy. The average CPU time for this example is 3.01563 seconds. We also computed the absolute mean value of the difference between the original and the reconstructed data points for each coordinate and obtained good results: (9.569738 × 10−4, 1.776091 × 10−3). This good performance is also reflected in Figure 1(b), where the original data points and the reconstructed Bézier fitting curve are displayed as black plus symbols and a blue solid line, respectively. The second example corresponds to the Archimedean spiral curve (also known as the arithmetic spiral curve). This curve is the locus of points corresponding to the locations over time of a point moving away from a fixed point with a constant speed along a line which rotates with constant angular velocity. In this example, we consider a set of 100 noisy data points from such a curve that are subsequently fitted by using the canonical polynomial basis functions. Our results for this example are depicted in Figure 2. We omit the interpretation of this figure because it is similar to the previous one. Once again, note the good matching between the original and the reconstructed data points. In this case we obtained a fitness value of 1.12398 × 10−2 for these data points, while the absolute mean value of the difference between the original and the reconstructed data points for each coordinate is (1.137795 × 10−2, 6.429596 × 10−3). The average CPU time for this example is 4.68752 seconds. We conclude that the CS method is able to obtain a global-support curve that fits the data points pretty well. The third example corresponds to a hypocycloid curve. This curve belongs to a much larger family of curves called the roulettes. Roughly speaking, a roulette is a curve generated by tracing the path of a point attached to a curve that is rolling upon another fixed curve without slippage. In principle, they can be any two curves. The particular case of a hypocycloid corresponds to a roulette traced by a point attached to a circle of radius r rolling around the inside of a fixed circle of radius R, where it is assumed that R = k·r. If k = R/r is a rational number, then the curve eventually closes on itself and has cusps (i.e. sharp corners, where the curve is not differentiable). In this example, we consider a set of 100 noisy data points from the hypocycloid curve with 5 cusps. They are subsequently fitted by using the Bernstein basis functions. Figure 3 shows our results graphically.
In this case, the best fitness value is 2.00706 × 10−3, while the absolute mean value of the difference between the original and the reconstructed data points for each coordinate is (1.661867 × 10−3, 1.521872 × 10−3). The average CPU time for this example is 9.82813 seconds. In this case, the complex geometry of the curve, involving several cusps and self-intersections, leads to this relatively large CPU time in comparison with the previous (much simpler) examples. In fact, this example illustrates the ability of the method to perform well even in the case of nonsmooth, self-intersecting curves. The fourth example corresponds to the so-called piriform curve, which can be defined procedurally in a rather complex way. Once again, we consider a set of 100 noisy data points fitted by using the Bernstein basis functions. Our results are shown in Figure 4. The best fitness value in this case is 1.17915 × 10−3, while the absolute mean value of the difference between the original and the reconstructed data points for each coordinate is (8.64616 × 10−4, 5.873391 × 10−4). The average CPU time for this example is 3.276563 seconds. Note that this curve has a cusp in the leftmost part; moreover, the data points tend to concentrate around the cusp, meaning that the data parameterization is far from uniform. However, the method is still able to recover the shape of the curve with great detail. The last example corresponds to a 3D closed curve called the Eight Knot curve. Two images of the curve from different viewpoints are shown in Figure 5. The CS method is applied to a set of 100 noisy data points for the Bernstein basis functions. Our results are shown in Figure 6. The best fitness value in this case is 3.193634 × 10−2, while the absolute mean value of the difference between the original and the reconstructed data points for each coordinate is (2.7699870 × 10−2, 2.863125 × 10−2, 1.3710703 × 10−2). The average CPU time for this example is 8.75938 seconds. Conclusions and Future Work This paper addresses the problem of approximating a set of data points by using global-support curves while simultaneously minimizing the degrees of freedom of the model function and satisfying other additional constraints. This problem is formulated in terms of a weighted Bayesian energy functional that encapsulates all these constraints into a single mathematical expression. In this way, the original problem is converted into a continuous nonlinear multivariate optimization problem, which is solved by using a metaheuristic approach. Our method is based on the cuckoo search, a powerful nature-inspired metaheuristic algorithm introduced recently to solve optimization problems. Cuckoo search (especially its variant that uses Lévy flights) has been successfully applied to difficult optimization problems in different fields. However, to the best of our knowledge, this is the first paper applying the cuckoo search methodology in the context of geometric modeling and data fitting. Our approach based on the cuckoo search method has been tested on five illustrative examples of different types, including open and closed 2D and 3D curves. Some examples also exhibit challenging features, such as cusps and self-intersections. They have been fitted by using two different families of global-support functions (Bernstein basis functions and the canonical polynomial basis) with satisfactory results in all cases.
The experimental results show that the method performs pretty well, being able to solve our difficult minimization problem in an astonishingly straightforward way. We conclude that this new approach can be successfully applied to solve our optimization problem. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. This simplicity is also reflected in the CPU runtime of our examples. Even though we are dealing with a constrained continuous multivariate nonlinear optimization problem and with curves exhibiting challenging features such as cusps and self-intersections, a typical single execution takes less than 10 seconds of CPU time for all the examples reported in this paper. In addition, the method is simple to understand, easy to implement and does not require any further pre-/postprocessing. In spite of these encouraging results, further research is still needed to determine the advantages and limitations of the present method to their full extent. On the other hand, some modifications of the original cuckoo search have been claimed to outperform the initial method on some benchmarks. Our implementation has been designed according to the specifications of the original method and we have not tested any of its subsequent modifications yet. We are currently interested in exploring these issues as part of our future work. The hybridization of this approach with other competitive methods for better performance is also part of our future work.
Potential for mitigation of solar collector overheating through application of phase change materials – a review Demand for domestic hot water and heating is rarely perfectly concurrent with solar irradiation, which means that collectors can overheat in periods of high incident radiation and low demand. Phase change materials have been used as energy storage in space heating applications to absorb excess heat during low demand periods for use in peak demand periods. This paper reviews the current state of research on the possibility of application of such materials as energy storage for solar collectors, in order to avoid collector overheating. Finally, various materials were evaluated and ranked for this application based on required properties and price. An example model of such materials being applied in a typical family house domestic hot water solar system is also provided. INTRODUCTION One of the major limiting factors in large scale application of solar thermal collectors is their price. In mass production conditions, economies of scale minimize many production costs present in smaller production runs. This makes limitations of the production process itself and the price of raw materials the primary driving forces behind high production cost. Currently the selection of materials used in collector design is relatively limited due to strict and often conflicting demands on necessary material properties. This in turn also limits the production processes which can be used. In an average collector there are strict requirements for the thermal, mechanical and optical properties of the material. A significant cause of this problem is collector overheating, i.e. a high temperature of stagnation. The temperature of stagnation is the highest temperature reached by the collector when exposed to maximum incident solar radiation and high ambient temperature at a time when no flow through the collector is present. This can happen due to flow problems in the system, but its most common cause in normal collector operation is that the set-point temperature in the hot water tank is achieved and the pump is turned off in order to avoid overheating the water in the tank. The temperature of stagnation of the most basic flat plate solar collector design is high and regularly exceeds 150 °C. This limits the selection of materials for collectors to materials with high thermal stability and appropriately high melting points. In industrial practice this has meant that metals have been the most common materials used to produce collector absorbers. The use of alternative and cheaper materials such as polymers has been very limited. The primary factor limiting the use of polymers is the fact that most polymers undergo glass transition at temperatures as low as 100 °C. That temperature is much lower than the temperatures a standard flat plate collector's absorber reaches during stagnation. In spite of these limitations there have been attempts to make a polymer flat plate solar collector. An example is the research by de la Peña et al. [1]. This solution doesn't reduce the high temperature of stagnation, but instead uses a special polymer that is stable at high temperatures. The price of this material is high, and the overall price of the collector is not reduced compared to collectors made with industry-standard materials. Another option discussed is the use of channels for air-cooling behind the collector, as presented by Hengstberger et al. [2]. Kessentini et al. [3] used a polymeric transparent insulation material.
Föste et al. [4] approached the problem from another angle, and used butane instead of water as a heat transfer medium. In addition to these, thermotropic and thermochromic materials have shown promise. A significant reduction of absorbed solar radiation by this method has been noted by Muehling et al. [5]. Similar results were obtained by Föste et al. [6] and Hussain et al. [7]. This reduction is still not sufficient to enable the use of commodity polymers in every part of the collector. The methods listed above aim to reduce or increase heat transfer to or from the collector during stagnation. Another approach was considered in [2], by examining the use of phase change materials (PCMs) as a form of overheating protection. This potential solution is positively reviewed and potential PCMs suitable for further study were suggested. Forzano et al. [8] and Dehgahn et al. [9] discuss integration of PCMs as energy storage in buildings in a manner that may be transferable to collector applications. The possibility of application of PCMs as overheating protection in solar collectors is a topic which has not been extensively researched so far. This paper aims to provide an extensive overview of currently available materials. It also aims to provide an analysis of their potential applicability for this purpose. To show the potential of this technology an example PCM protected system will be given for a typical family house solar domestic hot water system.

OVERVIEW OF PHASE CHANGE MATERIALS Phase change materials are materials which undergo phase transition at a technologically advantageous temperature and have a relatively high latent heat of fusion. While phase transition can occur in any material, only a limited number meet these specific requirements. There are usually additional requirements which include, but are not limited to: non-toxicity, low or high thermal resistance, small volume change after phase transition, inertness in contact with other materials, durability, and affordability. All of these requirements make PCM selection a complex task which requires an in-depth analysis.

METHODS OF INTEGRATION OF PCMs AND NUMERICAL ANALYSIS PCMs need to be integrated into a solar system to be an effective overheating protection. This section of the paper gives an overview of possible methods of integration of PCMs and their feasibility for domestic solar collector application. Building integration is one possible method of solar collector installation. A mathematical model of a PCM thermal process in a solar collector system was presented in [82], focusing primarily on heat loss reduction and overall performance improvement. The behavior of PCM in a solar chimney system was simulated numerically by Xaman et al. [83]. A similar model was developed by Fadaei et al. [84]. This is conceptually similar to a PCM integrated in a ventilated collector, and as such may be of interest for further research. A theoretical model for a solar desalinator with a phase change material was given by Abu-Arabi et al. [85]. It showed good agreement with experimental results. Swami et al. [86] used a similar approach for a solar dryer with a PCM. Yadav et al. [87] provided a CFD simulation of the drying process in a system with a PCM. Amirifard et al. [88] suggested the use of PCMs for solar ponds. They found a 6.1% increase in charging time. Plytaria et al. [89] discussed the use of PCMs in solar cooling. They reported that the use of PCMs provided an up to 30% reduction in auxiliary energy. Wei et al.
[90] presented a novel PCM based thermal energy storage system. The thermal performance of this system was evaluated with a detailed analytic thermodynamic model. The results showed that such a system is feasible. Mao et al. [91] developed a similar model in MATLAB. An experimentally validated model of an integral solar collector which has a PCM storage section integrated into a flat-plate collector was developed by Bilardo et al. [92]. This model uses an electrical analogy scheme to model the behavior of the collector. Zhao et al. [93] provided a detailed overview of the practical application of PCM integrated solar heating in Tibet. Hirmiz et al. [94] proposed a reduced analytical methodology for sizing PCM storage tanks based on a comparison of numerical and analytical methods. Gulfam et al. [95] provided an overview of the selection process for a paraffinic PCM. Numerical parametric analysis was conducted by Kazemian et al. [96]. It was found that an increase in the melting temperature of the PCM employed in a photovoltaic thermal system increased the surface temperature and decreased the percentage of PCM melted. A review was given by Jimenez-Xaman et al. [97] on the current state of research in PCMs for solar chimneys. It had a particular focus on computational fluid dynamics and global energy balance models. Based on this, a new model was formulated by Vargas-Lopez et al. [98]. Reyes et al. [99] showed that by using a fuzzy logic control system the period of energy retention of the PCM could be extended. In principle, the same should be true for the discharge from a PCM. Elbahjaoui et al. [100] developed a model based on the finite volume method for a flat-plate collector with latent heat storage units composed of rectangular slabs. This model was later used by Elbahjaoui et al. [101] to perform an optimization study for a solar system in Marrakesh. A similar analysis was given by Allouhi et al. [102]. Sarbu et al. [103] provided a review of PCM materials followed by a two-dimensional heat transfer simulation model using a control volume technique. Forzano et al. [104] gave a model for energy savings per m³ of PCM integrated into a building's envelope. This is similar in its approach to the evaluation presented in this paper. A model for encapsulated PCMs was given by Raul et al. [105]. Augspurger et al. [106] provided a model for solar salts.

EVALUATION OF PHASE CHANGE MATERIALS To determine the usefulness of individual PCMs it is necessary to compare them to a standard solution. In this paper that solution is taken to be a larger water tank. The first step in this evaluation process is to calculate the effective heat capacity (cef) for all considered materials. This is done using equation 1. It includes both sensible and latent components. Tm is the transition temperature of the PCM. The variables cp,s and cp,l are the heat capacities of the solid and liquid states of the PCM. ∆Hm is the latent heat of fusion. These values are given for individual PCMs in the literature as stated in the previous section. The value of 0.27778 is used to convert the kJ/kg heat capacities found in the literature into the desired Wh/kg unit. Before calculation it is necessary to check that the temperature of fusion falls between Tlow and Thigh. If not, then the appropriate term is removed from the equation, and in the other term Tm is replaced by Thigh or Tlow respectively. ∆Hm also needs to be disregarded in the case where Thigh is lower than Tm.
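As an illustration of this procedure, the following minimal sketch (in Python; equation 1 itself is not reproduced in this extract, so the expression below is assumed from the description above, and the clamping rules follow the stated conditions) computes cef in Wh/kg:

KJ_TO_WH = 0.27778  # conversion from kJ/kg to Wh/kg, as stated above

def effective_heat_capacity(cp_s, cp_l, dH_m, T_m, T_low=20.0, T_high=70.0):
    # cp_s, cp_l: solid/liquid heat capacities (kJ/kg/K); dH_m: latent heat
    # of fusion (kJ/kg); T_m: transition temperature (degC)
    if T_m <= T_low:        # always liquid in the window: solid term removed
        sensible, latent = cp_l * (T_high - T_low), 0.0
    elif T_m >= T_high:     # never melts: liquid term and dH_m disregarded
        sensible, latent = cp_s * (T_high - T_low), 0.0
    else:                   # melting occurs inside the window
        sensible, latent = cp_s * (T_m - T_low) + cp_l * (T_high - T_m), dH_m
    return (sensible + latent) * KJ_TO_WH

# Sanity check with water (no phase change between 20 and 70 degC):
print(round(effective_heat_capacity(4.186, 4.186, 334.0, 0.0), 1))  # ~58.1

The water check reproduces the 58.1 Wh/kg baseline quoted below, which suggests that this reading of equation 1 is consistent with the authors' calculation.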
For this analysis Tlow, the temperature the PCM tank reaches after overnight cooling, is taken to be 20°C. Thigh is the highest allowed temperature in the solar system and therefore the tank. In this case it is set at 70°C. This is well below the glass transition temperature of most commodity plastics, which is between 90 and 120°C. While it is true that heat transfer to and from PCMs is limited by their high thermal resistance, i.e. low thermal conductivity, the focus of this paper is on PCMs and tank design is beyond its scope. It should be noted that specific solutions were discussed by Faegh et al. [107], Khan et al. [108], Silva et al. [109], Liu et al. [110] and Kapsalis et al. [111]. Guidelines outlined in these reviews are considered in this evaluation, but are not included in the calculation process. Using equation 1 the baseline value of cef for water (cef,w) can be calculated as 58.1 Wh/kg. The density of water is taken from the literature for Thigh. The values obtained for other materials are then compared with water using equation 2. The values of heat capacity and density are taken for each PCM that is considered, based on equation 1 and data from the literature. Based on ref another selection can be made. All PCMs which have a ref smaller than 100% can be discarded, since using water is more volumetrically favorable than using such materials. As water is significantly cheaper than all considered PCMs, there is little reason to use PCMs instead of water if there is no decrease in tank volume. To determine the economic viability of the remaining PCMs, the required capacity of the PCM tank needs to be determined based on the installed area of collectors of a domestic hot water solar system. The simulation model includes a flat plate collector array. To be applicable for polymer solar collectors the outlet temperature of water in the collector array (Tw) was set to 70°C. This temperature is high enough to allow the water in the tank to be heated to a temperature above 60°C and low enough to be below the glass transition temperature of most commodity plastics. The efficiency of the solar collector was calculated using equation 3, taken from Rodriguez-Hidalgo et al. [112]. Collector inclination was set at an angle which ensures perpendicular incidence of solar radiation at noon, with the intensity and temporal distribution taken for the summer solstice at a latitude of 45° North. This determines the incident angle (θinc) on the collector. The ambient temperature was set to 35°C to simulate high temperatures in the summer. These inputs are shown in Table 2. The values in the table were obtained from the Photovoltaic Geographical Information System (PVGIS) of the European Institute for Energy and Transport, for a location with longitude 16°E and latitude 45°N, during the summer solstice. Qdiff is the diffuse component of incident solar radiation. Qbeam is the direct (beam) component of incident solar radiation. Reflected incident radiation from surrounding surfaces has to be disregarded to generalize the case considered, since reflected radiation entirely depends on the local conditions on site. The total incident radiation on the collector surface is given by the following equation:

Qtot = Qbeam · cos(θinc) + Qdiff [Wh/m²] (4)

This is the maximum hourly value that can be transferred into the PCM tank by a collector array with ideal efficiency.
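To make the screening step concrete, the sketch below (continuing in Python; the exact form of equation 2 is assumed, from the description, to be the ratio of volumetric heat storage densities, and the water density near 70°C is a generic literature value, not a figure taken from this paper) computes ref and the maximum hourly collected radiation:

from math import cos, radians

RHO_WATER = 977.8   # kg/m3 near 70 degC (literature value)
C_EF_WATER = 58.1   # Wh/kg, baseline computed above

def volumetric_ratio(c_ef_pcm, rho_pcm):
    # ref in %: stored energy per unit volume relative to water (assumed eq. 2)
    return 100.0 * (c_ef_pcm * rho_pcm) / (C_EF_WATER * RHO_WATER)

def q_total(q_beam, q_diff, theta_inc_deg):
    # Equation 4: incident radiation on the collector surface, Wh/m2
    return q_beam * cos(radians(theta_inc_deg)) + q_diff

# Hypothetical paraffin (c_ef = 75 Wh/kg, 800 kg/m3) passes the 100% screen:
print(round(volumetric_ratio(75.0, 800.0), 1))   # ~105.6
# Hypothetical noon values: 800 Wh/m2 beam, 100 Wh/m2 diffuse, 0 deg incidence
print(q_total(800.0, 100.0, 0.0))                # 900.0

PCMs falling below the 100% line would be discarded at this point, before any economic comparison is made.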
From this the minimum (theoretical) volume of the PCM tank that needs to be included in the system to prevent collector overheating can be calculated, considering a collector with real efficiency obtained from equation 3. This is given by equation 5. The minimum tank capacity required and the price of the required PCM for each case are given in Table 3. The prices for PCMs are wholesale prices of the material, and do not include transport and installation costs, as that is beyond the scope of this analysis. Both the price and the volume are given per m² of flat plate collector surface. In real world applications they need to be multiplied by the actual installed area of the collectors. Values presented per m² of collector surface are more versatile as they enable easy calculation for different commercial collector setups. Prices given in Table 3 provide a minimum cost of the necessary PCM, as the method outlined in the previous section does not take into account heat transfer efficiency or rate in the PCM tank. In practice an increase in PCM volume would be necessary depending on the exact configuration of the tank and its heat transfer system. However, since most listed materials can be used in a number of configurations, a comprehensive comparison would only be possible if all such configurations were compared. This is not feasible because of the wide scope and because data is not available for a wide variety of configurations. Therefore, most configurations would need to be either experimentally or numerically tested.

Further consideration needs to be given to health concerns for any solution with PCMs. Domestic hot water applications require a medium that is non-toxic to humans as there is a risk of contamination of hot water. This water may be ingested by humans in case of tank or heat exchanger failure. Paraffins are the safest PCM option in this respect as they are safe for human consumption. While some inorganic and eutectic PCMs have significantly lower cost per m² of collector, the added cost of additional heat exchangers needed to separate them entirely from the domestic hot water circuit, and the potential danger in case of human ingestion, may render them inapplicable. Compared with water, PCMs are not as sensitive to low temperatures and are at no risk of damage if the PCM is cooled to lower temperatures during winter months, since the phase change behavior is already accounted for in tank design. Such tanks are therefore suitable for external installation. This is also favorable for overnight heat dissipation. If the tank can be placed outside it can cool using outside air without affecting the heat balance of the building. The cooling will also be at a much higher rate than would be possible in a boiler room setup. Finally, the applicability of PCM tanks as a passive method of overheating protection needs to be further examined. This is due to the fact that they need to be held in reserve during normal operation. Furthermore, they need a more complex regulation system to ensure that flow through their heat exchanger only occurs during collector stagnation. To achieve this a separate pump or an automatic valve would need to be used. Both of these components are susceptible to various modes of failure. Failure of these components could then cause the collector to be left without overheating protection. Therefore, such a solution cannot be considered truly passive. In order to achieve a fully passive solution the PCM would need to be integrated in the collector, as suggested in [2].
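Returning to the sizing step at the start of this section, a minimal sketch of equation 5 under an assumed reading (the tank must absorb the heat collected during the stagnation period, so the volume is the collected heat divided by the volumetric storage density; the numbers are purely illustrative, not values from Table 3):

def min_tank_volume(q_collected_wh_per_m2, c_ef_pcm, rho_pcm):
    # Minimum PCM volume (m3 per m2 of collector) able to absorb the
    # collected excess heat; the real model also applies the collector
    # efficiency from equation 3, which is not reproduced in this extract.
    return q_collected_wh_per_m2 / (c_ef_pcm * rho_pcm)

# Hypothetical 4000 Wh/m2 of excess heat into the paraffin sketched above:
print(round(min_tank_volume(4000.0, 75.0, 800.0), 4))  # ~0.0667 m3 per m2

Because no heat transfer limitation is modelled, such a figure is a lower bound, in line with the caveat above that the Table 3 volumes and prices are minima.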
CONCLUSION From the above analysis it can be concluded that PCMs have potential as overheating protection in solar collectors. Many of the materials reviewed offer advantages in terms of volumetric savings compared to using larger water tanks. The downside of such a system is cost, as PCMs are significantly more expensive than water. Yet, the possibility of integration of PCMs directly into collectors or in tanks which can be left outside during the whole year may justify their use regardless of cost. Of the materials analyzed, two salt hydrates (sodium sulphate decahydrate and sodium thiosulfate pentahydrate) and two eutectic PCMs (CaCl2(H2O)6-MgCl2(H2O)6 and Mg(NO3)2(H2O)6-MgCl2(H2O)6) proved the most cost-effective. These two salt hydrates also have the highest ref out of the materials that were analyzed. While paraffins take all top ten spots in terms of cef, their relatively low density means that they are not able to achieve as high a value of ref as these other materials. It should be noted, however, that paraffins still achieve relatively high values of ref, and Paraffin 30 ranks among the top ten materials considered here. Paraffins are very interesting materials as they offer significant advantages in terms of safety, given that they are safe for human ingestion. Further research is recommended towards more practical applications of this technology and the design of a practical system which would employ PCMs as overheating protection in solar collectors. Future research should expand this model to take into account the heat transfer in the PCM itself, as that can be a bottleneck for the operation of the system and may influence material choice in the end.
Kinematic selection criteria in new resonance searches: Application to pentaquark states

In this note I recall some features of two-body decay kinematics which can be effectively applied, in particular, in experimental searches for pentaquark states.

Introduction In experimental searches for resonances, tracks of secondary particles are combined to form resonance candidates. In high energy reactions with multiparticle final states, this method of resonance reconstruction may cause a huge combinatorial background. Any additional physical information about the resonance and its decays (life-time, the masses of secondary particles etc.) helps considerably to reduce the background. For instance, the presence in an event of a well separated secondary vertex allows us to reconstruct Λ, K_s and mesons from the D-meson family with a very low background. One such example is in Ref. [1]. Moreover, if the masses of decay products are known and significantly differ from each other, this information also has to be used to reduce the background. This note is a discussion of these issues. Below only two-body decays are considered and for each final state particle a set of equations describing the boundaries of the physical region is given. Features of each physical region, through implementation in selection criteria, can be used in the background suppression.

Two particle decay Let us consider a two-particle decay, R → a + b, of a resonance R with a mass M_R and a momentum P_R in the laboratory frame. Masses of the decay products are denoted as m_a and m_b. In the rest frame of R the particles a and b fly in opposite directions with the momentum [2]

P* = sqrt{[M_R² − (m_a + m_b)²][M_R² − (m_a − m_b)²]} / (2M_R). (1)

In the laboratory frame, the absolute momenta p_a and p_b of the particles a and b depend on the relative orientation of the rest frame vectors p*_a and p*_b with respect to the boost vector. We shall consider only Lorentz boosts along the momentum P_R and denote by θ*_a the polar angle between p*_a and the direction given by P_R. In that case, θ*_b = π − θ*_a. The energy and momentum components in both frames are related via [2]

E = γ(E* + βP* cosθ*), p_∥ = γ(P* cosθ* + βE*), p_⊥ = P* sinθ*. (2)

For a boost along P_R, the boost parameters are

β = P_R/E_R, γ = E_R/M_R, with E_R = sqrt(P_R² + M_R²), (3)-(4)

and therefore

p_a = sqrt{γ²(P* cosθ*_a + βE*_a)² + P*² sin²θ*_a}, E*_a = (M_R² + m_a² − m_b²)/(2M_R). (5)

If in an experiment there is no possibility to determine the particle type corresponding to a given charged track, then only information about the particle momentum, Eq. (5), is used. In the opposite case, when the particle type can be identified, for instance by ionization, time of flight etc., one may incorporate this information and in addition use Eq. (2). Below, these cases are considered separately.

Particles without identification The boundaries of the physical regions of the particle a on the (P_R, p_a) plane are easy to obtain with the use of equation (5). For a given P_R, p_a reaches the upper limit at θ*_a = 0,

p_a^+ = γ(P* + βE*_a), (6)

while the lower limit is reached at θ*_a = π,

p_a^- = γ|βE*_a − P*|. (7)

Equations similar to (6)-(7) are valid for the particle b too. Equation (7) demonstrates an interesting feature of the momentum of the particle flying backward in the resonance rest frame. With increasing P_R, the momentum p_a (p_b) first decreases, reaches zero at P_R = M_R P*/m_a(b), and only at larger P_R starts to increase. When plotted on the momentum (P_R, p_a) plane, Eqs. (6)-(7) select a band-like physical region (m-band). For secondary particles with equal masses, m_a = m_b, these m-bands fully overlap. However, if for instance m_a > m_b, the m-bands overlap only partially or even separate out at

P_R^(s) = 2M_R² P* / sqrt{(m_a² − m_b²)² − 4M_R² P*²}. (8)

The last equation follows from the condition p_a^- ≥ p_b^+.
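A minimal numerical sketch of Eqs. (1) and (6)-(7) (Python; the particle masses below are standard values, and the K*(892) example anticipates the decays discussed later in this note):

from math import sqrt

def pstar(M, ma, mb):
    # Rest-frame momentum of the two-body decay R -> a + b, Eq. (1)
    return sqrt((M**2 - (ma + mb)**2) * (M**2 - (ma - mb)**2)) / (2.0 * M)

def m_band(M, ma, mb, P_R):
    # Lower/upper edges of the momentum band of particle a, Eqs. (6)-(7)
    ps = pstar(M, ma, mb)
    Ea = (M**2 + ma**2 - mb**2) / (2.0 * M)   # rest-frame energy of a
    ER = sqrt(P_R**2 + M**2)
    gamma, beta = ER / M, P_R / ER
    return gamma * abs(beta * Ea - ps), gamma * (ps + beta * Ea)

# Kaon band in K*(892) -> K pi at P_R = 5 GeV (masses in GeV):
print(m_band(0.892, 0.494, 0.140, 5.0))

Scanning P_R with this function reproduces the band structure described above, including the vanishing of the lower edge at P_R = M_R P*/m_a.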
In Eq. (8) the expression under the square root is positive only if

m_a² − m_b² > 2M_R P*. (9)

Thus, for m_a ≫ m_b the physical regions of the particles a and b do not overlap if condition (10) holds and P_R > P_R^(s). We shall now consider a few particular applications of equations (6)-(10). According to the PDG [3] only resonances with mass close to the threshold value m_a + m_b satisfy the condition (10). Some of them are listed in Table 1, where P* and P_R^(s) values were calculated with the use of (1) and (8), respectively. For such decays, at P_R > P_R^(s) and m_a significantly larger than m_b, one can impose the momentum condition

p_a > p_b. (11)

We will refer to the momentum condition (11) as an m-selector. Thus, for resonances of the type in Table 1, fulfillment of the condition (11) allows an assignment to the particle a of the mass m_a, i.e. we identify the particle a. The m-selector (11) is a powerful tool for background suppression. In two-body decay modes one meets more frequently the situation shown in Fig. 1d. For these decays the conditions (9)-(10) are not fulfilled and at any P_R the phase space bands remain overlapping. Nevertheless, if m_a > m_b, one can demand fulfillment of (11) for the particle a in order to assign it the mass m_a. In the course of the resonance search the condition (11) rejects not only a significant part of the background, but also some fraction of the signal combinations. To estimate the efficiency of the m-selector we proceed in the following way. On the (P_R, p_a) plane the condition (11) is valid up to the line defined by the equation

p_a = p_b. (12)

For K* → Kπ decays the last equality is shown in Fig. 1d by the dashed line. The true K* is a combination of a kaon from the region above the line (12) with a pion below it, and vice versa. From (12) and (5) we find the variation of θ̂*_a = π − θ̂*_b along the line (12),

cosθ̂*_a = β(m_b² − m_a²)/(2M_R P*). (13)

In the rest frame of R, (11) is equivalent to the exclusion of the region θ*_a > θ̂*_a. For unpolarized particles the distribution of cosθ*_a is uniform and we define the efficiency of the m-selector (11) as

Eff = (1 − cosθ̂*_a)/2. (14)

With the definition (14) we get Eff = 100% when the conditions (9)-(10) are fulfilled, and Eff = 50% for decays with m_a = m_b (for real data Eff < 50%, see the discussion in Sec. 3). As another example, the evolution of Eff with P_R in K* → Kπ decays is shown in Fig. 4 by the dashed line. Eff grows because with the increase of P_R the overlap of the m-bands decreases. In the limit of large P_R, the expression for the efficiency can be simplified by setting β → 1 in (13),

Eff = [1 + (m_a² − m_b²)/(2M_R P*)]/2, (15)

with a further simplification (16) that is a good approximation only if m_a ≫ m_b. Equations (15)-(16) confirm the result we have already seen in Fig. 4a, where Eff is independent of P_R at large P_R. On the other hand, for fixed values of m_a and m_b, Eff decreases with increasing M_R, the mass of the resonance candidate. Thus, the higher the invariant mass of a two-particle combination, the more strongly (11) suppresses that part of the mass spectrum.

Identified particles Accounting for the particle masses transforms Eqs. (6)-(7) into

E_a^± = γ(E*_a ± βP*), (17)-(18)

and at low P_R leads to a 'repulsion' between the phase space E-bands on the energy (P_R, E_a) plane (Fig. 2). For all resonances with m_a ≫ m_b (see Table 1) the energy condition

E_a > E_b (19)

holds independently of P_R. This is not always true for the background combinations. Therefore, if applied, the condition (19) suppresses the background even more strongly than (11). We will refer to the energy condition (19) as an E-selector. As in the previous section, if the masses of the secondary particles do not differ significantly, at P_R greater than P̃_R the E-bands start overlapping in the way shown in Fig. 2d.
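Under the same conventions, the selector efficiency of Eqs. (13)-(14) can be evaluated directly (a self-contained Python sketch; the clamping of cosθ̂*_a to [−1, 1] encodes the 100% efficiency of fully separated bands):

from math import sqrt

def m_selector_efficiency(M, ma, mb, P_R):
    # Kept fraction of a uniform cos(theta*) distribution under p_a > p_b
    ps = sqrt((M**2 - (ma + mb)**2) * (M**2 - (ma - mb)**2)) / (2.0 * M)
    beta = P_R / sqrt(P_R**2 + M**2)
    cos_hat = beta * (mb**2 - ma**2) / (2.0 * M * ps)   # Eq. (13)
    cos_hat = max(-1.0, min(1.0, cos_hat))
    return 0.5 * (1.0 - cos_hat)                        # Eq. (14)

# Large-P_R plateau for K*(892) -> K pi:
print(round(100 * m_selector_efficiency(0.892, 0.494, 0.140, 1e4), 1))  # ~71.8

The result is close to the ≈71% plateau quoted below for K* → Kπ decays, which supports this reading of Eqs. (13)-(15).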
The loss of signal combinations caused by demanding (19) for resonance candidates is again estimated with Eq. (14). In the case under consideration, cosθ̂*_a is a solution of the equation

E_a = E_b, (20)

and thus

cosθ̂*_a = (m_b² − m_a²)/(2βM_R P*), Eff = [1 + (m_a² − m_b²)/(2βM_R P*)]/2 (21)-(22)

(for P_R < P̃_R the solution leaves the physical region |cosθ̂*_a| ≤ 1 and Eff = 100%). In the limit of large P_R, with (22) we again recover Eqs. (15)-(16). The evolution of Eff in K* → Kπ decays is shown in Fig. 4a by the full line. Eff = 100% at P_R < P̃_R and drops to the value (15), Eff ≃ 71%, with the increase of P_R.

3 Remark about D* reconstruction Charged particles are tracked in a tracking detector (TD). The resolution of the transverse momentum of a track traversing the TD is parametrized by σ(p_T)/p_T = A·p_T ⊕ B ⊕ C/p_T (⊕ denoting addition in quadrature), with p_T being the track transverse momentum (in GeV). The coefficients A, B and C characterize the resolution of the TD. Usually, to increase the momentum resolution, only tracks with p_T > 0.12-0.15 GeV are selected. That cut, as shown in Fig. 1c, makes the reconstruction of D* mesons with momenta lower than 1.8-2.0 GeV impossible. The m-band of π mesons is very narrow and grows rather slowly with P_R. Tracks with momenta greater than p_π^+(P_D*) belong to the background. This property can be used to suppress the background contribution to the distribution of the mass difference, ∆M = M(Kππ_s) − M(Kπ), by applying to the momentum of the soft pion (π_s) the cut

p_πs < p_π^+(P_D*) + p_0, (23)

where p_0 = 0.0-0.3 GeV is some shift from the pion m-band. In the decay D* → D0π, the rest frame momentum is small, P* = 0.038 GeV, and the D* momentum can be estimated with (6) by means of the reconstructed D0 momentum,

P_D* ≈ (M_D*/M_D0) P_D0. (24)

Thus, from Eqs. (24) and (6) one gets for (23)

p_πs < p_0 + (m_π + P*) P_D0/M_D0. (25)

Instead of (25) it is possible to apply another, less strict cut in which the D* momentum is replaced by P_max; here P_max is the right-hand edge of the kinematic range of the D* candidates, P_D* < P_max.

Pentaquark states Now we apply the results of the previous sections to new resonances predicted [4] by Diakonov, Petrov and Polyakov in the framework of the chiral soliton model and detected both in formation type [5] and in production type [6] experiments.

Θ(1530) → N(939) + K(498) With the mass value predicted for Θ+, M_Θ = 1.530 GeV, and the masses of the decay products, m_N = 0.939 GeV and m_K = 0.498 GeV, the condition (9) is not satisfied. That implies the overlap of the m-bands at all P_Θ (Fig. 3a), as well as the overlap of the E-bands at P_Θ > P̃_Θ = 2.15 GeV (Fig. 3b). The overlapping is not strong and both selectors (11) and (19) work with high efficiency (see Fig. 4). The m- and E-selectors were already successfully applied in searches for the Θ+ in production-type [6] and formation-type [7] experiments. The same selectors can be used to suppress combinations which do not result from a Ξ decay and help to reconstruct the Ξ candidate and its invariant mass. In spite of the large mass asymmetry, m_Ξ ≫ m_π, the conditions (9)-(10) are not fulfilled in Ξ_3/2 decays and the pictures of the m- and E-bands shown in Figs. 3c and 3d are very similar to those in Θ+ decays. At low momenta the efficiency of the E-selector is 100%. The E-bands start overlapping at P_Ξ > 1.96 GeV. Thus, the efficiency of the E-selector is no worse than 84% (Fig. 4). Figs. 3e and 3f show the m- and E-bands in Ξ_3/2(2070) decays. The m-bands slightly overlap at low P_Ξ and diverge at P_Ξ > 7 GeV. The E-bands are totally separated. Thus, in that decay mode, the m- and E-selectors work with 100% efficiency (see Fig. 4).

Ξ_3/2(2070) or Ξ_3/2(1860)?
The NA49 collaboration provided [8] evidence for the existence of a narrow Ξ−π− baryon resonance with a mass of 1.862 ± 0.002 GeV. This state is considered a candidate for the exotic pentaquark state Ξ_3/2. The reported mass value is much lower than predicted in [4]. The latest developments in the theory of pentaquark states do not exclude the lower mass for the Ξ_3/2 [9]. There are also arguments [10] that the result of the NA49 collaboration is perhaps inconsistent with data collected over the past decades. Fig. 5 shows the m-bands and the efficiency; they are very similar to those in Fig. 3e and Fig. 4d. The picture of the E-bands is also similar to Fig. 3f. Thus, the E-selector will not suppress the signal but will suppress the background at higher masses.

Conclusions Kinematics of the two-body decay, R → a + b, has been analyzed in terms of the phase space m- and E-bands. On the basis of many examples, in particular the exotic anti-decuplet baryons (pentaquark states), it has been demonstrated that for m_a > m_b the selection rules p_a > p_b and E_a > E_b can be applied with high efficiency to reconstruct many resonances and to suppress backgrounds.
IMPACT OF SUSTAINABLE DEVELOPMENT INDICATORS ON ECONOMIC GROWTH: BALTIC COUNTRIES IN THE CONTEXT OF DEVELOPED EUROPE

The paper aims to analyse sustainable development indicators taken from the Eurostat database and to determine a rational relationship between the sustainable development indicators and the economic growth of the country. The suggested hypothesis implies that the process of the country's development differs depending on the stage of the development. In order to establish the relationship between sustainable development and economic growth, correlation analysis was used. Lithuania, Latvia and Estonia were taken as the research objects and the results obtained were compared with those describing the developed countries (Austria, Belgium, Denmark, Netherlands, France, Germany). The results obtained outline the main economic trends as well as determining their variation depending on the development stage of the country.

Introduction. A theoretical review Sustainable development is a leading concept of our time. It has been considered an urgent issue since the 1972 Stockholm Conference on the Human Environment, where the conflicts between environment and development were acknowledged for the first time (Kates et al. 2005). Many studies have been devoted to the new philosophy. However, there are various definitions of this phenomenon. The widely known definition states that it is the "ability of humanity to ensure that it meets the needs of the present generation without compromising the ability of future generations to meet their own needs" (Brundtland 1987). Germination of the concept of sustainable development at the institutional level has been analysed by Grybaitė and Tvaronavičienė (2008). The concept of sustainable development contains three dimensions of welfare, comprising economic, environmental and social aspects and their interrelations. The aim of the paper is to analyse the relationship between economic growth and sustainable development in different countries, juxtaposing the Baltic states with developed European countries. The assumption is made that economic processes of a particular country differ depending on the stage of its development. It might be the case that, at the lower level of economic development, there is a stronger relationship between the economic growth of the country and social-economic variables, while at the higher level, the environmental indicators are very important. The modern study of economic growth was started by Adam Smith and David Ricardo. Since then, many theories have been defining the major factors of economic growth, but three basic factors (capital formation, population growth and technological change) and their interactions were mainly emphasized. The role of capital formation, reflected in saving and investment, has been a crucial factor in many works of economic thinkers for centuries, and it remains important even today (Tvaronavičienė 2006; Tvaronavičienė, Tvaronavičius 2008). During the past fifty years the term "economic growth" was transformed into a new notion, economic development, emphasizing not only the growth of the quantity of material goods and services, but also a higher level of welfare of the country. It was suggested (Theobald 1961) to divide the process of economic development into five stages, which every nation can pass through regardless of its social and political structure. Notably, this approach is often presented as one of the leading theories of economic development (Parr 2001).
In the middle of the 20th century social capital was made a focus in the analysis of factors influencing economic growth. T. W. Schultz was one of the first researchers who began to treat entrepreneurship as human capital, i.e. skills obtained by investing in a particular type of human resources (Huffman 2006). The social dimensions of development became even more prominent with the adoption of the Human Development Index (Ghosh 2008). The social factor has been widely explored and now social development is considered to be a prerequisite for economic growth, while economic and social systems are often presented as one. Since the 1970s, when the Club of Rome put forth the theory of "limits to growth", the environment has been considered a new prerequisite for economic growth. The world has recognized new challenges and responsibilities for the changing climate and diminishing natural resources. The most effective theory based on the relationship between pollution and income level was developed (Bradford et al. 2005). Since then economists have been analysing the question: "Do poor people care less about their health than rich people? If not, what makes the populations in poor countries, generally speaking, less healthy?" (Torras 2006). The latter notion emphasizes that economic development and ecological services cannot be observed as one system, as causality flows in both directions. Hence, the concept of sustainable development is a vision of progress that links economic development, protection of the environment and social justice, and its values are recognised by democratic governments and political movements the world over (Grybaitė, Tvaronavičienė 2008). The evaluation of sustainable development is the basic approach to the assessment of the development level. This is the only way to find reasons and solutions to our position in many fields of sustainable development (Kovačič 2007). Not going into the discussion about the need for new theories of inclusive development, the research is framed as follows. If sustainable development leads to higher and more stable economic growth, then the goal of the analysis is to find the most important sustainable development indicators for the Baltic region and compare them with those characteristic of developed countries in the European Union's context.

The relationship between economic growth and sustainable development In order to define the relevant variables, Eurostat sustainable development indicators, which are grouped in ten areas, were analysed. The groups were described in detail by Grybaitė and Tvaronavičienė (2008). All indicators from the Eurostat sustainable development database were reviewed and only those satisfying the conditions given below were chosen for analysis:

- Lithuanian data is available;
- the same data sets cover more than one country;
- the data were gathered in the period from 1997 to 2006 without gaps;
- the variables are statistically measured.

Correlation analysis is used as the statistical method to define the relationship. Calculations were made using MS Excel and are presented in Appendix 1. Based on the countries' classification provided by the World Economic Forum (Sala-i-Martin et al. 2008), several countries were chosen for comparison: the Baltic states, as having the most similar economic situations; Austria, Denmark, Belgium and the Netherlands, as small but highly developed countries; and France and Germany, as similarly developed big countries.
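The selection procedure described below (correlation of log changes against an approximate critical value) can be sketched as follows (Python with NumPy rather than MS Excel, which the paper actually used; the 2/√n critical value is the Walsh (2008) approximation cited in the next section):

import numpy as np

def significant_correlation(indicator, gdp):
    # Correlation of year-on-year log changes of an indicator with the log
    # changes of GDP, compared with the approximate 0.05 critical value.
    d_ind = np.diff(np.log(np.asarray(indicator, dtype=float)))
    d_gdp = np.diff(np.log(np.asarray(gdp, dtype=float)))
    r = np.corrcoef(d_ind, d_gdp)[0, 1]
    n = len(indicator)              # number of data items (10 years here)
    r_crit = 2.0 / np.sqrt(n)       # ~0.63 for n = 10 (1997-2006)
    return r, r_crit, abs(r) >= r_crit

An indicator is kept for Table 1 when at least one country in a group clears the 0.63 threshold and the remaining countries show similar trends.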
As GDP growth is used as a basic indicator for this analysis, it should be emphasized that it differs between the group of the Baltic states and that of the developed countries. The growth of all three Baltic states is about four times that of the other countries analysed (except during the Russian crisis in 1999). This is graphically shown in Fig. 1. Only the indicators with statistically significant correlations are chosen and presented in Table 1. Correlation coefficients were calculated using the log change of the sustainable development indicator and the log change of GDP (in constant prices) for the respective country. In many sources (Čekanavičius, Murauskas 2002) a correlation coefficient of 0.30 is described as a minimal level for the relationship to be valid, but this is only true for large data samples of more than 50 data points. For a small data sample (as in this work) the significance of the correlation coefficient can be determined by using the standard distribution calculated by the Student t-test. Alternatively, a simple formula to determine the approximate critical value of the correlation coefficient at the 0.05 level of significance was introduced (Walsh 2008): r_crit ≈ 2/√n, where n is the number of data items. Accordingly, the calculated threshold for the correlation coefficient in the presented data sample is 0.63. Hence, the indicators for further analysis were chosen based on the following criterion: at least one country in the group (the Baltic region, at a transition stage, versus the developed countries) should have a coefficient of more than 0.63, while the others should demonstrate similar trends. Based on the use of the mentioned criteria, only the social-economic and environmental variables presented in Table 1 were chosen. Other (social inclusion, demographic, health) indicators were excluded from the analysis as their correlation coefficients were insignificant. Social-economic indicators make up the largest subgroup, covering topics such as investment, labour, exchange rate, energy intensity, sustainable consumption and production, and good governance. Total investment is divided into the areas of public and business investment. Public investment was eliminated from the analysis because the correlation coefficients were insignificant. The correlation coefficients established between investment and GDP are higher than 0.50 (Lithuania: 0.67; Estonia: 0.79; Latvia: 0.75; Germany: 0.71; Belgium: 0.55). This confirms the statement of the classical theories of economic growth that investments, particularly those made in business, contribute to growth. Investment as the most important driving force influencing the economic development of a country in transition was analysed by many scholars (Tvaronavičienė, Tvaronavičius 2008; Tvaronavičienė 2006). The established significant relationship between labour market indicators and GDP confirms the importance of this variable in Europe. Total employment is the classical labour market indicator. For unemployment, most values are less than −0.50, which confirms that unemployment reduces economic growth, acting in the opposite direction to employment. It is evident that a higher rate of labour productivity per hour worked should result in a higher GDP. A strong relationship between them can be observed in Austria (0.91) and Belgium (0.75). Strong correlation between these indicators has not been found in the Baltic countries. In general, the relation between all labour indicators and GDP is lower in the Baltic states compared to that in the developed countries.
This leads to the conclusion that economic processes in the transitional countries are more volatile. The real effective exchange rate can be used to assess the competitiveness of the state's currency. It should be noted that the national currencies of all the Baltic states have historically been pegged to a base currency (USD, SDR, EUR), while all the analysed developed countries have used the euro since 1999. These historical differences in the foreign exchange regime can be seen in the correlation results. There is a strong, economically logical negative correlation, indicating that an increase in competitiveness causes the growth of GDP in the euro zone countries. However, in the Baltic region, the exchange rates have been fixed and the relationship is not so straightforward. Energy intensity shows the amount of energy needed to produce one unit of economic output. A lower coefficient indicates energy efficiency. The correlation obtained in the developed countries confirms that energy efficiency contributes to GDP growth (negative correlation: Austria −0.88; Denmark −0.64; Netherlands −0.86; Italy −0.85). In Lithuania, the coefficient (0.36) is statistically insignificant but surprisingly positive. It can be attributed to a lower technological level. In economic terms, the sustainable consumption and production subgroup is based on a life cycle approach to the use of resources, everyday consumption and waste. In this chain, all levels, embracing governments, citizens/consumers and business, should be included. The importance of the indicator showing the waste generated by industry has increased in recent years and this process is related to increasingly irresponsible production and consumption. Hence, it is evident that it is closely connected with GDP. A significant positive correlation coefficient was found only for Lithuania (0.70). In the developed countries, the relationship is weaker; this can be attributed to better management of waste in highly developed countries. There are statistically significant correlation coefficients between household expenditure per inhabitant and GDP in all the countries analysed (except Denmark, 0.12). Therefore, it can be concluded that stimulation of this rate leads to a higher GDP. On the other hand, it is very important to have sustainable consumption. Total energy consumption shows the use of energy in all areas, including industry, transport, households, agriculture, services and others. Energy is an important economic resource in all regions. Highly significant correlations are found for Lithuania (0.80), Estonia (0.62) and Latvia (0.62). These results indicate that the economy of the Baltic region is highly dependent on energy. On the other hand, there is no significant relation in this area in the developed countries. The concept of sustainable consumption and production is more widely known in the developed countries. The Baltic states have to use and sustain resources effectively. Good governance is a subgroup related to institutional work, and some scholars suggest adding it as a fourth dimension to the concept of sustainable development, since it is very important for smooth development.
Shares of labour taxes in the total tax revenue are generally defined as all personal income taxes, payroll taxes and social contributions of employees and employers that are levied on labour income (both employed and non-employed). A higher rate of labour taxes in the total tax revenue leads to a lower GDP, as such taxes constitute a kind of business cost. A significant negative correlation is found in the Baltic states (Lithuania: −0.71; Estonia: −0.82; Latvia: −0.55). No statistically significant relation is found in the developed countries. The results might indicate that in the Baltic states changes in labour taxes greatly affect GDP, and therefore they should be implemented with care. General government debt is the financial indicator showing the ability of the government to meet its future liabilities. The results show that in many countries a lower debt relates to a higher GDP (Lithuania: −0.83; Netherlands: −0.77; Germany: −0.78). This rate is one of the most important values at all stages of development. The conclusion can be made that there are indicators, such as investment, government debt, household expenditure, employment rate and exchange rate, which influence GDP at all stages of development. There are also areas which have to be monitored at particular stages in order to improve the welfare of the country. Sustainability is a popular philosophy in the developed world. The lower rates of waste and effective energy consumption in the developed countries encourage others to use resources in such a way as to preserve the environment for future generations. Many scientists and a growing number of Greenpeace activists warn us that every day of rough development is damaging our environment irreversibly. There is a paradoxical question associated with the nature of sustainable development: "Is it possible to reconcile sustainability with development?" In trying to ensure the welfare of human beings, societies are at the same time destroying the environment of every living creature. Therefore, not going into a painful theoretical discussion, let us define the group of environmental indicators. The environmental indicators chosen as having a statistically significant correlation are climate change and sustainable transport. Climate change is linked with many areas of human activity. It requires taking measures in many sectors, from energy and transport to land use and urban development. All these measures, if successfully managed, result in sustainable development. Climate change is mostly caused by greenhouse gas emissions. This indicator is strictly controlled by the Kyoto Protocol; however, despite this fact, it is increasing every year in most of the countries. A high positive correlation between total greenhouse gas emissions and GDP is found in the Baltic region (Lithuania: 0.82; Estonia: 0.35; Latvia: 0.69). The relation with GDP is negative or insignificant in the developed countries. The results show that the Baltic states produce GDP using energy associated with high greenhouse gas emissions. In the developed West European context the weak points of the Baltic states' sustainable development may be clearly seen. Climate change is closely related to energy consumption. Gross inland energy consumption shows the usage of various energy sources (fuel, gas, renewable energy sources, etc.). Positive significant correlations are found for Lithuania (0.64) and Latvia (0.57), with a weaker correlation observed in Estonia (0.26).
A negative and significant correlation is found in Austria (−0.81), and negative but not strong correlations can be observed in Denmark (−0.39) and the Netherlands (−0.49). Hence, higher consumption of fuel is closely related to GDP growth in the Baltic states. In general, it can be seen that the Baltic region does not demonstrate an effective policy against climate change. The results are different in the developed European countries. Only Estonia shows positive trends towards sustainability. Vehicles make a large contribution to overall pollution and climate change. The chosen indices prove the importance of this factor. The indicator "Energy consumption in transport" has a significant high correlation with GDP in the Baltic region (Lithuania: 0.77; Latvia: 0.54; Estonia: 0.44), while in the developed countries similar results can be observed in France (0.71) and Germany (0.52). In the other countries, this relationship is insignificant. A significant and positive relation was found between GDP growth and greenhouse gas emission by transport facilities, as well as the emission of particulate matter by vehicles, exclusively in the Baltic region (among the developed countries only Belgium and Germany show the relationship between GDP and emission of particulate matter by transport facilities). Hence, the data obtained in the present work allow us to conclude that the Baltic region is lagging behind in creating a sustainable transport system. Based on the results of the correlation analysis performed using the environmental indicators, the differences in their relationships in the Baltic states and those characterizing the developed countries can be observed. This can be used as proof that sustainable development policy is being implemented more effectively in the developed countries.

Conclusions The philosophy of sustainable development has been built up evolutionarily, as economic growth was supplemented with new social values and environmental protection challenges. Despite many attempts to frame this concept, it is still alive and changes over time. Recently, scholars have been challenged to create a unified theory of many dimensions, including economic, social, environmental and other (institutional, religious, etc.) aspects, in order to ensure prosperity in the world for the present and future generations. After the collapse of the centrally planned and controlled systems in the Baltic states, these countries have demonstrated rapid economic growth (despite some economic downturns). It should be noted that during the last decade the rate of GDP growth in the Baltic region has been much higher than in the developed European countries. Still, a large gap in the level of per capita income remains between the Baltic states and the developed European countries. The results of the analysis performed confirm the widely accepted notion that positive macroeconomic indicators impact the prosperity of a country at every stage of its development. Investment, total employment, exchange rate, household expenditure and government debt have a statistically significant relationship with GDP. To ensure long-term economic growth using the policy of sustainable development, environmental protection programs have to be implemented. Based on the results of the correlation analysis performed using the environmental indicators, the difference in the priorities in the Baltic states and those characterizing the developed countries can be defined.
The Baltic region has a highly significant correlation between environmental indicators and GDP and, unfortunately, the relationship is strongest in Lithuania. This confirms the statements of the theories that at the lower stage of development, pollution is associated with a higher GDP. In the developed countries, the relationship is often negative: a lower level of pollution is related to a higher GDP. This is proof that the policy of sustainable development is being implemented more effectively in the developed countries, and it does not contradict the prospect of higher economic growth in the longer term.

Kates, R.; Parris, T.; Leiserowitz, A. 2005
Microbial Community Redundancy and Resilience Underpins High-Rate Anaerobic Treatment of Dairy-Processing Wastewater at Ambient Temperatures

High-rate anaerobic digestion (AD) is a reliable, efficient process to treat wastewaters and is often operated at temperatures exceeding 30°C, involving energy consumption of biogas in temperate regions, where wastewaters are often discharged at variable temperatures generally below 20°C. High-rate ambient temperature AD, without temperature control, is an economically attractive alternative that has been proven to be feasible at laboratory scale. In this study, an ambient temperature pilot scale anaerobic reactor (2 m³) was employed to treat real dairy wastewater in situ at a milk processing plant, at organic loading rates of 1.3 ± 0.6 to 10.6 ± 3.7 kg COD/m³/day and hydraulic retention times (HRT) ranging from 36 to 6 h. Consistently high levels of COD removal efficiency, ranging from 50 to 70% for total COD removal and 70 to 84% for soluble COD removal, were achieved during the trial. Within the reactor biomass, stable active archaeal populations were observed, consisting mainly of Methanothrix (previously Methanosaeta) species, which represented up to 47% of the relative abundance of active species in the reactor. The decrease in HRT, combined with increases in the loading rate, had a clear effect on shaping the structure and composition of the bacterial fraction of the microbial community, however without affecting reactor performance. On the other hand, perturbations in influent pH had a strong impact, especially when the pH went higher than 8.5, inducing shifts in the microbial community composition and, in some cases, negatively affecting the performance of the reactor in terms of COD removal and biogas methane content. For example, the main pH shock led to a drop in the methane content to 15%, COD removal decreased to 0%, and the archaeal population decreased to ~11% at both DNA and cDNA levels. Functional redundancy in the microbial community underpinned stable reactor performance and rapid reactor recovery after perturbations.
INTRODUCTION The high demand for milk and milk products has led to an increase in dairy production globally. In the EU, since the removal of milk production quotas in 2015, the dairy industry has undergone rapid growth (Gil-Pulido et al., 2018). Dairy plants produce large volumes of wastewater; it is estimated that 1-2 m³ of wastewater is produced per m³ of manufactured milk (Quaiser and Bitter, 2016; Slavov, 2017). These wastewaters are characterized by high organic load and nutrient composition (Demirel et al., 2005; Lateef et al., 2013; Gil-Pulido et al., 2018). Several approaches, including physical-chemical and biological processes, are applied to treat dairy wastewaters. However, physico-chemical processes present high reagent costs and low chemical oxygen demand (COD) removals, leading to the favoring of biological processes (Demirel et al., 2005; Gil-Pulido et al., 2018). High-rate anaerobic digestion (AD) is an efficient and well-established biological process to treat wastes and wastewaters. By comparison with aerobic processes, AD presents several advantages, including lower quantities of generated waste sludge, smaller reactor volumes and the production of a renewable fuel (biogas methane) that can displace fossil natural gas to produce heat and energy (McKeown et al., 2012). High-rate AD technology relies on the retention of high levels of active microorganisms within the system. This is achieved by the immobilization of the microbes on a support material or by the formation of granules (McKeown et al., 2012). These reactors tolerate short HRTs (1-24 h) and high organic loading rates (up to 100 kg COD/m³/day; McKeown et al., 2012). In general, AD systems are operated under mesophilic (30-37°C) or thermophilic (45-55°C) conditions to ensure maximum microbial growth and reaction rates. However, dairy wastewaters are often discharged at lower temperatures (∼17-18°C in winter and 22-25°C in summer; Slavov, 2017). If AD is to be used to treat this wastewater at high rates, heating such large volumes of wastewater for this purpose is economically and environmentally unfavorable. AD at ambient or low temperatures (<20°C) (Lt-AD) is an economically attractive alternative. Research on the treatment of domestic sewage at low temperature reported promising results, with good COD removals: up to 87% in two hybrid reactors with an HRT of 8 h (Elmitwalli et al., 1999) and up to 81% in a two-step system consisting of an anaerobic filter and an anaerobic hybrid operated at an HRT of 4 h (Elmitwalli et al., 2002). Despite that, many limitations were associated with Lt-AD and thus it was initially considered unfeasible for many complex industrial streams, including those produced by dairy processing (McKeown et al., 2012).
A better understanding of the nature and limitations of anaerobic microbial consortia and improvements in process configuration have suggested, however, that the process is feasible and suitable for scale-up trials (McHugh et al., 2006; Akila and Chandra, 2007; Enright et al., 2009; McKeown et al., 2009). To our knowledge, this is the first report of pilot-scale, high-rate AD of dairy-processing wastewater. The AD process relies on the degradation of organic matter by a network of microorganisms presenting diverse nutritional requirements and physiological characteristics (Shah, 2014). These microorganisms also present different responses to environmental stresses, such as temperature, pH variations, substrate composition/concentration or the presence of inhibitory or toxic compounds (Shah, 2014; Venkiteshwaran et al., 2015). Several studies have focused on the development of microbial communities in laboratory-scale AD bioreactors operated from >35°C down to lower temperature conditions, with special focus on the methanogenic portion of the communities (Enright et al., 2009; McKeown et al., 2009; O'Reilly et al., 2009; Abram et al., 2011; Bandara et al., 2012; Zhang et al., 2012; Gunnigle et al., 2015a,b; Keating et al., 2018). Nevertheless, very little is known about the potential development of such communities at pilot and full scale, or how they respond under environmental stresses, such as variations in operational parameters. The main goal of this study was thus to explore the relationships between microbial community structure and reactor performance in a pilot-scale (2 m³) high-rate AD reactor, operated at ambient temperature, during treatment of industrial dairy-processing wastewater.

Pilot-Scale Reactor Design and Operation The reactor was a stainless-steel vessel mounted on a transportable steel frame, designed in the configuration described by Hughes et al. (2011), with a total volume of 2 m³ and an active volume of 1.8 m³ (Figures S1, S2). In summary, the reactor was a hybrid sludge blanket reactor divided into two main parts: a granular sludge system in the lower section of the reactor and an anaerobic filter located in the top section. The reactor was seeded with anaerobic granular sludge from an industrial UASB treating wastewater at a slaughterhouse plant. The trial was performed at a dairy processing plant in the Republic of Ireland. The wastewater used in this trial was taken from the dairy processing plant effluent, after the bulk of the fats, oils and grease (FOG) were separated by dissolved air flotation. Prior to entering the reactor, the wastewater was first diverted into a homogenization tank of 1 m³, where the pH of the inlet flow was maintained at 7.5 ± 0.2 using an Alpha pH 200 pH controller (Thermo Scientific) connected to two 323S Watson-Marlow (UK) pumps for the addition of NaOH or HCl as required. The influent was then pumped into the pilot reactor from the homogenization tank using a 620S Watson-Marlow (UK) pump. The reactor was operated with a constant liquid up-flow velocity of 1.8 m/h by recirculation of reactor effluent using a 620S Watson-Marlow (UK) pump. No temperature control was applied to the wastewater or to the reactor vessel. The in-reactor temperature fluctuated between 21.9 and 30.1°C during the trial (Figure S2). The trial was carried out over a period of 291 days, divided into 7 different phases (Table 1).
During the course of the trial, the applied HRT was reduced from 36 h (Phase 1) to 6 h (Phase 7).

Microbial Community Analysis

Sample Collection and DNA/RNA Extraction

Granular sludge samples were periodically withdrawn from the reactor via a sampling port located close to the base of the unit. The samples were instantly frozen in liquid nitrogen and stored at −80 °C until processing for DNA/RNA extraction. Granules were crushed in liquid nitrogen using a pestle and mortar until a fine powder was obtained. Approximately 0.1 g of granule powder was weighed into sterile 2 mL vials containing zirconia beads, 500 µL of 1% CTAB buffer, and 1 mL of phenol:chloroform:isoamyl alcohol (25:24:1). Cells were disrupted using a VelociRuptor Microtube Homogenizer for two cycles of 60 s each. For each time point, DNA/RNA were extracted in triplicate according to the protocol described by Griffiths et al. (2000) with the modification of Thorn et al. (2018). DNA/RNA quality was assessed using a 1% (w/v) agarose gel containing 1× SYBR® Safe (Invitrogen, Carlsbad, CA). RNA was treated with the Turbo DNA-free™ Kit (Thermo Fisher Scientific, Waltham, MA) to remove contaminating DNA. RNA and DNA concentrations were determined using a Qubit Fluorometer (Thermo Fisher Scientific).

Library Preparation

Reverse transcription was performed using Primers for cDNA Synthesis (Thermo Fisher Scientific) and SuperScript™ III Reverse Transcriptase (Thermo Fisher Scientific). DNA and cDNA were amplified by targeting the V4 region of the 16S rRNA using the primers 515f (5′-GTGCCAGCMGCCGCGGTAA) and 806r (5′-GGACTACHVGGGTWTCTAAT). An analysis of the primer coverage can be found in the Supplementary Material. The amplicons were generated using one-step PCR. For this, 70 barcoded primers were used as described by Ramiro-Garcia et al. (2018). Then, 10-20 ng of DNA was used as template in the PCR reaction (50 µL), which contained 10 µL HF buffer (Thermo Fisher Scientific), 1 µL dNTP Mix (10 mM; Bioline, London, UK), 1 U of Phusion Hot Start II DNA Polymerase (Thermo Fisher Scientific), and 500 nM of each barcoded primer. PCRs were performed with an Alpha cycler 1 (PCRmax, Staffordshire, UK) using an adaptation of the cycling conditions of Caporaso et al. (2012). The cycling conditions consisted of an initial denaturation at 98 °C for 3 min; 25 cycles of 98 °C for 10 s, 50 °C for 20 s, and 72 °C for 20 s; and a final extension at 72 °C for 10 min. The size of the PCR products (∼330 bp) was confirmed by agarose gel electrophoresis using 5 µL of the amplification-reaction mixture on a 1% (w/v) agarose gel. For each sample, the PCRs were done in duplicate and pooled together before purification. The pooled PCR products were purified with HighPrep™ (Magbio Genomics, Gaithersburg, MD, United States) using 20 µL of Nuclease Free Water (Bioline) for elution and then quantified using a Qubit (Thermo Fisher Scientific) in combination with the dsDNA HS Assay Kit (Thermo Fisher Scientific). The purified products were mixed together in equimolar amounts to create two library pools, one for DNA and one for cDNA, and sent for sequencing on the Illumina HiSeq 2000 platform (GATC Biotech AG, Konstanz, Germany). Sequence data have been deposited in the European Nucleotide Archive, accession number PRJEB29981.

Bioinformatics and Statistical Analysis

Data were analyzed using NG-Tax (Ramiro-Garcia et al., 2018), a validated pipeline for 16S rRNA analysis, under default parameters.
Independently for each sample, the most abundant sequences (>0.1%) were selected as ASVs, collecting 9,485,867 reads across all samples. To correct for sequencing errors, the remaining reads were clustered against those ASVs allowing one mismatch, reaching a total of 12,862,549 reads. The database used for the analysis was Silva 128, and the primers covered 98.4% of the 1,783,650 Bacteria and Archaea phylotypes included. AD-specific databases like MiDAS (McIlroy et al., 2017) may improve the accuracy of the taxonomical assignments by reducing the number of possible candidates, at the expense of generating misannotations due to their lack of completeness. Since the average accuracy for the ASVs in this study was very high (97.3%), with 76.7% of the ASVs having an accuracy of 100% (meaning all hits belong to the same genus), specific databases were not used. Alpha diversity was calculated and plotted using the R packages picante (Kembel et al., 2010) and ggplot2 (Wickham, 2016). Beta diversity and Constrained Analysis of Principal Coordinates (CAP) under the model ∼ HRT + pH were performed using phyloseq (McMurdie and Holmes, 2013) via the capscale (Oksanen, 2012) package.

Reactor Performance

The HRT applied to the reactor was decreased stepwise from 36 to 6 h over seven phases. The average total and soluble influent COD during the trial fluctuated greatly (between 0.20 and 4.9 kg/m³ for total COD and between 0.05 and 3.1 kg/m³ for soluble COD), mainly due to changes in the production processes of the factory (Table 1, Figures 1A,B). This corresponded to an organic loading rate of 1.3 ± 0.6 to 10.6 ± 3.7 kg COD/m³/day (Table 1). Total COD removal was between 49 and 71%, while the average soluble COD removal was more stable over the trial, fluctuating between 71 and 84.3%. A technical failure of the acid-addition pump resulted in a significant pH perturbation on day 246 that lasted until the pump was repaired on day 250, resulting in the pH of the reactor liquor increasing to >8.5. A number of less significant pH perturbations occurred on days 26, 220, and 279, among others, arising from power supply interruptions and resulting in transient increases in pH to >8.5 for 1-2 days (Figure 1C). Low COD removal and low methane content were observed when the pH was above 8.5 (Figure 1C). This was especially so during days 246-251, when the reactor pH was 9.7-9.8 for four days; no COD removal was observed, and the methane content dropped to ∼15%. Once the pump was operating again, COD removal rates recovered within 5 days to values of the same magnitude as seen prior to the incident (Figure 1B). However, the methane content required almost 15 days to reach its previous values. The methane content in the biogas averaged 73.4 ± 29.5% over the whole trial; excluding the values obtained during the pH shock (days 246 to 251), the overall methane content was 89.6 ± 3.2% (Figure 2). The FOG content in the inlet of the reactor was 60.5 ± 39.7 mg/L during the trial, with the lowest FOG value being 21 mg/L and the highest 244 mg/L. No significant effect on the reactor's performance was observed due to increases in influent FOG concentrations.

Microbial Community Analysis

The composition of the microbial community was analyzed at several time points over the trial (Figure 1C). The results showed a stable core of archaeal populations, both at the DNA and cDNA level (Figure 3). Furthermore, a heat-map was constructed for the relevant taxonomical groups and is provided in Figure S4.
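For orientation, the sketch below illustrates, in plain Python, the kind of calculation behind the community profiles and alpha-diversity values reported here: relative abundances with the 1% cut-off applied in the figures, and the Shannon index. It is a minimal illustration with hypothetical read counts; the study itself used the R packages named above (picante, ggplot2, phyloseq), not this code.

```python
import math

def relative_abundance(counts, cutoff=0.01):
    """Convert raw ASV read counts to relative abundances,
    collapsing taxa below `cutoff` (1% here, as in Figure 3) into 'Other'."""
    total = sum(counts.values())
    rel = {taxon: n / total for taxon, n in counts.items()}
    kept = {t: a for t, a in rel.items() if a >= cutoff}
    kept["Other"] = 1.0 - sum(kept.values())
    return kept

def shannon(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i) over taxa with p_i > 0."""
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values() if n > 0)

# Hypothetical read counts for one sampling day (illustrative only).
sample = {"Methanosaetaceae": 4700, "vadinHA17": 2900,
          "Synergistaceae": 2100, "Pseudomonadaceae": 150, "rare_ASV": 40}
print(relative_abundance(sample))
print(f"Shannon H' = {shannon(sample):.3f}")
```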
DNA-based data indicated that this archaeal core comprised Methanobacteriaceae species (up to 10%) and Methanosaetaceae species (up to 23%). Furthermore, cDNA results indicated that members of Methanosaetaceae, with relative abundances up to 47%, were the active core of the microbial community. Unclassified members of vadinHA17 (up to 29%) and unclassified members of Synergistaceae (up to 21%) were the most relatively abundant bacterial groups present. Although a homogenization tank system was in place prior to the AD system, where the pH was controlled, the pH inside the AD system suffered periodic oscillations. Effects at the microbial community level associated with pH perturbations were, however, only observed when pH values were higher than 8.5 (on days 70-75, 221, and 246-250). These pH shocks induced changes in the community, which can be divided into four phases (Figure 3). In phase I (days 0-75), in addition to the core members, Carnobacteriaceae species were present at high relative abundance in both the DNA-based (up to 18%) and cDNA-based (up to 13%) datasets. Furthermore, cDNA-based analysis also revealed a high relative abundance of Desulfobulbaceae species (up to 22%). After the pH shock on days 70-75, both taxa represented <1% of the community in relative abundance terms. In phase II (days 96-214), the DNA-based data indicated an increase in Methanosaetaceae species from 8-13% (phase I) to 18-23%. At the cDNA level, there was an increase in unclassified members of Bacteroidetes' class vadinHA17, from 9-13% to 17-31%. This taxon had decreased in relative abundance to <2% by phase III. In the third phase (days 221-242), an increase in unclassified members of the family Comamonadaceae was observed, from 0-2.5% to 12-20%, as well as an increase in the relative abundance of Bacteroidales ML635J-40, from <1% to 2-3.2%, at the DNA level. At the same time, the relative abundance of unclassified Bacteroidetes vadinHA17 decreased to <2% at the cDNA level and ∼6% at the DNA level. During this phase, an increase in members of the Rhodocyclaceae family was observed, up to 4 and 13% in the DNA-based and cDNA-based datasets, respectively. Finally, in phase IV (days 252-277), a considerable reduction in the relative abundance of the Archaea, both at the DNA (from 25-35% to 10-11%) and cDNA (from 35-50% to 11%) level, could be observed due to the exposure to high pH levels for several days. However, the system was able to recover, and 20 days after the pH shock the relative abundance of active Archaea was ∼30%. It was also observed that the relative abundance of Bacteroidales ML635J-40 increased up to 13.5 and 16% at the cDNA and DNA levels, respectively. Moreover, after the prolonged pH shock, an increase in members of the Pseudomonadaceae family was observed, both in the DNA-based (from <1% to ∼20%) and cDNA-based (from 5 to 50%) datasets. Following the return of pH values to ∼7.5, the relative abundances of the Pseudomonadaceae family decreased over time and had reverted to the same values as before the shock by day 277 (∼1% for DNA and ∼6% for cDNA; Figure 3). At the DNA level, a doubling of the relative abundances of unclassified members of the family Synergistaceae was observed. The prolonged pH shock (days 246-250) also affected the alpha diversity of the microbial community (Figure 4), which significantly decreased both at the DNA and cDNA level, while the shorter pH shocks showed no visible effect on diversity.
As the applied decreases in HRT occurred simultaneously with increases in the average loading rate, the effects of each of these individual parameters on the microbial community could not be distinguished. The CAP plots (Figure 5) for both DNA and cDNA showed, however, that sample separation, and thus microbial community structure, was dependent on pH and HRT/loading rate, and that the four operational phases identified grouped separately.

DISCUSSION

To our knowledge, this is the first report of pilot-scale AD as a technology to treat real dairy-processing wastewaters, in situ, at ambient temperatures. Good COD removals were obtained during most of the trial period; on average, soluble COD removals were above 80% during all phases, while total COD removal efficiency ranged from 50 to 70%.

FIGURE 1 | Total and soluble COD in the influent and effluent and COD removal (A,B) and FOGs and pH (C) over time. FOGs and pH data were collected for the inlet of the reactor. Biomass sampling dates are also indicated (C).
FIGURE (caption fragment) | The data represent, for each day, the average of the triplicates; a cut-off of 1% was applied.

These results are comparable with those reported from laboratory-scale studies of low-temperature AD treating dairy wastewaters, indicating a successful scale-up of the process. For example, an expanded granular sludge bed-anaerobic filter (EGSB-AF) treating synthetic skimmed dairy wastewater, operated at 10 °C, presented removals of 74 to 90% (Bialek et al., 2013). Another study, using two EGSB bioreactors operated at 15 °C and treating real dairy wastewaters, showed removals of 54 to 92% (Gunnigle et al., 2015a). The same reactors were initially operated at 37 °C and exhibited comparable removals (88-96%). This indicates that the psychrophilic and mesophilic treatments of dairy wastewaters can have similar COD removal efficiencies at low-medium organic loading rates. The efficiency of the reactor performance, evidenced by the good COD removals reported and a high methane content in the biogas, was underpinned by a stable core archaeal community. In particular, there was a high relative abundance of active methanogenic species of Methanothrix, which represented up to 50% of the active microbial community and up to 25.5% of the total community. The presence in high abundance of Methanothrix species in low-temperature AD systems was observed at laboratory scale in several studies (Enright et al., 2009; Siggins et al., 2011a,b; Bandara et al., 2012; Gunnigle et al., 2015a,b; Keating et al., 2018). Furthermore, real-time PCR results of the archaeal populations revealed that Methanosaetaceae was the dominant methanogen in a bioreactor treating dilute dairy wastewater, and its numbers remained stable during the complete trial (Bialek et al., 2013), along with high numbers of Methanobacteriales and Methanomicrobiales. Members of these orders were also found in the community of the pilot-scale reactor, but in low relative abundances. Methanothrix species are described as playing an important role in the formation and maintenance of a strong granular sludge (MacLeod et al., 1990; McHugh et al., 2005) and are believed to be dominant at low acetate concentrations (De Vrieze et al., 2012), but they were also recently reported to be able to become dominant at high acetate concentrations (Chen and He, 2015; Chen et al., 2017).
Methanobacteriaceae species could be found in the total community, but their presence in the active community was very low (∼1%). These combined findings led us to believe that aceticlastic methanogenesis was the main active pathway for methane formation in the pilot-scale reactor. While DNA-based relative abundances indicate which microorganisms are present, relative abundances based on cDNA are a more accurate indicator of which populations are active at a given time point. In general, we observed a good agreement between both sets of data. The results obtained provided insights into the evolution and dynamics of the microbial community over time and as a result of reactor perturbations. For example, Carnobacteriaceae family members (mainly the genus Trichococcus) were abundant at the beginning, both in the total and the active community, but their abundance decreased from ∼13% (cDNA level) at day 0 to <1% at day 47. On the other hand, members of the family Desulfobulbaceae presented relative abundances of ∼23% at day 0 in the active community, but represented only 3% of the total community. Furthermore, their presence at the cDNA level decreased to <2% after day 96. These results might indicate that the members of those families played important roles in the seed sludge, or source reactor, and were therefore abundant in the active community. However, these roles were less relevant, or less well-suited to growth, under the conditions prevailing in the pilot reactor, leading to their disappearance from both the total and active communities. It was also observed that Pseudomonas species were not abundant at day 0, but their abundance in the active community increased during the start-up (phase I), while remaining stable in the total community. While they were almost undetectable during phase II, they emerged again after the pH shock at day 221 (up to 20% in the total community and up to 54% in the active community) and decreased in relative abundance, both total and active, again when the reactor performance stabilized. This could indicate that they have a competitive advantage when perturbations are induced in the reactor. Pseudomonas species have been identified as key players in AD, and it is possible that their versatility provided the necessary functional redundancy that allowed the reactor to stabilize after each perturbation. Members of the Rhodocyclaceae family, mainly from the genera Thauera and Azoarcus, emerged during phase III. This family comprises mostly aerobic or denitrifying aquatic bacteria with versatile metabolisms (Wongwilaiwalin et al., 2010). They are also known to use acetate under anaerobic conditions (Wongwilaiwalin et al., 2010), which could lead to competition between them and the aceticlastic methanogens. The most abundant members in the cDNA- and DNA-based bacterial community profiles were unclassified members of Bacteroidetes' class vadinHA17, until the end of phase II. On the other hand, during phases III and IV, another member of the Bacteroidetes phylum emerged in both communities, Bacteroidales ML635J-40, which increased from <2% until day 221 to ∼10% at day 252, in both the active and total communities, remaining stable after that. This group was identified as being responsible for the hydrolysis of algae during anaerobic digestion at high pH (pH 10; Nolla-Ardèvol et al., 2015). Moreover, this group was identified as one of the more abundant groups inside submarine ikaite columns, a permanently cold (<6 °C) and alkaline (pH >10) environment (Glaring et al., 2015).
Those results seem to indicate an adaptation of this group to alkaline environments and may explain why they emerged following the pH shocks in our reactor. Bacteroidetes are commonly found in the microbial communities of anaerobic digesters (Werner et al., 2011; Shah et al., 2014; Guo et al., 2015; Sun et al., 2015), including low-temperature AD digesters (McKeown et al., 2009; Abram et al., 2011; Bialek et al., 2012, 2013) and low-temperature AD digesters treating dairy wastewater (Bialek et al., 2011, 2014; Keating et al., 2018), which indicates their crucial role in anaerobic treatment. Their presence in abundance indicates a high hydrolytic activity in the system. Hydrolysis is a crucial step in AD systems and is often reported as the limiting step and the cause of poor reactor performance, especially at lower temperatures; therefore, a high abundance of hydrolytic organisms in the system is core to the efficiency of the process (Ma et al., 2013; Bialek et al., 2014; Azman et al., 2015). Their presence in high abundance in our system can be linked with the good reactor performance observed. Our results also showed a stable presence of active members of the Synergistaceae family. This family belongs to the phylum Synergistetes, which was also observed in other reactors treating dairy wastewater (Gunnigle et al., 2015a; Keating et al., 2018; Callejas et al., 2019) and is known to degrade peptides, proteins, and amino acids. On the other hand, contrary to other studies of room/low-temperature reactors treating dairy wastewater (Bialek et al., 2011, 2014; Keating et al., 2018; Callejas et al., 2019), our results showed very little active presence of Firmicutes. Nevertheless, Gunnigle et al. (2015a) reported a decrease in Firmicutes associated with low temperature. Interestingly, Callejas et al. (2019) observed an increase in Firmicutes from 29 to 79% after a pH increase in a full-scale UASB treating dairy wastewater. On the other hand, the phylum Proteobacteria, commonly found in dairy-treating reactors (Bialek et al., 2011, 2014; Gunnigle et al., 2015a; Keating et al., 2018; Callejas et al., 2019), aside from Pseudomonas, represented 8 to 16% of the active community, although no single family was highly abundant. These values are much lower than the 62% relative abundance of this phylum observed by Gunnigle et al. (2015a), but more similar to the 27% observed by Callejas et al. (2019). Overall, our results and the literature indicate that a high relative abundance of methanogens, especially Methanothrix species, and bacterial members of Bacteroidetes, Synergistaceae, and Proteobacteria are the core players in the active communities of AD digesters treating dairy wastewater at low temperature. However, the relative abundances of the bacterial members are variable, most likely due to the differences in the processes and products that can be found in this type of industry. One of the known challenges inherent to the treatment of dairy wastewater is the presence of FOG. In this trial, the influent FOG concentration to the pilot reactor varied (21 to 244 mg/L). Although FOG is reported to benefit biogas production, it is also reported to cause operational challenges related to inhibition, substrate and product transport limitations, sludge floating, foaming, and the clogging of biogas collection systems (Long et al., 2012). The recommended concentration of FOG for the optimal performance of the reactor was reported as being c.
100 mg/L (Passeggi et al., 2012); this value was exceeded for short periods of time during this trial, but did not result in any obvious effect on the performance or microbial community of the reactor. Furthermore, no clogging issues arose, and the presence of fat was not observed on the sludge granules. The capacity to operate at low HRT is advantageous because it allows the reduction of reactor volumes, which in turn reduces the capital investment cost. In our trial, the HRT was reduced from 36 to 6 h in several steps over the trial. At the same time, an increase in the average loading rate was applied, which coincided with increased influent COD concentrations during the seasonal processing cycle. None of the changes, in HRT or loading rate, had a negative impact on the reactor performance in terms of either COD removal or biogas methane content. On the other hand, these changes could be correlated with changes in the microbial community. No other measured parameter could be tied to these community changes. This is the first major long-term study that describes such a clear correlation, and it suggests a role for the HRT and loading rate in selecting the microbial population in granular sludge reactors. In the past, the effects of both parameters on the microbial community were studied separately, but even in this case the literature is scarce. For example, it was shown that HRT had a role in selecting the microbial population and had an impact on the reactor performance of a UASB reactor treating synthetic wastewater with trichloroethylene (Zhang et al., 2015). The authors observed that the relative abundance of the different phyla, especially the dominant phyla, changed with the different HRTs tested. The impact of an HRT change from 8 to 4 h was analyzed in an anaerobic moving bed membrane bioreactor fed with synthetic domestic wastewater (Win et al., 2016). In this case, both the microbial community and biogas production were affected by the variation in HRT, but not the COD removal efficiency. When the HRT was reset to 8 h, the reactor performance was able to recover. The effect of increasing the loading rate was analyzed in a UASB reactor treating diluted pharmaceutical fermentation wastewater by Chen et al. (2014), who reported a shift in the microbial community in which Firmicutes, Bacteroidetes, Thermoplasmata, and Methanobacteria became the dominant groups at high organic loading rate (OLR). pH is known to be a key parameter influencing microbial community composition and function (Liu et al., 2002; Zhang et al., 2016a,b). During this study, the pH was controlled to 7.5, but due to operational issues, occasional perturbations occurred. Both reactor performance and the microbial community structure were immediately impacted by these pH changes. Anaerobic reactions are highly pH dependent, and the optimal pH for methane production should range between 6.5 and 7.5 (de Mes et al., 2003). However, a stable performance, with concomitant biogas production, might be achieved over a wider pH range (6.0-8.0). At pH values below 6.0 and above 8.3, inhibition of methanogens can occur (de Lemos Chernicharo, 2007). In our pilot reactor system, a decrease in COD removal efficiency and a shift in the microbial community were observed every time the pH increased above 8.5. Nevertheless, a decrease in the relative abundance of Archaea, and a consequent decrease in the methane content of the gas, was only observed when the pH remained above this value for 4 days (days 246-251).
The influence of pH on a microbial community was also observed in a staged anaerobic digestion system treating food waste, where it was one of the parameters responsible for differences in the bacterial community (Gaby et al., 2017). Furthermore, a decrease of 95% in the average specific methane yield and a corresponding decrease in the abundances of Methanosarcina and Methanothrix were observed at pH 8.5 in a two-phase anaerobic co-digestion of pig manure with maize straw (Zhang et al., 2016b). Also, in other environments, such as soil, pH was reported to be one of the main factors responsible for shaping the microbial community (Bartram et al., 2014; Wu et al., 2017). Despite the microbial community changes, the pilot reactor performance was always able to recover, with efficient wastewater treatment performance (high methane content, good COD removal). In a similar fashion, a full-scale UASB treating dairy wastewater suffered a pH increase to 9.0 for 2 days (Callejas et al., 2019). The authors observed an effect on reactor performance, as well as a decrease in relative abundance for most phyla, but, also in this case, the reactor and the community were able to recover from the pH imbalance. It is known that, in response to a disturbance, a microbial community can either maintain its composition (resistance), temporarily change its composition but return to the initial one (resilience), or shift to a new composition able to perform identical processes (functional redundancy) (Allison and Martiny, 2008; Shade et al., 2012). In our system, each change in the HRT/loading rate or pH shock led the community to change to a different composition, but the performance of the system remained stable, which means that the main microbial functions were unaffected. These results point to functional redundancy in the sludge community, such as the replacement of Bacteroidetes members (unclassified vadinHA17) by Rhodocyclaceae, or the increase in abundance of Pseudomonas species when there were major perturbations in the reactor. Furthermore, they also point to some resistance, since the main active core remained stable through most perturbations. A similar result was observed for the microbial communities of AD digesters treating molasses wastewater and disturbed with high salinity (De Vrieze et al., 2017). Such results pinpoint the importance of the microbiology for the success of AD. Functional redundancy, resilience, and resistance in anaerobic sludge are fundamental for having a robust and versatile system, able to keep high performance standards even when facing wastewater variability and perturbations.

CONCLUSIONS

The results obtained from this in situ pilot-scale trial represent a successful scale-up of ambient-temperature AD as a technology with the ability to sustainably treat dairy-processing wastewaters at high rates, resulting in high COD removal and high-quality biogas production. We have demonstrated the impact which operational parameters, such as pH and HRT/loading rate, have on system performance and/or microbial community composition. Notably, despite alterations to its composition, the microbial community was able to recover and perform to a similar standard as before the perturbations, thereby exhibiting clear hallmarks of functional redundancy, but also resistance by the main active archaeal core, which remained stable for most of the trial.
DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.

AUTHOR CONTRIBUTIONS

LP, JC-A, and VO'F designed the experiment. LP and JC-A performed the experiments, analyzed the data, and wrote the manuscript. JR-G did the bioinformatics and statistical analysis. JE-P helped with the reactor's maintenance and with the analytical methods. DH helped with the reactor performance analysis and commented on the manuscript. TM, MM, PW, and VO'F critically revised the manuscript. All authors read and approved the final manuscript.

FUNDING

This work was financially supported by the Irish State through funding from the Technology Centres programme (TC/2014/0016) and Science Foundation Ireland (14/IA/2371).

ACKNOWLEDGMENTS

We gratefully acknowledge Conall Holohan, Dr. Fabiana Paula, and Dr. Camilla Thorn for their help with protocols and suggestions. We would also like to acknowledge Dennis Kenneally for all his invaluable help during the pilot trial.
A Privacy Protection User Authentication and Key Agreement Scheme Tailored for the Internet of Things Environment: PriAuth

In a wearable sensor-based deployment, sensors are placed on the patient to monitor their body health parameters. Continuous physiological information monitored by wearable sensors helps doctors make a better diagnosis and choose a suitable treatment. When a doctor wants to access the patient's sensor data remotely via the network, the patient will first authenticate the identity of the doctor, and then the two will negotiate a key for further communication. Many lightweight schemes have been proposed to enable mutual authentication and key establishment between the two parties with the help of a gateway node, but most of these schemes cannot provide identity confidentiality. Besides, the shared key is also known by the gateway, which means the patient's sensor data could be leaked to the gateway. In PriAuth, identities are encrypted to guarantee confidentiality. Additionally, the Elliptic Curve Diffie-Hellman (ECDH) key exchange protocol has been adopted to ensure the secrecy of the key, preventing the gateway from accessing it. Besides, only hash and XOR computations are adopted, because of the computability and power constraints of the wearable sensors. The proposed scheme has been validated by BAN logic and AVISPA, and the results show that the scheme is secure.

Introduction

As sensors become widespread in health monitoring scenarios, a significant amount of personal sensitive data, like blood pressure, pulse, or electrocardiogram readings, will be monitored. These sensors can be interconnected to compose a Wireless Body Area Network (WBAN). With different sensors gathering a patient's data and continually sending these data to doctors or to a remote monitoring station for further analysis, it is necessary to make sure that these data are transferred confidentially. The usual way is to encrypt them before they are sent. The proposal presented in this paper, named PriAuth, aims to help the patient and the doctor build a shared key for encrypting health parameters. Because only appointed doctors are allowed to access the patient's data, the patient and the doctor have to authenticate each other first. A workable way is to introduce a gateway to help the patient authenticate the legitimacy of the doctor and vice versa. After authentication, the two parties build a shared key for further communication. When a doctor wants to read a patient's data, he sends a request to the patient. The patient forwards this request, together with his own identification information, to the gateway. The gateway checks whether the patient and the doctor are legitimate; if either of them is not regarded as such, the scheme is aborted. Only when both are legitimate does the gateway send the authentication result to the patient. Once the patient has become aware of the legitimacy of the doctor, he sends the authentication result to the doctor as well. Based on the authentication result, the patient and the doctor can build a shared key, which is used for encrypting confidential information sent between them. There are many research results focusing on the authentication and key agreement problems; while most of them can ensure the safety of the data, this is not enough, as there is also a need to protect privacy.
In the authentication process, the patient and the doctor have to send their identities and some other related information to the gateway. It has to be ensured that the patient's identity is not leaked. Of course, a patient is usually unwilling to leak his identity information, because if the patient's identity is leaked, the health history and status of the patient will be freely available to anyone in the system, regardless of the patient's wishes. On the other hand, when a doctor sends his identity to the gateway for authentication, we have to make sure that the doctor's identity is kept confidential too (e.g., if an adversary eavesdrops on the identity of the doctor and finds out that the doctor's specialty is dermatology, there is a great chance that the patient has a skin-related problem). Therefore, it is also necessary to keep the doctor's identity confidential in order to protect the privacy of the patient. In PriAuth, Elliptic Curve Cryptography (ECC) is adopted as the method used to protect the identities of the data transmission participants, similar to [15-21]. After the gateway finishes the authentication process, it sends the authentication result to the patient and the doctor. Based on the authentication result, the patient and the doctor can build a shared key. In some traditional schemes, the gateway can learn the shared key from the authentication information it gets from the patient and the doctor. This means the patient's personal health data could be leaked to the gateway. It is necessary to prevent the gateway from learning this key. In PriAuth, the Elliptic Curve Diffie-Hellman (ECDH) key exchange protocol is adopted to ensure the secrecy of the shared key between the patient and the doctor. Besides, only hash and XOR operations are adopted, which is suitable for wearable sensors. PriAuth has been validated by BAN logic and AVISPA. BAN logic is one of the most prevalent methods to help determine whether exchanged information is trustworthy and secure against eavesdropping; it is also adopted to prove the security of the schemes in [22-24]. AVISPA (Automated Validation of Internet Security Protocols and Applications) is a tool for the automated validation of Internet security-sensitive protocols and applications, which has been widely adopted, e.g., by [24-26]. This paper is organized as follows: Section 2 covers related works; Section 3 gives the preliminary knowledge. In Section 4, we introduce PriAuth; Section 5 provides the BAN logic validation. Section 6 includes the AVISPA verification. Section 7 is the security analysis part. Section 8 provides a comparison with other schemes. Section 9 is the validation part. Section 10 concludes with a summary of the contributions.

Related Works

In several papers of the researched literature, the authors use different terms; user and sensor are the most commonly used, which correspond to doctor and patient in our scheme. Thus, from now on, we will use user and sensor instead of doctor and patient. D. Wang and P. Wang provide overviews of some of the schemes described in [27, 28]. Farash et al. use a single key shared between all the users or sensors to encrypt the identities [13]. All the sensors use the same key h(X_GWN ‖ 1) to encrypt the sensor identity by an XOR operation, where SID_j is the sensor identity and T_2 is a timestamp:

ESID_j = SID_j ⊕ h(h(X_GWN ‖ 1) ‖ T_2),
where h(X_GWN ‖ 1) is a key that is shared by all the sensors, so a malicious or curious sensor could learn the identity of sensor SID_j. ESID_j and T_2 are sent via a public channel, so a malicious or curious sensor with identity SID_a can eavesdrop on sensor SID_j to get ESID_j and T_2. In order to get the sensor identity SID_j, SID_a can decrypt ESID_j using the same key:

SID_j = ESID_j ⊕ h(h(X_GWN ‖ 1) ‖ T_2).

Lu et al. use a random identity TID to protect identity privacy [10]. But as this identity is a fixed value, a user could be tracked by an adversary. The schemes in [29-32] use a similar method, but all these procedures are prone to suffer from traceability attacks. In the scheme proposed by Wu et al., the gateway issues a new PID_newMU to the user every time [4]. But in this case, there is a potential loss-of-synchronization problem: if the adversary blocks PID_newMU from being sent to the user, then the two parties may lose their synchronization. Das et al. protect the identity of the user by generating a new masked identity every time in a similar way, but this scheme suffers from the loss-of-synchronization problem too [33]. Jung et al. [6] use a method similar to the scheme of Farash et al. [13]. The key used to encrypt the identity of a single user is the same for all the users, so this scheme has the same problem that has been discussed. What a user sends to the gateway node is a masked identity DID = h(ID ‖ n_1), together with values encrypted under a key derived from the group secret V*; since V* is shared by all legitimate users, other users could learn DID by decrypting with the same key. Besides, this scheme has the same inner-side attacker problem; a detailed analysis is given in Section 7.4. A Rabin cryptosystem based on the quadratic residue problem is used to encrypt messages in [11, 34]. Assume n = p · q, where p and q are two large primes. If y = x² mod n has a solution, that is, there exists a square root for y, then y is called a quadratic residue mod n. The set of all quadratic residues in [1, n − 1] is denoted by QR_n. The quadratic residue problem states that, for y ∈ QR_n, it is hard to find x without the knowledge of p and q, due to the difficulty of factoring n [35]; this gives a kind of public-key encryption method. Chatterjee and Das provide a similar methodology for protecting the identity of the user; they use ECC-based public-key methods [15]. Besides, they try to combine the authentication scheme with an attribute-based access control scheme. He et al. use a similar method, but with exponentiation operations instead [36]. We summarize some of these schemes in Table 1. From the table, it can be inferred that privacy is a problem that has not drawn enough attention from researchers. In some schemes, all the users share the same key to encrypt their identities; this means the encrypted identity could be decrypted by a malicious or curious user using the same key [5, 6, 10, 13]. Some of the schemes fail to enable the anonymity of the user or sensor, such as [37-39]. We adopt the ECC-based method to enable anonymity, similar to [15-21], because "ECC requires smaller keys compared to non-ECC cryptography (based on plain Galois fields) to provide equivalent security" [40]. The gateway has a public key that is known by every user; all the identities are encrypted by an XOR method with a new key, generated from the gateway's public key, before the identities are sent to the gateway. Thus, only the gateway can learn the identities.
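To make the shared-group-key weakness concrete, the following minimal Python sketch shows how any insider holding the common key can unmask another sensor's identity. The exact key derivation used here (h(h(X_GWN ‖ "1") ‖ T_2)) and all names are illustrative assumptions, not the precise formulas of the schemes cited above.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-1, used here as the generic hash in the masking construction."""
    return hashlib.sha1(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Assumed group secret shared by ALL sensors (this sharing is the weakness).
X_GWN = b"gateway-master-secret"
group_key = h(X_GWN + b"1")

def mask_identity(sid: bytes, t2: bytes) -> bytes:
    """Masked identity as sent on the public channel: ESID = SID xor h(key || T2)."""
    return xor(sid, h(group_key + t2)[: len(sid)])

# Sensor SID_j broadcasts {ESID_j, T2}; insider sensor SID_a eavesdrops both.
sid_j, t2 = b"SENSOR_J", b"20171224120000"
esid_j = mask_identity(sid_j, t2)

# The insider holds the same group_key, so it unmasks the identity directly.
recovered = xor(esid_j, h(group_key + t2)[: len(esid_j)])
assert recovered == sid_j  # identity confidentiality is broken by an insider
print(recovered)
```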
As for the shared key between user and sensor: in some schemes the gateway knows the shared key [6-8, 11-14], while in some others the gateway does not know the key; these use the Diffie-Hellman (DH) anonymous key agreement protocol to build the shared key [1, 2, 4, 5, 9, 30]. As we have discussed, the gateway should not be allowed to know the shared key, in order to prevent a curious gateway from eavesdropping on the sensor data.

Preliminary

Elliptic Curve Cryptography (ECC) is a public-key cryptography approach based on the algebraic structure of elliptic curves over finite fields. For current cryptographic purposes, an elliptic curve is a plane curve over a finite field (rather than the real numbers) which consists of the points (x, y) satisfying

y² = x³ + ax + b.

In order to use ECC, all parties must agree on the domain parameters of the elliptic curve {p, a, b, G, n, h}: F(p) is the finite field over p, where p is a prime and represents the size of the finite field; a and b are the coefficients of the curve equation; G is a base point (the generator); n is the order of G; and h is the cofactor. Elliptic Curve Diffie-Hellman (ECDH) is an anonymous key agreement protocol that allows two parties, each holding an elliptic-curve public/private key pair, to establish a shared secret over an insecure channel. Suppose Alice wants to establish a shared key with Bob, but the channel available to them is not safe. Initially, the domain parameters {p, a, b, G, n, h} must be agreed upon. Also, each party must have a key pair suitable for elliptic curve cryptography, consisting of a private key d (a randomly selected integer in the interval [1, n − 1]) and a public key Q (where Q = dG, that is, the result of adding G to itself d times). Alice's private and public keys are (dA, QA); Bob's key pair is (dB, QB). Alice computes dA · QB while Bob computes dB · QA, so the shared key between them is K = dA · dB · G, because

dA · QB = dA · (dB · G) = dB · (dA · G) = dB · QA.

Privacy Enhanced Scheme: PriAuth

The structure model of our scheme is depicted in Figure 1. A gateway is introduced to help the user and the sensor authenticate each other. We suppose this gateway is trustworthy.

Symbols Used in PriAuth. Before the scheme begins, GWN (the gateway node) generates the parameters for ECC encryption {p, a, b, G, n, h}. After that, GWN generates its public-key pair; besides, GWN generates a secret master key X_GWN. The symbols are summarized in Table 2.

In the sensor registration phase, sensor S_j conducts the following steps:
(1) It creates a random number and gets the timestamp T_1.
(2) It covers its secret K_GWN-S with the random number by an XOR operation and generates a hash value for verification.
After GWN receives S_j's registration message {SID_j, the masked value, the hash value, T_1}, GWN checks the freshness of the message using T_1; if the message is not fresh, GWN abandons it. Then GWN recomputes the verification hash and compares it with the received value; if they are not equal, GWN abandons the message. Otherwise, GWN continues the sensor registration phase in the following steps (the registration phase is described in Table 3):
(1) GWN computes the sensor's secret material.
(2) GWN gets the timestamp T_2 and computes the corresponding hash value.

Login and Authentication Phase. After user U_i passes through the smart-card verification, SC prepares for the authentication process. SC computes the needed secret from the values obtained in the login phase. SC chooses a random number r_1 ∈ [1, n − 1] and gets the timestamp T_1. SC then computes the authentication data and sends Message 1, containing these values together with T_1, to sensor S_j via a public channel. At the end of the exchange, each party calculates the shared key SK between U_i and S_j from the exchanged ECDH values.
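As a concrete illustration of the ECDH computation described in the preliminaries, here is a self-contained Python sketch on a tiny textbook curve (y² = x³ + 2x + 2 over F_17, generator G = (5, 1), order n = 19). This toy curve is for exposition only; a real deployment would use a standardized curve, such as the ECC-160 parameters mentioned later, through a vetted library.

```python
# Toy ECDH over the textbook curve y^2 = x^3 + 2x + 2 (mod 17), G = (5, 1), n = 19.
p, a, b = 17, 2, 2
G, n = (5, 1), 19

def inv(k):                      # modular inverse via Fermat's little theorem (p prime)
    return pow(k, p - 2, p)

def add(P, Q):                   # elliptic-curve point addition (None = point at infinity)
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv(2 * y1) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                   # double-and-add scalar multiplication kP
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

d_A, d_B = 7, 11                      # private keys, randomly chosen in [1, n - 1]
Q_A, Q_B = mul(d_A, G), mul(d_B, G)   # public keys Q = dG
K_A = mul(d_A, Q_B)                   # Alice computes d_A * Q_B
K_B = mul(d_B, Q_A)                   # Bob computes d_B * Q_A
assert K_A == K_B                     # both equal d_A * d_B * G
print(K_A)
```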
Password Change Phase. If a user wants to change his password, he has to be authenticated by the smart card first. We state the password change process in Table 6, which is a summary of these steps:
(1) A user inserts his smart card SC into a card reader and inputs his identity and password: ID_i, PW_i.
(2) SC computes the masked verification value from the stored values and the input password.
(3) SC compares h(r ‖ ID_i ‖ PW_i) with the version stored in the smart card; if they are equal, SC acknowledges the legitimacy of user U_i.

Some Basic Knowledge of BAN Logic. A security analysis of PriAuth using Burrows-Abadi-Needham logic (BAN logic) [41] is conducted in this part. With the help of BAN logic, we can determine whether the exchanged information is trustworthy and secure against eavesdropping. First, the symbols and primary postulates used in BAN logic are described in Tables 7 and 8. The proof goals of PriAuth in BAN logic form are stated so as to ensure that U_i and S_j agree on a shared key SK.

Preparation for the Proof. Before the proof begins, the messages have to be transformed into an idealized form; the messages of PriAuth in idealized form in BAN logic are given in Table 9, with the shared key SK derived by hashing the ECDH result. At the same time, some assumptions have to be made, so postulates B and C are included as assumptions A11 and A12. The assumptions are listed in Table 10.

The Proof of PriAuth. The whole proof of the proposal is in Appendix A. It is divided into three parts, related to Message 2, Message 3, and Message 4. The two goals of the scheme are proved at Message 3 and Message 4. The proof results show that PriAuth is secure under BAN logic.

AVISPA Verification

AVISPA (Automated Validation of Internet Security Protocols and Applications) is "a push-button tool for the automated validation of Internet security-sensitive protocols and applications" [42]. Recently, many papers have used this method to validate their protocols, e.g., [24-26]. HLPSL (High Level Protocols Specification Language) is a role-based language used to describe security protocols and specify their intended security properties, together with a set of tools to formally validate them. We wrote the protocol in HLPSL and tested it; the code is in Appendix B. The goal of PriAuth is to create a key that is shared by a user and a sensor. The validation result of the protocol is in Table 11. Considering all these testing activities, it can be concluded that our protocol is safe. PriAuth protects the privacy of the user identity, the sensor identity, and the key between the user and the sensor.

Security and Privacy Analysis

In this section, we conduct a security comparison of the schemes, which is depicted in Table 12. For the scheme in [3], we only consider the second situation.

7.1. Traceability Protection. Traceability means the adversary can track a user or a sensor according to their identities or masked identities, as in the schemes [5, 10, 29-32]. Once some fixed information about the identities is used in a scheme, that scheme can probably be tracked by an adversary. One possible solution is to update the masked identity every time, as in the schemes shown in [4, 7]. But these kinds of solutions are vulnerable to the loss-of-synchronization attack.

7.2. Synchronization Loss Attack. In order to protect the identity of the user, the gateway generates a new identity for the user when requested [4]. But if an adversary prevents this new identity from being received by the user, the user cannot update his old identity while the gateway has already updated its stored version of the user's identity. When the user logs in the next time, this legitimate user will no longer be treated as a legal one. A similar problem exists in the scheme [7].
7.3. Malicious Sensor Attack. In scheme [13], the gateway only checks the legitimacy of a sensor. If the sensor is a legitimate one, the gateway replies with some key information to the sensor, but the gateway does not check whether that sensor is the one the user wants to talk to. So a legitimate but malicious sensor can launch an attack. When a user sends a request message {D_1, D_2, D_3, T_1} to a sensor, an insider legitimate sensor can intercept this message, generate its own {D_4, D_5, ESID, T_2}, and send this message to the gateway; as the gateway only checks the legitimacy of the sensor, this insider sensor will definitely be treated as a legal sensor. The gateway will send {D_6, D_7, D_8, D_9, T_3} to the sensor. Afterwards, the sensor will be able to send {D_6, D_8, D_10, T_3, T_4} to the user and will be treated as a legal sensor by the user, because the user does not check whether this is the sensor he wants to talk to. In this way, the sensor can send false data to the user.

7.4. Inside User Attack. In scheme [6], all the users share a key V*, so there is a potential risk. The message the gateway sends to the user is a ciphertext containing (DID ‖ SID ‖ SK ‖ T_1 ‖ T_4), encrypted under a key computed as h(DID ‖ V* ‖ T_4), in which DID and T_4 are public messages and V* is shared by all the legitimate users. This means any legitimate user could decrypt the ciphertext to get the shared key SK.

7.5. User Impersonation Attack. In scheme [1], when a user asks to access a sensor's data, he sends his request M_1 = {ID_i, ID_j, ...} to the sensor. The identities and the other message components are sent publicly; the request contains a random number generated by the user and a timestamp. Only a hash of values shared with the gateway is regarded as secret information between the user and the gateway, and this secret is shared by all the users; another legitimate user, say one with identity ID_a, could easily generate a request of the same form as M_1, and then ID_a would be treated as ID_i by the gateway.

Comparison

8.1. Computational Performance. The usual way to compute the execution time of a protocol is to count the protocol's computational costs in terms of different operations, whose execution times are measured by simulation [3-14]. The execution time of an XOR operation is very small compared to an elliptic curve point multiplication or a hash operation, so we neglect it when computing the time approximately [3]. We use the well-known MIRACL library [43] (example code can be found at [44]). The experiment is conducted in Visual C++ 2017 on a 64-bit Windows 7 operating system with a 3.5 GHz processor and 8 GB of memory. The hash function is SHA-1; the symmetric encryption/decryption function is AES with a 128-bit key in MR_PCFB1 mode (using one string to encrypt another string, the same hash function being called to get the hashed form of the key string). The elliptic curve encryption scheme is ECC-160. The results are shown in Table 13. T_mac denotes the time for an HMAC-with-SHA-1 operation; according to [9], T_mac is approximately equal to the hash time. The final result is in Table 14.

8.2. Communication Performance. The sum of the lengths in bytes of each variable that a sensor node and a gateway node need while performing the authentication process is calculated for the comparison of the communication cost. An identity or password is 8 bytes long [13]. The sizes of a general hash function's output and of a timestamp are 20 bytes and 4 bytes, respectively [45]. A random point of ECC-160 is 20 bytes. The result is shown in Table 15. The byte length of an AES encryption result is treated as the byte length of the original data, as an approximation.
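The operation-counting methodology above can be reproduced in a few lines: measure the unit cost of each primitive, then multiply by per-party operation counts. The sketch below times SHA-1 and HMAC-SHA-1 with the Python standard library; the per-party counts are hypothetical placeholders, not PriAuth's actual operation counts from Table 14.

```python
import hashlib, hmac, time

def avg_time(op, runs=100_000):
    """Average wall-clock time of one operation, in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        op()
    return (time.perf_counter() - start) / runs * 1e3

msg, key = b"x" * 64, b"k" * 16
T_h = avg_time(lambda: hashlib.sha1(msg).digest())                    # one SHA-1 hash
T_mac = avg_time(lambda: hmac.new(key, msg, hashlib.sha1).digest())   # one HMAC-SHA-1

# Illustrative per-party hash counts (hypothetical, for demonstration only).
counts = {"user": 7, "sensor": 5, "gateway": 9}
for party, n_hash in counts.items():
    print(f"{party}: ~{n_hash * T_h:.4f} ms of hashing")
print(f"T_h = {T_h:.5f} ms, T_mac = {T_mac:.5f} ms (T_mac is close to T_h)")
```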
Validation

The LifeWear project intends to improve the quality of human life by using wearable equipment and applications for everyday use [46]. The main objective of LifeWear is the development of modern physiological monitoring to inspect human health parameters, like the blood pressure, pulse, or electrocardiogram of a patient in different environments. With real-time data on these health parameters, medical staff can take action instantly, which can greatly improve the quality of a treatment. Since medical parameters are sent from patients to medical staff, data security and patient privacy are a must. In order to ensure data confidentiality, all the data must be encrypted before they are sent. The proposed scheme helps the patients and the medical staff build a shared key. This key is used to encrypt the health parameters of the patient. In order to protect the privacy of the patient, all the identities are encrypted before they are sent as well. Since wearable sensors have only limited computability, we introduce a gateway to provide the patients and the medical staff with the shared key to be used in the system.

The LifeWear project also makes use of a middleware solution able to hide heterogeneity and interoperability problems. This middleware is composed of four abstraction layers related to the functionalities covered in each of them, namely, the hardware abstraction layer, the low and high services, the cross-layer services, and the service composition platform. The hardware abstraction layer includes the IoT hardware platform, the operating system, and the networking stack. It offers an easy way to port the solution to other hardware platforms. The low and high service layers define the software components needed to abstract the underlying network heterogeneity, thus providing an integrated, distributed environment that simplifies programming tasks by means of a set of generic services, along with an access point to the management functions of the sensor network services. The upper layer is the service composition platform, designed to build applications using services offered by the lower layers. The cross-layer services are offered to both high- and low-level services in order to provide inner service composition. The proposal presented in this paper (PriAuth) has been deployed as a service inside this layer. The security service can be used by the upper layer (service composition) to compose new secured services, based on the services presented in the lower layers.
The architecture has been deployed over a commercial IoT node solution called the SunSPOT platform, manufactured by Oracle.

Conclusions

Privacy will be a big concern as more and more IoT equipment is applied in medical scenarios. In this paper, we propose an authentication and key agreement scheme tailored for wireless sensor networks. We focus on the privacy problems during the authentication process. Our scheme not only ensures the security of the data but also protects the identity privacy of the users and sensors. The shared key between the user and the sensor is built by means of the Elliptic Curve Diffie-Hellman method, which ensures forward privacy. The proposed scheme has been verified with BAN logic and AVISPA, which are two of the most commonly used tools to validate the security of communication schemes. Simulation results show that our scheme is feasible and secure. Furthermore, experimental results show that our scheme is comparable with the related works in terms of computation cost and more efficient in communication cost. As part of our work in the LifeWear project, we focus on privacy problems during the authentication and key establishment processes. In the future, we will pay more attention to authentication schemes that work without the help of the gateway.

Figure 1: The structure of the model.
Table 1: Comparison of protection of privacy.
Table 2: Symbols used in PriAuth. X_GWN: GWN's secret value (master key). K_GWN-S: shared key between S_j and GWN. (d, Q): the private key and public key of GWN. G: the generator of ECC. SK: shared key between user U_i and sensor S_j. T_1, T_2: timestamps. h: hash function.

4.2. Registration Phase of the Sensor. The registration messages of the sensor in the registration phase are sent via the public channel. Upon receiving GWN's reply, sensor S_j first checks the freshness of T_2, then recovers the masked value by XORing with h(SID_j ‖ K_GWN-S), and checks whether the received verification value equals the hash binding K_GWN-S and T_2; if they are equal, S_j stores the received parameters, including the ECC domain parameters and the hash function h, in its memory.

4.3. Registration Phase of the User. User U_i chooses a random number r and computes h(r ‖ ID_i ‖ PW_i). U_i then sends this value together with ID_i to GWN via a secure channel. After receiving the user registration message, GWN computes h(ID_i ‖ X_GWN) and XORs it with the received value. Finally, GWN sends the smart-card parameters, including the ECC domain parameters and h, to U_i. After receiving them, U_i inserts the previously selected random nonce r into the smart card, so that the smart card finally contains the issued parameters together with r. The registration phase is described in Table 4.

4.4. Login and Authentication Phase. If user U_i wants to access a sensor's data, U_i has to log in first. This login process is completed by the smart card SC. The user inserts his smart card SC into a card reader and inputs his identity ID_i and password PW_i. SC computes a temporary value h(r ‖ ID_i ‖ PW_i) using the inserted PW_i and ID_i and the stored value r. Then SC compares it with the value stored in the smart card. If they are equal, SC acknowledges the legitimacy of U_i.

Table 3: Registration phase of the sensor.
Table 6: Password change phase of the user. SC computes the masked value using the stored values and the user password.
Table 7: Symbols of BAN logic.
Table 8: Some primary BAN logic postulates.
Table 13: Computation time of different operations.
Table 14: Computation cost of the login and authentication.
WAMI: a web server for the analysis of minisatellite maps

Background

Minisatellites are genomic loci composed of tandem arrays of short repetitive DNA segments. A minisatellite map is a sequence of symbols that represents the tandem repeat array such that the set of symbols is in one-to-one correspondence with the set of distinct repeats. Due to variations in repeat type and organization as well as copy number, minisatellite maps have been widely used in forensic and population studies. In either domain, researchers need to compare the set of maps to each other, to build phylogenetic trees, to spot structural variations, and to study duplication dynamics. Efficient algorithms for these tasks are required to carry them out reliably and in reasonable time.

Results

In this paper we present WAMI, a web server for the analysis of minisatellite maps. It performs the above-mentioned computational tasks using efficient algorithms that take the model of map evolution into account. The WAMI interface is easy to use and the results of each analysis task are visualized.

Conclusions

To the best of our knowledge, WAMI is the first server providing all these computational facilities to the minisatellite community. The WAMI web interface and the source code of the underlying programs are available at http://www.nubios.nileu.edu.eg/tools/wami.

Minisatellite maps

A DNA region is categorized as a minisatellite locus if it is composed of tandemly repeated DNA stretches and spans more than 500 bp. Each of these stretches is called a unit and it holds (by most definitions) 10-100 bp. The units are not necessarily identical due to point mutations, and their number and organization may vary among individuals as a result of subsequent evolutionary events. Minisatellite variant repeat mapping by PCR (MVR-PCR) is a popular technique to reveal the structure of a minisatellite locus, as it enables unit typing and minisatellite map production. Unit typing is the classification of the variable units into distinct types (called variants and denoted by different symbols) according to their DNA sequences. A minisatellite map is a compact representation of the minisatellite locus, where each unit is replaced with the respective symbol. Figure 1(a) shows an example of a minisatellite locus and the respective map.

Applications of minisatellite map analysis

Minisatellite maps have manifold applications in forensics and population studies. Foster et al. [1] used minisatellite maps to resolve the dispute over whether President Jefferson fathered a son of his slave. They showed that Jefferson was the biological father of her last son, but not of her first son, as previously thought. Based on the MS205 dataset, Armour et al. [2] confirmed the African origin of modern humans, Alonso et al. [3] proved a European affiliation for the Basques, and Rogers et al. [4] dated the Eurasian population to 52,000-66,000 years and the oldest European to 37,600-56,200 years. Using the MSY1 dataset, which was first investigated by Jobling et al. [5], Brión et al. [6] showed that European lineages are more similar than North African ones. Bonhomme et al. [7] used minisatellites to study house mouse populations and provided a migration map for them. Very recently, Yuan et al. [8] used the MS32 minisatellite to study population specificity among Thai, Chinese, and Japanese individuals. They showed that the MS32 minisatellite is an effective tool to distinguish individuals from these populations.
The functional and medical roles of minisatellites have also been addressed in many studies over the last two decades, and interest increases as more individual genomes become available. To mention a few examples, Thierry et al. [9] discovered a class of minisatellites involved in cell adhesion and pathogenicity. Vafiadis et al. [10] showed that the insulin minisatellite plays an important role in the regulation of insulin, and the authors of [11,12] showed that it is associated with polycystic ovary syndrome, obesity, and type I diabetes. Raeder et al. [13] showed that mutations in the CEL minisatellite are correlated with exocrine dysfunction in diabetic patients. Tsuge et al. conjectured that polymorphisms in minisatellites in the flanking region of SMYD3 are risk factors for human cancer [14]. For more studies, we refer the reader to the review of Vergnaud and Denoeud [15] and the WikiGenes page in [16].

Computational challenges in minisatellite analysis

Researchers analyzing minisatellite maps usually perform the following computational tasks:

1. Comparison of minisatellite maps by computing all pairwise alignments.
2. Construction of a phylogenetic tree based on all pairwise distances, to show the relatedness between the involved individuals.
3. Studying structural variations, to examine how the unit types vary and distribute along a minisatellite map.
4. Studying duplication dynamics, to infer the type from which the map originated and in which direction the map elongates.

Recent studies often relied either on visual inspection or on heuristic methods. To our surprise, most did not make use of the recent advances in bioinformatics methods developed for pairwise map comparison [17,18]. We think this situation is mainly due to the lack of both web servers and open-source tools performing the aforementioned tasks. To the best of our knowledge, there is currently just one server, called MS_ALIGN (http://www.atgc-montpellier.fr/ms_align/), for minisatellite map comparison [17]. It is, however, limited to computing all pairwise alignments, with no post-processing and no visualization of map alignments. In this paper, we present the web server WAMI for the analysis of minisatellite maps. The server uses a recent algorithm for map alignment, improved over the one in MS_ALIGN, and provides a workflow for the execution of the four computational tasks mentioned above, including visualization. These capabilities are demonstrated here by the analysis of the MSY1 [19] and MS205 [2-4,20] datasets.

Model of minisatellite map evolution and alignment

Minisatellite maps can be studied in an independent or a comparative fashion. In the former, a map is analyzed to identify the evolutionary history that gave rise to the observed sequence of units. In the latter, two maps are aligned to figure out regions of common and individual evolutionary histories. The two tasks are entangled, however, since a region of individual evolution, juxtaposed to a gap in the map alignment, must have a plausible individual history. This makes minisatellite map alignment algorithmically more challenging than the standard sequence alignment problem.

Map evolution

Our evolutionary model of minisatellite maps includes the following operations acting on the unit level:

• Unit mutation: This is the change of one unit type into another. For example, the unit b in the map abd mutates into c, leading to the map acd.
The unit mutation is a consequence of point mutations (substitutions and indels) acting on the DNA sequence of the units. In the example of Figure 1, the differences between the three unit types are attributed to nucleotide substitutions.

• Duplication: Duplication (also known as expansion or amplification) is the generation of new copies of units by tandem duplication. Replication slippage, reciprocal exchange (unequal crossover or unequal sister chromatid exchange), and gene conversion (including synthesis-dependent strand annealing, abbreviated SDSA) are potential mechanisms for unit duplication. The first is suggested for short segments while the others are for long ones; see [21-25] for more details. Figure 1(b) illustrates the unequal crossover mechanism, where the paired homologous chromosomes exchange unequal segments during cell division. This results in the duplication of the unit b in one chromosome and the deletion (contraction) of it in the other. The single-copy duplication model assumes that one unit can duplicate at a time, while the multiple-copy duplication model assumes that multiple adjacent units can duplicate at a time. For example, the adjacent units bc in the map abbc can duplicate in one event, leading to the map abbcbc.

• Insertion/Deletion: Insertion is the appearance of unit types, possibly due to errors or translocation events. For example, insertion of unit z in the map ac leads to the map azc. A dual operation to insertion is deletion, where one unit disappears, also leading to map contraction. Potential mechanisms for these events include the ones mentioned above except for replication slippage.

Each of these operations is assigned a cost to reflect the relative rate at which it occurs in nature. The cost of a unit mutation is proportional to the Hamming/edit distance between the nucleotide sequences of the units. We write d_M(x, y) to denote this cost between two units x and y. (Of course, d_M(x, y) = 0 if x = y.) In Figure 1, d_M(a, b) = 1 because of one mismatch at the last nucleotide, and d_M(a, c) = 2 because of mismatches at the fourth and the last nucleotide. The costs of duplication, insertion, and deletion are arbitrary and usually chosen such that the duplication cost is less than the mutation, deletion, and insertion costs.

Reconstruction of evolutionary history

The evolutionary history of a map is the series of evolutionary operations leading to the observed sequence of units. This history is also called a duplication history, because duplication is the main event contributing to map evolution. The cost of a duplication history is the total cost of the occurring operations. An optimal (most parsimonious) history is one with minimal cost. For example, one history of the map bcaccbb originating from the leftmost unit b is as follows: the leftmost unit b duplicates three times to the right, leading to the sub-map bbbb. Then the second b mutates into c, leading to the sub-map bcbb. The unit c duplicates two times to the right, producing the sub-map bcccbb. The second c mutates into a, and the last c duplicates once again to the right, leading to the final observed map. Assuming that d_M(a, b) = d_M(b, c) and d_M(a, b) < d_M(a, c), we leave it as an exercise for the reader to verify that this scenario is indeed an optimal one.
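As an illustration of this cost model, the following is a minimal Python sketch that scores the duplication history above; the unit sequences and the duplication cost are invented for the example (chosen so that d_M(a, b) = d_M(b, c) = 1 and d_M(a, c) = 2), not taken from the paper:

    def hamming(u: str, v: str) -> int:
        """Mutation cost d_M: Hamming distance between two unit sequences."""
        return sum(a != b for a, b in zip(u, v))

    # Toy nucleotide sequences for the unit types a, b, c (illustrative only):
    UNIT_SEQ = {"a": "GGCAT", "b": "GGCAA", "c": "GGCTA"}
    d_M = lambda x, y: hamming(UNIT_SEQ[x], UNIT_SEQ[y])

    DUP_COST = 0.5  # assumed: one duplication costs less than one mutation

    def history_cost(events):
        """Sum the cost of a series of ('dup',) and ('mut', x, y) events."""
        return sum(DUP_COST if e[0] == "dup" else d_M(e[1], e[2]) for e in events)

    # The history of bcaccbb grown from a single leftmost unit b, as in the text:
    history = [("dup",)] * 3 + [("mut", "b", "c")] + [("dup",)] * 2 \
              + [("mut", "c", "a")] + [("dup",)]
    print(d_M("a", "b"), d_M("b", "c"), d_M("a", "c"))  # 1 1 2
    print(history_cost(history))  # 6 duplications + 2 mutations -> 6.0

Any competing history of the same map can be scored the same way, which is how the optimality claim in the exercise above can be checked.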
Map alignment

The alignment of minisatellite maps includes the operations of replacement (match/mismatch, where a mismatch corresponds to a mutation), free insertion/deletion (indel), and duplication. Given a cost for each operation, an optimal alignment is one of minimum cost. An efficient algorithm for finding optimal map alignments is ARLEM [18]. An optimal map alignment delivers a three-stage scenario: the aligned units (match/mismatch) refer to common ancestors, the duplications refer to differences in the individual duplication histories, and the indels may refer to errors or translocations. Figure 2 (right) shows an alignment of two maps where the replaced (matched/mismatched) characters are placed above each other and the units that evolved by duplication are attached to arcs. In this representation, an arc connecting two identical units corresponds to a duplication event, and an arc connecting two different units corresponds to a duplication followed by a mutation. In this alignment, the sub-map bcaccbb has emerged through duplication/mutation events from the leftmost unit b. This sub-map is the example given above in the duplication history, and no indels exist. Biologically, map alignment, compared to individual map analysis, provides clues about the timing and direction of map evolution as well as the type of operation. From the alignment, we can conclude that the replaced units appeared before the units that evolved by duplication, because of inheritance. We can also conclude that the evolved regions emerged from the inherited units that occur on either their left or right side. Furthermore, if we know that one sequence is the ancestor of the other, then we can better distinguish between the loss and gain of units, i.e., contraction versus duplication and insertion versus deletion.

Extension beyond the single-copy model

For WAMI, we extended ARLEM with a simple heuristic algorithm to account for double-copy duplications, where at most two different units can duplicate at a time, e.g., bc → bcbc. The idea of our algorithm is to pre-process the map to locate each sub-array of units of the form "xyxy...", where x ≠ y. We then create a new type X = xy and replace the units in this array with the new type to yield the array "XX...". The distance between the new type X and each original type z is the cost of the optimal duplication history of xy emerging from, or contracting to, z. The distance between two new types X = xy and X' = x'y' is the cost of aligning xy and x'y'. Finally, the alignment algorithm of ARLEM runs on the transformed map using the new distances between the map units. In WAMI, the use of the double-copy model is optional, because the single-copy duplication model is already sufficient for many datasets. Computationally, it is infeasible to infer a history under the still more general multiple-copy duplication model involving an arbitrary number of copies [26].
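A minimal sketch of this pre-processing step, assuming maps are strings of single-character lowercase unit symbols and using fresh uppercase letters for the composite types (the distance assignment described above is omitted):

    import re
    from itertools import count

    def collapse_double_copies(map_str: str):
        """Replace each maximal sub-array 'xyxy...' (x != y, at least two xy
        copies) with a fresh composite type X = xy, as in WAMI's heuristic."""
        new_types = {}                             # (x, y) -> composite symbol
        fresh = (chr(c) for c in count(ord("A")))  # 'A', 'B', ...

        def repl(match):
            x, y = match.group(1), match.group(2)
            if (x, y) not in new_types:
                new_types[(x, y)] = next(fresh)
            return new_types[(x, y)] * (len(match.group(0)) // 2)

        transformed = re.sub(r"(.)(?!\1)(.)(?:\1\2)+", repl, map_str)
        return transformed, new_types

    print(collapse_double_copies("abcbcbcd"))  # ('aAAAd', {('b', 'c'): 'A'})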
Four tasks supported by WAMI

Fast computation of pairwise map alignment

The basic step in WAMI is the computation of all pairwise alignments of the input maps, uploaded or edited online in multi-FASTA format. The user can use default parameters (the costs of each operation) or specify other ones through a cost file uploaded to the server. The map alignment model implemented by ARLEM allows aligned units to duplicate either to the left or to the right. For example, the sub-map dd in the aligned lower sequence of Figure 2 (right) originated from the inherited unit c on its right: c duplicated to the left to produce cc, then the leftmost c mutated into d, which eventually duplicated to the left to produce another d. Previous programs allowed duplication only in the left-to-right direction, where such a scenario cannot be modeled, leading to alignments of higher cost. This symmetric feature is crucial for studying the direction of map elongation, discussed below.

Phylogenetic tree construction

WAMI uses the program BIONJ [27] to construct the phylogenetic tree from the pairwise distances computed by ARLEM. BIONJ is based on a neighbor-joining algorithm. The program njplot [28] is then used to visualize the tree.

Analysis of structural variation

In studying structural variation, researchers try to identify highly variable regions of the map. Most previous studies showed that map extremities are more variable than other map regions, a phenomenon known as (bi)polar variability [2,5,20]. WAMI can automatically provide evidence of (bi)polarized variation for a given dataset based on a scramble (randomization) test.

Figure 2. Left: The main interface of WAMI; the lower section sets the option for building a phylogenetic tree over the input maps, and there are other related tabbed pages, including introduction, web-service, download, and help/blog pages. Right: The result page of WAMI. The output is organized into three categories: alignment, phylogeny, and batch retrieval. In the alignment category, all pairwise alignments can be displayed; here, an alignment between maps one and two (given in the left screenshot) is visualized. The replaced (match/mismatch) units are placed above each other. An arc connecting two identical units corresponds to a duplication event, and an arc connecting two different units corresponds to a duplication followed by a mutation event. The sub-map composed of the units "bcaccbb" of the lower sequence emerged from the leftmost unit b of this sub-map; the duplication history is the one explained in the subsection on duplication history and the alignment model. The category showing the phylogenetic tree appears only if this option was set; we provide the tree in text, JPEG (shown image), and PDF format. Finally, we provide a link to a compressed file containing all the input/output files of a WAMI run.

The program ARLEM was augmented with an extra option that determines the location associated with half of the optimal score in the aligned maps. We call this location the pivot-point. The rationale of the pivot-point is that if the variations accumulate at one end, then the pivot-point is shifted towards this end. The pivot-points are calculated for all pairwise alignments and normalized with respect to the respective sequence lengths. A histogram of the pivot-points is then generated. To qualify the results, WAMI computes another histogram for a randomized dataset obtained by shuffling the units in each map of the input dataset. For uniformly distributed unit types along the maps, the histogram is expected to be close to a Gaussian distribution centered around the value 0.5. WAMI produces a single plot containing the two histograms overlaid on each other. The Results section contains examples of applying this procedure to the MS205 and MSY1 datasets.
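The pivot-point statistic and its scramble control can be illustrated with a small sketch; real pivot-points come from ARLEM's alignment scores, so the per-position mismatch cost used here is only a stand-in:

    import random

    def pivot_point(costs):
        """Normalized position where the accumulated cost first reaches half
        of the total alignment cost."""
        total, acc = sum(costs), 0.0
        for i, c in enumerate(costs):
            acc += c
            if acc >= total / 2:
                return i / max(len(costs) - 1, 1)
        return 1.0

    def scramble(m, rng):
        units = list(m)
        rng.shuffle(units)
        return "".join(units)

    rng = random.Random(0)
    ref, maps = "aaaabbbc", ["aaaabbcc", "aaaabccc"]   # toy maps, variable 3' end
    cost = lambda m: [a != b for a, b in zip(ref, m)]  # stand-in per-position cost

    real = [pivot_point(cost(m)) for m in maps]
    ctrl = [pivot_point(cost(scramble(m, rng))) for m in maps for _ in range(200)]
    print(sum(real) / len(real))  # ~0.79: pivots shifted toward the 3' end
    print(sum(ctrl) / len(ctrl))  # closer to the middle for shuffled units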
Analysis of duplication dynamics

Determining the direction in which the units duplicate is an interesting issue that can help in inferring the evolutionary processes and the source/origin unit of the map. For the MSY1 dataset, for example, Jobling et al. [5] conjectured that Type 4 (4a) is the source of the map and assumed that the units preferably duplicate in the 3'→5' direction. WAMI has a procedure that can test this kind of hypothesis based on another scramble test. In ARLEM, units are allowed to duplicate towards the left or towards the right to achieve the best alignment score, while accommodating the most parsimonious series of duplication events. We added an option to ARLEM to restrict the duplications to originate either from the leftmost or from the rightmost unit of a map interval with duplication events. For example, if only the option imposing a left-to-right duplication origin were set, then the sub-map "dd" in the aligned lower sequence of Figure 2 (right) could not have originated from the unit "c" on its right, probably leading to an increased alignment cost under this restriction. To detect directional bias, WAMI invokes ARLEM three times on the dataset: 1) with both duplication directions allowed, 2) with only left-to-right duplications allowed, and 3) with only right-to-left duplications allowed. The latter two cases tend to yield higher costs than the first, because the duplications may be forced to follow a non-parsimonious scenario. Then the number of alignments in the second invocation with cost higher than the optimal one (as determined by the first invocation) is counted. Let E_l denote this number. The analogous number E_r for the third invocation is also computed. A normalized value combining both figures, E_n = (E_l - E_r) / (E_l + E_r), is then computed. The idea is that if E_l differs from E_r, and E_r is small, then E_n converges to +1, and one can argue that duplications in the right-to-left direction alone are almost sufficient to yield alignments close to the optimal ones. Hence, right-to-left duplications appear preferred in the evolution of the minisatellites at hand. To further validate the results, WAMI runs a scramble test and computes the normalized E_n values for many random datasets, obtained by shuffling the map units. Finally, the E_n values are summarized in a histogram and plotted along with a peak representing E_n of the original dataset. For random datasets, where duplications to the left and to the right occur at an equal rate, the distribution of E_n is expected to be close to a Gaussian distribution centered around zero. The scramble test is compute-intensive, because the map alignment phase is repeated many times over scrambled datasets of the same size as the original. To speed up the computation, we use an approximation technique: we reduce each map to its modular structure, which is the sequence of distinct units in the map. For example, the modular structure of the map aaabbc is abc. This is reasonable because transitions between unit types strongly indicate the direction of duplication. Because the modular structure is typically much shorter than the map, a significant speed-up is achieved.
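A minimal sketch of the E_n statistic and the modular-structure reduction; the alignment costs below are invented stand-ins for ARLEM's output:

    from itertools import groupby

    def modular_structure(m: str) -> str:
        """Collapse runs of identical units: 'aaabbc' -> 'abc'."""
        return "".join(k for k, _ in groupby(m))

    def E_n(opt, l2r, r2l):
        """E_n = (E_l - E_r) / (E_l + E_r); E_l (E_r) counts alignments whose
        cost under the left-to-right-only (right-to-left-only) restriction
        exceeds the unrestricted optimum."""
        E_l = sum(c > o for c, o in zip(l2r, opt))
        E_r = sum(c > o for c, o in zip(r2l, opt))
        return (E_l - E_r) / (E_l + E_r) if E_l + E_r else 0.0

    print(modular_structure("aaabbc"))  # abc
    # Toy costs for three pairwise alignments under the three invocations;
    # E_l = 3 and E_r = 0, so E_n = +1: right-to-left duplications suffice.
    print(E_n(opt=[10, 12, 9], l2r=[14, 15, 11], r2l=[10, 12, 9]))  # 1.0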
User Interface

WAMI has an easy-to-use and intuitive interface. The main web page contains four examples to help the user format the map data and the cost file. (One example is the real dataset for President Jefferson's fatherhood case, mentioned above. Two other examples concern published maps of the MSY1 [4] and MS205 [2-4,20] datasets.) Tool tips and a help menu are also provided. For sustainability of the service, we attached a blog to the website to collect user feedback and learn about new features requested by the community. A part of the main interface is shown in Figure 2 (left). Upon job termination, the user is directed to the results page, where pairwise alignments are displayed and one can toggle between them; see Figure 2 (right). The duplication events within optimal alignments are represented by arcs. The images depicting the alignments are produced using LaTeX. (The respective TeX files are included in the batch download.) If the phylogeny option was chosen, the tree can be retrieved in Newick/JPEG/PDF format. The results of the structural-variation and duplication-dynamics options are summarized and presented to the user in the form of histograms. For datasets larger than 50 sequences, the user is prompted to enter an email address to receive a notification when the job terminates. All these results can be downloaded as a compressed file.

Computational efficiency

The program ARLEM uses a highly optimized algorithm for map alignment. It is based on a compression technique that saves redundant computations, and its speed is not affected by any increase in the number of types. In [18], we reported that ARLEM is 18 to 24 times faster than the previously available algorithm MS_ALIGN, using real and artificial datasets. For further speed-up, the options for computing phylogeny, analyzing structural variations, and analyzing duplication dynamics run in parallel over a computer cluster of four nodes, where each node contains two quad-core CPUs (2.5 GHz each) with 64 GB RAM.

Results and Discussion

The examples given in the sequel are based on the minisatellite datasets MSY1 [5] and MS205 [2-4,20]. The former dataset is composed of 345 maps and the number of distinct unit types is eight; the types are assigned the codes {0, 1, 1a, 2, 3, 3a, 4, 4a}. The latter dataset is composed of 653 maps, of which 429 valid maps belong to haplotype C [4]. The number of distinct unit types is two and the types are assigned the codes {A, T}. Table 1 shows the running times for real and artificial datasets of varying sizes and for different scramble-test parameters. The number of iterations is the number of random datasets analyzed for studying the directional bias based on the modular map structure; running the same number of iterations on the non-modular structure would multiply the alignment time correspondingly. The time for constructing the phylogenetic trees is not shown in the table, because it is in the range of seconds, i.e., negligible compared to the other steps. The alignment time of the MSY1 dataset is higher than that of MS205 because the average length of the MSY1 maps is higher. In the analysis of directional bias, however, MS205 takes more time because the average length of its modular structure is three times that of MSY1, with much higher variability, so our approximation technique described above is less effective for MS205. (The average modular structure lengths are approximately 13 and 4 for MS205 and MSY1, respectively.) The random datasets were generated such that each map has an average length of 80 units (minimum and maximum of 60 and 100 units, respectively) with an average modular structure length of 12 units, to simulate difficult scenarios. Figure 3 shows two phylogenetic trees produced by WAMI for subsets of the MS205 and MSY1 datasets. In these trees, we see that individuals from the same population are clustered together, which is in accordance with published results [2,3,6].

Structural variation

We applied WAMI to both datasets to investigate structural variation. When studying structural variation in MS205, Armour et al.
[2,20] noticed polarized variability at the 3' end, where most of the differences between the alleles (individual maps) accumulate. Figures 4(a) and 4(b) show the histograms of the pivot-points obtained for the original MS205 dataset and for a subset of it including haplotype C. It is clear that the histograms of the original datasets are biased to the right in comparison to those of the randomized datasets. This bias indicates polar variability towards the 3' end. These plots confirm the results obtained by Armour et al. [2]. (The presented results for MS205 were obtained using the double-copy option, but the results under the single-copy model are very similar.)

Table 1. Running times in minutes on WAMI for the MSY1 and MS205 (haplotype C) datasets. The column titled "Dataset" contains the dataset used; "Random100", "Random200", and "Random400" are datasets with 100, 200, and 400 artificial maps, respectively. The column titled "Num." contains the number of pairwise map alignments that need to be computed. The column titled "Iterations" gives the number of randomization steps (and hence the increased data size) in the analysis of duplication dynamics. The number of iterations in the task of analyzing structural variation is 2, because it runs once on the original dataset and once on a randomized dataset of the same size.

For the MSY1 dataset, lying on the Y chromosome, Jobling et al. [5] noticed high variability at the 5' end, in contrast to the autosomal MS205 dataset, and they also noticed that Types 4 and 4a, existing almost solely at the 3' side, cause another source of variation at that end. This suggests bi-polar variability of this dataset. For us it was interesting to see how WAMI can thus help in spotting not only polar but also bi-polar variability. Figure 4(c) shows our observations for the MSY1 dataset. The resulting histogram has peaks at both ends, indicating that the variations are bi-polar. To further verify our procedure on the MSY1 dataset, we removed Types 4 and 4a from the 3' end and repeated the experiment. Figure 4(d) shows a histogram biased towards the 5' end. That is, both extremities of the MSY1 maps are highly variable, and the unit types 4 and 4a introduce another source of variation, verifying the observation of Jobling et al. [5].

Duplication dynamics

We used WAMI to study duplication dynamics with the MSY1 and MS205 datasets. Figure 5 (left) shows the resulting histogram for MSY1. The peak value on the right shows E_n of the real dataset, where E_l = 876 and E_r = 0. It is clear that this value is far from the E_n values of the randomized datasets with expected equal rates of left-to-right and right-to-left duplications. That is, the plot indicates that left and right duplications do not contribute equally to the duplication history, and the units duplicate preferably in the 3'→5' direction, as conjectured by Jobling et al. [5]. In Figure 5 (right), we show the histogram for the MS205 dataset (haplotype C), which also shows directional bias, but this time towards the right (E_l = 1940 and E_r = 13318). These results for both datasets may indicate the existence of unknown (chromosome-specific) dynamic constraints governing the duplication of the minisatellite units. Hence, they call for further investigation.

Figure 5. Left: Histogram to detect directional bias for the MSY1 dataset. The distribution of E_n for the randomized data is centered around zero, while the peak at 1 on the x-axis is E_n of the original dataset, clearly far from that of data with expected equal rates of left-to-right and right-to-left duplications. Right: Histogram to detect directional bias for the MS205 dataset; the peak on the left of the x-axis is E_n of the original dataset.
Conclusions

In this paper, we presented WAMI, a web server for the comprehensive analysis of minisatellites. The server provides many of the functionalities needed by researchers in this area. Future versions of the server are planned to provide data-mining functionalities for associating the map comparison results with other features, like age, ethnicity, or genetic markers on the chromosomes. The algorithms of WAMI for minisatellite map analysis can also be used for comparing arrays of tandemly repeated units within proteins or gene sequences; the work of Rivals et al. [29] shows an example of this application. The alignment part of WAMI can also be used to compare parent/son microsatellite datasets, provided that the microsatellite units are mapped to symbols, in analogy to the unit-typing step of minisatellites. In addition to its applications in parental tests, this comparison helps in studying mutation rates in association with other map characteristics and in estimating parental ages. The work of Dupuy et al. [30] is an example of such studies. In this paper, we relied on a map evolution model based on single- and double-copy duplications. In spite of the computational difficulty, it is still interesting to incorporate the multiple-copy duplication model in map alignment, eventually through heuristic algorithms. Furthermore, it is also desirable to incorporate recently suggested evolutionary operations, such as boundary switch and modular structure change [31], which appear in some minisatellite datasets. These operations could be modeled by block exchange within the map, in an analogous way to the block exchange operation in genome rearrangement studies. But a practical solution to this problem is algorithmically challenging and remains a subject of future research.

Availability and requirements

Project name: WAMI: A Web Server for the Analysis of Minisatellite Maps.
Programming languages: Perl, C, JavaScript, JSF.
Other requirements: Best viewed in the browsers Firefox, Internet Explorer 8 (IE8), Safari, and Opera. For local installation, Tomcat 6.0 or later, JDK 1.5 or later, and Apache Ant 1.7 or later are needed.
License: Free for academics. An authorization license is needed for commercial usage (please contact the corresponding author for more details).
Any restrictions to use by non-academics: No restrictions.

Authors' contributions

MA and RG contributed to the theoretical developments which form the basis of WAMI. MA and MEK developed and tested the software. All authors wrote and approved the manuscript.
Classifying Sources Influencing Indoor Air Quality (IAQ) Using Artificial Neural Network (ANN)

Monitoring indoor air quality (IAQ) is deemed important nowadays. A sophisticated IAQ monitoring system which could classify the sources influencing the IAQ would definitely be very helpful to the users. Therefore, in this paper, an IAQ monitoring system is proposed with a newly added feature which enables the system to identify the sources influencing the level of IAQ. To achieve this, the collected data have been trained with an artificial neural network (ANN), a proven method for pattern recognition. Basically, the proposed system consists of a sensor module cloud (SMC), a base station, and a service-oriented client. The SMC contains collections of sensor modules that measure the air quality data and transmit the captured data to the base station through a wireless network. The IAQ monitoring system is also equipped with an IAQ index and a thermal comfort index which inform the users about the room's conditions. The results show that the system is able to measure the level of air quality and successfully classify the sources influencing IAQ in various environments, namely ambient air, chemical presence, fragrance presence, foods and beverages, and human activity.

Introduction

People normally spend most of their time in indoor environments; therefore, their health depends heavily on the indoor environment in which they live. Hence, meticulous attention should be given to making sure the indoor environment is safe and comfortable. As a major part of the indoor environment, attention should also be given to indoor air quality (IAQ). Continuous monitoring of IAQ is important to make sure people breathe healthy and safe air. Real-time IAQ monitoring keeps people alert to any pollution that might be present in an indoor environment right as it happens. A good IAQ monitoring system should also be able to tell the users about the source of pollutants (for example, volatile organic compounds (VOCs) emitted from a chemical product). A better IAQ monitoring system with enhanced features is proposed in this paper: a smart IAQ monitoring system that can identify and inform the users about the source influencing the IAQ level. For example, when there is smoke in a room, this system can identify the instance and inform the users. The information is sent through a wireless network to a database from which the users can access it from anywhere. To make sure that the level of IAQ stays within the acceptable range, many parties, especially building administrators, have made considerable efforts. Some researchers found that a low level of IAQ could affect the quality of life of the occupants and may result in low productivity [1]. Modern buildings, especially, need more attention to IAQ because these buildings have been built with energy conservation in mind [2]. Following the oil crisis in the late 1970s, the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) issued guidelines for new buildings designed to be energy efficient [3]. This change in the practice of modern buildings, however, has increased the possibility for building occupants to develop Sick Building Syndrome (SBS) and other Building Related Illnesses (BRI). Standards for IAQ have been issued by various parties such as ASHRAE, the WHO, and the US EPA, and by individual jurisdictions such as Malaysia, Singapore, and Hong Kong.
For the purposes of this study, the IAQ standards issued by the US EPA and ASHRAE have been followed closely. IAQ, as defined by ASHRAE in Standard 62-2001 (Ventilation for Acceptable Indoor Air Quality), is air in which there are no known contaminants at harmful concentrations, as determined by cognizant authorities, and with which a substantial majority (80% or more) of the people exposed do not express dissatisfaction. Even though IAQ problems happen only occasionally, they can be severe when they do happen. Studies conducted by the United States Environmental Protection Agency (US EPA) showed that pollutants present in indoor environments can sometimes be more dangerous than pollutants in outdoor environments. In most cases, the concentration of pollution was about five times higher than the concentration level found outside; in severe cases, it could be 100 times higher [1]. Among the most common indoor air pollutants are gases, chemicals, and living organisms like mold and pests. These pollutants can impair the health of the building occupants. Sore eyes, headaches, and fatigue are among the common complaints associated with a low IAQ level. In serious cases, occupants complain about respiratory illness and heart disease, and some may even suffer from cancer and other serious health problems [4,5]. At very high concentrations, pollutants like carbon monoxide can lead to death [6]. This study aims to propose a smart IAQ monitoring system in which the sources influencing IAQ can be identified. The study uses an ANN to train the system to recognize the sources influencing the IAQ level under five conditions: ambient air, chemical presence, fragrance presence, foods and beverages presence, and human activity. In the field of artificial intelligence (AI), the ANN is one of the major components; it uses a modeling technique inspired by the human brain (memory-less processing elements known as neurons or nodes which are nonlinearly interconnected) and can be trained from examples. When an ANN is properly trained, it can reliably assign a data point to a population. This attribute of ANNs is so useful that they may give better results compared to other techniques in the same situation. Therefore, many researchers have opted for the use of ANNs, including in the environmental sciences, usually to predict atmospheric concentrations of NO2, ozone, benzene, and PM10 [7-9]. This IAQ monitoring system consists of three parts: the sensor module cloud, the base station, and the service-oriented client. The sensor module cloud (SMC) contains collections of sensor modules that measure the air quality data and transmit the captured data to the base station through a wireless network. Each sensor module includes an integrated sensor array that can measure indoor air parameters like nitrogen dioxide (NO2), carbon dioxide (CO2), ozone (O3), carbon monoxide (CO), oxygen (O2), volatile organic compounds (VOCs), and particulate matter (PM10), along with temperature and humidity.

Related Works

Some researchers have proposed frameworks or systems to monitor IAQ using wireless sensor networks, such as [10-12], while others have incorporated index calculation to determine the IAQ level, such as [13-15]. Hui Xie et al. [7] exploited ANNs to predict SBS. The results were compared to other linear regression models, and it was concluded that ANNs gave better predictions and could be useful in other areas as well. Ghazali et al.
[16] also developed an ANN air quality prediction model using a feed-forward network. The result is similar to that of Hui Xie et al. [7], where the model with the ANN structure produced the best prediction. Abd Rahman et al. [17] proposed forecasting of the air pollution index. The forecasting was based on 10 years of monthly Air Pollution Index (API) data from industrial and residential monitoring stations in Malaysia. The autoregressive integrated moving average (ARIMA), fuzzy time-series (FTS), and ANN methods were used to forecast the API values. The results showed that ANNs gave the smallest forecasting error compared to the other two methods. Clearly, ANNs are useful in decision-making processes for air quality control and management. Previous research on IAQ monitoring systems usually used ANNs to predict or forecast either the value of air quality or the value of air pollution. This project is different from previous research because it aims at training the ANN to classify and identify the sources that influence the indoor environment. The functionality of this system and the methodology of this project are explained in the next section. At present, odour has also been included as a source influencing IAQ [18]. Therefore, many studies have been carried out to use IAQ monitoring systems for odour detection in real time. In fact, an IAQ monitoring system is the only system available to get continuous, quick, and reliable information about the presence of odours in ambient air [18]. However, the odour recognition principle was mainly used to investigate odours in outdoor environments [19-22]. In indoor environments, odour recognition was usually used to detect odours from a single category, for example, to recognize types of mushrooms, oil flowers, and pure chemicals only [8,15,23]. Nonetheless, odour recognition from sources of indoor air pollutants has not been carried out.

Selection of IAQ Parameters

There are various parameters involved in measuring IAQ. These parameters are divided into four categories: physical conditions, chemical contaminants, biological contaminants, and other common contaminants. However, for the purposes of this project, only nine parameters have been chosen. These parameters have been identified as the ones most used in measuring IAQ [24-27]. The parameters used in this project include common indoor air contaminants (CO2, CO, O3, NO2, VOCs, O2, and PM10) and thermal comfort parameters (temperature and humidity).

System Architecture

The proposed system has been designed in such a way that it is able to monitor air quality in both indoor and outdoor environments. Nine sensors have been used to capture data for the nine parameters required in this project. Figure 1 shows our proposed system architecture for real-time IAQ monitoring. The proposed system consists of the sensor module cloud (SMC), the base station, and the service-oriented client. The SMC contains an array of sensor modules that capture the air quality data. The data are then transmitted to the base station through a wireless connection, where the data are stored in a server. The server functions as a data logger to keep track of the data received at the base station. It stores the data in a database, processes the data, performs analysis, and provides IAQ information through a web service. The web service enables the clients or users to be informed about the IAQ level in real time.
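To make the sensor-to-base-station data path concrete, here is a minimal sketch of how one module's readings could be framed for transmission; the packet layout (node id, sequence number, nine 10-bit ADC values) is a hypothetical illustration, not the format actually used by the IRIS motes:

    import struct

    # Hypothetical little-endian layout: uint8 node id, uint16 sequence number,
    # nine uint16 fields holding the 10-bit ADC readings of the nine sensors.
    PACKET_FMT = "<BH9H"

    def encode(node_id: int, seq: int, adc: list) -> bytes:
        assert len(adc) == 9 and all(0 <= v < 1024 for v in adc)  # 10-bit range
        return struct.pack(PACKET_FMT, node_id, seq, *adc)

    def decode(packet: bytes):
        node_id, seq, *adc = struct.unpack(PACKET_FMT, packet)
        return node_id, seq, adc

    pkt = encode(3, 42, [512] * 9)
    print(len(pkt), decode(pkt))  # 21 bytes -> (3, 42, [512, ..., 512])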
Sensor Module

The sensor module is used to sense and measure air quality in an indoor environment. Figure 2 depicts a schematic diagram of the major components of the sensing module. Basically, there are three major components: the sensor components, the microcontroller, and the wireless transceiver. The entire module is powered by a wall adapter for a continuous power supply. In this study, a microcontroller unit (MCU) named STC12C5A32S2 with a built-in 10-bit analog-to-digital converter (ADC) converts the analogue reading of each sensor response to a digital value. The STC12C5A32S2 was chosen because this 8-bit microcontroller processes 12 times faster than a general 80C51 (it can go up to a 35 MHz operating frequency) [28]. After the conversion of the sampled voltage by the ADC, the microcontroller is responsible for encapsulating the converted data into packets and passing the packets via a serial port to the wireless transceiver. The wireless transceiver unit transmits the data packets using a multi-hop wireless sensor network (WSN) algorithm through the radio frequency (RF) chip. The implementation of WSNs in IAQ monitoring reduces installation costs, since no wiring needs to be deployed. The IRIS mote from the Memsic Company was selected as the wireless transceiver unit for its low power consumption, low price, and compliance with the IEEE 802.15.4 wireless protocol [29]. The operating system used in this mote is based on TinyOS, which allows the user to quickly implement the communication network [30]. In this system, each IRIS mote is responsible for receiving data from the sensor components through the microcontroller. The data from the IRIS mote are sent to the base station; however, if there is another IRIS mote nearer to the base station, the data are sent to this neighbour first before being forwarded to the base station. This data transmission process is known as multi-hopping. Figure 3 shows the prototype sensor module, consisting of nine gas sensors and a wireless transceiver with the microcontroller attached to it. The sensor components consist of three types of sensors: gas sensors, a particle sensor, and a thermal sensor. The sensors used in the proposed system, along with their operational ranges, are listed in Table 1 below. Each sensor generates a voltage signal based on the current environment; these sampled voltage levels are read by the microcontroller periodically. Selecting a proper gas sensor is a relatively complicated issue, as many factors need to be taken into consideration. For this study, most of the gas sensors are metal oxide based, while the rest are electrochemical based. The metal oxide semiconductor (MOS) sensors contain a tin dioxide (SnO2) sensing element that responds to gas molecules, typically volatile compounds [31]. Each consists of two major parts, namely the heater and the sensor substrate. The substrate has two terminals, and its resistance is measured as a representation of the amount of gas concentration in the environment, while the heater provides the stabilized temperature needed for the measurement [32]. Due to their long lifetime, high sensitivity, and low cost, this type of sensor is commonly used in many indoor applications, such as home, office, and factory appliances. The second type of gas sensor used in this study is the electrochemical sensor. This type of sensor has high sensitivity to environmental change and does not need power to operate. However, this type of sensor has its limitations.
These low-cost sensors cannot provide accurate readings of the gas present in the air, since they are strongly affected by temperature, humidity, and the presence of other gases. To compensate for this limitation, the response patterns are observed instead of the actual measured values. The sensor resistance and the gas concentration in the environment interact according to the power-law expression

Rs = A * C^(-α)

where Rs is the sensor resistance, A is a constant, C is the gas concentration, and α is the slope of the Rs curve on log-log axes [33].
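Under this power-law model, α is recoverable as the negative slope of a straight-line fit on log-log axes; a brief sketch with illustrative constants (A and alpha are invented, not sensor data):

    import numpy as np

    # Illustrative MOS response Rs = A * C**(-alpha).
    A, alpha = 50.0, 0.6
    C = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])  # gas concentration (ppm)
    Rs = A * C ** -alpha                              # sensor resistance

    # log10(Rs) = log10(A) - alpha * log10(C): a line with slope -alpha.
    slope, intercept = np.polyfit(np.log10(C), np.log10(Rs), 1)
    print(-slope, 10 ** intercept)                    # ~0.6 and ~50.0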
To detect physical contaminants like dust or particulate matter, the Sharp GP2Y1010 optical dust sensor was chosen. This sensor contains an infrared-emitting diode and a phototransistor arranged diagonally. Any dust that gets into the sensor reflects the infrared light, which is detected by the phototransistor. The advantage of this dust sensor is that it can detect small particles like cigarette smoke. It is commonly used in air purifier systems because it is small, cheap, and robust, and it draws little current [34]. As the thermal sensor, the HSM-20G is used to detect temperature and humidity. The HSM-20G is an analog sensor that converts the ambient temperature and relative humidity into standard output voltages (1-3.3 V). It can measure temperatures in the range 0-50 °C and relative humidity (RH) from 20% to 95%, with an accuracy of ±0.5% at 25 °C [35]. With the calibration curve provided in the datasheet, these analog voltages can be converted to units of temperature (°C) and relative humidity (%).

Base Station

The base station in the IAQ monitoring system contains two components: a wireless transceiver and a server that acts as a data logger. The base station is responsible for managing, collecting, and recording the data before displaying it on a computer screen and on the web service. The wireless transceiver unit is similar to that of the sensor module: an IRIS mote with an ATmega1281 low-power microcontroller and an AT86RF230 radio frequency (RF) chip [29]. It also contains a Future Technology Devices International (FTDI) device which emulates the RS-232 transmission protocol and communicates with the Data Processing Module (DPM). The DPM is responsible for processing the air quality data and writing it into the database. At the same time, the DPM sends the data to the web service, which allows users to access the information in real time. A simple database based on the SQLite format is used to log the data for further processing (if necessary) on the server system. SQLite was selected over alternatives like MySQL simply because it is easier to set up and uses single-file storage. However, due to its storage limitations, the DPM has been programmed to create one SQLite file containing one week of data, and this process is repeated for the following weeks. Figure 4 shows the block diagram of the base station.

Service-Oriented Client

The service-oriented client makes all IAQ information accessible in real time. The data are shared through a self-developed graphical user interface (GUI), which facilitates user interaction with the programs. In this research, the GUI was developed using the LabVIEW software environment. In order to stream sensor data to the web service located on the server, the GUI adopts WebSocket technology. Figure 5 shows the GUI that has been developed. This GUI provides a map of the locations of the sensor modules, which are placed at different locations such as a meeting room, a lecture room, and a postgraduate room (this study uses the CEASTech Institute at UniMAP, Perlis, as its base location). It also provides the current values of the IAQ parameters with a color-coded graph based on the IAQ index level. The system receives the data in the form of voltages. These voltages are compared to the trained ANN output model, and the data are then classified according to the training set (into one of the five trained categories). The GUI then displays information on the source of activity influencing the IAQ level.

Development of the IAQ Index (IAQI)

An indoor environmental index has been developed to identify the quality of the indoor air as well as the comfort level of the occupants inside a building. To achieve this, two separate indices have been developed: the indoor air quality index (IAQI) and the thermal comfort index (TCI). Both indices are divided into four status categories, as shown in Table 2 below. The indoor environmental index was adapted from the U.S. EPA Air Quality Index (AQI), which was designed for outdoor air, so that it fits the indoor air parameters.

Calibration and Validation

Calibration of the gas sensors was one of the main challenges during the development phase. Although calibration was carried out for all sensors, the discussion here is limited to the calibration of CO2. Calibration of the CO2 sensor was performed in a laboratory environment, with each sensor mounted in a completely sealed gas test box, the SR3 (manufactured by Figaro), specially designed for the testing of gas sensors, to avoid any other gases affecting the experiment. The size of the box is 235 mm × 180 mm × 210 mm. Figure 6 shows the calibration setup for the CO2 gas sensor in the test box. For the use of the SR3 box, the method specified by the manufacturer was followed. Initially, the test box was left open in a clean environment and the mixing fan was turned on for 3 min to ensure that all contaminants had been removed. After that, the lid was put on the box. Subsequently, a syringe was filled with a volume of CO2 extracted through a gas syringe adaptor from the gas cylinder. High-purity gas (≥99.995%) was used for the calibration. The CO2 gas was injected into the box through a silicon septum and the mixing fan was turned on for 30 s. A time lapse of 30 s was allowed before reading the sensor output. The lid of the box was then removed so that it could return to the 3 min cleaning cycle. During the experiment, the room temperature of the calibration environment was maintained at 25 °C. Figure 7 shows the scatterplot of the CO2 sensor calibration result at different CO2 gas concentrations. It was observed that the output of the gas sensor was linear in the gas sample concentration. The simple linear regression (LR) method was used as the calibration method for the gas sensors. The coefficient of determination (R^2) was 0.97, indicating a good correlation between the measured data and the injected gas. These data were used to recalibrate the equations provided by the manufacturer in the datasheet.
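A minimal sketch of this calibration step; the concentration/voltage pairs below are illustrative stand-ins for the measurements in Figure 7:

    import numpy as np

    # Hypothetical calibration points: injected CO2 concentration (ppm) versus
    # the sensor's output voltage (V); illustrative values, not measured data.
    conc = np.array([400.0, 800.0, 1200.0, 1600.0, 2000.0])
    volt = np.array([0.42, 0.81, 1.18, 1.63, 2.01])

    slope, intercept = np.polyfit(conc, volt, 1)  # simple linear regression
    pred = slope * conc + intercept
    r2 = 1 - np.sum((volt - pred) ** 2) / np.sum((volt - volt.mean()) ** 2)
    print(f"V = {slope:.6f}*C + {intercept:.4f}, R^2 = {r2:.3f}")

    # Inverting the fit recalibrates the datasheet equation: C = (V - b) / a.
    print((1.20 - intercept) / slope)             # ppm estimate for a 1.20 V reading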
A validation procedure was also carried out to make sure that the data collected by the self-developed sensor node were similar to the data collected by a commercial device. The discussion of this procedure is limited to the NO2, temperature, and humidity sensors. The sensor validation was carried out with an Aeroqual portable indoor monitor (a professional-grade air quality measurement system), which had been pre-calibrated [36]. Three sensor nodes and the Aeroqual device were placed in a completely sealed clear glass container of 100 × 40 × 30 cm. The gas concentration inside the sealed container was then varied by injecting the particular gas of interest. The outputs of the sensors were recorded continuously for 1 h and plotted (Figure 8). Figure 8a shows the result for the NO2 sensor when 35 ppb of NO2 gas was injected into the sealed container. It can be observed that all sensor nodes, including the Aeroqual device, gave relatively similar readings over the 1 h period. During the experiment, the same room temperature setting of 25 °C was applied, as shown in Figure 8b, while Figure 8c shows the readings for humidity. For validation purposes, means and standard deviations for all three parameters were calculated, as shown in Table 3 below. As shown, the mean values for NO2, temperature, and humidity of the three nodes (Node 1, Node 2, and Node 3) did not differ substantially from those of the Aeroqual (pre-calibrated) device. This shows that the data measured by each developed sensor module are similar in response to those of the calibrated device. The standard deviation (Sd) in the table shows how the data differ from the mean value for each node. Overall, it shows that the developed system provides reliable data.

Experimental Setup and Data Collection

The experiment was conducted in a medium-sized room of 4.5 m × 2.4 m × 2.6 m. There was an air conditioner located at the center of the room at a height of 2.2 m from the ground. The sensor module was installed on the wall of the room at a height of 1.1 m from the ground. According to the Malaysian Standard on IAQ, the monitoring device or instrument should be positioned at a height between 75 cm and 120 cm, preferably 110 cm, from the floor [37]; this position is considered the breathing zone of the occupants. The node was powered by a 7.5 V adapter and was programmed to send the data to the base station every minute. The data collection was conducted over 22 days, between 9.00 a.m. and 5.00 p.m., with the room temperature set at 22 °C. Every day, after each experiment, the air in the room was purged by opening the windows. Figure 9 shows the process of data collection for all five conditions from day 1 to day 22. The first condition was the ambient air environment. The purpose of this experiment was to collect clean-air data for the room, under the assumption that the ambient air was not contaminated. For this first environment, the data collection process took about two days; thus, at the end of day 2, 960 samples had been collected for ambient air. The second environment was one with a chemical substance present. In this experiment, a cleaning agent was used as a proxy for the chemical substance. About 100 mL of the chemical was put in a beaker and placed at the centre of the room. The experiment was repeated for two days, and 960 samples were collected during that period. For the third environment, an air freshener was used as a proxy for fragrance. An automatic air freshener, which released fragrance every 15 min, was placed inside the room. It was hung on the wall at a height of 2 m from the floor and about 2 m from the sensing node. The data were collected over two days, yielding 960 samples.
For the fourth condition, a room environment with human activity, a person smoking a cigarette was chosen as a proxy. A person was asked to smoke in the room so that real data of a person smoking a cigarette could be collected. The person smoked one cigarette at the centre of the room, and every cigarette produced data for approximately 30 min. The experiment was repeated four times a day for seven days, and the amount of data collected for the environment with human activity was 940 samples. Lastly, for the room environment with the presence of food and beverages, coffee was selected to represent this category. A cup of coffee was placed in the middle of the room; each cup of coffee produced aroma for about 30 min. The experiment was repeated four times a day for seven days, with 940 samples collected. Figure 10 below shows the sensors' responses to the five different environments: ambient air, human activity, chemical presence, fragrance product (air freshener), and foods and beverages (coffee). The data were taken from one of the samples of the experiments. They show that the sensors of the module give different responses depending on the environment, indicating that the sensing module functions according to the varying sample concentrations of the different environmental conditions. From the graphs, it is clear that the sensor node reacted differently to different environments. In Figure 10a, the sensors gave relatively steady readings throughout; this was expected, since there was no substance that could disturb the ambient air concentration. On the other hand, in Figure 10b, with the presence of the chemical substance, certain gas sensors such as VOC, NO2, and O3 reacted differently compared to the ambient environment; the reading of the VOC gas sensor, in particular, rose sharply when the chemical was present in the room. Figure 10c shows the response of the sensors when the automatic air freshener released fragrance into the room every 15 min. The fragrance of the air freshener, however, vaporized quickly into the air after it was released; these alternations between high and low fragrance concentrations in the air can be seen in the fluctuating graph. Meanwhile, for the last two environments, 30 min of data were recorded instead of 8 h, because these two environments affected the sensors for only a short amount of time. Figure 10d illustrates the effect of the cigarette smoking activity on the sensors, while Figure 10e shows the presence of food and beverages (coffee) in the room. Notably, in all graphs, different sets of gas sensors reacted differently to the different environments.

Principal Component Analysis (PCA)

PCA was implemented to distinguish the different varieties of samples used in all environments. It is an unsupervised pattern recognition technique used to cluster the data into groups. The technique reduces the dimensionality of the data variables without losing the underlying information [38]. Each principal component is a linear combination of the original variables, as defined by the equation

PC_p = Σ_n (W_np × X_n)

where PC_p denotes the p-th order principal component over the n data variables, W_np is the regression coefficient (or weight) determined by PCA, and X_n is the adjusted data matrix.
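A minimal sketch of this step using scikit-learn; the data matrix here is random and only stands in for the 4760 × 9 matrix of sensor samples (rows: samples from the five environments; columns: the nine IAQ parameters):

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(4760, 9))        # stand-in for the sensor data matrix

    pca = PCA(n_components=2)
    scores = pca.fit_transform(X)         # PC1/PC2 scores used for the plot
    # ~[0.13, 0.12] here for random data; the paper reports 85.3% and 7.7%.
    print(pca.explained_variance_ratio_)
    print(pca.components_.shape)          # (2, 9): the weight matrix W_np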
The result of the PCA analysis is shown in Figure 11, where the scores of the five groups of pollutant sources are plotted for principal component 2 (PC2) versus principal component 1 (PC1). The two components show great discrimination, explaining 85.330% and 7.717% of the variance, respectively (a total cumulative variance of 93.047%). From the PCA plot, the samples can be qualitatively clustered into five different groups based on the different sources of pollutants.

ANN Analysis

In order to identify the source affecting the IAQ, this study used a multilayer feed-forward neural network consisting of three layers: an input layer, a hidden layer, and an output layer. As the network architecture, a three-layer perceptron model, shown in Figure 12, was used. The input layer contains the input variables for the network: nine neurons corresponding to the IAQ parameters CO2, CO, O3, NO2, O2, VOC, PM10, temperature, and humidity. There is one hidden layer, whose number of neurons is a model choice. The last layer of the model is the output layer, which consists of five target outputs. After the system has been trained, it is expected to identify the source influencing IAQ among the five conditions. The ANN analysis was done using the LabVIEW Machine Learning Toolkit (MLT) from National Instruments. The toolkit provides various machine learning algorithms in LabVIEW, usually classified as supervised or unsupervised methods. It is a powerful tool for pattern recognition, cluster identification, and visualization of high-dimensional data. Several classifiers based on algorithms such as the multi-layer perceptron (MLP), self-organizing map (SOM), radial basis function (RBF) network, and support vector machine (SVM) are available in this toolkit. For this research, the MLP with the back-propagation algorithm was used to evaluate the system's performance in classifying the sources influencing IAQ. The detailed parameters for the ANN training are given in Table 4. There were 4760 data samples collected, gathered in a database as the training set. The database contains two data sets: 80% of the data were used for training the network and 20% were designated as the testing set. The optimum structure of the ANN model was determined by trial and error. The number of input neurons is 9, which corresponds to the number of sensors, and the number of output neurons is five, which corresponds to the five environmental conditions. The number of hidden neurons was adjusted until the desired performance was achieved. In order to ensure convergence of the model, the input data were normalized to the range [0, 1] based on Equation (3):

Xs = (X - Xmin) / (Xmax - Xmin)

where Xs is the normalized value and Xmin and Xmax are the minimum and maximum values of the input, respectively. The normalized data were then randomized and organized as a matrix input for training. An activation function was used to calculate the output response of each neuron: the weighted sum of the input signals is passed through the activation function to obtain the response. There are a number of common activation functions in use with neural networks; for this model, the hyperbolic tangent function was employed. For each model, the network was trained 10 times, and the mean, minimum, and maximum classification rates were observed and recorded.
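Since the LabVIEW toolkit is proprietary, here is an equivalent minimal sketch in Python of the training setup described above (min-max normalization, an 80/20 split, and a 9-15-5 network with tanh hidden units); the data are random placeholders, so the printed accuracy is at chance level rather than the paper's 99.1%:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def minmax(X):
        """Equation (3): Xs = (X - Xmin) / (Xmax - Xmin), per input column."""
        return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

    rng = np.random.default_rng(1)
    X = minmax(rng.normal(size=(4760, 9)))  # stand-in for the nine sensor inputs
    y = rng.integers(0, 5, size=4760)       # five environment classes

    n = int(0.8 * len(X))                   # 80% training / 20% testing split
    clf = MLPClassifier(hidden_layer_sizes=(15,), activation="tanh",
                        max_iter=1000, random_state=0)
    clf.fit(X[:n], y[:n])
    print(clf.score(X[n:], y[n:]))          # ~0.2 on random labels (chance)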
The ANN model was able to classify the source influencing IAQ with an average classification rate of 99.1%.

Table 5. Results for the network model.

The confusion matrix for the model is shown in Table 6, where rows and columns represent actual and predicted values, respectively. Examining the confusion among the five sources of IAQ pollutants in Table 6, the chemical, fragrance, and food-and-beverages environments show no confusion, while the human-activity environment (cigarette smoking) has the highest confusion level compared to the other sources.

Conclusions

In this paper, an indoor air quality monitoring system has been developed with the additional function of classifying the source influencing the IAQ among five different environments, namely ambient air, chemical presence, fragrance presence, foods and beverages, and human activity. Data collection was completed over 26 days of different environment simulations to obtain the desired effects of the five environments. The data for the ANN classification training were collected using a self-developed IAQ monitoring system. Each sensor module in the system contains nine sensors for the gases and other parameters usually used in IAQ measurement: carbon dioxide, carbon monoxide, ozone, nitrogen dioxide, oxygen, volatile organic compounds, particulate matter, temperature and humidity. Based on the results for the network models, the ANN model with network structure 9-15-5 was able to classify the sources influencing the IAQ with minimum and maximum accuracies of 98.8% and 100%, respectively; on average, the system was about 99.1% correct. Overall, it can be concluded that the system delivers a high classification rate based on ANN.
Hyaluronan, Inflammation, and Breast Cancer Progression

Breast cancer-induced inflammation in the tumor reactive stroma supports invasion and malignant progression and is contributed to by a variety of host cells including macrophages and fibroblasts. Inflammation appears to be initiated by tumor cells and surrounding host fibroblasts that secrete pro-inflammatory cytokines and chemokines and remodel the extracellular matrix (ECM) to create a pro-inflammatory "cancerized" or tumor reactive microenvironment that supports tumor expansion and invasion. The tissue polysaccharide hyaluronan (HA) is an example of an ECM component within the cancerized microenvironment that promotes breast cancer progression. Like many ECM molecules, the function of native high-molecular weight HA is altered by fragmentation, which is promoted by oxygen/nitrogen free radicals and release of hyaluronidases within the tumor microenvironment. HA fragments are pro-inflammatory and activate signaling pathways that promote survival, migration, and invasion within both tumor and host cells through binding to HA receptors such as CD44 and RHAMM/HMMR. In breast cancer, elevated HA in the peri-tumor stroma and increased HA receptor expression are prognostic for poor outcome and are associated with disease recurrence. This review addresses the critical issues regarding tumor-induced inflammation and its role in breast cancer progression, focusing specifically on the changes in HA metabolism within tumor reactive stroma as a key factor in malignant progression.

The tumor reactive stroma provides a microenvironment that sustains tumor growth and promotes malignant progression. The ECM is composed of proteins and proteoglycans/glycosaminoglycans that provide structural support and facilitate tissue organization. In addition, specific components of the ECM contribute to cell survival, proliferation, migration, angiogenesis, and immune cell infiltration. A major ECM component of the stroma is hyaluronan (HA), a member of the glycosaminoglycan family of polysaccharides. HA is synthesized at the cell surface as a large linear anionic polymer (up to 10^7 Da) by multiple cell types in healing wounds and in tumors. There are three distinct isoenzymes (HA synthases, HAS 1-3) that synthesize HA (see below). Understanding the role that HA plays in contributing to breast carcinoma-induced inflammation has important implications for the design of therapeutic approaches targeting both the tumor cells and the pro-tumorigenic functions of the cancerized stroma (5). A number of studies have demonstrated that HA regulates tumor cell migration and invasion in vitro, and tumor growth and progression in vivo (5-7). Cell culture studies show that invasive breast cancer cells synthesize and accumulate larger amounts of HA than normal tissue and preferentially express more HAS2 mRNA than less aggressive tumor cells (8). Furthermore, HAS2 promotes breast cancer cell invasion in vitro (9). Overexpression of HAS2 in mammary epithelial cells of MMTV-Neu transgenic mice increases tumor HA production and enhances growth of mammary tumors (10). HAS2-overexpressing tumors exhibit enhanced angiogenesis and stromal cell recruitment. These results demonstrate that increased HA in the tumor microenvironment supports mechanisms of neoplastic progression. Although carcinoma cells synthesize HA, stromal HA levels are increased in breast cancers, predicting that stromal cells are also a rich source of this biopolymer.
Similar to the contributions of HA fragments in wound healing, HA fragmentation generates angiogenic fragments that act on endothelial cells to promote blood vessel formation (11). As described below, HA regulates the functions of inflammatory cells located within the tumor microenvironment. The combined effects of HA on both tumor and host cells, together with evidence that elevated accumulation of peri-tumor stromal HA is linked to reduced 5-year survival (12), provide strong evidence that HA participates in the generation of a pro-tumorigenic "cancerized" stroma (5). In-depth analysis of HA staining patterns within tumors shows an enrichment of HA in the stroma at the leading edge of the tumor, and detailed clinical study of these HA levels and their localization in patient samples supports a relationship between high stromal HA accumulation and poor patient survival (12). While most HA is likely synthesized by stromal cells, a subset of breast cancers also stain for HA in the tumor parenchyma, and this is correlated with lymph node positivity and poor differentiation (12). Furthermore, these tumors tend to be negative for the hormone receptors estrogen receptor (ER) and progesterone receptor (PR) (12). Hyaluronan accumulation has additionally been compared in early and later stage breast tumors, specifically in ductal carcinoma in situ (DCIS), DCIS with microinvasion and invasive carcinoma, to determine if altered HA production is linked to early as well as later stage invasion events in breast cancer. HA levels in DCIS associated with microinvasion and in later stage invasive carcinoma are significantly increased when compared to pure DCIS (13). RHAMM/HMMR, which promotes migration and invasion of breast cancer cell lines, is also elevated in breast cancer, particularly at the invasive front of tumors and in tumor cell subsets (14,15). Collectively, these results suggest that HA performs a number of functions in progressing tumors and in particular contributes to invasion in early and later stage breast cancer. More recently, HA staining and CD44 expression have been examined in HER2-positive breast tumors. High levels of stromal HA staining in this breast cancer subtype have been linked to specific clinical correlates, including lymph node-positive breast cancer and reduced overall survival (16). Elevated CD44 expression, which occurs in the tumor parenchyma and to a lesser extent in stromal cells, is associated with HER2-positive breast cancers and linked to reduced overall survival in this breast cancer subtype. A number of studies have also examined the expression levels of HA synthases in breast cancer tissues. Expression of all of the HAS isoenzymes (HAS 1-3) has been detected in the tumor parenchyma and stroma of breast tumors (17). Expression of tumor cell HAS1, but not HAS2 or HAS3, was found to correlate with reduced overall survival when breast cancer patients were not sorted into subtypes. In this study, expression of all three HAS proteins in the stroma corresponds with reduced overall survival (17). However, HAS2 expression is particularly linked to the triple negative and basal-like breast cancer subtypes, and its elevated expression is associated with reduced overall survival of these cancer patients (18). Together, these studies suggest pro-tumorigenic roles for increased levels of HA in breast cancer (19,20) and predict possible mechanisms through which HA might facilitate tumor initiation and progression.
For example, the increase in tumor cell HA may provide a self-protective coat, minimizing recognition by immune cells and helping to reduce damage by reactive oxygen and nitrogen species. Increased levels of HA may also facilitate mitosis and invasion of surrounding tissue. However, recent studies demonstrating that fragmentation of HA within damaged tissues alters the biological properties of the intact biopolymer, and that HA receptors differ in their recognition of HA polymer sizes, suggest a much more complex mode of regulation (see below). These more recent studies emphasize the importance of defining both changes in HA levels and the extent of HA fragmentation for understanding the mechanisms by which the peri-tumor stroma, in particular the inflammatory status of the tumor-associated stroma, influences breast cancer initiation and progression.

Regulation of HA Synthesis and Fragmentation

Elevated HA synthesis in adults is most often associated with a response to tissue damage or disease. These increases result from both transcriptional and post-transcriptional control of HA synthesis. HA synthesis is catalyzed by one or more of three HA synthase isoenzymes (HAS 1-3) (21,22), which are unique among glycosyltransferases since they are localized at the plasma membrane rather than in the Golgi (23). The primary structures of all three enzymes predict that they span the plasma membrane several times (21). The three isoenzymes contain cytoplasmic catalytic sites that sequentially add the activated UDP-glucuronic acid and UDP-N-acetylglucosamine to the growing HA polymer, which is then extruded through pores in the plasma membrane, likely created by the formation of HAS oligomers (21,22). Released HA polymers are captured by extracellular HA binding proteoglycans such as versican, along with other ECM protein components and cell-surface receptors (5,6,24). High-molecular weight (HMW) polymers of HA are thought to function like other ECM components, in part, by providing a multivalent template to organize ECM proteoglycans and to cluster HA receptors, thus "organizing" plasma membrane components. Clustering leads to subsequent cytoskeletal re-organization, the efficient assembly and activation of signaling pathways and ultimately changes in the cellular transcriptome (Figure 1). Hyaluronan synthesis is controlled by multiple distinct but overlapping mechanisms. HA synthesis is partially regulated by the intracellular levels of the HAS substrates UDP-GlcA and UDP-GlcNAc. These are produced by complex pathways that control their levels and/or availability within cells. For example, the compound 4-methyl-umbelliferone (4-MU) inhibits the synthesis of HA by depleting cytoplasmic UDP-GlcA (25). Although 4-MU could theoretically limit substrate availability to multiple glycosyltransferases, its inhibitory effect appears to be localized to limiting substrate availability for HAS isoenzymes associated with the inner plasma membrane. By contrast, the majority of glycosyltransferases are resistant to the inhibitory effects of 4-MU since they are located in the Golgi, which is not permeable to 4-MU (23). The genes encoding HAS isoenzymes are located on distinct chromosomes and their expression is regulated by distinct transcriptional and post-transcriptional mechanisms (23,26,27). Numerous wound and tumor-associated cytokines and growth factors promote HA synthesis, including TGFβ, PDGF, FGF2, EGF, and TNFα [Ref. (23) and references therein].
It is important to emphasize that regulation of HAS isozyme expression and activity can be cell and tissue specific. Thus, careful analysis is needed when considering HA-related mechanisms in different pathologies. For example, HAS2 transcription can be regulated via EGFR/STAT3 pathways, PKA/CREB pathways (23), or TNFα- or IL-1β-induced activation of NF-κB. The latter pathway is particularly relevant to the impact of up-regulated HA synthesis in the context of inflammation. While transcriptional control is a major mechanism for regulating HA synthesis, post-translational modification of HAS isoenzymes also affects their activity and/or cell-surface localization. For example, ErbB2/ERK1, 2 signaling activates and phosphorylates the three HAS isoenzymes, implicating this as a mechanism for up-regulated HA synthesis in HER2/neu-positive tumor subtypes (28). HAS proteins can also be covalently modified by O-GlcNAcylation, which modifies trafficking and/or subcellular localization of the enzymes to the plasma membrane (23). Finally, early evidence from analysis of the naked mole rat genome shows that activating mutations of HAS2 can be selected for that increase not only the production but also the predominant size of HA synthesized by this HAS isoform (29). Large, native HA polymers clearly participate in the architectural maintenance and hydration of homeostatic adult tissues. However, recent evidence demonstrates that HA fragmentation is a critical contributing factor in the physiology of wounds and cancerized stroma (27,30,31). While larger HA polymers appear to be anti-inflammatory and anti-tumorigenic, HA fragments and oligomers are pro-inflammatory and pro-tumorigenic. This has led to the concept that HA fragmentation is one of the initial "danger signals" sensed by cells to initiate efforts that limit tissue damage through promoting tissue inflammation and repair (5). This cycle of increased synthesis and fragmentation appears to be hijacked by tumor cells and their stromal partners to sustain inflammation, which contributes to malignant progression. The mechanisms by which HA fragmentation contributes to such tissue pathology are not well understood. One proposed function is that low-molecular weight (LMW) fragments alter or disrupt the cellular "organizing" properties of HMW HA by inhibiting the HA-induced clustering of cell-surface receptors such as CD44 and affecting signaling (6,24,32). Direct pull-down assays of cellular extracts using beads coupled to HA oligomers have demonstrated that tumor cell and wound RHAMM can bind LMW HA fragments (33). This scenario predicts that cell-surface RHAMM, displayed in response to cellular stress, is one HA receptor that "senses" HA fragmentation and thus serves to initiate cellular responses to tissue damage, possibly by affecting CD44 clustering (5). These previous studies point to the importance of determining both the level of HA and the ratio of HMW HA to LMW fragments, as noted previously (34,35) and in an accompanying manuscript in this issue (36). Hyaluronan fragmentation within tissues results from the increased expression of one or more hyaluronidases (Hyals) and from oxidative/nitrosative damage. Hyals function as endo- or exoglycosidases to cleave HA polymers (27,30). Hyal1 and Hyal2 are most often associated with damaged or tumor-associated stroma undergoing remodeling (27). In vitro analysis of hyaluronidases indicates that their activity results in unique fragmentation patterns.
For example, although both Hyal1 and Hyal2 can catalyze degradation by cleaving β-(1,4) linkages, they differ in that Hyal1 degrades HA into small fragments (hexasaccharides and tetrasaccharides) whereas Hyal2 appears to produce predominantly larger (≈20 kDa) fragments (37). Both Hyal1 and Hyal2 have pH optima in the acidic range and are associated with processing HA that has been internalized into endocytic vesicles. However, low pH within localized stromal microenvironments facilitates extracellular Hyal-mediated HA degradation (27). Hyaluronan is also fragmented by reactive oxygen and nitrogen species (ROS/RNS) such as hydroxyl radicals (•OH), peroxynitrite/peroxynitrous acid (ONOO−/ONOOH), and hypochlorite anion (OCl−). Iron, derived from tissue-associated heme or ferritin, is one important contributor in catalyzing the formation of both hydroxyl radicals and superoxide anions (O2•−). This mechanism is contributed to by infiltrating polymorphonuclear leukocytes, monocytes, and activated macrophages (38). HA is extensively cleaved by any of these reactive species, and they are therefore important mechanisms for HA fragmentation within inflamed tissues. Although HA fragmentation by these mechanisms has largely been defined using in vitro analyses (39), it is clear that this degree of HA fragmentation occurs in skin wound tissue and in human milk (34,35).

Cellular Receptors for Hyaluronan

Although a number of HA receptors have been identified, the two that have been best characterized and are to date most relevant to inflammation and breast cancer are CD44 and RHAMM (5,27). Other receptors implicated in cellular responses to HA, TLR2 and TLR4, are discussed in more detail below. Interactions between HA and CD44 lead to ligand-induced clustering and activation of intracellular signaling pathways such as ERK1, 2, Akt, and FAK. The binding of HA by CD44 occurs through interactions with an amino-terminal "link" domain, similar to those found in several other types of HA binding proteins, in particular extracellular HA binding proteoglycans such as versican, aggrecan, and link protein. RHAMM binds HA through structurally distinct domains (BX7B motifs, where B is a basic amino acid residue and X is a non-acidic residue) that differ from link domains (5,27). While CD44 expression is ubiquitous, RHAMM is normally not detected in most homeostatic tissues, but its expression increases in response to injury, and it thus seems to be primarily important for restoring homeostasis following injury (5). RHAMM-null mice are viable but exhibit defects in the tissue response to injury, including vascular damage and excisional wound healing (40). RHAMM may also be required for robust female fertility in mice (41). Interaction of HA with CD44 is often associated with increased cell motility and invasion, although numerous reports have demonstrated that CD44 can also modify growth and therapeutic resistance of tumor cells (6,24). As with CD44, RHAMM is displayed on cell surfaces. However, unlike CD44, RHAMM surface expression is tightly regulated, occurring under conditions of cellular stress. Thus, RHAMM is largely a cytoplasmic protein whose surface localization is regulated by mechanisms similar to other non-conventionally exported cytoplasmic and nuclear proteins, and it regulates signaling cascade activation through co-receptor functions with integral receptors such as CD44 (5). Cell-surface and intracellular RHAMM are also involved in stimulating cell motility and invasion.
Intracellular RHAMM co-distributes with interphase microtubules, and a splice variant of human RHAMM has been detected in nuclei (14,42). RHAMM expression increases in G2/M of the cell cycle, associating it with mitosis, and modifying cell-surface RHAMM blocks cells in G2/M (43). This is consistent with more recent reports indicating that RHAMM is a critical contributor to mitotic spindle formation and to the regulation of proper chromosomal segregation and genomic stability (44). Both CD44 and cell-surface RHAMM also function as co-receptors for activating transmembrane tyrosine kinases (including EGFR, c-MET, and PDGFR) and ERK1, 2 (Figure 1). Both CD44 and RHAMM regulate the intensity and/or duration of signal transduction pathways such as ERK1, 2, which are initiated by growth factors (40,45). Intracellular RHAMM functions as a scaffold protein that directly binds to ERK1 and forms complexes with ERK1, 2, and MEK1. This has been proposed to be one mechanism by which RHAMM helps to increase the intensity and/or duration of oncogenic ERK1, 2 signaling pathways (46,47). One consequence of HA-, CD44-, and RHAMM-mediated increases in the duration of ERK1, 2 activation is the alteration of the transcriptome of cells within the cancerized stroma (Figure 1). These changes in gene expression have an impact on the activation of transduction pathways related to cell migration and on the expression and export of inflammatory mediators. In turn, the persistent activation of these pathways in cancerized stroma enhances pro-tumorigenic inflammation and breast tumor progression. Thus, this represents one major mechanism by which biological "information" encoded within HA can lead to pro-tumorigenic or "cancerized" alterations in stroma. Positive paracrine and autocrine feedback loops between tumor and stromal cells can be initiated by inflammatory mediators such as IL-1α and TGFβ that increase HA synthesis and the expression of both RHAMM and CD44, which collectively sustain cell migration and invasion within cancerized stroma. Thus, the aberrant upregulation of CD44 or RHAMM in cancerized stroma is a nefarious consequence of sustained ERK1, 2 activation, further aggravating persistent oncogenic signaling (46,47). Since CD44 and RHAMM functionally cooperate under certain conditions (40), targeting RHAMM may be an effective way to specifically limit the function of CD44 in breast tumors. LYVE-1, another cell-surface HA receptor associated with cancerized stroma (48-50), was first identified as a surface marker expressed by lymphatic endothelium and has been proposed to serve in HA transport from interstitial tissue to lymph (51). However, studies addressing the obligatory importance of LYVE-1 in promoting normal lymphangiogenesis have yielded conflicting results (52,53). In tumors, the density of stromal LYVE-1-positive lymphatic vessels is a negative prognostic indicator in breast cancer patients with invasive ductal carcinomas (54). Furthermore, in vitro studies suggest that HA and LYVE-1 promote adhesion of breast cancer cells to fibroblasts, predicting that these interactions contribute to adhesion or dissemination of tumor cells (55). Nevertheless, a mechanistic role for LYVE-1 in the poor prognosis of breast cancer has yet to be demonstrated. One possible mechanism is suggested by the expression of LYVE-1 in cancer-associated macrophages (56), but a causative role for this HA receptor in inflammation has yet to be established.
Effects of Hyaluronan on Innate Immune Cells in Cancerized Stroma

The generation of a pro-tumorigenic inflammatory environment during breast cancer initiation and progression requires recruitment of inflammatory cells, including neutrophils and macrophages. Once recruited to the tumor site, these cells become activated and secrete factors that are normally involved in proliferation, angiogenesis, and stromal remodeling during tissue repair (1). Macrophages residing within the tumor parenchyma and the tumor reactive stroma are prognostic of poor outcome in breast cancer patients (57). Macrophages in a wound-healing context are characterized as pro-inflammatory (M1) or anti-inflammatory (M2) (58). Pro-inflammatory macrophages are involved in the initial stages of wound healing and are characterized by the expression of NF-κB-regulated pro-inflammatory cytokines, including IL-1β and IL-12, as well as mediators contributing to pathogen destruction, including reactive oxygen species. Anti-inflammatory macrophages are important for the resolution phase of the wound-healing process and are characterized by the expression of anti-inflammatory cytokines, including TGFβ and IL-10, as well as factors that promote tissue remodeling, including the MMPs. Profiling and functional studies demonstrate that macrophages within the tumor microenvironment express a range of both pro- and anti-inflammatory factors depending upon tumor type and stage. For example, macrophages associated with early stages of tumorigenesis have high levels of NF-κB activation and subsequently express pro-inflammatory factors such as IL-1β and IL-6 (59). As tumors become increasingly aggressive, tumor-associated macrophages express high levels of immunosuppressive cytokines such as IL-10 and TGFβ (58). Tumor-associated macrophages also produce factors that are established promoters of breast cancer growth and progression, including EGF, VEGF, and MMP-9 (60). Thus, it is clear that tumor-associated macrophages reside in a functional continuum that is regulated by specific factors within the tumor microenvironment. However, the specific microenvironmental factors to which macrophages respond, and which drive these responses, are not well understood. A primary function of monocytes and macrophages in wound-healing environments is to produce reactive oxygen intermediates, which contribute to pathogen killing during wound healing (58). High levels of reactive oxygen species, found in both wound-healing and tumor environments, are known to fragment HA; the resulting fragments induce expression of pro-inflammatory genes (38,61,62). Recent studies of human breast cancer samples demonstrate that high numbers of CD163-positive macrophages correlate with increased levels of HA synthases and HA accumulation within tumors (63). Based on the links between HA and macrophages during wound healing, it is likely that HA in the tumor microenvironment regulates macrophage function. Indeed, HA modulates the expression levels of pro-tumorigenic cytokines and chemokines in macrophages. Specifically, HA induces expression of the pro-inflammatory cytokine IL-1β in macrophages (64). Numerous studies have implicated IL-1β in breast cancer initiation and progression. Expression of IL-1β is increased in tumor and stromal cells in 90% of ER-negative invasive breast carcinomas (65,66). In addition, high levels of serum IL-1β correlate with recurrence in breast cancer patients (67).
Finally, IL-1β may also be involved in premalignant breast cancer, based on studies showing increased IL-1β expression in pre-invasive DCIS (65,68). Mechanistically, increased IL-1β within the tumor microenvironment leads to enhanced expression of cyclooxygenase-2 (COX-2), which contributes to the formation of early stage lesions and is a well-established tumor promoter (69). Increased IL-1β also leads to mammary tumor growth and metastasis, in part through induction of myeloid-derived suppressor cells (MDSCs), which promote an immunosuppressive environment (70). Taken together, these studies suggest that modulation of pro-inflammatory cytokines by HA in the tumor microenvironment represents a potential mechanism through which HA might contribute to tumor growth and progression. The precise mechanisms by which elevated levels of stromal HA modulate pro-inflammatory responses are not well understood. Similar to the wound-healing environment, both increased levels of hyaluronidases (71,72) and reactive oxygen or nitrogen species, including nitric oxide, are present in breast tumors (39,73), predicting elevated HA fragmentation in the tumor microenvironment. In vitro studies demonstrate that increased HA fragmentation is correlated with elevated hyaluronidase expression by breast cancer cells (74). Studies focusing specifically on hyaluronidase 1 (Hyal1) demonstrate that enhanced expression of Hyal1 in breast cancer cells induces tumor cell proliferation, migration, invasion, and angiogenesis (75). Furthermore, knockdown of Hyal1 in breast cancer cells reduces cell growth, adhesion, and invasion in culture, as well as tumor growth in vivo (72). Breast cancer cells lacking ER expression typically produce more hyaluronidases than ER-positive cells, and this correlates with invasion in vitro (15). LMW HA fragments, but not total HA levels, detected in the serum of breast cancer patients also correlate with the presence of lymph node metastasis (74). In addition, Hyal1 expression in non-invasive ductal hyperplasias correlates with the subsequent development of invasive breast carcinoma (76). These studies indirectly establish a link between breast cancer and HA fragmentation (Figure 2), although studies analyzing the accumulation of HA fragments in experimental or clinical breast cancer tissues are still lacking. Because HA fragments are pro-inflammatory, it is reasonable to assume that they contribute to the production of inflammatory cytokines, chemokines, and proteases by tumor-associated macrophages (27). In contrast to LMW fragments, HMW HA suppresses expression of many of the above pro-inflammatory cytokines in macrophages (77). This opposing function of native HA suggests that both the level and the distribution ratio of different HA fragment sizes may dictate inflammatory cell phenotypes within cancerized stroma. Development of new technologies to isolate and characterize HA polymers and fragments from tissues will be key for developing a mechanistic understanding of the biological complexities associated with HA metabolism (35). In addition to regulating pro-inflammatory cytokine production, HA can modulate the expression of anti-inflammatory cytokines. Analysis of macrophage responses to tumor cell conditioned media demonstrates that tumor cell-derived HA stimulates production of IL-10 by macrophages (78). IL-10, an anti-inflammatory cytokine, is a potent mediator of immunosuppression in the tumor microenvironment through inhibition of T cell activation (79).
Recent studies have demonstrated that increased IL-10 in the breast cancer microenvironment leads to therapeutic resistance through multiple potential mechanisms. For example, increased levels of IL-10 lead to the suppression of CD8+ T cell responses to chemotherapy (Figure 2) (4). Furthermore, IL-10 has been found to act directly on breast cancer cells to promote survival in response to chemotherapy via a STAT3/bcl-2 mechanism (80). Thus, it is possible that HA contributes to immunosuppression and therapeutic resistance through modulation of IL-10 in the tumor microenvironment. Hyaluronan also controls expression of chemokines, including IL-8/CXCL8 (81). Chemokines are pro-inflammatory cytokines that play an essential role in leukocyte recruitment and cell trafficking. These secreted proteins interact with cell-surface G-protein-coupled receptors to induce cytoskeletal rearrangement, adhesion to endothelial cells, and directional migration of cells to specific tissue sites (82). For example, IL-8 binds its receptors, CXCR1 and CXCR2, to stimulate neutrophil chemotaxis (67). IL-8 is overexpressed in breast cancers and contributes to tumor initiation and growth by promoting migration and invasion of breast cancer cells. More recently, studies have implicated IL-8 in the regulation of breast cancer stem cell invasion (83). Macrophage chemokines that are regulated by HA, including CXCL2 and CXCL12, have similarly been implicated in breast cancer progression (27) and have been shown to promote migration and invasion of these cancer cells (84,85). The CXCL12/CXCR4 axis is particularly important for homing of breast cancer cells to metastatic sites, including bone and lung (86). In another positive feedback loop, HA production is also modulated by pro-inflammatory signaling pathways. For example, both IL-1β and TNFα induce HA production in endothelial cells in an NF-κB-dependent manner (87). We have also demonstrated that HA synthesis is enhanced in tumor cells through an IL-6/STAT3-dependent mechanism (88). Furthermore, inflammatory macrophages express hyaluronidases (89) and ROS (58), which potentially fragment HA into pro-inflammatory polymers. These results predict that HA and pro-inflammatory cytokines act reciprocally to sustain inflammation.

Contributions of Hyaluronan Receptors and Binding Proteins to Inflammation

A major challenge in the mechanistic understanding of HA in breast cancer-associated inflammation is to link HA metabolism with the specific contributions of the HA receptors CD44, RHAMM, and LYVE-1, which are all expressed by macrophages (78, 90-92). CD44 has been examined for its ability to regulate macrophage migration and phagocytosis (93). In the context of modulating macrophage responses to tumor cells, functional studies demonstrate a link between CD44:HA binding and the generation of immunosuppressive macrophages. Specifically, blocking the ability of HA to bind to monocytes, either by blocking HA:CD44 binding or by using an HA-specific blocking peptide, inhibits the formation of immunosuppressive macrophages promoted by tumor cell conditioned media (78). While RHAMM has not been examined specifically in the context of tumor-associated macrophages, recent studies have started to elucidate its potential functions during the response to injury. RHAMM expression is induced in macrophages following chemically induced lung injury (94) and in excisional skin wounds (34), and blocking RHAMM function in these injuries reduces the level of tissue macrophages (31,34).
Additional studies have demonstrated that RHAMM regulates macrophage chemotaxis in response to TGFβ in the context of surfactant protein A-mediated inflammation in the lung (92). While not specifically addressed, these studies predict a potential role for RHAMM in promoting HA-mediated macrophage motility and chemotaxis in tumor-associated inflammation. While the contributions of HA interactions with LYVE-1 to macrophage functions are even less well understood, recent interest in LYVE-1 as a marker of tumor-associated macrophages suggests that further studies of these interactions are warranted (90). Given the numerous effects of HA on macrophage recruitment and function, a focus on the roles of HA receptors in mediating tumor-associated macrophage functions will likely dramatically increase understanding of the mechanisms driving macrophages to promote breast tumor progression. Studies have also suggested a link between HA and toll-like receptor (TLR) signaling in macrophages (27,95). Specifically, LMW HA induces expression of pro-inflammatory cytokines and chemokines, mediated in part by TLR2 and/or TLR4 (27,95). Additional published studies using blocking antibodies have suggested that the TLR-mediated effects may require interactions with CD44 (96). TLR signaling has been implicated in breast cancer progression, as TLR4 is expressed at high levels on invasive breast cancer cells and knock-down of TLR4 leads to reduced cell proliferation and survival (97). In vivo studies have suggested that TLR4 agonists can inhibit mammary tumor metastasis (98,99). By contrast, recent studies using a potential TLR4 agonist demonstrated enhanced survival of mice in a model of tumor resection, suggesting that the contributions of TLR4 to breast cancer progression are complex (100). Recent studies have suggested that breast cancer cell-derived exosomes modulate inflammatory cytokines in macrophages, potentially involving both TLRs and CD44 (101). While direct interactions between HA and TLRs in breast cancer cells have not been established, additional studies examining HA and TLR signaling in both tumor cells and the microenvironment are warranted. Tumor necrosis factor-stimulated gene-6 (TSG-6), an extracellular HA binding protein, is synthesized and secreted at sites of inflammation (102). TSG-6 binds HA with high affinity via a link module and enhances the binding of HA to CD44 (103). TSG-6 also contributes to HA cross-linking, which has been implicated in the adhesion and rolling of leukocytes (104). In the context of breast cancer, TSG-6 is up-regulated in breast cancer cells following ionizing radiation, suggesting a potential role for TSG-6 when tissue is damaged (105). It will be interesting to determine the contributions of TSG-6 to HA remodeling and function within the breast cancer microenvironment.

Effects of Hyaluronan on Adaptive Immune Cells in Cancerized Stroma

In addition to innate immune cells, adaptive immune cells are also prevalent within the breast cancer microenvironment. Immune cell profiling studies have demonstrated that breast cancers with high levels of macrophages and Th2 T cells are associated with worse outcome than those with high levels of Th1 cells (106). More recently, studies have demonstrated that the presence of infiltrating T cells and B cells predicts better response to neoadjuvant chemotherapy in breast cancer patients (107).
Understanding the regulation and function of adaptive immune cells during both tumor progression and therapy is a rapidly growing focus of research in the breast cancer field. While the potential role of HA on tumor infiltrating lymphocytes has not, to our knowledge, been reported, HA is known to contribute to the regulation of T cell trafficking (Figure 2). Studies have demonstrated that, upon activation, T cells adhere to and migrate on native HA (108). Other studies show that HA:CD44 interactions on T cells can contribute to activation-induced T cell death (109). This response occurs following exposure to HMW, rather than LMW, HA, suggesting an additional anti-inflammatory role for HMW HA. Finally, HMW HA has also been found to promote the immunosuppressive functions of regulatory T cells (Tregs) (110). Exposure of Tregs to HMW HA leads to prolonged expression of Foxp3, a transcription factor that is required for Treg function. Collectively, these studies predict an important role for HA in the regulation of T cell recruitment and/or function.

Targeting HA Metabolism as a Potential Therapeutic Strategy in Breast Cancer

Given these links of HA and its receptors with breast cancer progression, targeting HA metabolism represents a potential therapeutic approach for the treatment of breast and other cancers. There are multiple points in the HA metabolic pathway that could potentially be targeted, including HA synthesis, accumulation, degradation, and/or HA:receptor interaction. Use of 4-MU, an inhibitor of HA synthesis, is a common approach for blocking HA synthesis in experimental models of breast cancer and is described in detail in another article in this Research Topic (111). Numerous studies have demonstrated that inhibition of HA synthesis using 4-MU reduces breast cancer tumor cell proliferation and migration (88,112,113). Furthermore, treatment of tumor-bearing mice with 4-MU reduces tumor growth (114,115). Treatment of mice bearing bone metastatic lesions with 4-MU reduces HA accumulation and the growth of osteolytic lesions (116,117). 4-MU is well tolerated in both animal models, suggesting that blocking HAS catalytic function represents a viable therapeutic strategy. While the efficacy of targeting HA synthesis alone remains to be determined in human cancers, we have recently demonstrated that reducing HA synthesis combined with targeted therapy enhances therapeutic response (88). These studies highlight the importance of combinatorial targeting of both tumor cell-specific oncogenic signaling pathways and pro-tumorigenic alterations in the tumor microenvironment in new therapeutic approaches. Elimination of HA in the tumor microenvironment using hyaluronidases has also been explored as a potential therapeutic strategy for some cancers, including pancreatic cancer, and is currently being tested in clinical trials (118-120). Treatment of breast cancer cells with bacteriophage hyaluronidase inhibits growth, migration, and invasion in culture (121). Recombinant hyaluronidase, which eliminates stromal HA, allows increased drug access to tumor cells (118-120). Studies suggest that recombinant human hyaluronidase (rHuPH20) improves subcutaneous delivery of antibody-based targeted therapies such as trastuzumab, currently used for the treatment of HER2-positive breast cancer (122). HA is a normal component of the breast stroma that provides structural support and contributes to epithelial morphogenesis (123).
Whether eradication of HA and/or the generation of fragments due to hyaluronidase activity negatively affects breast tissue architecture remains to be determined. Additional approaches to inhibiting HA function in tumors include interfering with HA:receptor interactions. CD44 expression correlates with specific subtypes of breast cancer, including triple negative and endocrine-resistant breast cancers (124,125). Furthermore, HA-CD44 interactions promote invasion and therapeutic resistance (7,124,125). Thus, developing targeted therapies that specifically inhibit this interaction could lead to viable therapies for treating breast cancer subtypes that currently have limited therapeutic options. Indeed, the use of a humanized anti-CD44 monoclonal antibody (Bivatuzumab) in clinical trials of patients with squamous cell carcinomas showed early promise. However, it had dose-related toxicity in some patients and caused the death of one patient, leading to premature termination of the trial (126) and raising concerns about this therapeutic approach. Furthermore, since there are multiple structural variants of CD44, it may be difficult to develop a complete array of humanized antibodies that can target this structurally complex group of proteins. An alternative approach, which may be less toxic than Bivatuzumab, would be to develop and utilize HA binding peptides that can specifically block HA-stimulated signaling and inflammation. Early efforts along this line using a 12mer phage display resulted in a peptide termed PEP-1, which was identified by sequential binding of 12mer-displaying phage to immobilized HA (127). PEP-1 has been shown to reduce gastric stem cell proliferation (128) and to reduce H. pylori-induced gastric epithelial proliferation in vivo (128). Finally, PEP-1, in combination with selective activation of the adenosine A2 receptor, inhibits arthritis-associated inflammation (129,130). While PEP-1 was effective in these studies, it was not demonstrated to inhibit interactions with a specific HA receptor. More recently, we have developed a unique HA binding "RHAMM mimetic" peptide using a 15mer (P-15) based phage display approach (34). This 15mer approach differs from PEP-1 in several respects: unlike PEP-1, P-15 contains a BX7B HA binding motif found in RHAMM; it binds HA, in particular HA fragments, with high affinity; and it inhibits HA binding to RHAMM but does not block HA binding to CD44. It inhibits HA-stimulated migration of RHAMM+/+ fibroblasts but has no effect on the migration of RHAMM-null fibroblasts. P-15 reduces inflammation, angiogenesis, and fibroplasia of RHAMM+/+ but not RHAMM−/− excisional wounds. Peptides or mimetics similar to P-15 may offer an effective alternative therapy, since specific blockade of RHAMM can also limit CD44 signaling.

Summary

In summary, there is clear evidence that alterations in HA are associated with the malignant progression of breast cancer. Based on the known pro-inflammatory properties of HA fragments during wound healing and the increased levels of HA associated with the peri-tumor stroma in breast cancers, it is likely that HA contributes to the generation of a pro-tumorigenic inflammatory environment. This is supported by the recently identified links between HA levels in the tumor stroma and infiltration of macrophages. Analyzing the presence and function of HA fragments within the tumor microenvironment will provide insights into changes in HA metabolism during tumor growth and progression.
As described in an accompanying article in this issue (36), advances have been made in the isolation of HA from tissues and the analysis of HA fragmentation, and addressing these questions is now feasible. Identifying the specific HA receptors involved in mediating recruitment and activation of inflammatory cells, such as macrophages, into the tumor environment, and determining how HA regulates adaptive immune cells, will lead to a better understanding of how alterations in HA contribute to host immune responses to breast cancer. Agents that limit aberrant HA synthesis or fragmentation, or that block specific HA:receptor interactions, are very likely to yield advances in the development of new therapies to limit relapse and recurrence in patients receiving tumor cell-targeted therapies.

Author Contributions

KS, MC, PT, ET, and JM contributed to the drafting and revising of this manuscript. All authors approved this manuscript.
Multicharacterization approach for studying InAl(Ga)N/Al(Ga)N/GaN heterostructures for high electron mobility transistors

G. Naresh-Kumar,1 A. Vilalta-Clemente,2 S. Pandey,3 D. Skuridina,4 H. Behmenburg,5 P. Gamarra,6 G. Patriarche,7 I. Vickridge,8 M. A. di Forte-Poisson,6 P. Vogt,4 M. Kneissl,4 M. Morales,2 P. Ruterana,2 A. Cavallini,3 D. Cavalcoli,3 C. Giesen,5 M. Heuken,5 and C. Trager-Cowan1

1Dept of Physics, SUPA, University of Strathclyde, Glasgow G4 0NG, UK; 2CIMAP UMR 6252 CNRS-ENSICAEN-CEA-UCBN, 14050 Caen Cedex, France; 3Dipartimento di Fisica Astronomia, Università di Bologna, 40127 Bologna, Italy; 4Institute of Solid State Physics, Technical University Berlin, 10623 Berlin, Germany; 5AIXTRON SE, Kaiserstr. 98, 52134 Herzogenrath, Germany; 6Thales Research and Technology, III-V Lab, 91460 Marcoussis, France; 7LPN, Route de Nozay, 91460 Marcoussis, France; 8Institut des NanoSciences, Université Pierre et Marie Curie, 75015 Paris, France

I. INTRODUCTION

InAlN possesses the widest bandgap range in the nitride system and is thus an ideal material for applications in light emitting diodes, laser diodes and solar cells operating from the ultraviolet to the near infrared.1,2 In addition, the InAlN in InxAl1−xN/GaN heterostructures can be either under tensile or compressive strain depending on the In composition; this cannot be implemented in AlxGa1−xN/GaN heterostructures.1,3 The possibility of polarization matching with GaN makes InAlN attractive for high frequency transistor applications,4 and InAlN can also be lattice matched with GaN when the In composition is ≈ 18%, which makes it a strong candidate for high electron mobility transistors (HEMTs).5 Unlike InGaN/GaN and AlGaN/GaN structures, the production of high quality InAlN/GaN HEMTs presents many growth challenges. In the nitride ternary alloys, the optimum growth temperatures of the end compounds are quite different, especially using metal organic vapor phase epitaxy (MOVPE), i.e. AlN (> 1200 °C), GaN (≈ 1000 °C), and InN (< 550 °C). This is also the case for their covalent bond lengths;6 therefore, the growth conditions need to be very well controlled. Indeed, phenomena such as the predicted miscibility gap in these alloys7 may lead to phase separation,8,9 ordering,10-12 composition fluctuations13,14 and even growth disruption.15 Poor growth conditions can give rise to layers containing high densities of crystallographic defects16-19 and even cracks.20 Recently, unintentional Ga incorporation in InAlN layers has been reported, which adds to the list of growth challenges for InAlN thin films.21-24 Two possible explanations have been given for the unintentional Ga incorporation in the InAlN layers: diffusion of Ga from the GaN buffer layers22 or residual Ga in the growth chamber.23,24 The latter seems more plausible from the recent work of Smith et al.,24 and the possibility of Ga incorporation in HEMT structures was also reported by Leach et al. in 2010.25 Unintentional incorporation of Ga in InAlN based HEMTs can be detrimental to the control of the alloy composition and the optimization of growth conditions, and it therefore becomes difficult to characterize these structures and understand their physical properties.
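As a quick sanity check on the ≈ 18% lattice-matching composition quoted above, the sketch below linearly interpolates the a-lattice constant of InxAl1−xN between the binary endpoints (Vegard's law) and solves for the composition that matches GaN; the endpoint lattice constants are commonly quoted literature values assumed here, not data from this paper.

```python
# Hypothetical check of the InAlN/GaN lattice-matching composition (Vegard's law).
# a-lattice constants in angstroms; assumed textbook values, for illustration only.
a_GaN = 3.189
a_AlN = 3.112
a_InN = 3.545

# Vegard: a(InxAl1-xN) = x*a_InN + (1 - x)*a_AlN; set equal to a_GaN, solve for x.
x = (a_GaN - a_AlN) / (a_InN - a_AlN)
print(f"lattice-matched In fraction: {x:.3f}")   # ~0.178, i.e. ~18% In
```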
In order to understand the real properties of structures with unintentional Ga incorporation and of samples with varying alloy composition, it becomes necessary to use a multitude of characterization techniques. On the optimistic side, the presence of Ga in the barrier (top epilayer) can be an advantage. As an alternative to InAlN or AlGaN as a barrier, quaternary InAlGaN layers offer advantages such as independent tailoring of the band gap and lattice constant. By using InAlGaN, one can overcome the high strain state observed in AlGaN barriers and the alloy disordering/scattering issues in InAlN barriers. HEMTs with mobility greater than 1700 cm^2 V^-1 s^-1 and a 2-DEG density of 1.8 × 10^13 cm^-2 have been reported for InAlGaN barriers.26 In the present work, we report on Ga incorporation both in the barrier (InAlN) and in the interlayer (AlN), for a range of layer thicknesses, in samples grown in both horizontal and close coupled showerhead MOVPE reactors. Possible routes to minimize unintentional Ga incorporation and the role of unintentional Ga incorporation in the HEMT properties are also discussed.

II. EXPERIMENTAL

All the HEMT structures described in this work were grown by MOVPE. Two sets of samples (A and B) were grown using an Aixtron 3 × 2 inch close coupled showerhead reactor, while samples C and D were grown using an Aixtron 200 RF horizontal reactor. Figure 1 shows the schematic of the HEMT structures. Please note that, for the sake of clarity, most of our discussion will focus on sample-A, whose structural, compositional and electrical properties we will discuss. Samples B-D were used to demonstrate the unintentional Ga incorporation both in the barrier and in the interlayer for the two different reactor designs. Samples A and B were grown on 2-inch c-plane sapphire substrates using the standard precursors trimethylgallium (TMGa), trimethylindium (TMIn) and ammonia (NH3), with H2 as the carrier gas. For sample-A, growth was initiated by depositing an AlN nucleation layer with a thickness of 6 nm at 780 °C, followed by 94 nm of AlN and 3 µm of GaN; these buffer layers were grown at 1250 °C and 1070 °C, respectively. Sample-B was grown under the same conditions as sample-A, but without the AlN buffer layer. The growth surface temperature was monitored by an in situ reflectivity measurement tool from LayTec, which simultaneously measures the reflectivity at different wavelengths (276 nm to 775 nm). A thin AlN layer (interlayer of 1 nm for sample-A and 7 nm for sample-B) followed by an InAlN layer (barrier of 33 nm for sample-A and 15 nm for sample-B) were grown at 790 °C; note that both the barrier and the interlayer were grown at the same temperature. The reactor pressure was maintained at 70 mbar with a V-III ratio of 5000. Samples C and D were grown on 2-inch c-plane sapphire substrates using TMGa, TMIn and NH3 as precursors with both H2 and N2 as carrier gases. These samples do not have an AlN nucleation layer as in sample-A; however, there is a similar GaN buffer layer of 3 µm grown at 1150 °C. The AlN interlayers (4 nm for sample-C and 3 nm for sample-D) were grown at 1200 °C at a reactor pressure of 50 mbar with H2 as the carrier gas, followed by the deposition of an InAlN barrier (9 nm for sample-C and 5 nm for sample-D) grown at 865 °C with N2 as the carrier gas at a reactor pressure of 70 mbar. The V-III ratio was kept at 2200.
The target composition of the barrier layers was In0.18Al0.82N, i.e., the composition that provides a lattice match to GaN. Structural characterization of sample-A was performed using various microscopy techniques. A Digital Instruments Nanoscope III atomic force microscope (AFM), operating in tapping mode with a Si cantilever, was used to image the topography and to determine the surface roughness. An FEI Sirion 200 field-emission gun scanning electron microscope (SEM) operating in secondary electron (SE) imaging mode was used to image the surface morphology, and electron channelling contrast imaging (ECCI), performed in the forescatter geometry, was used to image grain boundaries and structural defects. Both SE and ECC images were acquired with an electron beam spot of ≈ 4 nm, a beam current of ≈ 2.5 nA, a beam divergence of ≈ 4 mrad and an acceleration voltage of 30 kV. Transmission electron microscopy was used for detailed analysis of layer thicknesses, threading dislocation (TD) types, and other structural defects.27,28 A JEOL 2010 transmission electron microscope (TEM) operated at 200 kV was used to carry out plan view as well as cross section analyses. An aberration corrected JEOL 2200 scanning transmission electron microscope (STEM), operated at 200 kV with a probe current of 150 pA and a probe size of 0.12 nm at full width at half maximum (FWHM), was used for high angle annular dark field (HAADF) imaging at the sub-nanometer scale to determine the structure and composition of the interfaces between the barrier, the interlayer and the GaN buffer layer. The convergence half-angle of the electron probe was 30 mrad, and the detection inner and outer half-angles for the HAADF-STEM images were 100 and 170 mrad, respectively. The plan view and cross section samples for TEM and STEM-HAADF were prepared by tripod polishing down to around 10 µm, with electron transparency achieved using a Gatan ion polishing system. Ar+ beam milling was performed at 5 keV with the sample tilted by ≈ 4°. During the ion milling process, the sample holder was always kept at −150 °C in order to minimize ion beam damage, and a final milling step was carried out at 0.7 keV to remove the amorphous layers. The alloy composition of sample-A was estimated from high resolution X-ray diffraction (HRXRD), Rutherford backscattering spectrometry in the channelling geometry (RBS/C) and X-ray photoelectron spectroscopy (XPS) measurements. Energy dispersive X-ray analysis (EDX) in the STEM provided compositional information on the nanoscale. Quantitative measurements of the composition were obtained with EDX from the intensity ratios of the K line of Al (1.486 keV), the K line of Ga (9.770 keV) and the L line of In (3.290 keV); the K line of elemental N (0.392 keV) was also taken into account. Each EDX spectrum was acquired for 60 seconds using JEOL 2300D detectors. The k-factors used by the computer software were calibrated using reference samples of AlN and AlGaN/GaN epitaxially grown on a Si substrate; the composition of the (Al,Ga)N alloys used for the calibration was precisely determined by XRD. The k-factor for In was calibrated using InP and the In0.48Al0.52As and In0.53Ga0.47As ternary alloys. All the calibration samples were prepared in a focused ion beam SEM (section thicknesses were estimated to be between 60 and 80 nm).
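For context on how the k-factor calibration just described converts EDX intensity ratios into compositions, here is a minimal sketch of the standard Cliff-Lorimer relation for a thin foil, C_A/C_B = k_AB (I_A/I_B); the intensities and k-factors below are placeholders, not data from this work.

```python
# Hypothetical Cliff-Lorimer quantification for a thin-foil EDX spectrum.
def cliff_lorimer(intensities: dict, k_factors: dict) -> dict:
    """Return atomic fractions from line intensities and per-element k-factors
    (each referenced to a common standard), normalized to sum to 1."""
    raw = {el: k_factors[el] * i for el, i in intensities.items()}
    total = sum(raw.values())
    return {el: v / total for el, v in raw.items()}

# Placeholder counts and k-factors for the metal sublattice (illustrative only).
intensities = {"Al": 5200.0, "Ga": 3100.0, "In": 900.0}
k_factors = {"Al": 1.0, "Ga": 1.6, "In": 2.1}

for el, frac in cliff_lorimer(intensities, k_factors).items():
    print(f"{el}: {frac:.2%}")
```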
HRXRD measurements were performed using an X'Pert MRD triple axis diffractometer equipped with a four-bounce (220)-Ge monochromator and operating at the Cu Kα1 wavelength of 1.54056 Å. RBS/C was performed using a 1.6 MeV 4He+ beam at nominal incidence with a scattering angle of 165°, and the random RBS spectrum was fitted using the SIMNRA software.29 XPS was performed under ultra-high vacuum (UHV) conditions using a PHOIBOS100 energy analyzer and monochromated Al Kα (hν = 1486.9 eV) radiation as the X-ray source, with an instrumental resolution of ≈ 400 meV. Electrical characterization of sample-A was performed to investigate the 2-DEG related properties. Room-temperature (R-T) Hall measurements were performed in the Van der Pauw geometry, and capacitance-voltage (C-V) measurements at R-T were performed using Ti/Al/Ni/Au based ohmic contacts (dots of 0.6 mm diameter) and Ni/Au Schottky diode contacts (dots of 1 mm diameter) at an operating frequency of 1 kHz.

III. RESULTS AND DISCUSSION

Figure 2 shows plan view AFM, SEM and TEM images of the V-pits/defects. The AFM image in Fig. 2(a) shows a smooth surface morphology with rounded hillock features,30 typical of InAlN thin films grown by MOVPE.1 The root-mean-square (rms) roughness for a 5 µm × 5 µm area was estimated to be 0.8 nm (image not shown here). The total V-defect densities estimated from the AFM, SEM and plan view TEM images are 1.1 ± 0.2 × 10^9 cm^-2, 2.4 ± 1 × 10^9 cm^-2 and 4.5 ± 2 × 10^9 cm^-2, respectively. The difference in the V-defect density is probably due to the different length scales used in the three techniques: images with areas of 1 µm^2 (AFM), 8.5 µm^2 (SEM) and 0.1 µm^2 (TEM) were used to estimate the V-defect density. Care has to be taken in estimating the V-defect density, as different techniques at dissimilar length scales with different scan areas can provide varying results. In the present case, the results obtained were of the same order of magnitude in spite of the areas sampled varying by two orders of magnitude. More information on using different imaging techniques for the determination of TD densities can be found in the work of Khoury et al.31 In order to gain further insight into the structural properties, ECCI was performed; more information on the ECCI technique can be found elsewhere.32-35 Figure 3(a) shows an electron channelling contrast image revealing low angle tilt or rotation boundaries, with V-defects decorating the boundaries and also lying inside the grains. Dislocations may be located on a grain boundary, and the presence of V-defects on the grain boundaries indicates that they could be related to the TDs. The total defect density estimated from ECC images with a scan area of 17 µm^2 was 5 ± 1 × 10^9 cm^-2. Cross section TEM was performed to estimate layer thicknesses and to identify dislocation types. Fig. 3(b) shows a two beam dark field image taken with the (0002) reflection, in which a large number of TDs can be seen in the AlN and GaN buffer layers. It can be seen clearly that there are more TDs in the AlN buffer than in the GaN, and thus the inclusion of the AlN buffer helps in reducing the TD density. TDs propagating to the surface, which are connected to V-defects, were imaged using the (0002) and (10-10) reflections, as shown in Fig. 3(c) and 3(d). A high magnification image revealing a TD connected to a V-defect is shown in Fig. 3(e).
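Because the V-defect densities above come from counting features over very different image areas, the following sketch illustrates the simple counts-per-area estimate with a Poisson (√N) uncertainty; the counts used are back-calculated placeholders chosen to be of the same magnitude as the quoted figures, not the authors' raw data.

```python
# Hypothetical defect-density estimate from feature counts in micrographs.
import math

def defect_density(counts: int, area_um2: float) -> tuple:
    """Return (density, Poisson uncertainty) in cm^-2; 1 um^2 = 1e-8 cm^2."""
    area_cm2 = area_um2 * 1e-8
    return counts / area_cm2, math.sqrt(counts) / area_cm2

# Placeholder counts, consistent in magnitude with the AFM/SEM/TEM figures quoted.
for name, counts, area in [("AFM", 11, 1.0), ("SEM", 204, 8.5), ("TEM", 9, 0.2)]:
    d, err = defect_density(counts, area)
    print(f"{name}: ({d:.1e} +/- {err:.1e}) cm^-2")
```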
Many areas of the sample were analyzed by cross-section TEM and it was found that there are ≈ 50% pure edge dislocations and ≈ 50% mixed dislocations; no pure screw dislocations were observed in the analyzed areas. High magnification HAADF images were recorded to reveal the crystalline quality of the layers at the atomic scale, the structure of the interfaces and the AlN interlayer thickness. This is clearly shown in Fig. 3(f), which exhibits a smooth interface between the Al(Ga)N and GaN as well as the InAl(Ga)N. These images also reveal some variation in the intensity for the different alloys, which qualitatively implies that the composition may not be uniform from one column to the next. In order to estimate a realistic alloy distribution, detailed compositional analysis was carried out by several techniques. Figure 4 shows the (000l) HRXRD θ-2θ scan with all the expected peaks of InAl(Ga)N, GaN, AlN and Al₂O₃. From the HRXRD, the c-plane lattice constant was found to be 0.507 nm and, by using the corrected Vegard's law, 36 the In content was estimated to be 13.25%. With knowledge of the c-plane lattice parameter and the strain state of the layer, one can accurately estimate the alloy composition of a ternary (e.g. InAlN). However, HRXRD cannot accurately estimate the composition of quaternaries (e.g. InAlGaN), as the fitting will allow a range of compositions for fully strained layers; hence alternative techniques such as RBS become necessary. RBS allows the determination of the composition profile of thin films with a depth resolution of a few nanometers, and additional structural information can be provided when combined with the ion channelling phenomenon. Ion channelling takes place when the beam is aligned with a major symmetry direction of the crystal, producing an intense reduction of the backscattering signal. The ratio between the aligned and random yields is called the minimum yield (χ_min) and can be used as an indicator of the crystalline quality. 37 Figure 5 illustrates the <0001> aligned spectra (circles). The In signal is in the 1360-1400 keV energy range of the RBS spectrum, whereas the Al signal is at 855-890 keV and the Ga signal at 1270 keV. The In signal is completely separated from the GaN substrate signal in the spectrum, which gives an accurate measurement of the In concentration. The In content in the barrier was estimated to be 12%, and no phase separation was evident from the RBS measurements. Moreover, in agreement with the TEM observations where no misfit dislocations could be observed, it is clear that these InAl(Ga)N layers have grown coherently on the Al(Ga)N/GaN structures. The inset in Fig. 5 magnifies the spectrum for the energy region between 1320 keV and 1420 keV, which corresponds to the backscattered signal from In atoms, with a χ_min value of 7%. There is a small change in the slope at around 1250 keV, which is due to the presence of Ga in the InAlN layer. From RBS/C it is clearly evident that the barrier has Ga incorporation; the Ga composition was estimated to be ≈ 32% and the Al composition ≈ 56%. RBS has an elemental depth resolution of 5 nm to 50 nm with an uncertainty in the range of 1-5% for compositional analysis. In order to confirm the presence of Ga, XPS measurements were performed. The sampling depth can be changed according to the angle between the sample surface and the photoelectron emission, i.e., the take-off angle.
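A sketch of the Vegard-type composition estimate referred to above; this uses the plain linear interpolation with literature end-point lattice constants, whereas the corrected Vegard's law of ref. 36 additionally accounts for strain, so the result only approximates the quoted 13.25%:

```python
# Plain (uncorrected) Vegard interpolation for the c lattice parameter of
# In_x Al_(1-x) N: c(x) = x*c_InN + (1-x)*c_AlN.

C_ALN = 4.982   # c of relaxed AlN in Angstrom (literature value)
C_INN = 5.703   # c of relaxed InN in Angstrom (literature value)

def indium_fraction(c_measured):
    return (c_measured - C_ALN) / (C_INN - C_ALN)

x = indium_fraction(5.07)   # c-plane lattice constant from HRXRD, 0.507 nm
print(f"In fraction ~ {x:.3f}")   # ~0.12, close to the quoted value
```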
Quantitative analysis of the elemental composition was based on the determination of the peak areas with corresponding sensitivity factors. Despite the high annealing temperature (650 °C), significant carbon contamination remained on the sample surface, as shown in the spectrum in Fig. 6. The core-level peaks corresponding to electronic states of the elements in the barrier layer are shown clearly in Fig. 6. Besides the Al, In, and N atoms contributing to the layer and the surface contaminants (carbon and oxygen), the spectrum contains a single Ga Auger peak (160.8 eV) and a Ga core-level photoelectron peak (423.2 eV), confirming Ga incorporation in the barrier layer. In order to determine the amount of Ga within the barrier, surface sensitive measurements were performed at take-off angles of 60° and 80°. Angle-resolved high resolution spectra of the In 3d, Ga 3s and Al 2s core-level peaks are shown in Fig. 7. At 80° the Al component dominates the spectra and there is little contribution from the Ga or In signals. The quantitative calculations revealed an Al0.68In0.07Ga0.25N elemental composition for measurements at 0°, indicating a Ga contribution of 25% compared to 7% of In. Highly surface sensitive measurements performed at 60° and 80° exhibited alloy compositions of Al0.70In0.06Ga0.25N and Al0.93In0.01Ga0.06N, respectively. Thus, a gallium incorporation of ≈ 25% within the barrier seems reasonable apart from the very surface region, where the surface is Al-terminated. Please note that although XPS is a quantitative technique, producing accurate atomic concentrations from XPS spectra is not straightforward. The intensities measured using XPS from similar samples are repeatable to good precision, but the technique has an accuracy of about 10% for routine atomic concentration determinations. From the RBS and XPS measurements, the presence of Ga in the barrier is thus irrefutable; it is then natural to ask whether there is also unintentional Ga in the interlayer. To investigate this, high resolution EDX measurements were performed. The alloy composition as a function of sample depth (along the growth direction) was measured with EDX in a STEM. Figure 8(a) shows a STEM-HAADF micrograph where a three-layer structure is clearly observed: the bright contrast layer corresponding to GaN, the faintly dark strip of ≈ 1 nm at the interface corresponding to AlN, and the uniform lighter contrast layer towards the surface corresponding to the 33 nm of InAl(Ga)N. The composition of the three layers was analyzed along the growth direction by performing an EDX line scan. The blue line in Fig. 8(a) shows the probe position during the EDX acquisition, where twelve points were analyzed across the three layers; the corresponding composition is shown in Fig. 8(b). The first point corresponds to the GaN buffer layer, which shows the highest Ga content, and the second point corresponds to the ≈ 1 nm AlN interlayer, which also shows a Ga composition of 84% and an Al composition of 36%, strongly indicating that the interlayer is Ga-rich Al(Ga)N. The composition starts to decrease as the line scan moves towards the barrier and thereafter does not change appreciably. We have also mapped the Ga composition along the barrier layer (parallel to the growth direction, 10 nm from the interface), which exhibits a reasonably high Ga content of ≈ 45%. There is a slight fluctuation of the In composition between 8% and 13% along the growth direction in the barrier layer.
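The XPS quantification above rests on peak areas weighted by sensitivity factors; a minimal sketch, assuming hypothetical peak areas and placeholder relative sensitivity factors:

```python
# Relative atomic concentrations from XPS core-level peak areas using
# relative sensitivity factors (RSF): C_i = (A_i/S_i) / sum_j (A_j/S_j).
# Peak areas and RSFs below are placeholders, not values from this work.

def xps_composition(areas, rsf):
    corrected = {el: areas[el] / rsf[el] for el in areas}
    total = sum(corrected.values())
    return {el: v / total for el, v in corrected.items()}

areas = {"Al2s": 1800.0, "In3d": 950.0, "Ga3s": 420.0}  # hypothetical
rsf = {"Al2s": 0.753, "In3d": 13.32, "Ga3s": 1.12}      # placeholder RSFs

print(xps_composition(areas, rsf))
```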
The indium content of this structure (sample-A) determined by HRXRD and RBS was estimated to be 13.25% and 12%, respectively. From the STEM-EDX analysis we deduce that there is ≈ 11% of InN in the barrier. In order to check the reliability of our measurements of Ga incorporation in the AlN interlayer, STEM-EDX was performed on sample-B with a 7 nm AlN interlayer; this is shown in Fig. 8(c) and 8(d). The HAADF-STEM image of sample-B shows four different regions with different grey levels corresponding to layers with different compositions, as shown in Fig. 8(c). For the interlayer, two distinct layers of AlN are observed in this sample; the reason for the presence of the two layers is not clear. However, the line scan along the growth direction clearly shows the presence of Ga in the interlayer (points 3 to 6). The first two points correspond to the GaN buffer layer, and the Ga content starts to decrease from 72% to 48% as the line scan proceeds across the 7 nm interlayer. At the same time, the Al content starts to increase from 28% to 52%, implying that both the Ga and Al compositions vary across the interlayer. Thus, from our detailed analysis it becomes clear that there is Ga incorporation in the interlayer in addition to the barrier. Inside the barrier layer of sample-B, the Al content decreases while the Ga content increases, and both reach a composition of 50%. Please note that for sample-A the Ga composition estimated by EDX is higher (≈ 45%) than that from RBS (≈ 32%); this can be explained by the fact that RBS averages over a larger area whereas EDX probes on the atomic scale. In addition, there are local composition fluctuations of ≈ 10% depending on the probed area. The accuracy of the EDX analysis is estimated to be about 1% (except for nitrogen, where the precision is ±2%). 38 Following the observation of unintentional Ga incorporation in the barrier as well as in the interlayer for samples grown using the close-coupled showerhead MOVPE reactor, we also performed measurements to see if there is any unintentional Ga incorporation in samples grown using a horizontal MOVPE reactor. Fig. 9 shows the HAADF-EDX results for samples C and D. The line scans along the growth direction reveal the presence of Ga both in the interlayer and in the barrier. For sample-C, the Ga composition in the interlayer decreases from ≈ 80% at the GaN interface to ≈ 20% at the top of the interlayer, and for sample-D it decreases from ≈ 80% at the GaN interface to ≈ 50% at the top of the interlayer. In the barrier layers we observe a decreasing trend of Ga incorporation along the growth direction; at 5 nm from the interface, the highest detected Ga content was less than 3% for sample-C (see Fig. 9(b)) and 10% for sample-D (see Fig. 9(d)). Hiroki et al. proposed a possible explanation for the unintentional Ga incorporation in the barrier. 21 According to them, metallic Ga remaining on the flow distributor in the reactor may undergo a chemical reaction with the TMIn provided for the InAlN growth. Similar explanations were given by Choi et al. 39 and Kim et al., 23 where the origin of the unintentional Ga is believed to be the surrounding surfaces in the growth chamber and the wafer susceptor. An imperfect source-gas switching sequence can also cause contamination in the layers, but for the layers investigated in the present work we may rule out this possibility.
In our case, at the end of the GaN growth the TMGa was switched off, the temperature and NH₃ flow were then ramped to reach the AlN growth conditions, and growth recommenced using the TMAl and NH₃ precursors. The growth of the AlN takes about 2 minutes, and at the end of this growth the TMIn flow was switched on and the temperature decreased to the InAlN growth conditions. In our process, the TMGa is thus switched off for 6 minutes before the growth of InAlN begins, and we believe all the TMGa is evacuated from the growth chamber before the growth of InAlN. To the best of our knowledge there are no reports on Ga incorporation in the interlayer, and all the published work on Ga incorporation is on samples grown in close-coupled showerhead reactors. In our case, we have Ga incorporation in samples grown using both types of reactors. Interrupting the growth and cleaning the reactor prior to growing the interlayer and barrier may be a route to reduce the unintentional Ga incorporation, as described by Hiroki et al.; 21 however, this may not be practicable. Future work is necessary to understand the role of reactor design in reducing or eliminating unintentional Ga incorporation. To investigate the role of Ga incorporation in the barrier and in the interlayer on the HEMT characteristics, we carried out electrical measurements for sample-A. The 2-DEG density was found to be ≈ 3.02 × 10¹³ cm⁻², the R-T Hall mobility ≈ 980 cm²/V·s and the sheet resistance ≈ 210 Ω/sq. Our structure shows reasonably good 2-DEG properties, with a high density and a low sheet resistance, which is an indication of a good heterointerface. The apparent free carrier concentration profile and the C-V plot measured at R-T are shown in Fig. 10 and in the inset of Fig. 10, respectively. The carrier concentration depth profile was calculated using the procedure proposed in ref. 40. The depth profile evidences the maximum value of the free carriers at the Al(Ga)N/GaN interface, hence indicating the presence of the 2-DEG at the interface. The background carrier concentration related to the GaN layer was estimated to be of the order of 10¹⁶ cm⁻³. It should be noted that the precision of the extracted carrier depth profile could suffer from some error due to the increasing Debye length (λ_D) with decreasing electron density towards the GaN layer. 41 The 2-DEG density estimated from the C-V plot was found to be ≈ 3.2 × 10¹³ cm⁻² using the method described in ref. 42; a sketch of this kind of extraction is given below. The obtained values are in good agreement with the Hall measurements and are similar to earlier reported results on InAlN/AlN/GaN based structures. 43 In order to understand the impact of the presence of unintentional Ga in the barrier and in the interlayer on the band structure, we have simulated the band diagrams of three structures, with and without Ga in the barrier and interlayer, using the nextnano simulation software. 44 Fig. 11 shows the schematic representation of the simulated band structures for the ideal case of the lattice-matched In0.18Al0.82N/AlN/GaN structure, 4 for sample-A with high Ga content in the barrier and interlayer (80%), and finally for a structure with low Ga content (≈ 10%) in the interlayer and a quaternary barrier similar to that measured for sample-A, i.e., In ≈ 12%, Al ≈ 56% and Ga ≈ 32%.
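A sketch of the C-V sheet-density extraction referred to above; the synthetic C-V curve is an assumption (only the 1 mm dot diameter follows the text), so the output illustrates the method rather than the quoted value:

```python
import numpy as np

# Sheet carrier density from a C-V curve by integrating the capacitance
# between pinch-off and zero bias: n_s = (1/(q*A)) * integral of C dV.

Q = 1.602e-19                  # elementary charge (C)
A = np.pi * 0.05**2            # Schottky dot area in cm^2 (1 mm diameter)

# Hypothetical step-like C-V data (capacitance in F, bias in V):
V = np.linspace(-8.0, 0.0, 81)
C = np.where(V > -7.0, 2.0e-9, 1.0e-11)

n_s = np.trapz(C, V) / (Q * A)
print(f"n_s ~ {n_s:.2e} cm^-2")   # order 1e13 cm^-2 for these inputs
```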
We have observed the expected 2-DEG triangular well formation at the AlN/GaN interface for all structures, but surprisingly the presence of another well, parallel to the 2-DEG well, was also observed for the structure (sample-A) with high Ga content (80%) in the interlayer. The existence of this narrow and weak parallel well for the structure with a high-Ga (80%) interlayer may be due to a very small band offset between the barrier layer and the interlayer, which bends the conduction band below the Fermi level at the InAl(Ga)N/Al(Ga)N interface. The presence of this unexpected well might influence the 2-DEG concentration and mobility values. However, for sample-A we have still achieved good values for the 2-DEG density and mobility at 300 K, which may be due to the fact that this second well is much weaker than the main 2-DEG well, so its screening effect might not significantly influence the 2-DEG properties. A similar observation of two-channel conduction in InAlN/AlN/GaN structures, due to the presence of an unexpected thin parasitic GaN layer, has also been reported in the literature. 25 The presence of unintentional Ga could influence the 2-DEG electronic properties of the HEMT structures and can also create issues in fabricating and operating HEMT devices with good device characteristics. IV. CONCLUSIONS From our experimental results acquired using various characterization techniques, we have verified the presence of unintentional Ga in the barrier as well as in the interlayer for samples grown using both close-coupled showerhead and horizontal MOVPE reactors. In spite of sample-A having a defect density of the order of 10⁹ cm⁻² and high Ga incorporation in the interlayer, the mobility and 2-DEG density are comparable to those of good quality samples. We surmise that this reasonably high mobility is due to the smooth interfaces between the Al(Ga)N/GaN and InAl(Ga)N, as revealed by STEM. The existence of a narrow, weak well parallel to the main 2-DEG well for the structure with high (80%) Ga content in the interlayer (sample-A) was observed in the 1-D Poisson-Schrödinger simulations. No such additional 2-DEG well was observed for the structure with 10% Ga in the interlayer. The existence of unintentional Ga in the HEMT structures does not appreciably affect the 2-DEG properties; however, it could be a problem during device processing. Stopping the growth and cleaning the MOVPE reactors prior to growing the interlayer and barrier may help in reducing the unintentional Ga incorporation, but may not be a practically feasible solution. On the other hand, producing a HEMT structure with InAlGaN as the barrier and AlGaN as the interlayer, with appropriate alloy compositions, may be a possible route to optimization, as it might be difficult to avoid Ga incorporation while continuously depositing the layers using the MOVPE growth method. The present work shows the importance of using a multi-characterization approach to gain a better understanding of materials properties, especially for samples with varying alloy composition and complex physical properties.
7,176.4
2014-12-01T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Capture Mechanism of Cadmium in Agricultural Soil via Iron-Modified Graphene: Cadmium (Cd) contamination in agricultural soils has caused extensive concern among researchers. Biochar with iron-compound modifications could give rise to a synergistic effect for Cd restriction. However, the related capture mechanism based on physicochemical properties is unclear. In this study, first principles calculations are used to explore the adsorption ability and the underlying mechanism of ferric hydroxide modified graphene (Fe@G) for capturing CdCl₂. The simulation results show that the adsorption energy of CdCl₂ is enhanced to −1.60 eV when Fe(OH)₃ is introduced on graphene. Subsequently, analyses of the electronic properties demonstrate a significant electron transfer between the Cd s-orbital and the O p-orbital, leading to the strong adsorption energy. This theoretical study not only identifies a powerful adsorption material for Cd reduction in agricultural soils and reveals the capture mechanism of Fe@G for Cd, but also provides a foundation and strategy for Cd reduction in agricultural soils. Introduction With recent rapid developments and increased industrial emissions, cadmium (Cd) contamination is becoming a serious concern in east and south Asia, especially in China, India, and Thailand [1][2][3][4]. Until 2005, approximately 1.3 × 10⁵ ha of agricultural soils in China were reported to be contaminated by Cd [5]. It is well known that Cd is a nonessential substance for plants; however, it is easily accumulated in agricultural crops [6][7][8][9]. For example, paddy rice can take up Cd easily, and straw is the main accumulation region for Cd. A large amount of rice straw is reportedly polluted by Cd every year in China [6,10]. In 2016, more than 10% of rice samples from rice markets exceeded the China National Standard for food contamination by Cd (<0.2 mg kg⁻¹) [11]. Furthermore, due to its 30-year biological half-life, Cd has the potential to cause serious medical conditions such as cancer and Itai-Itai disease [12][13][14]. Thus, it is necessary to reduce the availability of Cd to agricultural crops and, consequently, to the human system. To date, in situ metal stabilization has attracted considerable attention due to its effectiveness in reducing the bioavailability and toxicity of heavy metals over short periods [15][16][17]. Among such approaches, biochar from pyrolysis under limited oxygen has an irregular aromatic structure, multi-layered accumulation forms, and unique physicochemical properties, which make it a great candidate for soil amendments to inhibit Cd in soils [18][19][20][21]. For instance, Luo et al. demonstrated that corncob biochar could significantly increase the total nitrogen and organic matter content of the soil while stabilizing the availability of arsenic (As) and Cd in soil [22]. Furthermore, iron-based materials can reduce the availability of Cd by changing its physicochemical state [23,24]. Compared with pure biochar treatment, hybrid soil amendments of biochar and iron-containing inorganic materials are better for reducing the Cd contamination of agricultural soils [18,25,26]. Qiao et al. reported that zero-valent iron (ZVI)-biochar could effectively reduce Cd contamination in agricultural soils [27]. Yin et al. reported that Fe-modified biochar reduced the accumulation of As and Cd in rice by reducing soil acidification [28]. However, the capture mechanism of Fe-modified biochar for Cd in terms of physicochemical properties has rarely been reported.
Therefore, clarifying the capture mechanism of Fe-modified biochar for Cd is of paramount importance. In this study, graphene (G) was used as the substrate to emulate biochar. Graphene and its ferric hydroxide modification (Fe@G) were taken as the research objects to explore the capture mechanism for CdCl₂ by first principles calculations. First, the structural and electronic property differences between G and Fe@G were checked. Then, the adsorption ability of G and Fe@G toward CdCl₂ was identified. Moreover, the inherent changes in electronic properties and the charge transferability were demonstrated. These findings show that there is a significant charge transfer between Fe@G and CdCl₂, which could be the main reason that Fe@G enhances the reduction of Cd. The calculation results will help to explain the capture mechanism of Fe-modified biochar and guide materials design for Cd reduction in agricultural soils. Results and Discussion Firstly, the structural stability and electronic properties of graphene modified with ferric hydroxide (Fe(OH)₃) were considered (Figures 1 and S1). The optimized structures of G and Fe@G were compared. There was a long distance from Fe(OH)₃ to G, which indicated that only a weak interaction existed between Fe(OH)₃ and G (Figure 1a,b); a sketch of this kind of geometric check is given below. Furthermore, the electron localization function (ELF) diagram showed that although the electronic properties of G changed with the Fe(OH)₃ modification, there was no distinct electron exchange region between Fe(OH)₃ and G, demonstrating that the interaction is physical adsorption (Figure 1c,d). Meanwhile, these results implied that G would not have a negative influence on the ability of Fe(OH)₃ to adsorb CdCl₂.
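As referenced above, a minimal sketch of such a geometric check using ASE; the structure file name is hypothetical:

```python
import numpy as np
from ase.io import read   # ASE; the structure file below is hypothetical

# Quick check of the Fe(OH)3-graphene separation in a relaxed Fe@G model:
# the shortest vertical distance between any Fe(OH)3 atom and the mean
# height of the carbon plane. A large separation is consistent with the
# weak, physisorptive interaction described in the text.

atoms = read("FeG_relaxed.vasp")           # hypothetical relaxed POSCAR
carbons = atoms[[a.index for a in atoms if a.symbol == "C"]]
adsorbate = atoms[[a.index for a in atoms if a.symbol in ("Fe", "O", "H")]]

z_plane = carbons.positions[:, 2].mean()   # mean height of graphene sheet
gap = np.min(np.abs(adsorbate.positions[:, 2] - z_plane))
print(f"minimal Fe(OH)3-graphene separation: {gap:.2f} A")
```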
To further explore the adsorption ability of CdCl₂ on G and Fe@G, the bond length (Cd-Cl), the bond angle (Cl-Cd-Cl), and the adsorption energy were considered. In previous studies, a long bond length and a large bond angle were found to facilitate the decomposition of the compound, while a weak adsorption energy leads to unstable adsorption [29][30][31]. Compared with pure CdCl₂ (179.56°) and CdCl₂ adsorbed on G (171.03°), CdCl₂ showed the lowest bond angle (125.01°) when adsorbed on Fe@G (Figure 2a-c). Furthermore, in contrast to pure CdCl₂ and CdCl₂ adsorbed on G, on Fe@G the molecule possessed a longer bond length (2.95 Å, Cd-Cl) near the Fe(OH)₃, whereas it had a shorter bond length (2.33 Å, Cd-Cl) away from the Fe(OH)₃. This was substantial evidence that CdCl₂ would not decompose easily on Fe@G. Moreover, the adsorption energies of CdCl₂ on G and Fe@G were compared. According to the computational formula for the adsorption energy, a more negative value implies a stronger restriction effect on the CdCl₂ adsorbed on the substrate [32]. Due to the weak physisorption between G and CdCl₂, G showed a poor adsorption ability (−0.42 eV), while Fe@G showed a more negative value (−1.60 eV) when CdCl₂ was adsorbed (Figure 2d). The above results demonstrated that, with the introduction of Fe(OH)₃, Fe@G has a considerable effect in restricting the migration of CdCl₂.
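A minimal sketch of the adsorption-energy bookkeeping behind the quoted values; the total energies are hypothetical placeholders chosen only to reproduce −0.42 eV and −1.60 eV:

```python
# Adsorption-energy bookkeeping as used in the text:
#   E_a = E(CdCl2@substrate) - E(substrate) - E(CdCl2),
# so a more negative E_a means stronger binding. The totals below are
# placeholders standing in for the converged DFT values.

def adsorption_energy(e_complex, e_substrate, e_molecule):
    return e_complex - e_substrate - e_molecule

print(adsorption_energy(-312.62, -305.10, -7.10))   # G:    -0.42 eV
print(adsorption_energy(-341.80, -333.10, -7.10))   # Fe@G: -1.60 eV
```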
Next, the electronic properties of CdCl₂ on G and Fe@G were investigated to gain an understanding of the inhibition mechanism. Initially, the electron charge density difference of CdCl₂ on G and on Fe@G was drawn to probe the redistribution of charge. There was a clear charge depletion region between the Cd and O atoms and a distinct charge accumulation region between the Fe and Cl atoms, which indicated that a significant electron transfer occurred between the CdCl₂ and Fe@G (Figure 3b,d). However, this phenomenon was only expressed at the CdCl₂/Fe@G interface. The adsorption configuration of CdCl₂ on G showed no evidence of an obvious charge transfer between Cd or Cl and G, which may be the main reason for the weak adsorption ability of CdCl₂ on G (Figure 3a,c). The above results are also observed in the ELF diagram (Figure 4). The CdCl₂ on G configuration had no distinct electron exchange between the CdCl₂ and G (Figure 4a). However, in contrast to G, the CdCl₂ on Fe@G configuration had a greater electron redistribution than the ELF diagram of pure Fe@G, where the localization degree of the electrons of the -OH group near the Cd atom was reduced (Figure 4b,d). This phenomenon confirmed that CdCl₂ had influenced the electronic properties of Fe(OH)₃ in Fe@G, which also implied that a significant charge exchange existed between Fe(OH)₃ and CdCl₂. Next, the Bader charge analysis method was used to gain a deeper insight into the number of charges transferred between the substrates and CdCl₂; a sketch of this bookkeeping is given below. Generally, more charge transferred expresses a higher restriction ability of the substrate toward the molecule. Compared to pure CdCl₂, the Cd site of the CdCl₂ on Fe@G configuration had an apparent charge transfer (0.141 e) that was higher than that of the CdCl₂ on G configuration (0.008 e) (Figure 4c). In addition, as shown in Table S1, the two Cl atoms in the CdCl₂ on Fe@G configuration also showed charge exchanges that were 0.05 e and 0.1 e greater than that of a Cl atom in pure CdCl₂.
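As referenced above, a minimal sketch of the Bader charge-transfer bookkeeping; the valence counts are typical PAW values and the populations are placeholders, not the values of Table S1:

```python
# Net Bader charge per atom: q = Z_val - n_Bader, where Z_val is the
# number of valence electrons in the pseudopotential and n_Bader is the
# electron population assigned to the atom by the Bader partitioning.

Z_VAL = {"Cd": 12, "Cl": 7}          # typical PAW valence counts

def net_charge(symbol, bader_population):
    return Z_VAL[symbol] - bader_population

# Hypothetical Bader populations for CdCl2 on Fe@G:
for sym, pop in [("Cd", 11.2), ("Cl", 7.45), ("Cl", 7.50)]:
    print(sym, f"{net_charge(sym, pop):+.2f} e")
```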
The reason for the significant difference in charge number between the two Cl atoms was ascribed to their different distances from the Fe(OH)₃. Meanwhile, the total charge number of CdCl₂ in the CdCl₂ on Fe@G configuration showed a higher charge transfer than in the other situations, and the results were consistent with Figures 3 and 4. The above data show that the formation of the Cd-O and Fe-Cl bonds benefited from the charges transferred between CdCl₂ and the sorbent. However, in-depth studies must explore the reaction mechanism of CdCl₂ adsorption on Fe@G. Thus, the total density of states (DOS) and partial density of states (PDOS) of the CdCl₂ on G and Fe@G configurations were investigated to gain a perspective on the energy states and the electron transfer (Figure 5). Differing from the CdCl₂ on G configuration (Figure 5a), prominent energy shifts could be observed in the DOS of the CdCl₂ on Fe@G configuration, which indicates that electron transfer occurred upon adsorption, as displayed in Figure 5b. Furthermore, the PDOS of the O and Cd atoms in the CdCl₂ on Fe@G configuration before and after adsorption were also considered. Due to the distinct charge donation of the Cd atom, the s-orbital of Cd exhibited a dramatic energy upshift after adsorption on Fe@G (Figure 5d). In contrast, the downshift of the p-orbital of O was evident, which was attributed to O gaining electrons when Fe@G interacted with CdCl₂. Meanwhile, these phenomena were also present for the Fe s-orbital and Cl p-orbital of the CdCl₂ on Fe@G configuration (Figure S2). However, there was no significant energy shift between the C p-orbital and the Cd s-orbital of the CdCl₂ on G configuration (Figure 5c), which was consistent with the results of the ELF, charge density difference, and Bader charge analyses.
Materials and Methods The first principles calculations were carried out using the Vienna Ab initio Simulation Package (VASP) based on density functional theory (DFT) with the projector augmented wave (PAW) method [33][34][35]. Exchange-correlation interactions were described by the Perdew-Burke-Ernzerhof (PBE) functional, and DFT-D3 was used to account for the van der Waals interactions [36,37]. The energy cutoff was set to 500 eV, and the convergence criteria for the energy and forces were set to 10⁻⁵ eV and 0.02 eV Å⁻¹ per atom, respectively. A vacuum layer with a thickness of 15 Å was employed to avoid interactions between adjacent periodic units. The k-points were meshed by the Monkhorst-Pack method with a 3 × 3 × 1 grid for geometry optimizations and an 11 × 11 × 1 grid for the DOS calculations [38,39]. The calculations were performed within the DFT + U formalism to describe the localized d electrons, and the U values of Fe and Cd were set to 4 eV and 2 eV, respectively [40,41]. The adsorption energy (E_a) was defined as

E_a = E_(Cd-substrate) − E_Cd − E_substrate,

where E_Cd, E_substrate, and E_(Cd-substrate) represent the total energy of the CdCl₂, of the substrate (G or Fe@G), and of the CdCl₂ adsorbed on the respective substrate. The differential charge density (Δρ) used to describe the charge distribution of the adsorption system is defined by the following formula [42]:

Δρ = ρ_(Cd-substrate) − ρ_substrate − ρ_Cd,

where ρ_(Cd-substrate), ρ_substrate, and ρ_Cd represent the charge density of the CdCl₂ adsorbed on the respective substrate, of the substrate (G or Fe@G), and of the CdCl₂. The electron localization function (ELF) diagrams were drawn with VESTA [43]. Conclusions In summary, we have revealed the capture ability and mechanism of CdCl₂ on Fe(OH)₃ modified graphene via first principles calculations. The graphene substrate does not have a negative influence on the adsorption of CdCl₂, and the chemisorption effect increases when Fe(OH)₃ is introduced, leading to an improvement in the adsorption energy from −0.42 eV (G) to −1.60 eV (Fe@G). The adsorption on Fe@G also results in a large change in the bond length (Cd-Cl) and bond angle (Cl-Cd-Cl) when CdCl₂ is adsorbed onto Fe@G. Furthermore, the electron charge density difference, ELF, and Bader charge analysis were used to confirm the charge transfer capacity. The Bader charge analysis displayed a 0.141 e charge transfer between Cd and Fe@G, which is a great improvement over the CdCl₂ on G configuration. In addition, the electron transfer process was revealed by DOS and PDOS analyses. The PDOS proved that the Cd s-orbital and the O p-orbital exhibit a dramatic energy shift when Fe@G interacts with CdCl₂, which provides powerful evidence for the capture mechanism of CdCl₂ on Fe(OH)₃ modified carbon. This study identified a powerful adsorption material for Cd reduction in agricultural soils and revealed the capture mechanism of Fe@G for Cd; in addition, it provides a foundation and strategy for Cd reduction in agricultural soils.
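For concreteness, a sketch of a calculation set up with the settings of Materials and Methods via ASE's VASP interface; the structure file name is hypothetical and a configured VASP installation is assumed:

```python
from ase.io import read
from ase.calculators.vasp import Vasp   # requires a configured VASP install

# PBE + DFT-D3 dispersion, 500 eV cutoff, 1e-5 eV energy convergence,
# 0.02 eV/A force criterion, 3x3x1 k-mesh for relaxations, and DFT+U
# with U(Fe) = 4 eV and U(Cd) = 2 eV, mirroring Materials and Methods.

atoms = read("CdCl2_on_FeG.vasp")       # hypothetical input structure

calc = Vasp(
    xc="pbe",
    ivdw=11,                 # DFT-D3 (zero-damping) dispersion correction
    encut=500,
    ediff=1e-5,
    ediffg=-0.02,            # negative value: force criterion in eV/A
    kpts=(3, 3, 1),
    ldau=True,
    ldau_luj={"Fe": {"L": 2, "U": 4.0, "J": 0.0},
              "Cd": {"L": 2, "U": 2.0, "J": 0.0}},
)
atoms.calc = calc
print(atoms.get_potential_energy())     # triggers the VASP run
```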
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/inorganics10100150/s1, Figure S1: ELF diagram of the top view of Fe@G; Figure S2: The partial density of states of CdCl₂ adsorbed on Fe@G; Table S1: The number of charges transferred for pure CdCl₂ and for CdCl₂ adsorbed on the substrates. Author Contributions: X.W., J.D. and Z.F.: conception and design of the study, approval of the version of the manuscript to be published; H.W. performed the study, analyzed the data, and wrote the manuscript; J.H., W.Z. and P.L. provided guidance on the data analysis; Y.W., F.Z. and X.M. revised the manuscript critically for important intellectual content. All authors have read and agreed to the published version of the manuscript.
4,891.6
2022-09-22T00:00:00.000
[ "Environmental Science", "Materials Science", "Agricultural and Food Sciences", "Chemistry" ]
Spin and Susceptibility Effects of Electromagnetic Self-Force in Effective Field Theory The classic Abraham-Lorentz-Dirac self-force of point-like particles is generalized within an effective field theory setup to include linear spin and susceptibility effects, described perturbatively, in that setup, by effective couplings in the action. Electromagnetic self-interactions of the point-like particle are integrated out using the in-in supersymmetric worldline quantum field theory formalism. Divergences are regularized with dimensional regularization, and the resulting equations of motion are in terms only of an external electromagnetic field and the particle degrees of freedom. Self-force describes the fascinating phenomenon of an object being accelerated by a force generated by itself. The well-known Abraham-Lorentz-Dirac (ALD) equation [1][2][3][4][5] describes this effect for the most basic point-like charged particles, and the resulting back-reaction balances the radiation of energy described by the Larmor formula. The physical objects of interest generally have finite extent and properties such as angular momentum (spin) and dipole susceptibilities. For spin, adequate generalizations of the Lorentz force and the corresponding ALD self-force have been considered by many authors [6][7][8][9][10][11][12][13][14]. One motivation for this line of work is the classical description of the electron [15], which may, e.g., be modeled as a charged sphere for which several self-force results are known [16,17]. Recently, an analogous problem in gravity, describing the early inspiral of two point-like compact bodies and their radiation, has gained importance for the data analysis of gravitational wave signals observed on Earth [18,19]. Here, one sets up an effective field theory (EFT) capturing the body degrees of freedom by worldline fields, with the most basic field given by the worldline parametrization z^μ(τ) [20,21]. Spin and finite size effects are then described by effective couplings whose values may in each case be determined from a matching to the physical object of interest. Such a worldline EFT has had great success in describing compact bodies in gravity [22][23][24] but may also be applied to electromagnetic interactions [17,[25][26][27][28].
In worldline EFT, the relativistic angular momentum of the point-like particle is described by an antisymmetric worldline tensor field S^μν(τ). Half of its degrees of freedom are constrained by requiring symmetry of the action under small shifts of the worldline trajectory [58], so that the dynamics involves only a spatial spin vector. At the level of the action, one must usually introduce a co-moving frame in order to describe the spin kinematics [57,59,60]. This, however, is avoided by expressing the spin tensor in terms of anticommuting Grassmann vectors ψ^μ(τ), which, inspired by previous work [12,[61][62][63][64][65][66][67][68][69][70], was first proposed in this context in the framework of worldline quantum field theory (WQFT) [35,36]. Here, the worldline shift symmetry becomes a supersymmetry (SUSY). Self-interaction of point-like particles generally leads to divergent expressions which, however, from the perspective of EFT is not surprising, as the small scale physics has been integrated out. Instead, the EFT must be regularized, and in the present case we will use dimensional regularization. Thus, also in the classical setting, eventual divergences must be absorbed into counterterms of the action [20,[71][72][73][74]. In this letter, we compute novel spin and susceptibility corrections to the electromagnetic self-force of point-like particles described by a worldline EFT. The computational method innovates on earlier work and presents a very streamlined approach for deriving self-force corrections in worldline EFT. In particular, computations are carried out diagrammatically using the in-in SUSY WQFT formalism and reduce to the evaluation of a number of tree-level Feynman diagrams. A major motivation for this innovation is its future generalization and application to the gravitational setting and, in particular, the perturbative self-force expansion of extreme mass ratio binaries [45,[75][76][77]. EFT of Point-Like Particles.-Our system will be described by the following action S:

S = S_kin + S_int + S_A + S_ext .  (1)

The first two terms describe the kinematics and electromagnetic (EM) interactions of the point-like particle. The third term, S_A, is the kinetic action of the EM potential in Lorenz gauge, with arbitrary dimension d for the use of dimensional regularization and field strength tensor F_μν = 2 ∂_[μ A_ν], where square brackets denote averaged antisymmetrization. We use units such that the speed of light and the vacuum permittivity and permeability are all unity, c = ε₀ = μ₀ = 1. Finally, the last term of Eq. (1), S_ext, describes external sources of the EM potential. We make no assumptions on S_ext, which could for example be given by a second copy of the worldline action, in which case we would describe the relativistic EM two-body problem. Let us first consider the interaction terms of the point-like particle, which we model as follows:

S_int = ∫ dτ [ q ż·A(z) + |ż| ( (gq/2m) S·B(z) + (c_B/2) B(z)·B(z) + (c_E/2) E(z)·E(z) ) ] .  (3)

Here, z^μ = z^μ(τ) is the worldline of the point-like particle with total charge q and mass m, and we use dots to denote differentiation with respect to τ and the shorthand |ż| = √(ż²), with factors of |ż| ensuring explicit time reparametrization invariance. The particle has (intrinsic) relativistic angular momentum S^μν(τ) with Pauli-Lubanski vector S^μ = (1/2) ε^{μνρσ} S_{νρ} ż_σ/|ż|. The electric and magnetic fields E^μ(z) and B^μ(z) are defined implicitly by a decomposition of the field strength tensor F^μν(z), where the vectors are assumed to be orthogonal to the body frame (B·ż = E·ż = 0). Here, and in the following, we often leave the time dependence of worldline fields implicit.
In Eq. (3), the spin-induced magnetic coupling is measured by the g-factor g, and susceptibility effects by c_B and c_E, describing magnetization and electric polarization, respectively. The interactions of Eq. (3) are invariant (at leading order in spin and susceptibility) under small shifts of the trajectory δz^μ, where the spin tensor transforms as δS^μν = 2m δz^[μ ż^ν]/|ż| and the Pauli-Lubanski vector is invariant. Here and in the following, for the use of dimensional regularization, all Levi-Civita symbols may be avoided by working with S^μν and F^μν, as discussed explicitly in the supplementary material [78]. If one assumes L ∼ q²/m to be the only scale of the point-like particle, one finds S^μ ∼ Lm and c_{E/B} ∼ L³, although, generally, additional intrinsic scales may be relevant. The EFT framework assumes this scale to be small compared with a relevant external scale, and effective couplings are further suppressed by it. The inclusion of higher order spin or susceptibility corrections or other finite size effects in the EFT is an interesting problem, with much work done in the gravitational context [60,[79][80][81][82]. Let us turn to the kinetic action S_kin which, as discussed in the introduction, is conveniently written in terms of anticommuting (Hermitian) Grassmann vectors ψ^μ(τ) related to the spin tensor as S^μν = −i m ψ^μ ψ^ν. Using also the Polyakov form of the point mass action, we get [36,39,41]:

S_kin = −∫ dτ [ (m/2) ż² + (im/2) ψ·ψ̇ ] .  (5)

At this point, the shift symmetry becomes a SUSY, with δz^μ = iηψ^μ and δψ^μ = −η ż^μ and global Grassmann parameter η. We will gauge-fix the SUSY with the covariant spin supplementary condition S^μν ż_ν = 0 and time reparametrization invariance with proper time ż² = 1, and assume these constraints in the following. Worldline Equations of Motion.-The equations of motion (EOMs) are derived from the principle of stationary action; for the trajectory we find the force f^σ = m z̈^σ, Eq. (6). Here, we use a projector η^μν_⊥ = η^μν − ż^μ ż^ν and note that proper time implies ż·f = 0. We define the body frame cross product of any two vectors u₁^μ and u₂^μ as (u₁ × u₂)^μ = ε^{μνρσ} u_{1ν} u_{2ρ} ż_σ/|ż|, with conventions which imply ε_{1230} = 1. We will focus on the (SUSY invariant) Pauli-Lubanski vector S^μ(τ) as the physical spin variable, which is given in terms of the Grassmann vectors by S^μ = −i (m/2) (ψ × ψ)^μ. Using the chain rule and the principle of stationary action for the Grassmann vectors, one arrives at the following spin precession for S^μ (the BMT equation [8,28]):

η^μ_{⊥ν} Ṡ^ν = T^μ .  (8)

Here, we introduced the torque T^μ and focused only on the spatial components, as the time component of Ṡ^μ in the direction of ż^μ is straightforwardly determined from differentiation of the constraint S·ż = 0. Worldline Quantum Field Theory.-The WQFT formalism offers a streamlined diagrammatic approach to solving the classical EOMs (6) and (8) [33][34][35][36][37][38][39][40][41][42]. The central idea is that the classical dynamics described by the worldline EFT may be considered as the tree-level contributions (ℏ → 0) of a quantum field theory defined from the (worldline) action S, where both the EM potential and the worldline fields are promoted to quantum fluctuating fields.
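Given the definitions above, the quoted relation between the Pauli-Lubanski vector and the Grassmann vectors follows in one line; a minimal sketch, assuming the cross-product convention as reconstructed here:

```latex
% Insert S^{\mu\nu} = -im\,\psi^\mu\psi^\nu into the Pauli-Lubanski
% definition and use the body-frame cross product defined above:
\begin{align}
  S^\mu
  = \tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma}\,S_{\nu\rho}\,
    \frac{\dot z_\sigma}{|\dot z|}
  = \tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma}
    \bigl(-\,\mathrm{i}\,m\,\psi_\nu\,\psi_\rho\bigr)\,
    \frac{\dot z_\sigma}{|\dot z|}
  = -\,\mathrm{i}\,\frac{m}{2}\,(\psi\times\psi)^\mu .
\end{align}
```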
For the EM potential, we define the fluctuating field ΔA^μ = A^μ − A^μ_ext in a background expansion around the external potential A^μ_ext(x) sourced by the current of S_ext, such that

A^μ(x) = A^μ_ext(x) + ΔA^μ(x) .  (9)

We collect the worldline fields in a single superfield Z^μ = {z^μ, ψ^μ} and expand it around an arbitrary time τ,

z^μ(τ') = z^μ(τ) + (τ' − τ) ż^μ(τ) + Δz^μ(τ') ,  ψ^μ(τ') = ψ^μ(τ) + Δψ^μ(τ') ,  (10)

with fluctuation ΔZ^σ = {Δz^σ, Δψ^σ} and boundary conditions Δz(τ) = Δż(τ) = Δψ(τ) = 0. This expansion will be used at times near to τ and implies no assumptions on the global character of the trajectory. The key observation of the WQFT formalism is that its one-point functions in the ℏ → 0 limit are identical to the solutions of the classical EOMs, Eq. (11). Here, the blobs of Eq. (11) represent the WQFT one-point functions, with wiggly and solid lines identifying photons ΔA^μ and superfields, respectively. Conveniently, we work in momentum and frequency space, indicated by k^μ and ω and defined by d-dimensional and one-dimensional Fourier transforms, respectively. In order to consider arbitrary times τ, the corresponding frequency must be kept off-shell (the on-shell limit ω → 0 is related to the global change in momentum or spin [33,41]). The WQFT Feynman rules are straightforwardly determined from the action [33,36,41] and have the following three important properties. First, the background expansion introduces one-point vertices, which lead to an infinite series of tree diagrams. Second, the interaction of one-dimensional superfields with d-dimensional photons conserves only one component of the photon momenta, and the unconstrained integration over the remaining (spatial) components leads to loop-like integrations within the tree diagrams. Third, in order to arrive at causal dynamics, retarded propagators are used exclusively and all point toward the single outgoing line which, formally, is imposed by the in-in formalism [38]. A simple example of a vertex rule is given by the interaction of a photon with a worldline trajectory fluctuation, Eq. (12), with the ellipsis there indicating spin and susceptibility corrections. Generally, the vertex rules have up to two photon legs and any number of superfield legs. They conserve energy and depend on the worldline background variables z^σ(τ), ż^σ(τ) and ψ^σ(τ), on the external EM potential A^μ_ext, and on the momenta and frequencies of the incoming and outgoing fields. Because of the background expansion around A^μ_ext, the photons ΔA^μ(x) interact only with the point-like particle (and not with the external current). The classical EOMs now take the form of off-shell, recursive Berends-Giele-like relations, Eq. (13) [40,41,83]. The first line of Eq. (13) corresponds to the worldline EOMs, where the first term represents the force (or torque) evaluated on the external EM fields and the next two terms have one or two insertions of the fluctuation ΔA^μ(x). This force is expanded in the worldline fluctuations around the background time τ, which explains the presence of any number n of fluctuations. When evaluated at the background time itself in the time domain, only finitely many terms in the sum over n are non-zero. Such an evaluation at τ will be our goal after integrating out ΔA^μ(x) below. The second line of Eq. (13) describes the coupling of ΔA^μ to the current of the point-like particle.
Integrating Out Self-Interactions.-Self-interactions are now straightforwardly integrated out by eliminating ΔA^μ(x) from the system of equations (13), leading to the regulated EOM of Eq. (14). Here, the sum extends over all numbers n, m and l of superfields. The goal will be to evaluate the right-hand side in the time domain at the background time τ. Its general structure is a sum of (j + 1)-point WQFT diagrams connected with j superfields, where only photons ΔA^μ(x) propagate within the diagrams. The first term corresponds to the force (or torque) evaluated on the external EM fields, and the three next terms are self-force corrections. A generic multi-point WQFT diagram with (j + 1) superfield legs takes the schematic form of Eq. (15), with an amplitude M(ω₁, …, ω_j) which depends only on the frequencies and the worldline background parameters. Here, the big solid blob of Eq. (15) signifies any of the multi-point WQFT diagrams of Eq. (14), where we have amputated all (incoming) superfields and external propagators. In order to keep the discussion simple, we ignore the case of the external EM potential A^μ_ext in the schematic form, though its inclusion is straightforward. Let us consider the contribution of the multi-point WQFT diagram of Eq. (15) to the regulated EOM Eq. (14) in the time domain, evaluated at τ. We thus integrate the multi-point diagram against j superfield fluctuations and integrate over ω₀ with a Fourier factor exp(−iω₀τ), at which point all frequencies become derivatives of the time-domain superfields, Eq. (16). The amplitudes M may easily be computed and turn out to be polynomial in their arguments and finite in d = 4. In this case the contribution (16) simply becomes a sum of j superfields ΔZ^σ(τ) multiplied together, each differentiated a number (possibly zero) of times. Crucially, since Δz^σ(τ) = Δż^σ(τ) = Δψ^σ(τ) = 0, the contribution is non-zero only if each field is differentiated a minimum number of times. Higher derivatives of ΔZ^σ are simply identical to derivatives of Z^σ itself. We will not carry out the power counting of the vertex rules explicitly, but one finds that for a sufficient number j of superfield legs there are not enough differentiations to make the contribution (16) non-zero. In particular, one needs at most one incoming fluctuation in the first term of Eq. (14) (i.e. n ≤ 1), at most three fluctuations in the second (n + m ≤ 3), and at most five fluctuations in the third and fourth (n + m + l ≤ 5). At this point we must only show that the amplitudes M are polynomial in the frequencies and finite in d = 4. Non-trivial dependence on the frequencies and eventual divergences can arise only from the loop-like integrations over the photon momenta. The relevant integrals factorize into one-loop massive tadpoles,

I^{μ₁⋯μₙ}(ω) = ∫ d^{d−1}k/(2π)^{d−1} (k^{μ₁} ⋯ k^{μₙ}) / ((ω + iǫ)² − k⃗²) ,  with k⁰ = ω .  (17)

Here, k^μ is the exchanged photon momentum and ω is the total energy flowing in or out of the self-interaction. As dictated by the in-in formalism, the photon propagator is retarded, with positive infinitesimal ǫ.
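Under the representation of Eq. (17) as reconstructed here (an assumption; only its stated properties are taken from the text), the ω-scaling quoted below follows from rescaling the loop momentum:

```latex
% Rescale k -> omega*k in the integral of Eq. (17): each factor of k in
% the numerator contributes one power of omega, the measure contributes
% d-1 powers, and the denominator removes two:
\begin{align}
  I^{\mu_1\cdots\mu_n}(\omega)
  \;\propto\; \omega^{\,(d-1)\,+\,n\,-\,2}
  \;\xrightarrow{\;d\,\to\,4\;}\; \omega^{\,n+1}.
\end{align}
```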
The massive tadpole of Eq. (17) is easily computed within dimensional regularization. Importantly, any trace η_{μ₁μ₂} I^{μ₁μ₂⋯μₙ} is zero, because the contraction cancels the denominator and removes any scales from the integral. With this regularization the integral is finite and, assuming all divergences to appear from self-interactions, they have thus been removed. We can then let d → 4 and work in four spacetime dimensions. The dependence of the tadpole on ω can be determined from dimensional analysis, with I^{μ₁⋯μₙ} ∼ ω^{n+1}. An illustrative example is given by the leading order self-force contribution, Eq. (18), where, neglecting spin and susceptibility corrections, the corresponding amplitude, when inserted in Eq. (16), gives rise to the ALD self-force. Self-Force Equations of Motion.-The computation of the regulated EOM Eq. (14) evaluated at τ may now be carried out and, though there are many diagrams, an automated evaluation is easily performed with computer algebra. The regulated EOM results in a regulated force for the worldline trajectory and a regulated torque for the Pauli-Lubanski vector. Because of the arbitrariness of τ, the resulting equations hold at all times. For the trajectory z^μ(τ) we find the schematic form of Eq. (19), with a^μ = z̈^μ and the ellipsis there indicating terms of quadratic order in spin and susceptibility effects. Here, the first term f^μ_ext is the original force (6) evaluated on the external EM fields, and the square brackets give the self-force corrections, with the ALD force in the first term, spin and magnetization effects in the second, and electric polarization effects in the final two terms. For the self-force corrections we find Eqs. (20), with the magnetic moment M^μ = (gq/2m) S^μ + c_B B^μ_ext(z). The forces f^μ_M and f^μ_E are due to one exchange of ΔA^μ (second term of Eq. (14)) and f^μ_Eq is due to two exchanges (third and fourth terms). Thus, double-radiation magnetization effects are zero at this order. We note that the time derivatives of the cross products in the first and third lines also act on the frame (see Eq. (7)). For the torque on S^μ we find that the self-force corrections vanish at this order, such that η^μ_{⊥ν} Ṡ^ν is given simply by the original torque Eq. (8) evaluated on the external (magnetic) field. Assuming causal boundary conditions, the regulated EOMs are, to leading order in spin and susceptibility, exact and consistent predictions of the worldline EFT framework, and their validity is thus limited only by that of the EFT framework. See also Refs. [84,85] for a discussion of the validity of the ALD equation. The self-force results, Eqs. (20), are to the best of our knowledge new. In the reviews [11,14], the case of spin is described with worldline EOMs similar to Eqs. (6) and (8) (see e.g. Eqs. (337) and (338) in [14]), and a radiative propagator prescription is suggested as regularization but not carried out explicitly. In fact, the dimensional regularization used here is identical to this prescription, as we show (and define) in the supplementary material [78]. This identification provides some intuition for our results: the leading order radiative magnetic field vanishes, B^μ_rad = O(S, c_{E/B}), which explains the vanishing of the double-radiation magnetization effects and of the leading order self-force torque effects. Let us briefly mention the following non-trivial checks of our results, with a more detailed discussion in the supplementary material [78]. First, our results are in complete agreement with expressions for the EM fields of a generic dipole moment given in Ref. [9].
Second, we have applied our methodology to the finite-size coupling $a\cdot E(z)$ considered in Ref. [17] and reproduce the results therein, except for a relative sign. Finally, our results are consistent with the instantaneous radiative loss of four-momentum for spin given in Ref. [10].

Outlook.—We have shown how one may systematically eliminate electromagnetic self-interactions in the worldline EFT of point-like particles, deriving, in particular, novel spin and susceptibility corrections to the ALD self-force. Straightforward generalizations and perspectives include the addition of higher-order spin and finite-size effects, self-force in arbitrary spacetime dimensions [86-91] and classical non-Abelian self-interaction [14, 52, 92-95]. Furthermore, it would be of great interest to apply this framework to the gravitational setting, where a weak-field expansion would lead to diagrams similar to the electromagnetic ones considered here, except for self-interactions in the bulk giving rise to tail effects [71, 96]. A generalization to curved space would be equally exciting and would allow for applications to the self-force expansion of extreme-mass-ratio binaries [45, 97-100].

ACKNOWLEDGMENTS

I would like to thank Alexander Broll, Gustav Mogull, Jung-Wook Kim, Raj Patil, Jan Plefka, Muddu Saketh, Jan Steinhoff and Justin Vines for very useful discussions. I am also grateful to Raj Patil, Jan Plefka and Jan Steinhoff for comments on an earlier draft of this work. I would also like to thank the anonymous referee for useful comments. GUJ's research is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Projektnummer 417533893/GRK2575 "Rethinking Quantum Field Theory".

SUPPLEMENTARY MATERIAL

Worldline Action in General Dimensions.—All four-dimensional Levi-Civita symbols are eliminated using the standard formula (21). Using this formula we find the d-dimensional expression (22) for the worldline interaction terms. The EOMs of the main text may also be generalized to d dimensions by eliminating the Levi-Civita symbols. In that case, however, one cannot easily work with the spin vector as in Eq. (8), but must instead work with either the spin tensor or the Grassmann vectors.

Radiative EM Potential.—We define the radiative propagator prescription (first considered in relation to the ALD equation by Dirac in Ref. [3]) as the difference of the retarded and advanced potentials. In analogy with our dimensionally regularized integral family $I^{\mu_1\ldots\mu_n}_{\rm DimReg}(\omega)$ from Eq. (17), we define an integral family with the radiative prescription (23). Here, the complex conjugation denoted by an asterisk turns the retarded propagator into an advanced one. By swapping signs in the integrand of Eq. (17), $k^\mu \to -k^\mu$, $\omega \to -\omega$, one may relate the complex conjugate of the integral to itself, and inserting Eq. (24) into Eq. (23) shows that in d = 4 the two prescriptions are identical: $I^{\mu_1\ldots\mu_n}_{\rm rad} = I^{\mu_1\ldots\mu_n}_{\rm DimReg} + O(d-4)$. Note, however, that if subleading parts in (d-4) played a role, this identification would no longer hold.

An alternative way of deriving the regulated EOMs of the main text would be to split $F^{\mu\nu}(z)$ into two parts, with analogous splits for $E^\mu$ and $B^\mu$ (25).
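As a hedged transcription of this split and of Dirac's radiative prescription as just defined (the factor 1/2 is the standard convention for Dirac's radiation field, but it is an assumption here since the excerpt's displays are lost):

$F^{\mu\nu}(z) = F^{\mu\nu}_{\rm ext}(z) + F^{\mu\nu}_{\rm rad}(z), \qquad A^\mu_{\rm rad}(x) = \tfrac{1}{2}\big(A^\mu_{\rm ret}(x) - A^\mu_{\rm adv}(x)\big).$

The point of the prescription is that the singular static self-field cancels between the retarded and advanced pieces, so $F^{\mu\nu}_{\rm rad}$ is regular on the worldline.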
The first external part $F^{\mu\nu}_{\rm ext}$ is the field strength of $A^\mu_{\rm ext}$ introduced in Eq. (9), and the second radiative part $F^{\mu\nu}_{\rm rad}$ is the field strength of the fluctuation $\Delta A^\mu$ (computed either with the dimensionally regularized integrals or with the identical radiative ones). This split of the EM fields may be inserted directly into the initial EOMs Eqs. (6) and (8), giving rise to the regulated EOMs (Eqs. (19) and (20) for the trajectory). As an example, for the electric polarization terms, where the initial EOM is quadratic in $E^\mu$, we get three kinds of terms in the regulated EOMs, with zero, one or two insertions of $E^\mu_{\rm rad}$, corresponding to $f^\mu_{\rm ext}$, $f^\mu_E$ and $f^\mu_{E^2}$ respectively. Below, we describe how this alternative method was used to check our results.

Radiative Field Strength of a Generic Dipole.—In Ref. [9], $F^{\mu\nu}_{\rm rad}$ was computed for an arbitrary dipole moment $Q^{\mu\nu}(\tau)$ of a point-like particle. Thus, considering a point-particle current, the radiative field strength tensor as reported in Eqs. (7) and (24) of Ref. [9] reads (converted to our conventions) as in Eq. (27). This result provides an independent check of the regulated EOMs of the main text: the spin and susceptibility effects considered there may all be characterized by the dipole moment (28), with magnetic moment $\partial U/\partial B_\mu = \frac{gq}{2m}S^\mu + c_B B^\mu(z)$ and electric moment $\partial U/\partial E_\mu = c_E E^\mu(z)$. We have verified that, upon insertion of this dipole moment in Eq. (27) for $F^{\mu\nu}_{\rm rad}$, we reproduce the regulated EOMs of the main text from the initial EOMs by insertion of the split Eq. (25). We note that, since the dipole moment depends on the EM fields, one must also insert the split there.

Self-Force due to the Effective Coupling a·E.—In order to compare with the results of Ref. [17], we consider self-force effects due to the finite-size coupling $c\,a\cdot E(z)$ (with $a^\mu = \ddot{z}^\mu$). We define the action (29), with finite-size coupling c and explicit time-reparametrization invariance of the interaction terms in order to use proper time. This action gives rise to the equation of motion (30) (with $m a^\mu = f^\mu$). Using the same methodology as in the main text, we derive the regularized equations of motion (31), where the external force $f^\mu_{\rm ext}$ is given by Eq. (30) evaluated on the external EM fields and the second term is the ALD self-force. The third term gives the self-force corrections due to the finite-size coupling c and reads as in Eq. (32). This result should be compared with Eq. (24) of Ref. [17]. The two results (Eq. (32) and Eq. (24) of Ref. [17]) are in complete agreement, except for an overall factor of 4π (due to different electromagnetic units) and a relative sign of the second term.

As with the main results of this letter, the radiative field strength tensor of Ref. [9] reported in Eq. (27) provides a strong independent check of the self-force result Eq. (32). In the case of the finite-size term $c\,a\cdot E$, the dipole moment reads $Q^{\mu\nu} = 2c\,a^{[\mu}\dot{z}^{\nu]}$. Going through the same process described below Eq. (28), we have independently checked the result Eq. (32).

Instantaneous Loss of Four-Momentum.—In the appendix of Ref. [10] one finds the instantaneous radiative loss of four-momentum of a spinning, charged point-like particle which, following Ref. [10], we denote $\dot{P}^\mu$. The spinning part of the self-force $f^\mu_M$ is consistent with the expression for $\dot{P}^\mu$ given there, in the sense that (up to a certain term to be discussed) they differ only by a total time derivative. Thus, the spinning part of the self-force may be written as in Eq. (33), where the right-hand side of the first line agrees exactly with $\dot{P}^\mu$ given in Ref. [10].
Apart from the correction term on the left-hand side of the first line, this agreement is expected, since the total radiated momentum is the time integral of the self-force, so the self-force and $\dot{P}^\mu$ can differ only by a total time derivative, which can be neglected under integration. Let us then discuss the presence of the term proportional to $E_{\rm rad}$ in Eq. (33). Essentially, in the regulated EOM of the main text we have cancelled a self-force contribution against an external contribution to the total force. This is most easily understood by considering the initial force, Eq. (6), where, in the main text, we have eliminated a term which vanishes upon using the EOMs but which initially is present when one derives the EOMs from the action. This term, proportional to $S\cdot(a\times E)$, is easily seen to be zero (at leading order in spins) using the EOMs, but its individual external and radiative parts do not vanish by themselves. It is thus the radiative part of that term which has to be added to $f^\mu_M$ in order to recover $\dot{P}^\mu$.
6,267.8
2023-11-07T00:00:00.000
[ "Physics" ]
A Constructive Sharp Approach to Functional Quantization of Stochastic Processes

We present a constructive approach to the functional quantization problem of stochastic processes, with an emphasis on Gaussian processes. The approach is constructive, since we reduce the infinite-dimensional functional quantization problem to a finite-dimensional quantization problem that can be solved numerically. Our approach achieves the sharp rate of the minimal quantization error and can be used to quantize the path space of Gaussian processes and also, for example, Lévy processes.

Introduction

We consider a separable Banach space $(E, \|\cdot\|)$ and a Borel random variable $X: (\Omega, \mathcal{F}, \mathbb{P}) \to (E, \mathcal{B}(E))$ with finite r-th moment $\mathbb{E}\|X\|^r$ for some $r \in [1, \infty)$. For a given natural number $n \in \mathbb{N}$, the quantization problem consists in finding a set $\alpha \subset E$ with ${\rm card}(\alpha) \le n$ that minimizes

$e_r(X, E, \alpha) := \big(\mathbb{E}\min_{a\in\alpha}\|X - a\|^r\big)^{1/r}. \quad (1.1)$

Such sets α are called n-codebooks or n-quantizers. The corresponding infimum

$e_{n,r}(X, E) := \inf_{\alpha\subset E,\,{\rm card}(\alpha)\le n} e_r(X, E, \alpha) \quad (1.2)$

is called the n-th $L^r$-quantization error of X in E, and any n-quantizer α fulfilling $e_r(X, E, \alpha) = e_{n,r}(X, E)$ (1.3) is called an r-optimal n-quantizer. For a given n-quantizer α one defines the nearest neighbor projection

$\pi_\alpha(x) := \sum_{a\in\alpha} a\,\mathbf{1}_{C_a(\alpha)}(x), \quad (1.4)$

where the Voronoi partition $\{C_a(\alpha),\, a\in\alpha\}$ is defined as a Borel partition of E satisfying

$C_a(\alpha) \subset \big\{x\in E : \|x-a\| = \min_{b\in\alpha}\|x-b\|\big\}. \quad (1.5)$

The random variable $\pi_\alpha(X)$ is called the α-quantization of X. One easily verifies that $\pi_\alpha(X)$ is the best quantization of X in α ⊂ E, which means that for every random variable Y with values in α we have

$\big(\mathbb{E}\|X - \pi_\alpha(X)\|^r\big)^{1/r} \le \big(\mathbb{E}\|X - Y\|^r\big)^{1/r}. \quad (1.6)$

Applications of quantization go back to the 1940s, when quantization was used in the finite-dimensional setting $E = \mathbb{R}^d$, called optimal vector quantization, for signal compression and information processing (see, e.g., [1, 2]). Since the beginning of the 21st century, quantization has been applied for example in finance, especially for pricing path-dependent and American-style options. Here, vector quantization [3] as well as functional quantization [4, 5] is useful. The terminology functional quantization is used when the Banach space E is a function space, such as $E = (L^p[0,1], \|\cdot\|_p)$ or $E = (C[0,1], \|\cdot\|_\infty)$. In this case, the realizations of X can be seen as the paths of a stochastic process.

A question of theoretical as well as practical interest is the issue of high-resolution quantization, which concerns the behavior of $e_{n,r}(X, E)$ as n tends to infinity. By separability of $(E, \|\cdot\|)$ we can choose a dense subset $\{c_i,\, i\in\mathbb{N}\}$, and we deduce in view of (1.2) that $e_{n,r}(X, E)$ tends to zero as n tends to infinity. A natural question is then whether it is possible to describe the asymptotic behavior of $e_{n,r}(X, E)$. It will be convenient to write $a_n \sim b_n$ for sequences $(a_n)_{n\in\mathbb{N}}$ and $(b_n)_{n\in\mathbb{N}}$ if $\lim_{n\to\infty} a_n/b_n = 1$.

In the finite-dimensional setting $(\mathbb{R}^d, \|\cdot\|)$ this behavior can be satisfactorily described by the Zador Theorem (see [6] for non-singular distributions $\mathbb{P}_X$). In the infinite-dimensional case, no such global result holds so far without additional restrictions. To describe one of the most famous results in this field, we call a measurable function $\rho: [s,\infty) \to (0,\infty)$, for some $s \ge 0$, regularly varying at infinity with index $b \in \mathbb{R}$ if for every $c > 0$, $\lim_{x\to\infty} \rho(cx)/\rho(x) = c^b$.

Theorem 1.1 (see [7]). Let X be a centered Gaussian random variable with values in the separable Hilbert space $(H, \langle\cdot,\cdot\rangle)$ and $(\lambda_n)_{n\in\mathbb{N}}$ the decreasing eigenvalues of the covariance operator $C_X: H \to H$, $u \mapsto \mathbb{E}[\langle u, X\rangle X]$ (which is a symmetric trace class operator). Assume that $\lambda_n \sim \rho(n)$ for some regularly varying function ρ with index $-b < -1$. Then the quantization error satisfies a sharp asymptotics of the form $e_{n,2}(X, H) \sim C(b)\,\omega(\log n)^{-1/2}$ (1.9), with an explicit constant C(b) depending only on b, where $\omega(x) := 1/(x\rho(x))$.
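Since the nearest-neighbor projection (1.4) and the distortion (1.1) are what any numerical treatment ultimately evaluates, the following minimal Monte Carlo sketch may help fix ideas; the function names, the toy Gaussian distribution and the arbitrary codebook are illustrative choices, not taken from the source.

import numpy as np

def distortion(samples, codebook, r=2.0):
    # Monte Carlo estimate of e_r(X, R^d, alpha)^r = E[min_a ||X - a||^r], cf. (1.1)
    d = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
    return np.mean(d.min(axis=1) ** r)

def nearest_neighbor_projection(samples, codebook):
    # pi_alpha(X): map each sample to its closest codeword (Voronoi cell), cf. (1.4)
    d = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
    return codebook[d.argmin(axis=1)]

rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 2))     # toy X ~ N(0, I_2)
alpha = rng.standard_normal((10, 2))      # arbitrary (non-optimal) 10-codebook
print(distortion(X, alpha) ** 0.5)        # estimate of e_2(X, alpha)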
Note that any change of ∼ in the assumption λ_n ∼ ρ(n) to either $\lesssim$, ≈ or $\gtrsim$ leads to the same change in (1.9). Theorem 1.1 can also be extended to the index b = 1 (see [7]). Furthermore, a generalization to arbitrary moments r (see [8]), as well as similar results for special Gaussian random variables and diffusions in non-Hilbertian function spaces (see, e.g., [9-11]), have been developed. Moreover, several authors have established a precise link between the quantization error and the behavior of the small ball function of a Gaussian measure (see, e.g., [12, 13]), which can be used to derive asymptotics of quantization errors. More recently, sharp optimal rates have been developed for several types of Lévy processes (see, e.g., [14-17]). Coming back to the practical use of quantizers as good approximations of a stochastic process, one is strongly interested in a constructive approach that allows one to implement the coding strategy and to compute, at least numerically, good codebooks.

Considering again Gaussian random variables in a Hilbert space setting, the proof of Theorem 1.1 shows how to construct asymptotically r-optimal n-quantizers for these processes, i.e. sequences of n-quantizers $(\alpha_n)_{n\in\mathbb{N}}$ satisfying

$e_r(X, E, \alpha_n) \sim e_{n,r}(X, E), \quad n \to \infty. \quad (1.10)$

These quantizers can be constructed by reducing the quantization problem to that of a finite-dimensional normally distributed random variable. Even if almost no explicit formulas are known for optimal codebooks in finite dimensions, their existence is guaranteed (see [6, Theorem 4.12]), and there exist many deterministic and stochastic numerical algorithms to compute optimal codebooks (see e.g. [18, 19] or [20]). Unfortunately, one needs to know explicitly the eigenvalues and eigenvectors of the covariance operator $C_X$ to pursue this approach.

If we consider other non-Hilbertian function spaces $(E, \|\cdot\|)$ or non-Gaussian random variables in an infinite-dimensional Hilbert space, much less is known about how to construct asymptotically optimal quantizers. Most approaches to calculating the asymptotics of the quantization error are either non-constructive (e.g. [12, 13]) or tailored to one specific process type (e.g. [9-11]), or the constructed quantizers do not achieve the sharp rate in the sense of (1.10) (e.g. [17] or [20]) but only the weak rate

$e_r(X, E, \alpha_n) \approx e_{n,r}(X, E), \quad n \to \infty. \quad (1.11)$

In this paper, we develop a constructive approach to calculating sequences of asymptotically r-optimal n-quantizers in the sense of (1.10) for a broad class of random variables in infinite-dimensional Banach spaces (Section 2). Constructive means here that we reduce the quantization problem to the quantization problem of an $\mathbb{R}^d$-valued random variable, which can be solved numerically. This approach can either be used in Hilbert spaces when the eigenvalues and eigenvectors of the covariance operator of a Gaussian random variable are unknown (Sections 3.1 and 3.2), or for quantization problems in different Banach spaces (Sections 4 and 5).

In Section 4, we discuss Gaussian random variables in $(C[0,1], \|\cdot\|_\infty)$. This part is related to the PhD thesis of Wilbertz [20]. More precisely, we sharpen his constructive results by showing that the quantizers constructed in the thesis also achieve the sharp rate for the asymptotic quantization error in the sense of (1.10). Moreover, we show that the dimensions of the subspaces in which these quantizers are contained can be lowered without losing the sharp asymptotics property.
In Section 5, we use some ideas of Luschgy and Pagès [17] and develop, for Gaussian random variables and a broad class of Lévy processes, asymptotically optimal quantizers in the Banach space $(L^p[0,1], \|\cdot\|_p)$.

It is worth mentioning that all these quantizers can be constructed without knowing the true rate of the quantization error. More precisely, we only need a rough lower bound for the quantization error, say $e_{n,r}(X, E) \gtrsim C_1(\log n)^{-b_1}$, while the true rate, attained for optimal but still unknown constants $C_2, b_2$, may remain unknown. The crucial factors for the numerical implementation are the dimensions of the subspaces in which the asymptotically optimal quantizers are contained. We will calculate the dimensions of the subspaces obtained through our approach, and we will see that, for all analyzed Gaussian processes and also for many Lévy processes, we come very close to the known asymptotics of the optimal dimension in the case of Gaussian processes in infinite-dimensional Hilbert spaces.

We give some important examples of Gaussian and Lévy processes in Section 6, and finally illustrate some of our results in Section 7.

Notations and Definitions

If not explicitly defined otherwise, the following notations hold throughout the paper.

(i) We denote by X a Borel random variable in the separable Banach space $(E, \|\cdot\|)$ with ${\rm card}({\rm supp}\,\mathbb{P}_X) = \infty$.
(ii) $\|\cdot\|$ will always denote the norm in E, whereas $\|\cdot\|_{L^r(\mathbb{P})}$ denotes the norm in $L^r(\Omega, \mathcal{F}, \mathbb{P})$.
(iii) The scalar product in a Hilbert space H will be denoted by $\langle\cdot,\cdot\rangle$.
(iv) The smallest integer above a given real number x will be denoted by $\lceil x\rceil$.
(v) A sequence $(g_j)_{j\in\mathbb{N}} \in E^{\mathbb{N}}$ is called admissible for a centered Gaussian random variable X in E if and only if, for any sequence $(\xi_i)_{i\in\mathbb{N}}$ of independent N(0,1)-distributed random variables, the series $\sum_{i=1}^\infty \xi_i g_i$ converges a.s. in $(E, \|\cdot\|)$ to a random variable with the distribution of X. A precise characterization of admissible sequences can be found in [21].
(vi) An orthonormal system (ONS) $(h_i)_{i\in\mathbb{N}}$ is called rate optimal for X in the Hilbert space H if and only if the projection error $\mathbb{E}\|X - \sum_{i=1}^m \langle h_i, X\rangle h_i\|^2$ decays, as m → ∞, at the same rate as the error of the best m-dimensional projection.

Asymptotically Optimal Quantizers

The main idea is contained in the subsequent abstract result. The proof is based on elementary but very useful properties of quantization errors (Lemma 2.1, see [22]).

Condition 1. There exist finite-dimensional subspaces $F_m \subset E$ and linear operators $V_m: E \to F_m$ with operator norm at most one, i.e. $\|V_m(x)\| \le \|x\|$ (2.5).

Condition 2. There exist linear isometric and surjective operators $\varphi_m: F_m \to (\mathbb{R}^m, \|\cdot\|_m)$ for suitable norms on $\mathbb{R}^m$.

Condition 3. There exist random variables $Z_m$ for $m \in J$ in E with $Z_m \stackrel{d}{=} X$, such that $\|Z_m - V_m(Z_m)\|_{L^r(\mathbb{P})} \to 0$ as m → ∞ along J.

Remark 2.2. The crucial point in Condition 1 is the norm-one restriction for the operators $V_m$. Condition 2 becomes important when constructing the quantizers in $\mathbb{R}^m$ equipped with, in the best case, some well-known norm. As we will see in the proof of the subsequent theorem, to show asymptotic optimality of a constructed sequence of quantizers one only needs a rough lower bound for the asymptotic quantization error. In fact, this lower bound allows us, in combination with Condition 3, to choose explicitly a sequence $(m_n)_{n\in\mathbb{N}} \in J^{\mathbb{N}}$ such that the approximation error along $(m_n)$ is negligible compared with the quantization error (condition (2.7)).

Theorem 2.3. Assume that Conditions 1-3 hold for some infinite subset $J \subset \mathbb{N}$. One chooses a sequence $(m_n)_{n\in\mathbb{N}} \in J^{\mathbb{N}}$ such that (2.7) is satisfied. For $n \in \mathbb{N}$, let $\alpha_n$ be an r-optimal n-quantizer for $\varphi_{m_n}(V_{m_n}(Z_{m_n}))$ in $\mathbb{R}^{m_n}$. Then $(\varphi_{m_n}^{-1}(\alpha_n))_{n\in\mathbb{N}}$ is an asymptotically r-optimal sequence of n-quantizers for X in E, i.e. $e_r(X, E, \varphi_{m_n}^{-1}(\alpha_n)) \sim e_{n,r}(X, E)$ as n → ∞.
Proof. Using Condition 3 and the fact that $e_{n,r}(X, E) > 0$ for all $n \in \mathbb{N}$ (since ${\rm card}({\rm supp}\,\mathbb{P}_X) = \infty$), we can choose a sequence $(m_n)_{n\in\mathbb{N}}$ fulfilling (2.7). Using Lemma 2.1 and Condition 2, we see that $\varphi_{m_n}^{-1}(\alpha_n)$ is an r-optimal n-quantizer for $V_{m_n}(Z_{m_n})$ in $F_{m_n}$. Then, by using Condition 1, (2.7) and Lemma 2.1, we get (2.9). The last equivalence of the assertion follows from (1.6).

Remark 2.5. We will usually choose $Z_m = X$ for all $m \in \mathbb{N}$ (with an exception in Section 3) and $J = \mathbb{N}$.

Remark 2.6. The crucial factor for the numerical implementation of the procedure is the sequence of dimensions $(m_n)_{n\in\mathbb{N}}$ of the subspaces $(F_{m_n})_{n\in\mathbb{N}}$. For the well-known case of Brownian motion in the Hilbert space $H = L^2[0,1]$ it is known that this dimension sequence can be chosen as $m_n \approx \log n$, n → ∞. In the following examples we will see that we can often obtain similar orders $(\log n)^c$ for constants c just slightly higher than one.

We point out that there is a non-asymptotic version of Theorem 2.3 for nearly optimal n-quantizers, that is, for n-quantizers that are optimal up to some ε > 0. Its proof is analogous to the proof of Theorem 2.3.

Proposition 2.7. Assume that Conditions 1-3 hold. Let ε > 0 and, for $n \in \mathbb{N}$, set $\xi_n := \varphi_m(V_m(Z_m))$. Then the corresponding near-optimality estimate holds for every $n \in \mathbb{N}$ and every r-optimal n-quantizer of $\xi_n$.

Gaussian Processes with Hilbertian Path Space

In this chapter, let X be a centered Gaussian random variable in the separable Hilbert space $(H, \langle\cdot,\cdot\rangle)$. Following the approach used in the proof of Theorem 1.1, we have for every sequence $(\xi_i)_{i\in\mathbb{N}}$ of independent N(0,1)-distributed random variables

$X \stackrel{d}{=} \sum_{i=1}^\infty \sqrt{\lambda_i}\,\xi_i f_i \quad (3.1)$

(Karhunen-Loève expansion), where the $\lambda_i$ denote the eigenvalues and the $f_i$ the corresponding orthonormal eigenvectors of the covariance operator $C_X$ of X. If these parameters are known, we can choose a sequence $(d_n)_{n\in\mathbb{N}}$ such that a sequence of optimal quantizers $\alpha_n$ for $X^{d_n} = \sum_{i=1}^{d_n}\sqrt{\lambda_i}\,\xi_i f_i$ is asymptotically optimal for X in E.

In order to construct asymptotically optimal quantizers for Gaussian random variables with unknown eigenvalues or eigenvectors of the covariance operator, we start from more general expansions. In fact, we just need one of the two orthogonalities, either in $L^2(\mathbb{P})$ or in H. Before we use these representations of X to find suitable triples $(V_m, F_m, \varphi_m)$ as in Theorem 2.3, note that for Gaussian random variables in H fulfilling suitable assumptions we know:

(1) Let $(h_i)_{i\in\mathbb{N}}$ be an orthonormal basis of H. Then $X = \sum_{i=1}^\infty \langle h_i, X\rangle h_i$ a.s. (3.2). Compared to (3.1), the coefficients $\langle h_i, X\rangle$ are still Gaussian but generally not independent.

(2) Let $(g_j)_{j\in\mathbb{N}}$ be an admissible sequence for X in H, so that $X \stackrel{d}{=} \sum_{j=1}^\infty \xi_j g_j$ (3.3). Compared to (3.1), the sequence $(g_j)_{j\in\mathbb{N}}$ is generally not orthogonal.

Moreover, $e_{n,2}(X, H) \approx e_{n,s}(X, H)$, n → ∞ (3.4), for all s ≥ 1; see [13]. Thus, we will focus on the case s = 2 when searching for lower bounds for the quantization errors.

Orthonormal Basis

Let $(h_m)_{m\in\mathbb{N}}$ be an orthonormal basis of H. For this subsection we use the following notations.

(1) We set $F_m := {\rm span}\{h_1, \ldots, h_m\}$.
(2) We set $V_m$ to be the orthogonal projection onto $F_m$, $V_m(x) := \sum_{i=1}^m \langle h_i, x\rangle h_i$.
(3) Define the linear, surjective and isometric operators $\varphi_m$ by $\varphi_m(h_i) := e_i$, where $e_i$ denotes the i-th unit vector in $\mathbb{R}^m$, 1 ≤ i ≤ m.

Theorem 3.1. Assume that the eigenvalue sequence $(\lambda_j)_{j\in\mathbb{N}}$ of the covariance operator $C_X$ satisfies $\lambda_j \approx j^{-b}$ with $-b < -1$, and let ε > 0 be arbitrary. Assume further that $(h_j)_{j\in\mathbb{N}}$ is a rate optimal ONS for X in H. One sets $m_n := \lceil(\log n)^{1+\varepsilon}\rceil$ for $n \in \mathbb{N}$. Then, for every sequence $(\alpha_n)_{n\in\mathbb{N}}$ of r-optimal n-quantizers for $\varphi_{m_n}(V_{m_n}(X))$, the sequence $(\varphi_{m_n}^{-1}(\alpha_n))$ is asymptotically r-optimal for X in H as n → ∞.
Proof. Let $(f_j)_{j\in\mathbb{N}}$ be the corresponding orthonormal eigenvector sequence of $C_X$. Classic eigenvalue theory yields, for every $m \in \mathbb{N}$, that the best m-dimensional projection error equals $\sum_{j>m}\lambda_j$ (3.7). Combining this with the rate optimality of the ONS $(h_j)_{j\in\mathbb{N}}$ for X, we get the corresponding bound for $\mathbb{E}\|X - V_m(X)\|^2$ (3.8). Using the equivalence of the r-norms of Gaussian random variables [23, Corollary 3.2], and since $X - V_{m_n}(X)$ is Gaussian, we get the analogous bound for all r ≥ 1 (3.9). With ω as in Theorem 1.1, we get by using (3.4) and Theorem 1.1 the weak asymptotics $e_{n,r}(X, H) \approx \omega(\log n)^{-1/2} \approx (\log n)^{-(b-1)/2}$, and the assertion follows from Theorem 2.3.

Admissible Sequences

In order to show that linear operators $V_m$ similar to those used above are suitable for the requirements of Theorem 2.3, we need some preparation. Since the covariance operator $C_X$ of a Gaussian random variable is symmetric and compact (in fact trace class), we will use a well-known result concerning such operators. This result can be used for quantization in the following way.

Lemma 3.2. Let X be a centered Gaussian random variable with values in the Hilbert space H and $X = X_1 + X_2$, where $X_1$ and $X_2$ are independent centered Gaussians. Then the eigenvalues of the corresponding covariance operators satisfy the comparison inequalities (3.12)-(3.13). Indeed, the covariance operator of a centered Gaussian random variable is positive semidefinite; hence, using a result on the relation of the eigenvalues of such operators (see, e.g., [24, page 213]), we get the inequalities (3.12).

Let $(g_i)_{i\in\mathbb{N}}$ be an admissible sequence for X, and assume that $\sum_{i=1}^\infty \xi_i g_i = X$ a.s. In this subsection, we use the following notations.

(1) We set $F_m := {\rm span}\{g_1, \ldots, g_m\}$.
(2) We define $V_m$ on the eigenbasis by mapping $f_j$, for j ≤ m, to the corresponding rescaled eigenvector $f_j^{(m)}$ of $C_{X^{(m)}}$, and $V_m(f_j) := 0$ for j > m. Here $\lambda_j$ and $f_j$ denote the eigenvalues and corresponding eigenvectors of $C_X$, and $\lambda_j^{(m)}$ and $f_j^{(m)}$ those of $C_{X^{(m)}}$, with $X^{(m)} := \sum_{i=1}^m \xi_i g_i$. Furthermore, it is important to mention that one does not need to know $\lambda_j$ and $f_j$ explicitly to construct the subsequent quantizers, since for any $m \in \mathbb{N}$ one can find a random variable $Z_m \stackrel{d}{=} X$ such that $V_m(Z_m) = \sum_{i=1}^m \xi_i g_i$ (see the proof of Theorem 3.3), which is explicitly known and sufficient for the construction.
(3) Define the linear, surjective and isometric operators $\varphi_m$ by (3.17), where $e_i$ denotes the i-th unit vector of $\mathbb{R}^m$ for 1 ≤ i ≤ m.

Theorem 3.3. Assume that the eigenvalue sequence $(\lambda_j)_{j\in\mathbb{N}}$ of the covariance operator $C_X$ satisfies $\lambda_j \approx j^{-b}$ for some b > 1, and let ε > 0 be arbitrary. Assume that $(g_j)_{j\in\mathbb{N}}$ is a rate optimal admissible sequence for X in H. One sets $m_n := \lceil(\log n)^{1+\varepsilon}\rceil$ for $n \in \mathbb{N}$. Then there exist random variables $Z_m$, $m \in \mathbb{N}$, with $Z_m \stackrel{d}{=} X$ such that, for every sequence $(\alpha_n)_{n\in\mathbb{N}}$ of r-optimal n-quantizers for $\varphi_{m_n}(V_{m_n}(Z_{m_n}))$, the sequence $(\varphi_{m_n}^{-1}(\alpha_n))$ is asymptotically r-optimal for X in H as n → ∞.

Proof. Linearity of $(V_m)_{m\in\mathbb{N}}$ follows from the orthogonality of the eigenvectors. In view of the inequalities for the eigenvalues in Lemma 3.2 and the orthonormality of the family $(f_i)_{i\in\mathbb{N}}$, we have $\|V_m(h)\| \le \|h\|$ for every h. Note next that for every $m \in \mathbb{N}$ there exist independent N(0,1)-distributed random variables completing $(\xi_i)_{1\le i\le m}$; we then choose the random variables $Z_m$ accordingly, where $(\xi_i)_{1\le i<\infty}$ is a sequence of independent N(0,1)-distributed random variables. Using the rate optimality of the admissible sequences $(g_j)_{j\in\mathbb{N}}$ and $(\sqrt{\lambda_j}\,f_j)_{j\in\mathbb{N}}$, we obtain the approximation bound (3.23). Using the equivalence of the r-norms of Gaussian random variables [23, Corollary 3.2], and since $X - V_m(X)$ is Gaussian, we get the same bound for all r ≥ 1 (3.24). With ω as in Theorem 1.1, we get by using (3.4) and Theorem 1.1 the weak asymptotics $e_{n,r}(X, H) \approx \omega(\log n)^{-1/2} \approx (\log n)^{-(b-1)/2}$, n → ∞. Therefore the sequence $(m_n)_{n\in\mathbb{N}}$ satisfies (2.7), and the assertion follows from Theorem 2.3.
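To make the reduction to $\mathbb{R}^m$ concrete, here is a minimal sketch assuming the classical Karhunen-Loève data of Brownian motion on [0,1] (eigenvalues $\lambda_j = (\pi(j-1/2))^{-2}$, eigenfunctions $\sqrt{2}\sin((j-1/2)\pi t)$); the truncation level m, the grid and the toy codebook are illustrative choices, not prescriptions from the source.

import numpy as np

# Classical Karhunen-Loeve data of Brownian motion on [0, 1] (standard facts)
def kl_eigenvalue(j):                 # lambda_j = 1 / (pi (j - 1/2))^2
    return 1.0 / (np.pi * (j - 0.5)) ** 2

def kl_eigenfunction(j, t):           # f_j(t) = sqrt(2) sin((j - 1/2) pi t)
    return np.sqrt(2.0) * np.sin((j - 0.5) * np.pi * t)

def path_quantizer(codebook_Rm, t_grid):
    # Map an R^m codebook for (xi_1, ..., xi_m) back to paths:
    # x_a(t) = sum_j sqrt(lambda_j) a_j f_j(t), i.e. the phi_m^{-1} step of Section 3.1
    m = codebook_Rm.shape[1]
    js = np.arange(1, m + 1)
    basis = np.sqrt(kl_eigenvalue(js))[:, None] * kl_eigenfunction(js[:, None], t_grid[None, :])
    return codebook_Rm @ basis        # (n, len(t_grid)) array of quantizer paths

# toy codebook in R^m; in practice one would run CLVQ / Lloyd on N(0, I_m)
m, n = 4, 10
rng = np.random.default_rng(1)
alpha = rng.standard_normal((n, m))
paths = path_quantizer(alpha, np.linspace(0.0, 1.0, 200))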
Comparison of the Different Schemes

At least in the case r = 2, we have a strong preference for the method described in Section 3.1. We use the notations of the subsections above, including an additional index i = 1, 2, where $\alpha_n^{(i)}$, i = 1, 2, are defined as in Theorems 3.1 and 3.3. Note that for this purpose the codebook size n and the subspace dimension $\dim F_m = m$ can be chosen arbitrarily (i.e. m does not depend on n). The ONS $(h_i)_{i\in\mathbb{N}}$ is chosen as the ONS derived by the Gram-Schmidt procedure from the admissible sequence $(g_j)_{j\in\mathbb{N}}$ for the Gaussian random variable X in the Hilbert space H, so that the definition of $F_m$ coincides in the two subsections. The comparison estimate is stated in (3.26).

Proof. Consider for X the decomposition $X = {\rm pr}_{F_m^\perp}(X) + {\rm pr}_{F_m}(X)$. The key is the orthogonality of the two parts, which gives the two equalities in the calculation (3.27). The inequality (*) follows from the optimality of the codebook in the scheme of Section 3.1.

Gaussian Processes with Paths in C([0,1])

In the previous section, where we worked with Gaussian random variables in Hilbert spaces, we saw that special Hilbertian subspaces, projections and other operators linked to the Gaussian random variable are good tools for developing asymptotically optimal quantizers based on Theorem 2.3. Since we now consider the non-Hilbertian separable Banach space $(C[0,1], \|\cdot\|_\infty)$, we have to find different tools suitable for Theorem 2.3.

The tools used in [20] are B-splines of order $s \in \mathbb{N}$. In the case s = 2, which we consider in the sequel, these splines span the same subspace of $(C[0,1], \|\cdot\|_\infty)$ as the classical Schauder basis. We set, for $x \in [0,1]$, m ≥ 2 and 1 ≤ i ≤ m, the knots $t_i^m := (i-1)/(m-1)$ and the corresponding hat functions $f_i^m$. For the remainder of this subsection we use the following notations.

(1) As subspaces we set $F_m := {\rm span}\{f_j^m,\, 1 \le j \le m\}$.
(2) As linear and continuous operators $V_m: C[0,1] \to F_m$ we take the quasi-interpolant $V_m(f) := \sum_{i=1}^m f(t_i^m)\, f_i^m$.
(3) The linear, surjective and isometric mappings $\varphi_m$ are defined coordinate-wise via the knot values.

It is easy to see that these operators satisfy Conditions 1 and 2. For the application of Theorem 2.3 we need error bounds for the approximation of X by the quasi-interpolant $V_m(X)$. For Gaussian random variables we can provide the following result, based on the smoothness of an admissible sequence for X in E.

Proposition 4.1. Let $(g_j)_{j\in\mathbb{N}}$ be admissible for the centered Gaussian random variable X in $(C[0,1], \|\cdot\|_\infty)$ and assume that the $g_j$ satisfy the smoothness condition of order θ from (4.6). Then, for any ε > 0 and some constant C < ∞, it holds that $(\mathbb{E}\|X - V_m(X)\|_\infty^r)^{1/r} \le C\, m^{-\theta+\varepsilon}$ for every r ≥ 1.

Proof. Using [25, Theorem 1], we get the corresponding estimate for an arbitrary $\varepsilon_1 > 0$, some constant $C_3 < \infty$ and every $k \in \mathbb{N}$. Using [26, Chapter 7, Theorem 7.3], we get, for some constant, an estimate in terms of the modulus of smoothness $\omega(f, \delta)$. For an arbitrary $f \in C^2[0,1]$, a Taylor expansion gives the standard second-order bound. Combining this, we get for an arbitrary $\varepsilon_2 > 0$ and constants $C_5, C_6, C_7 < \infty$, using again the equivalence of Gaussian moments, the bound (4.10). To optimize over k, we choose $k = k(m) = \lceil m^{0.8}\rceil$ and obtain, for some constant C < ∞ and arbitrary ε > 0, the bound (4.11).

Now we are able to prove the main result of this section.

Theorem 4.2. Let X be a centered Gaussian random variable and $(g_j)_{j\in\mathbb{N}}$ an admissible sequence for X in C[0,1] fulfilling the assumptions of Proposition 4.1 with θ = b/2, where the constant b > 1 satisfies $\lambda_j \gtrsim K j^{-b}$, with $(\lambda_j)_{j\in\mathbb{N}}$ denoting the monotone decreasing eigenvalues of the covariance operator $C_X$ of X in $H = L^2[0,1]$ and K > 0. One sets $m_n := \lceil(\log n)^{5/4+\varepsilon}\rceil$ for some ε > 0. Then, for every sequence $(\alpha_n)_{n\in\mathbb{N}}$ of r-optimal n-quantizers for $\varphi_{m_n}(V_{m_n}(X))$, the sequence $(\varphi_{m_n}^{-1}(\alpha_n))$ is asymptotically r-optimal for X in $(C[0,1], \|\cdot\|_\infty)$ as n → ∞.

Proof. For every $m \in \mathbb{N}$, the family $\{f_i^m,\, 1 \le i \le m\}$ forms a partition of unity, so that $\|V_m\|_{\rm op} \le 1$. We get a lower bound for the quantization error $e_{n,r}(X, C[0,1])$ from the inequality $\|f\|_\infty \ge \|f\|_2$ for all $f \in C[0,1] \subset L^2[0,1]$. Consequently, we have $e_{n,r}(X, C[0,1]) \ge e_{n,r}(X, L^2[0,1])$.
(4.15) From Theorem 1.1 and (3.4) we obtain the weak asymptotics (4.16), where ω is given as in Theorem 1.1. Finally, combining (4.16) and Proposition 4.1 for sufficiently small δ > 0 yields (4.17), and the assertion follows from Theorem 2.3.

Processes with Path Space $(L^p[0,1], \|\cdot\|_p)$

Another useful tool for our purposes is the Haar basis in $L^p[0,1]$ for 1 ≤ p < ∞, given by $e_0 := \mathbf{1}_{[0,1]}$ and

$e_{n,k}(t) := 2^{n/2}\big(\mathbf{1}_{[k2^{-n},\,(k+1/2)2^{-n})}(t) - \mathbf{1}_{[(k+1/2)2^{-n},\,(k+1)2^{-n})}(t)\big), \quad n \ge 0,\ 0 \le k < 2^n. \quad (5.1)$

This is an orthonormal basis of $L^2[0,1]$ and a Schauder basis of $L^p[0,1]$.

The Haar basis was used in [17] to construct rate optimal sequences of quantizers for mean regular processes. These processes are specified through the property that

$\|X_t - X_s\|_{L^p(\mathbb{P})} \le \rho(t - s) \quad (5.2)$

for all 0 ≤ s ≤ t ≤ 1, where $\rho: \mathbb{R}_+ \to (0,\infty)$ is regularly varying with index b > 0 at 0, which means that $\lim_{x\to 0}\rho(cx)/\rho(x) = c^b$ for all c > 0. Condition (5.2) also guarantees that the paths $t \mapsto X_t$ lie in $L^p[0,1]$.

For our approach, it will be convenient to define, for $m \in \mathbb{N}$ and 1 ≤ i ≤ m+1, the knots $t_i^m := (i-1)/m$ and the averaging operators $V_m$ given by conditional expectation with respect to the σ-algebra generated by the intervals $[t_i^m, t_{i+1}^m)$. Note that for $f \in L^1[0,1]$, $m = 2^{n+1}$ and $n \in \mathbb{N}_0$, this operator coincides with the partial sum of the Haar expansion. For the remainder of the subsection we use the corresponding subspaces and isometries (5.8).

Theorem 5.1. Let X be a random variable in the Banach space $(E, \|\cdot\|) = (L^p[0,1], \|\cdot\|_p)$ for some $p \in [1,\infty)$, fulfilling the mean pathwise regularity property

$\|X_t - X_s\|_{L^{p\vee r}(\mathbb{P})} \le C|t - s|^a \quad (5.9)$

for constants C, a > 0 and $t > s \in [0,1]$. Moreover, assume that $K(\log n)^{-b} \lesssim e_{n,r}(X, E)$ for constants K, b > 0. Then, for an arbitrary ε > 0 and $m_n := \lceil(\log n)^{(b+\varepsilon)/a}\rceil$, every sequence of r-optimal n-quantizers $(\alpha_n)_{n\in\mathbb{N}}$ for $\varphi_{m_n,p}(V_{m_n}(X))$ in $(\mathbb{R}^{m_n}, \|\cdot\|_p)$ satisfies $e_r(X, L^p[0,1], \varphi_{m_n,p}^{-1}(\alpha_n)) \sim e_{n,r}(X, L^p[0,1])$ as n → ∞.

Proof. As in the subsections above, we check that the sequences $V_m$ and $\varphi_{m,p}$ satisfy Conditions 1-3. Since $V_m(f) = \mathbb{E}_\lambda[f \mid \mathcal{F}_m]$, where $\mathcal{F}_m$ is the σ-algebra generated by the knot intervals, we get for $f \in L^p[0,1]$ with $\|f\|_p \le 1$ and $p \in [1,\infty)$, by Jensen's inequality, the bound (5.12), and thus $\|V_m\|_{\rm op} \le 1$. The operators $\varphi_{m,p}$ satisfy Condition 2 of Theorem 2.3 by (5.13). For Condition 3, we note the pointwise representation (5.14) for $t \in [0,1]$. Using the inequalities above, $\mathbb{E}\|X - V_m(X)\|_p^{p\vee r} \le C_{p\vee r}\, m^{-a(p\vee r)}$ (5.16). Therefore the sequence $(m_n)_{n\in\mathbb{N}}$ satisfies (2.7), and the assertion follows from Theorem 2.3.

Examples

In this section we present some processes that fulfill the requirements of Theorems 3.1, 3.3, 4.2 and 5.1. First, we give some examples of Gaussian processes to which all four theorems apply; second, we describe how our approach can be applied to Lévy processes in view of Theorem 5.1.

Examples 6.1. Gaussian Processes and Brownian Diffusions

(i) Brownian Motion and Fractional Brownian Motion. Let $(X_t^H)_{t\in[0,1]}$ be a fractional Brownian motion with Hurst parameter $H \in (0,1)$ (in the case H = 1/2 we have an ordinary Brownian motion). Its covariance function is given by $\mathbb{E}[X_s^H X_t^H] = \frac{1}{2}\big(s^{2H} + t^{2H} - |t-s|^{2H}\big)$. Note that, except in the case of an ordinary Brownian motion, the eigenvalues and eigenvectors of the fractional Brownian motion are not known explicitly. Nevertheless, the sharp asymptotics of the eigenvalues has been determined (see, e.g., [7]). In [28] the authors constructed an admissible sequence $(g_j)_{j\in\mathbb{N}}$ in C[0,1] that satisfies the requirements of Proposition 4.1 with θ = 1/2 + H.
Furthermore, the eigenvalues $\lambda_j$ of $C_{X^H}$ in $L^2[0,1]$ satisfy $\lambda_j \approx j^{-(1+2H)}$ (see, e.g., [7]), so the requirements of Theorem 4.2 are satisfied. Additionally, this sequence is a rate optimal admissible sequence for $X^H$ in $L^2[0,1]$, so the requirements of Theorem 3.3 are also met. Constructing recursively an orthonormal sequence $(h_j)_{j\in\mathbb{N}}$ by applying the Gram-Schmidt procedure to the sequence $(g_j)_{j\in\mathbb{N}}$ yields a rate optimal ONS for $X^H$ in $L^2[0,1]$ that can be used in the application of Theorem 3.1. In Section 7 we illustrate the quantizers constructed for $X^H$ with this ONS for several Hurst parameters H. Note that there are several other admissible sequences for the fractional Brownian motion which can be applied similarly; see, for example, [29] or [30]. Moreover, for $s, t \in [0,1]$ we have the mean regularity property $\|X_t^H - X_s^H\|_{L^p(\mathbb{P})} = C_p|t-s|^H$, and the asymptotics of the quantization error is given by $e_{n,r}(X^H, L^p[0,1]) \approx (\log n)^{-H}$ for all r, p ≥ 1 (see [13]), so the requirements of Theorem 5.1 are met with a = b = H. Note that in [11] the authors showed the existence of constants $k(H, E)$, for $E = C[0,1]$ and $E = L^p[0,1]$, independent of r, such that $e_{n,r}(X^H, E) \sim k(H, E)(\log n)^{-H}$. Therefore, the quantization errors of the sequences of quantizers constructed via Theorems 3.1, 3.3, 4.2 and 5.1 also fulfill this sharp asymptotics.

(ii) Brownian Bridge. Let $(B_t)_{t\in[0,1]}$ be a Brownian bridge with covariance function $\mathbb{E}[B_s B_t] = \min(s,t) - st$ (6.5). Since the eigenvalues and eigenvectors of the Brownian bridge are explicitly known, we do not have to search for any other admissible sequence or ONS for $(B_t)_{t\in[0,1]}$ to be applied in $H = L^2[0,1]$. This eigenvalue-eigenvector admissible sequence also satisfies the requirements of Theorem 4.2. The mean pathwise regularity of the Brownian bridge follows from $\|B_t - B_s\|_{L^p(\mathbb{P})} \le C_p|t-s|^{1/2}$ for any p ≥ 1. Combining [31, Theorem 3.7] and [13, Corollary 1.3] yields $e_{n,r}(B, L^p[0,1]) \approx (\log n)^{-1/2}$ for all r, p ≥ 1, so Theorem 5.1 can be applied with a = b = 1/2.

(iii) Stationary Ornstein-Uhlenbeck Process. The stationary Ornstein-Uhlenbeck process $(X_t)_{t\in[0,1]}$ is a Gaussian process given through the exponential covariance function $\mathbb{E}[X_s X_t] = \frac{\sigma^2}{2\alpha}\,e^{-\alpha|t-s|}$ with parameters α, σ > 0. An admissible sequence for the stationary Ornstein-Uhlenbeck process in C[0,1] and $L^2[0,1]$ can be found in [21]. This sequence can be applied to Theorems 3.3 and 4.2 and, after Gram-Schmidt orthogonalization, also to Theorem 3.1. According to [13] we have $e_{n,r}(X, L^p[0,1]) \approx (\log n)^{-1/2}$ for all r, p ≥ 1. Furthermore, the process has mean pathwise regularity of order 1/2, and therefore we can choose a = b = 1/2 to apply Theorem 5.1.

(iv) Fractional Ornstein-Uhlenbeck Process. The fractional Ornstein-Uhlenbeck process $(X_t^H)_{t\in[0,1]}$, for $H \in (0,2)$, is a continuous stationary centered Gaussian process whose covariance function is of fractional exponential type. In [22] the authors constructed an admissible sequence $(g_j^H)_{j\in\mathbb{N}}$ for $H \in (0,1)$ that satisfies the requirements of Proposition 4.1 with θ = 1/2 + H/2. Since the eigenvalues $\lambda_j$ of $C_{X^H}$ in $L^2[0,1]$ satisfy $\lambda_j \approx j^{-(1+H)}$, we get again that the assumptions of Theorem 4.2 are satisfied. Similarly, we can use this sequence in Theorems 3.3 and 3.1.
(v) Brownian Diffusions. We consider a one-dimensional Brownian diffusion $(X_t)_{t\in[0,1]}$ fulfilling the SDE

$dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t, \quad X_0 = x_0,$

where the deterministic functions $b, \sigma: [0,1]\times\mathbb{R} \to \mathbb{R}$ satisfy a linear growth assumption. Under an additional ellipticity assumption on σ, the asymptotics of the quantization error in $(L^p[0,1], \|\cdot\|_p)$ is then given by $e_{n,r}(X, L^p[0,1]) \approx (\log n)^{-1/2}$.

Examples 6.2. Lévy Processes. Recall that the characteristic triplet of a Lévy process X consists of constants $a \in \mathbb{R}$, $\sigma \ge 0$ and a measure Π on $\mathbb{R}\setminus\{0\}$ satisfying $\int_{\mathbb{R}}(1\wedge x^2)\,\Pi(dx) < \infty$. By the Lévy-Ito decomposition, X can be written as a sum of independent Lévy processes $X = X^{(1)} + X^{(2)} + X^{(3)}$, where $X^{(3)}$ is a Brownian motion with drift, $X^{(2)}$ is a compound Poisson process, and $X^{(1)}$ is a Lévy process with bounded jumps and without Brownian component. First, we analyze the mean pathwise regularity of these three types of Lévy processes, in order to combine these results with lower bounds for the asymptotic quantization error.

(1) Mean Pathwise Regularity of the 3 Components of the Lévy-Ito Decomposition:

(i) According to an extended Millar's Lemma [17, Lemma 5], for all Lévy processes with bounded jumps and without Brownian component there is, for every p ≥ 2, a constant C < ∞ bounding the $L^p$-modulus $\|X_t^{(1)}\|_{L^p(\mathbb{P})}$ for every $t \in [0,1]$.

(ii) We consider the compound Poisson process $X_t^{(2)} = \sum_{k=1}^{K_t} U_k$, where K denotes a standard Poisson process with intensity λ = 1 and $(U_k)_{k\in\mathbb{N}}$ is an i.i.d. sequence of random variables with $\|U_1\|_{L^p(\mathbb{P})} < \infty$. Then one shows the corresponding regularity bound for some constant $C \in (0,\infty)$.

(iii) Here W denotes a Brownian motion. We consider the Lévy-Ito decomposition $X = X^{(1)} + X^{(2)} + X^{(3)}$ and assume the integrability conditions above. We thus obtain the mean pathwise regularity of X, for all p, r ≥ 1 and some constant C < ∞, with $\rho_p(x) := C x^{1/2}$ in the diffusive case. In the α-stable-like case we can choose $\rho(x) = C x^{1/\alpha}$ for any p ≥ 1 and constants $C_p < \infty$. The asymptotics of the quantization error of X is given by $e_{n,r}(X, L^p) \approx (\log n)^{-1/\alpha}$, n → ∞ (6.31), for r, p ≥ 1 [14], so we meet the requirements of Theorem 5.1 by setting a = b = 1/α. In view of the corresponding lower bounds with a constant $\kappa \in (0,\infty)$ (see [14] and [17, Proposition 3]), the sequence $(m_n)_{n\in\mathbb{N}}$ has to grow faster than in the examples above.

Numerical Illustrations

In this section we highlight the steps needed for a numerical implementation of our approach and give some illustrative results. For this purpose it is useful to regard an n-quantizer $\alpha_n$ as an element of $E^n$ (again denoted $\alpha_n$) instead of a subset of E. The differentiability of the distortion function was treated in [6] for finite-dimensional Banach spaces, which is sufficient for our purpose, and later in [33] for the general case.

Proposition 7.1 ([6, Lemma 4.10]). Assume that the norm $\|\cdot\|$ of $\mathbb{R}^d$ is smooth and let r > 1. Assume that the Voronoi cells induced by α intersect only in $\mathbb{P}_X$-null sets, i.e. $\mathbb{P}_X(C_{a_i}(\alpha)\cap C_{a_j}(\alpha)) = 0$ for $i \ne j$. Then the distortion function is differentiable at every admissible n-tuple $\alpha = (a_1, \ldots, a_n)$ (i.e. $a_i \ne a_j$ for $i \ne j$), with gradient given by (7.2), where $\{C_{a_i}(\alpha): 1 \le i \le n\}$ denotes any Voronoi partition induced by $\alpha = \{a_1, \ldots, a_n\}$.

Remark 7.2. When r = 1, the above result extends to admissible n-tuples with $\mathbb{P}_X(\{a_1, \ldots, a_n\}) = 0$. Furthermore, if the norm is smooth only on a set $A \in \mathcal{B}(E)$ with $\mathbb{P}_X(A) = 1$, the result still holds true. This is, for example, the case for $(E, \|\cdot\|) = (\mathbb{R}^d, \|\cdot\|_\infty)$ and random variables X with $\mathbb{P}_X(H) = 0$ for all hyperplanes H, which includes the case of normally distributed random variables.

Classic optimization theory now yields that any local minimum is contained in the set of stationary points. So let $n \in \mathbb{N}$, $m = m_n \in \mathbb{N}$, r ≥ 1, X, $V_m$ and $\varphi_m$ be given. The procedure looks as follows.

Step 1. Calculation of the distribution of the $\mathbb{R}^m$-valued random variable $\zeta := \varphi_m(V_m(X))$. This step strongly depends on the shape of the random variable X and the operators $V_m$. In the setting of Section 3.1 one starts with an orthonormal system $(h_i)_{i\in\mathbb{N}}$ in H providing $\zeta = \sum_{i=1}^m \langle h_i, X\rangle e_i$, where $(e_i)_{1\le i\le m}$ denote the unit vectors in $\mathbb{R}^m$. Thus, the covariance matrix of ζ admits the representation

$\mathbb{E}[\zeta\zeta^\top] = \big(\mathbb{E}[\langle h_i, X\rangle\langle h_j, X\rangle]\big)_{1\le i,j\le m} = \big(\langle C_X h_i, h_j\rangle\big)_{1\le i,j\le m}, \quad (7.4)$

with $C_X$ being the covariance operator of X.
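A minimal numerical sketch of this Step 1, together with the CLVQ update used in Step 2 below, under illustrative assumptions (a Gaussian ζ with covariance (7.4), squared-norm distortion r = 2, Brownian motion as the test process, and ad hoc step sizes); the function names and parameter choices are hypothetical, not from the source.

import numpy as np

def covariance_matrix(cov_kernel, onb, t_grid):
    # Sigma_ij = <C_X h_i, h_j> = int int K(s,t) h_i(s) h_j(t) ds dt, cf. (7.4),
    # approximated on a uniform grid for H = L^2[0, 1]
    dt = t_grid[1] - t_grid[0]
    H = np.array([h(t_grid) for h in onb])              # (m, grid)
    K = cov_kernel(t_grid[:, None], t_grid[None, :])    # (grid, grid)
    return (H @ K @ H.T) * dt * dt

def clvq(zeta_sample, n_codewords, seed=0):
    # CLVQ for r = 2: stochastic gradient on the distortion, moving only
    # the winning codeword toward the current sample (Step 2)
    rng = np.random.default_rng(seed)
    codebook = zeta_sample[rng.choice(len(zeta_sample), n_codewords, replace=False)].copy()
    for t, x in enumerate(zeta_sample):
        winner = np.argmin(np.linalg.norm(codebook - x, axis=1))
        gamma = (1.0 + t) ** -0.6      # steps: sum diverges, sum of squares converges
        codebook[winner] += gamma * (x - codebook[winner])
    return codebook

# Brownian motion on [0, 1]: K(s, t) = min(s, t); a sine ONS of L^2[0, 1]
# (here the Brownian Karhunen-Loeve basis, purely for illustration)
t = np.linspace(0.0, 1.0, 400)
onb = [lambda s, j=j: np.sqrt(2.0) * np.sin((j - 0.5) * np.pi * s) for j in (1, 2, 3, 4)]
Sigma = covariance_matrix(lambda s, u: np.minimum(s, u), onb, t)
rng = np.random.default_rng(1)
zeta = rng.multivariate_normal(np.zeros(4), Sigma, size=200_000)
alpha = clvq(zeta, n_codewords=10)     # approximately stationary codebook in R^4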
Similarly, one obtains the distribution of ζ for Gaussian random variables in the framework of Section 3.2, in the setting of Section 4, and in the setting of Section 5, where the coordinates of ζ are associated with the functionals $f \mapsto \int_{[0,1]} f(s)\,f_j^m(s)\,ds$. If one considers in the latter framework a non-Brownian Lévy process, for example a compound Poisson process (we use the notations of Examples 6.2(1)(ii)), the simulation of the gradient leads to the problem of simulating the corresponding integrated increments, which is still possible.

Step 2. Use a stochastic optimization algorithm to solve the stationarity equation (7.9) for $\zeta = \varphi_m(V_m(X))$. For this purpose, the computability of the gradient (7.2) is of enormous importance. One may either apply a deterministic gradient-based optimization algorithm (e.g. BFGS) combined with a quasi-Monte Carlo approximation of the gradient, such as the one used in [20], or use a stochastic gradient algorithm, which in the Hilbert space setting is also known as the CLVQ (competitive learning vector quantization) algorithm (see, e.g., [19] for more details). In both cases, the distribution $\mathbb{P}_\zeta$ needs to be simulated, which is possible for the examples described above.

Step 3. Reconstruct the quantizer $\beta = (b_1, \ldots, b_n)$ for the random variable X by setting $b_i := \varphi_m^{-1}(a_i)$ for 1 ≤ i ≤ n, with $\alpha = (a_1, \ldots, a_n)$ being some solution of the stationarity equation (7.9).

Illustration

For illustration purposes, we concentrate on the case described in Section 3.1 for r = 2. Examples of quantizers as constructed in Section 4 can be found in [20]. The quantizers shown in the sequel were calculated numerically using the widely used CLVQ algorithm as described in [19]. To achieve better accuracy, we finally performed a few steps of a gradient algorithm, approximating the gradient by Monte Carlo simulation. Let $X^H$ be a fractional Brownian motion with Hurst parameter H. We used the admissible sequence described in [28], where $c_H$ is an explicit constant, $J_{1-H}$ and $J_{-H}$ are Bessel functions with the corresponding parameters, and $(x_n)$ and $(y_n)$ are the ordered roots of the Bessel functions with parameters −H and 1−H. After ordering the elements of the two parts of the expansion in an alternating manner and applying the Gram-Schmidt orthogonalization procedure to construct a rate optimal ONS, we used the method described in Section 3.1. We show the results obtained for n = 10, m = 4 and the Hurst parameters H = 0.3, 0.5 and 0.7 (Figures 1, 2 and 3). To show the effect of changing parameters, we also present the quantizers obtained after increasing the size of the containing subspace to m = 8 (Figures 4, 5 and 6) and, in addition, the effect of increasing the quantizer size to n = 30 (Figures 7, 8 and 9). Since $X^H$ is for H = 0.5 an ordinary Brownian motion, one can compare the results with those obtained for the Brownian motion via the Karhunen-Loève expansion (see, e.g., [18]).
9,315.6
2010-12-09T00:00:00.000
[ "Mathematics" ]
Cumulants asymptotics for the zeros counting measure of real Gaussian processes

We compute the exact asymptotics of the cumulants of linear statistics associated with the zeros counting measure of a large class of real Gaussian processes. Precisely, we show that if the underlying covariance function is regular and square integrable, the suitably normalized cumulants of order higher than two of these statistics asymptotically vanish. This result implies in particular that the number of zeros of such processes satisfies a central limit theorem. Our method refines the recent approach of T. Letendre and M. Ancona and allows us to prove a stronger quantitative asymptotics, under weaker hypotheses on the underlying process. The proof exploits the elegant interplay between the combinatorial structures of cumulants and factorial moments in order to simplify the determination of the asymptotics of nodal observables. The class of processes addressed by our main theorem includes, as motivating examples, random Gaussian trigonometric polynomials, random orthogonal polynomials and the universal Gaussian process with sinc kernel on the real line, for which the asymptotics of higher moments of the number of zeros were so far only conjectured.

Introduction

The study of the number of zeros of smooth Gaussian processes has a long history, motivated in particular by the pioneering works of Kac and Rice; see e.g. [10] for a general introduction to this topic. The asymptotics of the expectation and the variance of the number of zeros of a stationary Gaussian process on a growing interval [0, R], as R grows to infinity, have been known since [16], where a central limit theorem (CLT) for the number of zeros is also proved. The variance asymptotics is there established using the celebrated Kac-Rice method, and the CLT is proved using approximation by an m-dependent process.

With similar methods, the variance of the number of zeros of random Gaussian trigonometric polynomials of large degree has been studied in [20], as well as the associated CLT. Later on, the machinery of Wiener chaos expansion was successfully used to compute variance asymptotics and to establish CLTs for the number of zeros of various models of stochastic processes; see for instance [9, 7, 18]. Central limit theorems for the number of real roots of random algebraic polynomials have also been investigated; see for example [24] and the references therein.

In the recent paper [19], focusing on the asymptotics of the Kac density rather than on the full integral Kac-Rice formula, the author managed to avoid some of the technical computations inherent to the Kac-Rice method. This allowed him to obtain a unifying point of view, make explicit the needed decorrelation estimates, and then deduce the variance asymptotics for the number of zeros of many models of Gaussian processes. It was then conjectured that the same heuristics could be applied to treat the asymptotics of the higher central moments of the number of zeros of a Gaussian process, which is the goal of the present paper.

Up to now, very few results about the asymptotics of higher central moments are known. The best result so far is the one of M. Ancona and T. Letendre [3].
There it has been proved that the p-th central moment, when properly rescaled, converges towards the p-th moment of a Gaussian random variable, under the restrictive condition that the covariance function and its derivatives decrease faster than $|x|^{-4p}$. This last result yields another proof of the CLT for the number of zeros by the method of moments, for processes whose covariance function lies in the Schwartz class of regular and rapidly decreasing functions. Their proof is based on a series of articles [1, 2] whose purpose was to tackle the CLT for the number of roots of Kostlan polynomials and its real algebraic extension. Note that the sinc process, i.e. the Gaussian process with sinc covariance function, which plays a central role in probability theory and mathematical physics, is ruled out of their framework, due to the slow decay of the sinc kernel. In the more general context of point processes, higher moments of geometric statistics have also been studied under the hypothesis of fast decreasing correlations [14].

In this paper, we prove the exact asymptotics of the higher central moments of the number of zeros of a large class of Gaussian processes, under the sole hypothesis, apart from regularity, that the covariance function and its derivatives are square integrable. Our results apply in particular to Gaussian trigonometric and orthogonal polynomials, as well as to the stationary process with sinc kernel and other Gaussian stationary processes on the real line with possibly slowly decaying kernels. We prove in fact a more general theorem, computing the exact asymptotics of the cumulants of linear statistics associated with the zeros counting measure of the underlying processes. The use of cumulants instead of central moments simplifies the rather intricate combinatorics involved when estimating higher order moments via the Kac-Rice method. Our result in turn implies the convergence of the associated moments of any order and thus a CLT, with an exact rate of convergence. As a corollary, we deduce polynomial concentration of any order for the number of zeros and, by a Borel-Cantelli argument, the almost sure convergence of the number of zeros. Note that these last facts cannot be deduced from chaos expansion methods. More generally, in the context of linear statistics, we prove the almost sure equidistribution of the zero set at the limit for a large class of smooth Gaussian processes.

Cumulants asymptotics and central limit theorems

In the following, all random variables are defined on a common abstract probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and $\mathbb{E}$ denotes the associated expectation. In the sequel, W stands for a standard Gaussian random variable, i.e. centered with unit variance. We denote by $\kappa_p(Z)$ the p-th cumulant of a random variable Z, given by the expression

$\kappa_p(Z) = \sum_{\pi\in\mathcal{P}_p} (-1)^{|\pi|-1}(|\pi|-1)!\prod_{B\in\pi}\mathbb{E}\big[Z^{|B|}\big], \quad (1)$

where the sum is indexed by the set $\mathcal{P}_p$ of all partitions of the finite set $\{1, \ldots, p\}$ and $|\pi|$ denotes the number of blocks of π. We refer to [27, 25] and paragraph 2.1.2 below for more details on the cumulants of a random variable. In the following, iid stands for independent and identically distributed. The following theorem describes the asymptotics of all the cumulants of the number of zeros of a Gaussian trigonometric polynomial with independent coefficients.

Theorem 1.1. Let $(a_k)_{k\ge 0}$ and $(b_k)_{k\ge 0}$ be two iid sequences of standard Gaussian variables. Let $Z_n$ be the number of zeros on $[0, 2\pi]$ of the process

$h_n(x) := \frac{1}{\sqrt{n}}\sum_{k=1}^{n}\big(a_k\cos(kx) + b_k\sin(kx)\big).$
For p a positive integer, there is an explicit finite constant $\gamma_p$ such that

$\lim_{n\to+\infty} \frac{\kappa_p(Z_n)}{n} = \gamma_p.$

The constants $\gamma_1$ and $\gamma_2$ are positive (i.e. > 0). The above theorem implies in particular that

$\lim_{n\to+\infty} \frac{{\rm Var}(Z_n)}{n} = \gamma_2 \quad\text{and}\quad \forall p \ge 3,\ \ \kappa_p\Big(\frac{Z_n}{\sqrt{n}}\Big) \xrightarrow[n\to+\infty]{} 0. \quad (2)$

Given the expression of the central moments in terms of cumulants, and the fact that the cumulants of a Gaussian random variable vanish for p ≥ 3, the asymptotics (2) imply in fact that for every positive integer p,

$\mathbb{E}\Big[\Big(\frac{Z_n - \mathbb{E}[Z_n]}{\sqrt{n}}\Big)^p\Big] = \gamma_2^{p/2}\,\mathbb{E}[W^p] + O\big(n^{-1/2}\big). \quad (3)$

Note that the exact asymptotics of the cumulants given by Theorem 1.1 is in nature stronger than the cruder bound (2), and thus stronger than the central moment asymptotics (3). As a consequence, we are able to reprove the central limit theorem for the number of zeros, as well as, by Markov's inequality, a polynomial concentration of any order of the number of zeros around its mean.

Corollary 1.2. As n goes to infinity, we have the convergence in distribution

$\frac{Z_n - \mathbb{E}[Z_n]}{\sqrt{n}} \xrightarrow[n\to+\infty]{d} \mathcal{N}(0, \gamma_2). \quad (4)$

For all p ≥ 2, there is a constant $C_p$ such that for every integer n and positive constant ω,

$\mathbb{P}\big(|Z_n - \mathbb{E}[Z_n]| \ge \omega\sqrt{n}\big) \le \frac{C_p}{\omega^p}. \quad (5)$

Note that the variance estimate in Equation (2) and the associated CLT were first established in [20] by the Kac-Rice method and in [9] by Wiener chaos expansion. So far, the exact asymptotics of the p-th central moment or cumulant of $Z_n$ had never been computed for p ≥ 3. Theorem 1.1 shows that it asymptotically behaves like the p-th moment of a Gaussian random variable, as expected from the already existing central limit theorem for $Z_n$. The polynomial concentration of the number of zeros and a Borel-Cantelli argument imply the almost sure convergence

$\lim_{n\to+\infty} \frac{Z_n}{n} = \gamma_1 \quad\text{a.s.},$

a result already known from [6], using a derandomization method. Exponential concentration has been established in [23] for this particular model, but the proof is of a very different nature and strongly uses the trigonometric nature of the random process $h_n$. Our proof only uses the fact that the process is of class $C^\infty$ and is adaptable to many other models.

The error term in (3) is new and implies a rate of convergence towards the Gaussian random variable of order $1/\sqrt{n}$ for the moment metric. It is reminiscent of the Berry-Esseen bound for the more classical CLT. Note that the Wiener chaos expansion method can yield (slower) speeds of convergence for more classical distances, namely Kolmogorov or Wasserstein.

The independence hypothesis on the Gaussian random coefficients above can be relaxed. Namely, we can extend Theorem 1.1 to the case where the Gaussian sequences $(a_k)_{k\ge 0}$ and $(b_k)_{k\ge 0}$ are independent and stationary.

Theorem 1.3. We assume that the spectral measure associated with the correlation function ρ of the stationary Gaussian sequences $(a_k)_{k\ge 0}$ and $(b_k)_{k\ge 0}$ has a continuous positive density on the torus $\mathbb{T}$. Let $Z_n$ be the number of zeros on $[0, 2\pi]$ of the process $h_n$ defined as above. Then the conclusions of Theorem 1.1 and its Corollary 1.2 hold.

The expectation of the number of zeros in this model has been studied in [4, 5] and the variance in [19]. The above Theorem 1.3 gives the asymptotics of every cumulant and therefore, as discussed above in the independent case, proves a central limit theorem for the number of zeros, which is a new result in this dependent framework, as well as concentration around the mean and a quantification of the rate of convergence.
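As a purely illustrative Monte Carlo check of the flavor of Theorem 1.1 (the grid resolution, sample sizes and the use of sign changes to count zeros are ad hoc choices; scipy's k-statistics provide unbiased cumulant estimates only up to order four):

import numpy as np
from scipy.stats import kstat

def count_zeros_trig(n, rng, grid_size=4096):
    # Count sign changes of h_n(x) = sum_k a_k cos(kx) + b_k sin(kx) on [0, 2pi];
    # a fine grid undercounts only if two zeros share a cell
    a = rng.standard_normal(n)
    b = rng.standard_normal(n)
    x = np.linspace(0.0, 2.0 * np.pi, grid_size, endpoint=False)
    k = np.arange(1, n + 1)
    h = np.cos(np.outer(x, k)) @ a + np.sin(np.outer(x, k)) @ b
    return int(np.sum(h[:-1] * h[1:] < 0))

rng = np.random.default_rng(0)
n, n_samples = 50, 2000
Z = np.array([count_zeros_trig(n, rng) for _ in range(n_samples)])
for p in (1, 2, 3, 4):
    print(p, kstat(Z, p) / n)   # Theorem 1.1: kappa_p(Z_n) / n -> gamma_p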
In another direction, one can replace the functions cos and sin by more general functions. A standard framework is then the following model of random orthogonal polynomials, for which we can give a similar statement.

Theorem 1.4. Let $(a_k)_{k\ge 0}$ be an iid sequence of standard Gaussian variables. Let $(P_k)_{k\ge 0}$ be a sequence of orthogonal polynomials associated with a measure μ on the line, and let $[a', b']$ be an interval. We assume that the measure μ and the interval $[a', b']$ satisfy the hypotheses of [18, Thm. 1.1]. Let $Z_n$ be the number of zeros on $[a', b']$ of the process $\sum_{k=0}^{n} a_k P_k(x)$. Then the conclusions of Theorem 1.1 and its Corollary 1.2 hold.

The expectation, the variance and a central limit theorem for this model have been studied very recently with the Wiener chaos expansion method in [18]. Here again, we extend this result by determining the asymptotics of higher cumulants, and thus of higher moments. As already discussed after the previous statements, from the cumulants asymptotics established in Theorem 1.4 we can also deduce concentration around the expected number of zeros, as well as a rate of convergence in the associated CLT for the (non-standard) metric of moments.

At last, we extend known results about the number of zeros of a stationary Gaussian process on a growing interval, establishing in particular a CLT under the sole square integrability of the associated correlation function and its derivatives.

Theorem 1.5. Let f be a stationary Gaussian process with $C^\infty$ paths and covariance function r. For R > 0 we define $Z_R$ to be the number of zeros of the process f on [0, R].

• If the covariance function r and its derivatives are in $L^2(\mathbb{R})$, then for all p ≥ 2 the limit $\lim_{R\to+\infty}\kappa_p\big(Z_R/\sqrt{R}\big)$ exists, and it vanishes for p ≥ 3.

• If the covariance function r and its derivatives are in $L^q(\mathbb{R})$ for all q > 1, then for p a positive integer there is an explicit finite constant $\gamma_p$ such that $\lim_{R\to+\infty}\kappa_p(Z_R)/R = \gamma_p$.
Under the stronger hypothesis that the covariance function and its derivatives are in L p for all p > 1, we deduce the exact asymptotic of the cumulants of any order.This integrability hypothesis in particular holds true for processes whose covariance functions r and their derivatives satisfy the bound which is the case for a stationary Gaussian process with sinc covariance function. A more general and unifying statement In fact Theorems 1.1, 1.3, 1.4 and 1.5 are all corollaries of a single, more general statement given below.In order to state it, we need to introduce first a few notations that will be used for the rest of the paper. Let U be a non-empty open interval of the real line R or of the one-dimensional torus T, endowed with their canonical distance If n is finite then nU is a non-empty open subset of R or of the one-dimensional torus nT of length n.For n = +∞ we use the convention (+∞)U = R.This setting allows us to give a unified exposition for processes defined on the torus (e.g.random trigonometric polynomials) and on the real line (e.g. the sinc process). Let N be an unbounded subset of R * + and N = N ⊔ {+∞}.For each n ∈ N, we consider a centered Gaussian process f n defined on nU , and we assume that the process f ∞ is a non-zero stationary centered process on R. Note that for n ∈ N the process h n = f n (n .) is a Gaussian process on U .For n ∈ N and s, t ∈ nU we define the covariance function If the process f n is of class C k (U ) for k ≥ 0 then the covariance function r n is also of class C k in each variable, and one has for u, v ≤ k and x, y ∈ nU For n ∈ N we define the random counting measure on Z n .Note that (ν n ) n∈N is a family of measures on U .Assume for now (it will be a consequence of Bulinskaya Lemma) that for each n ∈ N the set Z n is almost surely locally finite.For a bounded function φ : U → R, with compact support in U , we define the bracket Note the Kac-Rice formula implies that in expectation, the counting measure has a density with respect to the Lebesgue measure.It means that one can compute, when it is defined, expectations of linear statistics for test functions defined almost everywhere.For q ≥ 1 we define the two following hypotheses. 
• H1(q): The sequence of processes (f_n)_{n∈N̄} is of class C^q(U), and there is a uniformly continuous function ψ on U, bounded below and above by positive constants, such that for u, v ≤ q the following convergence holds uniformly for x ∈ U and locally uniformly for (s, t) ∈ R^2:

lim_{n→+∞} r_n^{(u,v)}(nx + s, nx + t) = ψ(x) r_∞^{(u,v)}(s, t).    (5)

• H2(q): There is a function g, even, bounded and going to zero near infinity, such that for u, v ≤ q, n ∈ N̄ and s, t ∈ nU,

|r_n^{(u,v)}(s, t)| ≤ g(s − t),    (6)

and for some positive constant ω the function g_ω is in L^2(R), where g_ω(t) = sup_{|s|≤ω} g(t + s).

Theorem 1.6. Let p ≥ 2 and q = 2p − 1. We assume that the sequence of processes (f_n)_{n∈N̄} satisfies hypotheses H1(q) and H2(q) defined above. Then for every function φ ∈ L^1(U) ∩ L^{p/2}(U), and for p ≥ 3,

lim_{n→+∞} n^{−p/2} κ_p(⟨ν_n, φ⟩) = 0.

Assume moreover that g_ω ∈ L^{p/(p−1)}(R). Then there is an explicit constant γ_p, depending only on the process f_∞, such that

lim_{n→+∞} (1/n) κ_p(⟨ν_n, φ⟩) = γ_p ∫_U φ^p(y) dy.

The assumption H1(q) characterizes the convergence of the family of processes (f_n)_{n∈N̄} towards a limit stationary process in C^q norm. This hypothesis is natural and arises in many models. For instance, the covariance function of random trigonometric polynomials converges towards the sinc function. The regularity of the process f_n ensures the well-definedness of the p-th moment, see for instance [10, Thm 3.6]. The convergence towards a non-degenerate stationary process ensures the uniform non-degeneracy of the process f_n, as well as the explicit asymptotics for the cumulants.

The decay assumption in H2(q) is greatly relaxed compared to the one present in [3], where the authors require a function g that decreases like x^{−4p} (though they only need to take q = p − 1 in Theorem 1.6). Here we show that the asymptotics of higher moments are independent of the rate of decay of the covariance function, which must only satisfy some uniform square integrability condition. The number of finite moments (and their asymptotics) that one can obtain is directly related to the regularity of the process.

Let us now briefly show that the unifying Theorem 1.6 indeed implies the collection of theorems of the previous subsection. First, Theorems 1.1 and 1.3 are consequences of Theorem 1.6, obtained by setting U = T, N = N* and φ = 1_T. Let ψ be the spectral density of the correlation function ρ of the stationary Gaussian sequences (a_k)_{k≥0} and (b_k)_{k≥0}, which is assumed to be continuous and positive on T. Assumptions H1(q) and H2(q) are proved for all q > 0 for this model in the paper [19], with limit process having sinc covariance function, and where the exponent α can be taken in ]1/2, 1[. Note that Theorem 1.1 is a particular case of Theorem 1.3 with ψ = 1 (in that case, one can take α = 1 above).

Similarly, Theorem 1.4 is a consequence of Theorem 1.6. Let µ be a measure with compact support on the real line. We set U a subinterval of R such that µ has a positive continuous density on U. It is proved in [18], under mild assumptions on the measure µ, that for the model of random orthogonal polynomials with respect to the measure µ, the assumption H2(q) holds true for all q > 0. Let ω be the density of the equilibrium measure of the support of µ, which is continuous and positive on U, and let ψ be the inverse of the density of the measure µ. Then a variation of hypothesis H1(q) holds true for all q > 0 with limit process having sinc covariance function, where the convergence (5) must be modified to incorporate ω as a local rescaling. The proof of Theorem 1.6 adapts verbatim to this setting, and the conclusion is modified accordingly. Note that if supp µ = [0, 1], then after a change of variables the equilibrium measure is simply the Lebesgue measure on the torus T, and hypothesis H1(q) then exactly holds true.
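To illustrate why hypothesis H1(q) is natural, here is the standard computation for random trigonometric polynomials with iid standard Gaussian coefficients, the model behind Theorem 1.1 (we only display the case u = v = 0, and the normalization shown is the usual one for this model):

```latex
% With f_n(x) = n^{-1/2} \sum_{k=1}^{n} \bigl(a_k\cos(kx) + b_k\sin(kx)\bigr),
% the covariance is
r_n(s,t) = \frac{1}{n}\sum_{k=1}^{n}\cos\bigl(k(s-t)\bigr),
% so that, rescaling by n as in H_1(q),
r_n\!\Bigl(\frac{s}{n},\frac{t}{n}\Bigr)
  = \frac{1}{n}\sum_{k=1}^{n}\cos\Bigl(\frac{k}{n}(s-t)\Bigr)
  \xrightarrow[n\to+\infty]{} \int_0^1 \cos\bigl(u(s-t)\bigr)\,du
  = \frac{\sin(s-t)}{s-t},
% i.e. the limit process has the sinc covariance, with \psi \equiv 1.
```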
Asymptotics for the linear statistics Let ν ∞ denote the Lebesgue measure on the interval U .Theorem 1.6 implies a strong law of large number and a central limit theorem for the sequence of random measure (ν n ) n∈N .The two following Corollaries 1.7 and 1.8 extend the results of [3,Sec. 1.4] to our framework, and we refer to this paper for a more thorough discussion. Corollary 1.7 (Law of large numbers).Assume that the hypotheses H 1 (q) and H 2 (q) are satisfied for all q ≥ 1, and either N = N * , or N = R * + and for n ∈ R * + , f n = f ∞ .Then we have the following almost-sure convergence for the vague topology Corollary 1.7 shows that zeros of the process f n (n .) tend to be equidistributed on the set U as n goes to +∞.When N = N * , the proof follows from an application of the Borel-Cantelli Lemma.When N = R * + and ∀n ∈ R * + , f n = f ∞ , we can apply the Borel-Cantelli Lemma to prove the almost sure convergence on a polynomial subsequence.It is then a standard fact that the monotonicity of Z n ensures the almost sure convergence of the whole sequence. Corollary 1.8 (Central limit theorem).Assume that the hypotheses H 1 (q) and H 2 (q) are satisfied for all q ≥ 1.Then we have the following convergence in distribution Corollary 1.8 implies that the fluctuations around the mean of the counting measure ν n is comparable to a Gaussian white noise. Outline of the proof Before giving a complete and detailed proof of Theorem 1.6, let us sketch its main ingredients and arguments.The proof follows a similar strategy as in [3] but with considerable refinements.It mainly relies on a careful analysis of the Kac-Rice formula, which asserts that for a test function φ, where and T n (x 1 , . . .x p ) is the density at zero of the Gaussian vector (f n (x 1 ), . . ., f n (x p )).The extra terms appearing in Equation ( 7) are of combinatorial nature and can be treated the exact same way as the first term, so we will omit them in the following heuristics.The function ρ p,n is called the Kac density of order p associated with the process f n .Observe that the function ρ p,n is ill-defined when two of its arguments collapse.This issue is solved by using the technique of divided differences, that appeared in [17] and was subsequently developed in [1,2,3].Let us give an example with p = 2.The idea is to replace in (8) the quantity If the variables x 1 and x 2 collapse, the second expression becomes f n (x 1 ) = f ′ n (x 1 ) = 0.The regularity of the process f n given by the assumption H 1 (q) and the non-degeneracy of the limit process f ∞ implies that the Gaussian vector (f n (x), f ′ n (x)) is non-degenerate and gives an alternative non-singular expression of the function ρ 2,n near the diagonal.For higher integers p, the reasoning is the same.For each partition I of the set {1, . . ., p}, we will thus give an alternative and non-singular expression of the density ρ p,n , that extends by continuity on points (x 1 , . . ., x p ) such that x i and x j are equal if i and j belong to the same cell of the partition I.This procedure is explained in Section 2.4. From now the proof is considerably refined compared to [3], where we rather use the powerful combinatorics of cumulants to simplify and enhance the results.Developing the expression of the cumulant of order p as a function of the moments, we get where the sum indexed by J runs over all the partitions of the set {1, . . ., p}, and with Now let I be a partition of {1, . . 
., p}, and assume that for i and j belonging to two different cells of the partition I, the variable x i and x j are far from each other.Then the decay hypothesis H 2 (q) implies that the Gaussian random variable f n (x i ) and f n (x j ) are almost independent, and from the definition of the Kac density ρ p,n we deduce that for A ⊂ {1, . . ., p}, Note that the function ρ p,n depends on f n only through the covariance matrix of the vector (f n (x 1 ), . . ., f n (x p ), f ′ n (x 1 ), . . ., f ′ n (x p )).This matrix representation allows us to give a precise error term in (10), proportional to the square of the magnitude of r (u,v) n (x i , x j ), where i and j belong to different cells of the partition I.We refer to Section 2.3 for matrix notations and to Section 3.2 for the matrix representation of the Kac Density. The combinatoric properties of cumulants and (10) imply that as soon as the variables (x i ) 1≤i≤p are clustered with respect to some partition I with at least two cells.A refinement of Taylor expansion using graph theoretic arguments (see Section 3.3), gives a much more precise error in (11) than the approach taken in [3], where it is roughly shown that a similar approximation as in (11) holds true only when one single variable is far from all the others (this reasoning also appears in different articles that treats cumulant asymptotics, see for instance [22,14]).We then show that far from the deep diagonal (x, . . ., x) the function F p,n is small and will have sufficiently nice integrability properties on (nU ) p in order to show in (9) that for p ≥ 3, Given the link between cumulants and central moments, this fact leads to the convergence of the central moment of order p to the central moment of a Gaussian random variable.If moreover, the function g ω is in L p p−1 (R) then the function F p,n (0, x 2 , . . ., x p ) is integrable on (nU ) p−1 , uniformly for n ∈ N.This fact leads to the exact asymptotics of the p-th cumulant of the random variable ν n , φ . Despite its apparent simplicity, the detailed proof is quite technical and the diversity of arguments used justifies the following section, which introduces several notions and associated notations for the rest of the paper.In particular, the notion of partition of a finite set plays a central role in this article.From a combinatoric point of view, it appears in the Kac-Rice formula when expressing moments of the factorial power counting measure in terms of moments of the usual power measure, but also from the interpretation of cumulants in the context of Möebius inversion in the lattice of partition.The interplay between these last two combinatoric facts leads to an elegant expression of the cumulants of the zeros counting measure (given by Proposition 3.5), and simplifies the approach taken by the authors in [3], where they computed directly the asymptotics of central moments. A novelty of this paper is also the intensive use of the matrix representation of the Kac density which allows us to dissociate the probabilistic setting, and facts concerning pure matrix analysis.We believe that this approach, already taken by the author in [19] to treat the asymptotic of the variance, greatly simplifies the exposition of proofs using Kac-Rice formulas. Basics and notations We define a few notations that will be of use and simplify the exposition.In the following, A is a non-empty finite set.The letter a, b, . . .denote elements of A. The letters B, C, . . .denote subset of A. The letters I, J , . . 
.denote subsets of the power set of A. Set theory We denote by |A| the cardinal of the set A and P(A) the power set of A. For a set E, we define to avoid any confusion when elements of A are also sets.For a function f : Let φ A = (φ a ) a∈A be functions from E to R. We define At last, we denote by The set 2A should be seen as the disjoint union of A and a copy of itself.For an element The lattice of partitions and cumulants The material of this paragraph is very standard, we refer to [27,25] for a nice introduction on this topic.We define P A as the set of partitions of A. The partition of A into singletons will be denoted A. In the following, B is a subset of A and I is a partition of A. The partition I induce a partition on the set 2A via the relation and we will still denote by I this partition. The set P A has a natural structure of a poset (partially ordered set).Given I and J two partition of A, we say that I is finer than J (or that J is coarser than I) and we denote it I J (or J I), if ∀I ∈ I, ∃J ∈ J such that I ⊂ J. Note that two partitions I and J have a greatest lower bound and a least upper bound for this partial order, which turns (P A , ) into a finite lattice.Let (m B ) B⊂A and (κ B ) B⊂A be two families of numbers.In our case of interest, the Möebius inversion on this particular lattice takes the form We will make use of the following cancellation property of the cumulants.Lemma 2.1.Let (m B ) B⊂A and (κ B ) B⊂A be two families of numbers related by one of the equivalent formulas in (16).Assume the existence of a partition I = {A} such that Proof.See [27]. If (X a ) a∈A is a family of random variables, we can define for a subset The quantity m B ((X b ) b∈B ) (resp.κ B ((X b ) b∈B )) is the joint moment (resp.cumulant) of the family of random variables (X b ) b∈B .The previous Lemma 2.1 translates in the following property for the cumulant.If there is a partition I with at least two cells, such that the collection of random variables (X i ) i∈I for I ∈ I are mutually independent, then the joint cumulant of the family (X a ) a∈A is zero. The joint cumulants are a convenient tool in the Gaussian framework, since for a Gaussian vector (X a ) a∈A , the joint cumulant κ A ((X a ) a∈A ) cancels as soon as |A| ≥ 3. Conversely, a random variable X such that κ p (X, . . ., X) = 0 for all p ≥ 3 is Gaussian. Diagonal set and factorial power measure We will see in Section 3 that the Kac-Rice formula gives an integral formula for the p-th factorial power measure of the zero set of a Gaussian process.The expression of the Kac density degenerates near the diagonal and it motivates the introduction of a few notations for the diagonal of a set and factorial power measure.In the following, A is a finite set and (E, d) is a metric space.This section is largely inspired by [1,Sect. 4.3] and [3,Sect. 6.1], in particular for the quick and efficient description of the diagonal clustering. Diagonal set and diagonal inclusion We define the (large) diagonal of E A as the subset Let I be a partition of the set A. 
We define From this definition, one has the following decomposition of the space where A is the partition of A in singletons.We also define Enlargement of the diagonal set We fix a number η ≥ 0 and x A ∈ E A .We define the graph G η (x A ) with set vertices A, and where two vertices a and b are connected by an edge if d(x a , x b ) ≤ η.Denote by I η (x A ) the partition of A induced by the connected components of G η (x A ).It allows us to define the subset If η = 0 then ∆ I,0 = ∆ I .In the case where η > 0 we have ∆ I,η ⊂ ∆ I + .As in the case η = 0, we also have The fundamental property of this construction is the following.Let a, b ∈ A and Note that any partition of the space E A indexed by all the possible partitions of A that satisfies these two properties would also work.The one described above is a quick and efficient way to prove the existence of such partition. The factorial power measure We define the diagonal inclusion For instance, if I = {{1, 3}, {2}} then ι I (x, y) = (x, y, x).A direct consequence of this definition is that the mapping ι I is a bijection between E I \ ∆ and ∆ I . Let Z be a locally finite subset of the metric space E. We set ν := x∈Z δ x the counting measure on Z, and The measure ν A (resp.ν [A] ) is the power (resp.factorial power) measure of the measure ν. Both measures are linked by the following lemma. Lemma 2.2. With the notations as above, one has Proof.We have Using the fact that the mapping ι I is a bijection between E I \ ∆ and ∆ I , one gets Matrix notations The Kac density (see Section 3 and Lemma 3.6) is expressed in term of the covariance matrix of the underlying Gaussian process and its derivatives.This fact allows us to consider the Kac density as a function defined on the set of positive definite matrices, evaluated in some covariance matrix related to our underlying Gaussian process.To this end, we introduce a few useful notations Basic matrix notations In the following, we define M A (R), S A (R) and S + A (R) respectively the sets of square, symmetric and symmetric positive definite matrices acting on the space R A equipped with its canonical basis.If B is another finite set, we define M B,A (R) the set of matrices from Let Σ ∈ M 2A (R).If the matrix Σ 11 is invertible, we define the matrix Σ c ∈ M A (R) to be the Schur complement of Σ 11 in Σ : This matrix arises from the identity Id 0 Covariance matrix and Gaussian conditioning Let X A = (X a ) a∈A and Y A = (Y a ) a∈A two sequences of jointly centered Gaussian vectors.We assume that the Gaussian vector (X A , Y A ) is non-degenerate.We define and Proof.We define the Gaussian vector Compactness in matrix sets The following lemmas give equivalent conditions to being compact in several matrix spaces. Lemma 2.4. A set K is relatively compact in S + A (R) if and only if one can find positive constants c K and C K such that Proof of the lemmas.The proof of Lemma 2.4 is a direct consequence of the continuity of the determinant.For Lemma 2.5, note that a matrix The conclusion follows again from the continuity of the determinant.As for Lemma 2.6, let Σ ∈ S + A (R) and Q ∈ M * B,A (R).A direct computation shows that the matrix QΣ T Q is positive definite.The conclusion then follows from the continuity of the application (Q, Σ) → QΣ T Q. Block diagonal matrix with respect to a partition Let B and C be subsets of A, and Γ ∈ M B,C (R).For I and J subset of A we define Γ I,J = (Γ i,j ) i∈I∩B,j∈J∩C and Γ I = Γ I,I . ( Now let Σ ∈ M 2B,2C (R).We define similarly and Σ I = Σ I,I . 
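As a numerical sanity check of the Schur complement notation, the following sketch (plain NumPy; the sizes and the seed are arbitrary choices of ours) computes the conditional covariance of a centered Gaussian vector and verifies the determinant identity implied by the block factorization above:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric positive definite covariance on R^{2A}, |A| = 3,
# with blocks Sigma11 (covariance of X_A), Sigma22 (covariance of Y_A)
# and cross-covariance Sigma12 = Sigma21^T.
n = 3
M = rng.standard_normal((2 * n, 2 * n))
Sigma = M @ M.T + 2 * np.eye(2 * n)
S11, S12 = Sigma[:n, :n], Sigma[:n, n:]
S21, S22 = Sigma[n:, :n], Sigma[n:, n:]

# Schur complement of Sigma11 in Sigma: for a centered Gaussian vector
# (X_A, Y_A), this is the covariance of Y_A conditionally on X_A.
Sc = S22 - S21 @ np.linalg.inv(S11) @ S12

# The block factorization gives det(Sigma) = det(Sigma11) * det(Sigma_c).
assert np.isclose(np.linalg.det(Sigma), np.linalg.det(S11) * np.linalg.det(Sc))
print(Sc)
```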
For a partition I of the set A and a matrix Γ ∈ M B,C (R) we define Γ I to be the block diagonal matrix with blocks (Γ I ) I∈I .Similarly, for a matrix Σ ∈ M 2B,2C (R) we define but the equality is not true in all generality. Power product space We introduce a technical matrix space, that will be central in the alternative expression of the cumulant Kac density in Section 3.2.3.We define the sets The space At last, if I is a partition of A, we define Divided differences We now introduce the notion of divided differences.Classically used in interpolation theory, we use it in order to give a non-degenerate expression of the Kac density near the diagonal.This approach was first taken in [17] in order to give a necessary and sufficient condition for the finiteness of the p-th moment of the number of zeros on an interval, and has been extensively used in [3].The results of this section is standard material about interpolation, but we will recall basic properties of divided differences. Definition and basic properties Definition Let f be a function defined on an open interval U of R or T. We use the notations of Section 2.2. In particular, we consider ∆ the large diagonal of U A .Let A be a finite set and x A ∈ U A \ ∆.We define R A [X] the space of polynomials of degree |A| − 1.The polynomial interpolates the function f at the point (x a ) a∈A .It is the only polynomial in R A [X] with this property, since the difference of two such polynomials cancels at least |A| times and thus must be zero.The application is a projector onto the space R A [X], whose kernel is the space of functions that cancels on x A . We then define the divided difference of f as the coefficient of degree For instance, and so on.The following lemma is analogous of Taylor expansion theorem, in the context of divided differences. Lemma 2.7.For a ∈ A, one has Proof.For a ∈ A, the polynomial interpolates the points (x b ) b =a and is of degree |A| − 2. By uniqueness of the interpolating polynomial, it coincides with the polynomial L[f ; x A\{a} ].Hence the first statement.An application of this formula with interpolating points A ∪ {x} yields the second statement. Continuity property of the divided differences Assume from now on that the function f is of class C |A|−1 .Define C A to be the standard simplex of dimension |A| − 1 : We can equip the simplex C A with the induced rescaled Lebesgue measure dm, so that m(C A ) = 1 (|A|−1)! .We then have the following integral representation for the divided differences, which is analogous to the integral remainder in Taylor expansion in the context of divided differences. Lemma 2.8 (Hermite-Genocchi formula).We have a∈A t a x a dm(t A ). In particular, the application x A → f [x A ] continuously extends to the whole space U A . This proposition allows us to extend by continuity the functions from U A \ ∆ to the whole space U A .For instance, if x a = y for all a ∈ A, then This expression coincides with the Taylor polynomial of order |A| − 1 of the function f at the point y.The continuity property of the divided differences allows us to extend Lemma 2.7 and 2.8 by taking x A in the whole space U A and x ∈ R. Divided difference of a polynomial In this section, P denotes a polynomial.The Hermite-Genocchi formula 2.8 directly implies the following lemma. Lemma 2.9.The quantity P [x A ] is a polynomial expression of the coefficients x A . 
If deg P < |A| then L[P ; x A , x] = P , and the coefficient of degree |A| of this polynomial is zero, which implies that P [x A , x] = 0.In the following we assume that deg P ≥ |A|. Lemma 2.10.The polynomial x → P [x A , x] is of degree deg P − |A|, and the leading coefficient this polynomial is the leading coefficient of the polynomial P . Proof.From the definition of the divided differences, , which implies that the polynomial x → P [x A , x] is of degree deg P −|A|, and its leading coefficient is the leading coefficient of P . Iterated divided differences Let B be a subset of A and x B ∈ U B .We define the function and Proof.Consider the two polynomials x A\B ] and They both interpolate the points . By Lemma 2.10, the polynomial P 2 is also in By uniqueness of the interpolating polynomial, these two polynomials are equal, hence the first statement. The coefficient of degree and the one in P 2 is f [x A ] according to Lemma 2.10.By the previous equality these two coefficients are equal, which yields the second formula.The last formula is a direct application of Lemma 2.7 applied to the function Matrix viewpoint of the divided differences In order to describe the divided differences of a Gaussian process and the induced transformation on the covariance matrix, we rewrite the operation of taking divided differences from a matrix viewpoint.From now on, we equip A with an arbitrary total order ≤ and we introduce the notation Thus, x a|A = (x b ) b≤a . Basis of polynomials adapted to the divided difference For x A ∈ R A we define the polynomial For any subset B of A, the family (P b x B ) b∈B is a basis of the space R B [X] and we will always equip the space R B [X] with this basis. Remark 2.12.The family (P a x A ) a∈A is a family of monic polynomials of increasing degrees.Thus, if we equip A with another total order, the underlying transformation matrix is of determinant 1 and the coefficients are polynomial quantities in (x i − x j ) i,j∈A .It justifies in the following why the order can be chosen arbitrarily. A direct induction based on Lemma 2.7 implies the following lemma. Lemma 2.13. Let x The finite differences of f are thus the coefficients of the Lagrange interpolation polynomial in the basis (P a x A ) a∈A .We define Then Lemma 2.13 rewrites matricially For instance, Divided differences with respect to a partition In the following, I is a partition of the set A. We define We can perform the divided difference independently on each cell of the partition.That is, we can write for all where M I (x A ) is the block diagonal matrix with blocks (M (x I )) I∈I .For instance, if . From Equation ( 27), Lemma 2.14.Let x A ∈ ∆ I,η .There is a constant C(η) such that For a, c ∈ I, one has |x a − x c | ≤ |A|η.The conclusion follows. 
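The divided differences defined above can also be evaluated with the classical Newton recursion, which is equivalent to the leading-coefficient definition used in this section. A minimal sketch (our own illustration, in Python), checking the degree count of Lemma 2.10 and the collapsing-nodes limit that follows from the Hermite-Genocchi formula:

```python
import math

def divided_difference(f, xs):
    """Newton's recursion for the divided difference f[x_0, ..., x_m]
    over pairwise distinct nodes:
        f[x_i,...,x_j] = (f[x_{i+1},...,x_j] - f[x_i,...,x_{j-1}]) / (x_j - x_i)."""
    n = len(xs)
    table = [f(x) for x in xs]
    for level in range(1, n):
        for i in range(n - level):
            table[i] = (table[i + 1] - table[i]) / (xs[i + level] - xs[i])
    return table[0]

# A polynomial of degree 2 interpolated at 4 nodes: the divided
# difference of order 3 (the leading coefficient of the interpolant) is 0.
P = lambda x: 3 * x**2 + x + 1
print(divided_difference(P, [0.0, 0.5, 1.0, 2.0]))   # ~ 0.0

# Nodes collapsing to y: f[x_A] approaches f^{(|A|-1)}(y) / (|A|-1)!,
# here exp'''(0)/3! = 1/6 for the exponential function.
nodes = [k * 1e-3 for k in range(4)]
print(divided_difference(math.exp, nodes), 1 / math.factorial(3))
```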
For x A ∈ U A we consider the mapping It is well defined for x A ∈ U A , since a polynomial is infinitely differentiable.For a subset I of A, we equip R I [X] with the basis of polynomials (P i x I ) i∈I .Let Q I (x A ) be the matrix of the application π I x A in that basis.For instance, when ) is invertible and from Equation ( 26) and ( 28), one has and thus It implies that det By continuity with respect to x A of the application π I x A this formula remain true for x A ∈ U A .We deduce that the application π I x A is invertible for x A ∈ ∆ I + .Now let J be another partition of A, finer than the partition I.For x A ∈ U A we consider the mapping π I,J x A : Restricted to R I [X], the application π I,J x A coincides with π J I x I .Let Q I,J (x A ) be the matrix of the application π I,J x A , so that Lemma 2.15. In particular, the application π I,J x A is invertible in the case where x A ∈ ∆ J + . Proof.We have from Equation (29) And this expression does not cancel when x A ∈ ∆ J + . Divided difference on a subset In this section, B is a subset of A and I is a partition of A. From Equation ( 26) and ( 28), one has , where for a matrix M , the matrix M B,A is defined in (21).For x A ∈ U A we consider the mapping π I,B x A : and let Q I,B (x A ) be the matrix of this application. Lemma 2.16.The set and the conclusion follows.If not, we change the order on I, so that B ∩ I = b|I for some b ∈ B ∩ I.According to Remark 2.12, the change-of-basis matrix is of determinant 1 and the coefficients are polynomial expressions in (x i − x j ) i,j∈I .Since x A ∈ ∆ I,η , there are bounded, and the matrix Q {I},B∩I (x I ) is of maximal rank.The conclusion in the general case follows from Lemma 2.6. Doubling divided differences In the paragraph, we consider the divided difference on the set 2A defined in (13).We equip the set 2A with the lexicographic order inherited from the order on {1, 2} and the arbitrary order on A. Note that The interest of doubling divided differences is to consider simultaneous interpolation of the function f and f ′ on prescribed points (x a ) a∈A .This part coincides with the classical Hermite interpolation and we will assume that f is of class C 2|A|−1 . Let I be a partition of A (and thus of 2A, following the notation ( 14)).As a consequence of Definition (31), one has We define Recall the definition of M A (R) and M * A (R) in Paragraph 2.3.5.One has the following key proposition. Proposition 2.17. Let x Moreover, the set Proof.The first three assertions directly follow from the definition of M I and Q I .As for the second proposition, let x A ∈ ∆ I,η .According to Lemma 2.14, the coefficients of the matrix M I B (x B ) are bounded by a constant C(η).We also have x A,A ∈ ∆ I,η .Lemma 2.16 applies, and the set of matrices (Q I,2B (x A,A )) x A ∈∆ I,η is relatively compact in M * 2B,2A (R).The conclusion follows. Divided differences of a Gaussian process At last we describe the covariance matrix of the divided difference vector of a Gaussian process.The integral representation given by the Hermite-Genocchi formula gives a convenient expression for the coefficients of the covariance matrix. Let f be a Gaussian process of class C |A|−1 on the interval U .We denote r the covariance function of f , which is differentiable |A| − 1 times in each variable.We define Lemma 2.18.Let I, J ∈ I, a ∈ I and b ∈ J. 
Then Proof.The Hermite Genocchi formula 2.8 asserts that Kac-Rice formula for Gaussian processes For now on, A is a finite set and f is a centered Gaussian process defined on an interval U of R or T, with covariance function r.We assume for this section that the process f is of class C 2|A|−1 , and for every partition I of A and y I ∈ U I \ ∆, the following non-degeneracy condition holds It has been shown for instance in [10,Thm. 3.6] that this condition ensures the finiteness of the |A|-th moment of the number of zeros of a Gaussian process on a bounded interval. Kac density and cumulants of the zeros counting measure In this section we give the expression of the p-th factorial moment and cumulant of the zeros counting measure.The first step is to lift the degeneracy of the Kac-Rice formula near the diagonal. Non-degeneracy of the Kac density near the diagonal We apply the method of divided differences to lift the degeneracy of the Kac density and give an alternative formula near the diagonal.The material is this section is quite standard, see for instance [10,3].Only Lemma 3.4 is new and allows us to express the Kac density as a function of a non-degenerate Gaussian vector. We fix for the rest of this paragraph a partition I of A. Lemma 3.1. The Gaussian vectors f Proof.We prove first the non-degeneracy of f I [x A ].The process f is of class C |A|−1 , and the Gaussian vector f I [x A ] is well-defined.Let x A ∈ ∆ I + .By definition of the set ∆ I + , there is a partition J finer than the partition I and such that x A ∈ ∆ J .We write x A = ι J (y J ) for some y J ∈ R J \ ∆.For J ∈ J , we have from equation ( 24) , which is non-degenerate by the hypothesis (35).Now from Equation (30), According to Lemma 2.15, the matrix Q I,J (x A ) is invertible when x A ∈ ∆ J , which implies that the Gaussian vector f I [x A ] is non-degenerate for x A ∈ ∆ I + .Now for the Gaussian vector f I [x A,A ], the process f is of class C 2|A|−1 and the Gaussian vector f I [x A,A ] is well-defined.Moreover, if x A ∈ ∆ I + then x A,A ∈ ∆ I + and we can apply the previous case to the set 2A to deduce the non-degeneracy of the Gaussian vector We define the random set By Bulinskaya lemma (see [10,Lem. 1.20]) and the assumption on f , the subset Z is almost surely a closed discrete subset of U and we can define the random measure ν to be the counting measure of Z.The Kac-Rice formula (see [10,Thm. 3.2] and [3,Prop. 3.6]) asserts that for a measurable function Φ : U A → R bounded with compact support, one has, following the notations of Section 2.2, where The function ρ is only defined for x A ∈ U A \ ∆.Along the diagonal ∆, the function N is illdefined and the function D cancels.The first step consists in giving an alternative non singular formula for ρ in a neighborhood of the diagonal ∆.Let x A ∈ ∆ I + .We define and Lemma 3.1 implies that the three quantities above are well defined. Remark 3.2. If One has the following relations. Lemma 3.3.Let J be a finer partition than I. Then for x A ∈ ∆ J + one has It implies that the function ρ J can be extended by continuity from ∆ J + to ∆ I + via these relations.By taking I = {A} and J = A, it implies that the function ρ can be extended by continuity to the whole space U A and is bounded.Moreover, one has thus the function ρ vanishes on the diagonal ∆. Proof. 
Let x We deduce that The quantities N I A (x A ) and N J A (x A ) are well defined.The Gaussian vectors f I [x A ] and f J [x A ] are equal up to a linear invertible change of variable, and they cancel simultaneously.In other words, one has Let I ∈ I, J ∈ J I and j ∈ J. From Lemmas 2.11 and 2.13, conditionally on f I [x I ] = 0 one has from which we deduce We deduce the alternative expression for ρ J from (37) and (38). When the points (x a ) a∈A collapse on the diagonal ∆ I the vector (f [x I , x i ]) I∈I,i∈I becomes degenerate, which makes unpractical the analysis of regularity of the function ρ I in a neighborhood of the diagonal ∆ I .The following lemma gives another expression of the quantity N I (x A ) that depends fully on a non-degenerate Gaussian vector.Recall the definition of the function f [x B ] for a subset B of A in (25). Lemma 3.4. One has Proof.Let I ∈ I.According to formula (26), The conclusion follows from the definition of N I (x A ). Expression of the cumulants of the zeros counting measure We are now ready to give the expression of the cumulant of order |A| of the linear statistics associated to zeros counting measure.Let (φ a ) a∈A be a collection of bounded functions with compact support.We define the joint cumulant of the family of random variables ( ν, φ a ) a∈A .We define the cumulant Kac density associated with the set A to be the function The following Proposition 3.5 express the cumulant of order |A| of the linear statistics associated to zeros counting measure.It is key step in towards proof of Theorem 1.6, and reveals the elegant interplay between the factorial power counting measure and the combinatorics of cumulants.The formula is quite standard in the study of k-point function of point processes, see for instance [22,Claim 4.3]. Proposition 3.5.We have Proof.We have, using the expression of cumulants in terms of moments given by ( 17) and the notation ( 12) The link between the power measure and factorial power measure given by Lemma 2.2 implies that The bijection given by ( 15) then implies The Kac-Rice formula then asserts that from which we deduce that where the last equality follows from the bijection given by (15). For instance if |A| = 2 then the second order cumulant coincides with the variance and Matrix representation of the Kac density and factorization property In this section we prove a matrix representation for the Kac density and the cumulant Kac density.It allows us to dissociate the analysis of the covariance matrix of divided differences associated with the Gaussian process f , and of the Kac density seen as a functional of the covariance matrix. Matrix representation of the Kac density We define the mapping Recall from definition (34) that The following lemma gives an alternative expression of ρ as a function of the covariance matrix Σ I (x A,A ), and the matrix of divided differences M I (x A ) defined in (28). Proof.Note first that Remark 3.2 and Lemma 3.3 implies that Let I ∈ I.In virtue of Lemma 2.11, one has Following the notations of Section 2.3 it implies that From Equation ( 19), one has Using the alternative expression of N I given by Lemma 3.4 and the conditional formula of Lemma 2.3, we deduce The conclusion follows. Lemma 3.7. Let x Proof.Recall that the partition I B of B is the partition induced by the partition I on the subset . 
We can thus apply the previous Lemma 3.6 to get From Equation (32), Given two open subsets Ω 1 and Ω 2 of finite dimensional vector spaces, we define the function space C 0,∞ (Ω 1 , Ω 2 ) to be the set of functions from Ω 1 × Ω 2 to R, that are infinitely differentiable with respect to the second argument and such that the partial derivatives (with respect to the second argument) are jointly continuous.We endow this space with the usual topology of uniform convergence of second partial derivatives to any order on compact subsets of Ω 1 × Ω 2 . Proof. Let h(Σ, y The function Σ → h(Σ, y A ) is infinitely differentiable on S + 2A (R) and its partial derivatives are also exponentially decreasing with respect to the variable y A .By differentiability under the integral, it implies that ρ belongs to C 0,∞ (M A (R), S + 2A (R)). Factorization of the Kac density and error term In this section, we show that the function ρ satisfies a nice factorization property.This is a rigorous statement of the approximation (10) stated in introduction.For the rest of this section, I is a fixed partition of the set A. Proof.Since the matrix Σ is block diagonal with respect to the partition I, Similarly, for y The matrix M I is also block diagonal with respect to the partition I and The conclusion follows from the definition of ρ. We now want to describe the error term in Lemma 3.9 after perturbation of the block-diagonal matrix Σ.We start with the following lemma. Lemma 3.10.Let K be a compact subset of M A (R) × S + 2A (R).There is a constant C K such that for all (M, Σ) ∈ K with M = M I , one has Proof.We set H = Σ − Σ I .The matrix H is symmetric and satisfies H I = 0.It implies that One has from identity ( 20) where 22 and the big-oh is uniform on the compact K.By (40), one has (H Σ ) I = 0. Differentiation under the integral sign gives where For each i, j of the sum we make the change of variable Since M and Σ c are block diagonal matrices , one has for a ∈ A and and the conclusion follows. We can now state the following proposition that gives the error in Lemma 3.9 when the matrix Σ is not block diagonal with respect to the partition I.Note that the following Lemma gives a quadratic error in the matrix coefficients of Σ, where in the somehow analogous Proposition [3, prop. 6.43] only proves a square root error.The difference resides in Lemma 3.4, which allows us to bypass the lack of regularity of Gaussian integrals near the boundary of the cone of symmetric definite matrices. Corollary 3.11. Let B be a subset of A and K be a compact subset of M Proof.Let Π = QΣ T Q. Lemma 2.6 asserts that the couple (M, Π) lives in a compact set of By Lemma 3.8, the application ρ belongs to C 0,∞ (M B (R) × S + 2B (R)).Lagrange remainder formula asserts the existence of a constant C K such that Finally the conclusion follows from Lemma 3.9 since Matrix representation of the cumulant Kac density Similarly to the Kac density, we can derive a matrix representation for the cumulant Kac density defined in (39).Note that the divided differences do not behave well when we consider them on a subfamily of interpolations points (x A ) a∈A .It explains why we introduced in Paragraph 2.3.5 the somehow complicated set M * A (R).We introduce the function F A defined by Let I be a partition of A. 
The following lemma gives an alternative expression to the function Lemma 3.12.For x A ∈ ∆ I + one has According to lemma 3.7, for a subset J of A one has The first statement follows from the definition (33) of M I (x A ) and Given the definition of the function F A , one can translate the cancellation property of Lemma 2.1 to this function.It is the object of the following lemma. Lemma 3.13.Let I be a partition of A, with Proof.For a subset B be a subset of A, we set Then from Lemma 3.9 one has Corollary 3.11 translates directly into the following bound for the function F A .In the following, K is a compact subset of M * A (R) × S + 2A (R). Lemma 3.14.Let I be a partition of A, with I = {A}.Then there is a constant and Proof.From Lemma 3.13, one has Since the function F A is a polynomial expression in the functions ρ, the error term given by Corollary 3.11 translates directly for the function F A to the desired estimate. Decay of the cumulant Kac density The goal of the following section is to improve the quadratic bound given by Lemma 3.14.We will do so, thanks to a refinement of Taylor expansion for regular functions that cancel on given affine subspaces.The next key Lemma 3.22 bounds the function F A by a sum over a collection of graphs.We recall first a few definitions and propositions from graph theory. Graph setting , where E(G) is the set of vertices of the graph G and V (G) the collection of edges of G.For our purposes, a graph G has no loop, but two different edges can have the same endpoints.The multiplicity of an edge {a, b} is the number of edges in the graph that are equal to {a, b}.We say that a graph G is 2-edge connected if the multiplicity of any edge is at most two, and the graph G remains connected when any one of its edges is removed.We define G A to be the set of 2-edge connected graphs with set of vertices A. Notice that this set has finite cardinal. Let I be a partition of A and let G be a graph with set of vertices A. We define the graph G I on the set of vertices I to be the quotient graph with respect to the partition I.That is, the multiplicity of the edge {I, J} (with Proof.For each I ∈ I, we replace in H the vertex I by the cycle (i) i∈I .The neighbors of I are arbitrary linked to vertices of this cycle.The obtained graph with set of vertices A satisfies G I = H and is 2-edge connected. An ear of a graph G is a path in G such that its internal vertices all have degree two.Note that a cycle is a particular instance of an ear.An ear decomposition of the graph G is a union (P 1 , . . ., P k ) such that P 1 is a cycle, and for i ≥ 2, P i is an ear such that its endpoints belong to ∪ j<i P i .We states the following standard fact for 2-edge connected graphs (see [28]).The proof is a simple induction on the number of ears. Lemma 3.16. A 2-edge connected graph admits an ear decomposition. The number of ears is necessary the circuit rank of the graph G. Moreover, the starting cycle can be chosen arbitrarily among the cycles of G. It implies the following lemma.Lemma 3.17.Let G be a 2-edge connected graph.There is a family (T a ) a∈A of spanning trees of G such that for every edge e ∈ E(G), one can find an element a e ∈ A such that e is not an edge of the spanning tree T ae . Proof.Let P 1 be a largest cycle in G, with vertices B, and (P 1 , . . ., P k ) be an ear decomposition of G.For i ≥ 1, we define E i the set of edges of the path P i .One has |B| ≥ |E i |, so that one can find a surjection τ i : B ։ E i . 
For a / ∈ B, we define T a to be any spanning tree of G.For a ∈ B, we define T a to be the graph G where we removed, in each path E i , the edge τ i (a).The number k is the circuit rank of the graph G.By construction, the graph T a is connected, so it must be a spanning tree of the graph G. Every edge e ∈ E i is the image of some element a e ∈ B by the surjection τ i , so that the edge e does not belong to the tree T ae . Crossed Taylor formula In this paragraph we prove an enhancement of the Taylor remainder estimates for regular functions that cancel on affine subspaces.A simple observation of this phenomenon is the following.Assume that F (x, y) is a regular function such that in a neighborhood of zero, Then for some constant C, one has in a neighborhood of zero that which improves by a square factor the trivial bound |xy|.We wish to extend this observation to the more complicated function F A that satisfies the bounds given by Lemma 3.14 for several partitions I of A. In the following, we give a general statement for this phenomenon. Let Ω be an open subset of a finite dimensional vector space V ≃ R E , F an infinitely differentiable function on Ω, and y E be a vector in V .We fix an integer d ∈ N. The following lemma states equivalent conditions for a regular function F to cancel on an affine subspace with order of cancellation at least d.Lemma 3.18.Let B be a subset of E. Then the three following conditions are equivalent. For all w Proof.We can assume that Ω is a product of open intervals.The general case follows by a partition of unity.We denote by Ω B the projection of Ω to R B . • (2) ⇒ (1) follows from the boundedness of the functions H n B on compact subsets K of Ω. • (1) ⇒ (3) is a consequence of the uniqueness of the polynomial approximation given by Taylor expansion. • (3) ⇒ (2), we distinguish two cases.If y B ∈ Ω B , then the implication a direct consequence of Taylor expansion with integral remainder of the function F on the segment between points x E and (x E\B , y B ).If y B / ∈ Ω B , then there is an index b ∈ B such that y b / ∈ Ω {b} .We can then define Now we extend the previous Lemma 3.18 to a collection B of (not necessarily disjoints) subsets of E. For a fix positive integer d we define Proof.Once again, we can assume that the Ω is a product of open intervals, and for a subset B of E we denote by Ω B the projection of Ω to R B .The proof is a induction on the size of the set B. If B = {B}, this exactly the hypothesis on F (second characterization in Lemma 3.18).Now let B ∈ B and suppose that the lemma is true for the family B \ {B}.We have As in the proof of 3.18, we distinguish two cases.Assume first that y B ∈ Ω B , and let w E ∈ Ω such that w B = y B .For every multi-index according to the third characterization in Lemma 3.18.Let w E = (x E\B , y B ).On cannot directly use Lemma 3.18 to the functions H n E because it is not guaranteed that they satisfy one of the equivalent propositions, but it will be the case if we subtract to H n E its Taylor expansion. 
For n E ∈ C B\{B} and x E ∈ Ω we define the quantity Now we compute For instance, let E = {1, 2, 3}.Let F be an infinitely differentiable function such that for (x, y, z) in any compact subset Then the function F satisfies the hypotheses of Proposition 3.19 with B = {{1, 2}, {2, 3}, {1, 3}} and d = 2.It implies the existence of a constant C K such that for (x, y, z) ∈ K, Remark 3.21.Let Ω 1 be an open subset of a finite dimensional vector space and assume that F ∈ C 0,∞ (Ω 1 , Ω) (this function space is defined before Lemma 3.8).Then Proposition 3.19 remains true if one replace C ∞ (Ω) by C 0,∞ (Ω 1 , Ω) and the proof is in all points similar.We now apply the previous Corollary 3.20 to the function F A .Lemma 3.22.Let I be a partition of A and K be a compact subset of M A (R) × S + 2A (R).Then there is a constant C K such that for all Proof.The proposition is trivial if I = {A}, and we can assume that I = {A}.The proof is an application of Corollary 3.20.To this end, we define for subsets B, C of A the set Then the set V = S 2A (R), endowed with its canonical basis, can be naturally identified with R A•A .For J ∈ P A with J I we define and The set B J encodes the indices of the coefficients in the off-diagonal blocks with respect to the partition J .As a consequence of Lemma 3.8 the function ).Let J be a partition such that I J ≺ {A}.The assumption on M and Q imply that According to Lemma 3.14 there is a constant C K (that may change from line to line) such that The function F then satisfies the hypotheses of Corollary 3.20 with d = 2 and family of subsets B I , from which we deduce the existence of a constant C K such that To every multi-index n ∈ C B I (which can be seen as a symmetric matrix of size |A| with coefficient in {0, 1, 2}), we can associate a graph G n with set of vertices I, and where the multiplicity of the edge {I, J} is given by the number |n I•J |.From the definition of the set C B I , any partition into two disjoints subsets of the vertices I in the graph G n must be linked with at least two edges.it follows that the graph G n is two-edge connected and subsequently belongs to the set G I .Following inequality (43), one has Σ I,J . Asymptotics of the cumulants of the zeros counting measure We are now in position to study the asymptotics of the cumulants of the zeros counting measure associated with a sequence of processes (f n ) n∈N .We first prove that the non-degeneracy condition (35) holds uniformly for n ∈ N, which allows us to translate the previous Lemma 3. 22 to the cumulant Kac density F A,n associated with the sequence (f n ) n∈N . Uniform non-degeneracy of the covariance matrix Up to now, we assumed that the generic process f that we considered satisfied the non-degeneracy condition (35).For stationary Gaussian processes, this non-degeneracy condition is true under very mild assumptions on the process.For non-stationary processes there seems to be no simple conditions that ensure the validity of (35).Nevertheless in our case of interest, we consider a sequence of Gaussian processes that converges in distribution towards a stationary Gaussian process and we are able to prove some uniform non-degeneracy condition in this setting. In this subsection, A denotes a finite set and I a partition of A. For n ∈ N, we consider f n a Gaussian process defined on nU .We will use the notations introduced in Section 3. 
In the following, we fix a positive number η and we consider the subsequent partition (∆ I,η ) I∈P A of (nU ) A .In particular we consider the quantities ρ A,n , F A,n , Σ I n (x A ), etc. associated with the process f n . We assume for now that the sequence (f n ) n∈N satisfies hypotheses H 1 (q) and H 2 (q) defined in ( 5) and ( 6), with q = |A| − 1.In particular the quantity Σ ) is well defined.Since the function g of hypothesis H 2 (q) decreases to zero, then for ε > 0, one can find a constant The main proposition of this section is the following. Proposition 4.1.In the above setting, there is a compact set K η of S + A (R) such that for all n ∈ N large enough, and x A ∈ ∆ I,η , the matrix Σ We prove first Proposition 4.1 for the limit stationary process f ∞ . Lemma 4.2.In the above setting, there is a compact set K η of S + A (R) such that for all x A ∈ ∆ I,η , the matrix Σ I ∞ (x A ) lives in K η . Proof.According to Lemma 2.4, one must show the existence of positive constants C η and c η such that From the Hermite Genocchi formula 2.8 and Lemma 2.18, we observe that the coefficients of the matrix Σ I ∞ (x A ) are bounded by g ∞ .It remains to prove the uniform positiveness of det Σ I ∞ (x A ) on ∆ I,η .The covariance function of f ∞ decreases to zero by assumption.Since for Gaussian vectors, decorrelation implies independence, one see that the process f ∞ is weakly mixing, which in turn implies ergodicity.By Maruyama theorem (see [21]), the spectral measure µ ∞ of f ∞ has no atoms.It is then a standard fact that this observation implies the non-degeneracy condition (35), and Lemma 3.1 implies that the Gaussian vector (f ∞ ) I [x A ] is also non-degenerate for x A ∈ ∆ I,η . We prove the uniform lower bound for x A ∈ ∆ I,η by induction on the size of the set A. If |A| = 1 it reduces to the fact that the process f ∞ is non-degenerate.Assume that the property is true for every strict subset B of A. Let J be another partition of A such that J I, and ε > 0. Following Equation (18) we introduce We can assume that T ε ≥ |A|η.In that case, one has ∆ I,η ⊂ J I ∆ J ,Tε . In the case J = {A}, one has for all a, b ∈ A and x A ∈ ∆ {A},Tε The set ∆ I,η ∩ ∆ {A},Tε is not compact, but it is compact by translation in the sense that it is compact if one fixes one of the coordinates.This compactness property plus the stationarity of the process f ∞ implies that one can find a positive constant c η,ε such that Since the determinant is a smooth function of the matrix coefficients and the matrix Σ I ∞ (x A ) is bounded, we deduce the existence of a constant C η such that for For all J ∈ J , the set J is a strict subset of A. Moreover, if x A ∈ ∆ I,η then x J ∈ ∆ I J ,η . By induction hypothesis, one can find a positive constant c η depending only on η such that det Σ Taking ε small enough and gathering the case J = {A} and J = {A}, the conclusion follows. Proof of Proposition 4.1. Proof. A reformulation of hypothesis H The function ψ is uniformly continuous by hypothesis and we can define ω ψ its uniform modulus of continuity.By hypothesis, there are positive constants c ψ and C ψ such that for all x ∈ U , Let n ∈ N. 
If s, t ∈ nU and |t − s| > T ε then hypothesis H 2 (q) implies that for u, v ≤ |A| − 1, Gathering ( 45) and (46), there is n ε ∈ N such that for n ≥ n ε , and s, t ∈ nU Let n ∈ N with n ≥ n ε and x A ∈ ∆ I,η .For I, J ∈ I, a ∈ I and b ∈ J one has from Lemma 2.18 Inequality (47) implies where For i ∈ I, one has |x a − x i | ≤ |A|η.It implies that for any convex combination y of the variables (x i ) i∈I one also have |x a − y| ≤ |A|η.We deduce that and thus coming back to inequality (48), It implies the existence of a constant C η such that for n large enough and We deduce that det Σ The conclusion follows from the previous Lemma 4.2 covering the stationary case, and taking ε small enough. As a consequence of the previous Proposition 4.1, we deduce the following corollary about convergence of the Kac density associated with the process f n .Corollary 4.3.Assume that the sequence of processes (f n ) n∈N satisfies hypotheses H 1 (q) and H 2 (q) defined in (5) and (6), with q = 2|A| − 1.Then there is a compact set K η of S + 2A (R) such that for all n ∈ N large enough and x A ∈ ∆ I,η , In particular we have the following convergence, uniformly for x ∈ U and t A in compact subsets of Proof.The first assertion is a direct application of Proposition 4.1 with the set 2A, using the fact that x A,A ∈ ∆ I,η when x A ∈ ∆ I,η .As for the second one, the proof of Proposition 4.1, and in particular equation (49), implies that for all partition I of A, one has the following convergence, uniformly for x ∈ U and t A in a bounded subset of ∆ I,η The conclusion follows from the alternative expression for ρ n given by Lemma 3.6.Note that the function ρ ∞ does not depends on the function ψ(x), by a change of variable. Asymptotics of the cumulants Let A be a finite set of cardinal p.We assume that the sequence of processes (f n ) n∈N satisfies hypotheses H 1 (q) and H 2 (q) defined in ( 5) and ( 6), with q = 2p − 1.We then choose η = ω 2p where ω is the parameter of hypothesis H 2 (q), so that Decay of the cumulant Kac density Let us now translate Lemma 3.22 to the cumulant Kac density F A,n .The previous Corollary 4.3 ensures that the matrix Σ I n (x A,A ) lives in a compact subset of S + 2A (R) when x A ∈ ∆ I,η and n is large enough. Lemma 4.4. There is a constant C such that for all x Proof.Let I be a partition of A and x A ∈ ∆ I,η .According to Corollary 4.3, the matrix Σ I n (x A,A ), for n large enough depending only on η, lives in a compact subset of S + 2A (R) depending only on the parameter η.By Proposition 2.17, the element ( M I (x A ), Q I (x A )) also lives in a compact subset of M * A (R) that depends only on η.We can then apply Lemma 3.22 with Σ = Σ I n (x A,A ) and ( M , Q) = ( M I (x A ), Q I (x A )).Given the representation formula for F A given by Lemma 3.12, we deduce the existence of a constant C η such that for all x A ∈ ∆ I,η , Let We deduce the existence of a constant C η such that for x A ∈ ∆ I,η , g ω (x a − x b ). The inequality is true for every partition I of A and the conclusion follows. Convergence of the error term towards zero Recall from Definition (4) that ν n is the random counting measure of the zero set of the Gaussian process f n (n .) defined on U .The previous Lemma 4. 
4 and the formula for the p-th cumulant given by Proposition 3.5 shows that the convergence of the cumulant reduces to the behavior of the quantity where G is a 2-edge connected graph with set of vertices A and set of edges E(G), (φ a ) a∈A are bounded functions with compact support and (g e ) e∈E(G) are even functions in L 2 ∩ L ∞ . The quantity I n (G), in the context of cumulants asymptotics, is somehow reminiscent of a theorem of Szegő (see [8] and the references therein), where this kind of integral received a thorough treatment and Hölder bounds that depends on the structure of the graph were given.It has been for instance used conjointly with diagram formulas and Wiener Chaos expansion techniques to prove the Gaussian asymptotics of non-linear functional of random measures, see for instance [25]. Nevertheless our setting is not exactly the same, and we were able to give a very short and self contained argument, that relies only on a basic interpolation inequality for Hölder norms, which proves a tight Hölder type bound for the quantity I n (G) when G is 2-edge connected.We recall the following fundamental theorem about Hölder inequality proved by Barthe in [11,Sec. 2], which is a particular instance of the Brascamp-Lieb inequality (a good survey reference is for instance [13]).Theorem 4.5 (Hölder interpolation).Let m, n positive integers, and v 1 , . . ., v m be nonzeros vectors which span the Euclidean space R n .We denote by Q the subset of [0, 1] m such that q ∈ Q if there is a finite constant C q such that for every measurable functions ψ 1 , . . ., ψ m from R to R, Then Q is convex. The above theorem implies the following theorem about the integral quantity I n (G).Recall that p = |A|.Since for all e ∈ E(G), there is a tree T ae that does not contain the edge e, one must have p e ≥ p p−1 , and the first part of the lemma follows.For the second part, note that we also have the crude bound We can once again interpolate this inequality with inequality (51) and convex combination It remains show that the left hand side of (52) converges towards zero for p ≥ 3.If the functions (g e ) e∈E(G) are bounded and compactly supported, then inequality (51) implies the convergence towards zero of the left hand side of (52) when p ≥ 3.In the general case, one can take, for every e ∈ E(G), a sequence of bounded and compactly supported functions that converge towards g e in L qe .The Hölder bound given by (52) and the triangular inequality imply the desired result. The above Lemma 4.6 implies that the space of test functions (φ a ) a∈A can be extended to L p (U ).The previous Lemma 4.6 and the convergence of the Kac density given by Corollary 4.3 imply the following lemma.Proof.According to Lemma 4.4, there is a constant C such that where I n (G) is defined in (50) with functions g e = g ω .The first part of the corollary is an immediate consequence of the second part of Lemma 4.6.Assume first that the functions (φ a ) a∈A are continuous and compactly supported.In that case, pick a 0 ∈ A. We define y a 0 = 0 and we make the change of variables x a 0 = ny and ∀a ∈ A \ {a 0 }, x a = ny + y a . 
Then we have the following uniform convergence for y ∈ U and y A in compact subsets of R A The conclusion then follows from the dominated convergence theorem.In the general case, we consider for all a ∈ A a sequence of continuous and compactly supported functions that converges towards φ a in L p .The Hölder bound given by Lemma 4.6 and another application of dominated convergence theorem imply the desired result. Given the expression of cumulants given by Proposition 3.5 and the previous Lemma 4.7, we then deduce the following theorem concerning the convergence of cumulants associated with the linear statistics of the zeros counting measure of the sequence of processes (f n ) n∈N .We define the Stirling number of the second kind In particular, one has and It has been shown for instance in [26] that under our assumptions on the process f ∞ , the constant γ 2 is positive, from which follows the central limit theorem for the linear statistic associated with the zeros counting measure. H ∈ G I .According to Lemma 3.15, there is a graph G ∈ G A such that G I = H.If we remove the edges {a, b} of G such that [a] I = [b] I , then there is a bijection between the edges of G and the edges of H given by the mapping {a, b} −→ {[a] I , [b] I }.Let {a, b} an be edge of the graph G.• If [a] I = [b] I then |x a − x b | ≤ |A|η.We deduce that 0 < r ∞ (0) ≤ g(0) ≤ g ω (x a − x b ).•If [a] I = [b] I then from the Hermite-Genocchi formula and Lemma 2.18,(Σ I n (x A,A )) I,J ≤ sup |s|≤2ηp g(x a − x b + s) ≤ g ω (x a − x b ).We deduce that {I,J}∈E(H)(Σ I n (x A,A )) I,J ≤ {a,b}∈E(G) [a] I =[b] I g ω (x a − x b ) ≤ C {a,b}∈E(G)g ω (x a − x b ). Lemma 4 . 6 . Assume that for all e ∈ E(G), g e ∈ L p p−1 .Then for every e ∈ E(G), there is a number p e ≥ p/(p − 1) such that Assume that p ≥ 3 and g e ∈ L 2 ∩ L ∞ .Thenlim n→+∞ 1 n p/2 I n (G) = 0.Proof.Let (T a ) a∈A be the family of spanning trees of G given by Lemma 3.17.For fixed index a ∈ A, the linear mappingx A −→ (x a , (x b − x c ) {b,c}∈E(Ta) )is volume preserving.For e / ∈ E(T a ) we bound the term g e (x i − x j ) in I n (G) by g e ∞ , and for b = a, the function φ b by φ b ∞ .By a change of variable, we getI n (G) ≤ n φ a 1This inequality is true for all a ∈ A. By Theorem 4.5, one can interpolate this collection of Hölder inequalities indexed by the set A and convex combination (1/p, . . ., 1/p) to obtainI n (G) ≤ Cn a∈A F A,n ny + y A = F A,∞ (y A ). Theorem 4 . 8 .R k− 1 F 1 F {I ∈ P A | |I| = k} .Let p ≥ 2 and assume that the sequence of processes (f n ) n∈N satisfies the hypotheses H 1 (q) and H 2 (q) with q= 2p − 1.Let φ ∈ L 1 ∩ L p 2 .If p ≥ 3 then lim n→+∞ 1 n p/2 κ p ( ν n , φ ) = 0. Moreover when g ω ∈ L p p−1 , lim n→+∞ 1 n κ p ( ν n , φ ) = U φ p (y)dy p k=1 p k k,∞ (0, x)dx .Proof.Let p ≥ 3. Recall from Proposition 3.5 thatκ p ( ν n , φ ) = I∈P A (nU ) I I∈I φ x I n |I| F I,n (x I )dx I .Since φ ∈ L 1 ∩ L p 2 , then for every partition I of {1, . . 
\(\{1, \dots, p\}\) and every block \(I \in \mathcal{I}\), the function \(\varphi^{|I|}\) is in \(L^{|\mathcal{I}|}\). According to the previous Lemma 4.7, one has \(\frac{1}{n^{p/2}} |\kappa_p(\langle \nu_n, \varphi\rangle)| \le \sum_{\mathcal{I}\in\mathcal{P}_A} \frac{1}{n^{p/2}} \big| \int_{(nU)^{\mathcal{I}}} \prod_{I\in\mathcal{I}} \varphi\big(\tfrac{x_I}{n}\big)^{|I|} F_{\mathcal{I},n}(x_{\mathcal{I}})\,dx_{\mathcal{I}} \big| \to 0\) as \(n \to +\infty\), which proves the first assertion. As for the second assertion, it is again a consequence of Lemma 4.7, which implies that \(\lim_{n\to+\infty} \frac{1}{n} \kappa_p(\langle \nu_n, \varphi\rangle) = \int_U \varphi^p(y)\,dy \sum_{\mathcal{I}\in\mathcal{P}_A} \int_{\mathbb{R}^{|\mathcal{I}|-1}} F_{|\mathcal{I}|,\infty}(0,x)\,dx\). The proof of the main Theorem 1.6 is a reformulation of the previous Theorem 4.8, with, for all \(p \ge 1\), \(\gamma_p = \sum_{k=1}^p \left\{{p \atop k}\right\} \int_{\mathbb{R}^{k-1}} F_{k,\infty}(0,x)\,dx\).

Let \(C_B = \{n_E \in \mathbb{N}^E \mid \forall B \in \mathcal{B},\ |n_B| \ge d\}\). Assume that for every \(B \in \mathcal{B}\), the function F satisfies the equivalent statements of Lemma 3.18. Then there exist finitely many non-zero functions \((H_{n_E})_{n_E \in C_B}\) in \(C^\infty(\Omega)\) such that the corresponding expansion of F holds. Indeed, by Taylor expansion with integral remainder (or directly by (3) ⇒ (2) of Lemma 3.18), there exist functions \((H_{n_E, p_B})_{|p_B| = d - |n_B|}\) in terms of which the remainder \(p_B(x_E)\) can be written. One then has \(|n_B + p_B| \ge d\) and thus the multi-index \((n_{E\setminus B}, n_B + p_B)\) belongs to \(C_B\). The conclusion follows. If \(y_B \notin \Omega_B\), we can argue as in the proof of (3) ⇒ (2) in Lemma 3.18 to get an expression for \(H_{n_E}(x_E)\) similar to (42), and the conclusion directly follows. The previous Proposition 3.19 directly implies the following corollary. Let K be a compact subset of \(\Omega \subset \mathbb{R}^E\). If the function F satisfies the hypotheses of Proposition 3.19, then one can find a constant \(C_K\) such that the corresponding bound holds for all \(x_E\) in K.
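Two displayed formulas referenced repeatedly above, the definition (50) of \(I_n(G)\) and the inequality behind Theorem 4.5, were lost in this copy. The following is a plausible reconstruction based only on how they are used; the integration domain \((nU)^A\) in the first display and the exact normalization of the second are assumptions, not taken verbatim from the source.

```latex
% Plausible reconstruction of (50): the graph integral used in Lemma 4.6.
\[
  I_n(G) \;=\; \int_{(nU)^{A}} \prod_{a \in A} \varphi_a\!\Big(\frac{x_a}{n}\Big)
  \prod_{e=\{a,b\} \in E(G)} g_{e}(x_a - x_b)\, \mathrm{d}x_A .
\]
% Rank-one Brascamp-Lieb form of Theorem 4.5: q = (q_1,...,q_m) lies in Q
% when, for all measurable psi_i >= 0,
\[
  \int_{\mathbb{R}^n} \prod_{i=1}^m \psi_i\big(\langle x, v_i\rangle\big)^{q_i}\,
  \mathrm{d}x \;\le\; C_q \prod_{i=1}^m
  \Big(\int_{\mathbb{R}} \psi_i(t)\,\mathrm{d}t\Big)^{q_i}.
\]
```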
20,773.2
2021-12-16T00:00:00.000
[ "Mathematics" ]
Sphingolipid Catabolism and Glycerophospholipid Levels Are Altered in Erythrocytes and Plasma from Multiple Sclerosis Patients Multiple sclerosis (MS) is an autoimmune, inflammatory, degenerative disease of the central nervous system. Changes in lipid metabolism have been suggested to play important roles in MS pathophysiology and progression. In this work we analyzed the lipid composition and sphingolipid-catabolizing enzymes in erythrocytes and plasma from MS patients and healthy controls. We observed reduction of sphingomyelin (SM) and elevation of its products, ceramide (CER) and sphingosine (SPH). These changes were supported by the detected up-regulation of the activity of acid sphingomyelinase (ASM) in MS plasma and alkaline ceramidase (ALCER) in erythrocytes from MS patients. In addition, Western blot analysis showed elevated expression of ASM, but not of ALCER. We also compared the ratios between saturated (SAT), unsaturated (UNSAT) and polyunsaturated fatty acids and suggest, based on the significant differences observed for this ratio, that the UNSAT/SAT values could serve as a marker distinguishing erythrocytes and plasma of MS patients from controls. In conclusion, the application of lipid analysis in medical practice would contribute to more precise diagnosis, analysis of disease progression, and evaluation of therapeutic strategies. Based on the molecular changes of blood lipids in neurodegenerative pathologies, including MS, clinical lipidomic analytical approaches could become a promising contemporary tool for personalized medicine. Introduction Multiple sclerosis (MS) is an immune-mediated, neurodegenerative, demyelinating, chronic disease of unknown etiology with a possible genetic predisposition and effects of certain environmental factors [1,2]. This neurological disease, with a wide variety of symptoms and clinical manifestations, often leads to serious damage of motor activity, paresis and paralysis, disturbed vision, disorders in the function of the pelvic organs, etc. [3,4]. Recent reports highlighted the potential usefulness of lipid markers in predicting or monitoring the course of MS, particularly in its progressive stages, which is still insufficiently addressed [5]. Changes in lipid metabolism and separate lipid molecular species have been suggested to play important roles in multiple sclerosis pathophysiology and pathogenesis. The characteristics of the patients and the controls are presented in Table 2. Besides the 9 male and 9 female patients, the study also involved 9 male and 9 female control individuals. Phospholipid Analysis of Erythrocyte Membranes and Plasma from Multiple Sclerosis Patients In this work we studied the alterations in the phospholipid (PL) composition of erythrocyte membranes (ghosts) and plasma from patients with MS (Figure 1). We were interested especially in the changes in the level and metabolism of SM and its metabolites, because these lipids are implicated in the pathophysiology and progression of MS [7]. Figure 1A shows the changes in the PL composition (presented as percentage participation in the total PL) of erythrocyte membranes isolated from blood samples of MS patients and control individuals. SM was decreased by 12% in erythrocyte membranes of MS patients, compared to controls. In addition, the other choline-containing PL, phosphatidylcholine (PC), was also reduced in MS erythrocytes but this reduction was not statistically significant.
It should be noted that the amino PLs phosphatidylethanolamine (PE) and phosphatidylserine (PS) were slightly elevated in MS ghosts, but again the observed differences were statistically insignificant. Since there is permanent lipid exchange between erythrocytes and plasma, we also analyzed the PL composition of plasma from MS and healthy individuals (Figure 1B). In MS plasma we also observed a decrease of the choline-containing PLs, PC and SM, the reduction of the latter being statistically significant. Again, there was an insignificant increase in the level of the two amino-PLs, PE and PS. It should be noted that in plasma and ghosts of both analyzed groups we measured the phospholipid lysophosphatidylcholine (LPC), which is missing in most of the reports devoted to blood PL composition in MS patients. Since LPC, which was elevated in MS patients, is a hydrolytic product of PC, we were interested in analyzing the phospholipase activity responsible for degradation of PC to LPC and arachidonic acid (AA), which is discussed below. Acid Sphingomyelinase (ASM), Alkaline Ceramidase (ALCER) and Phospholipase A2 (PLA2) Activity The reduction of SM is an interesting finding, which deserves special attention. SM is the source of the functionally active CER and SPH, which have been related to demyelination and oligodendrocyte death in MS [15]. We found that concomitant with the SM decrease, both of these intermediate sphingolipids were elevated in MS ghosts and plasma (Figure 2). Since ceramide in erythrocyte membranes and plasma is a product of the secreted acid sphingomyelinase (ASM), we analyzed its activity in blood plasma (Figure 3). The results showed that the enzyme was more active in plasma from MS patients (Figure 3A, increase of about 58%), implying that SM is used more actively as a source of ceramide under these pathological conditions.
Additional studies were performed using Western blot analysis to elucidate whether the elevated degradation of SM to ceramide was due only to activation of ASM, or whether the expression of this protein was also affected in the course of MS progression (Figure 3B). Immunoblotting with specific antibodies showed that the expression of ASM was increased in plasma of MS patients by 22%. Although modest, this rise in the detected protein level was statistically significant (p < 0.05). Thus, it seems likely that both upregulation and higher expression of ASM underlie the elevation of CER, which through ceramidase yields SPH, the latter being phosphorylated by sphingosine kinase to S1P. To analyze in detail whether the sphingolipid catabolic pathway is responsible for accumulation of sphingosine, we measured the activity of alkaline ceramidase (ALCER), which is located preferentially in erythrocytes [14]. ALCER was activated by 52% in MS ghosts but, unlike ASM, its expression did not show any detectable difference when compared to control erythrocytes (Figure 4A,B). Reaction with anti-glyceraldehyde-3-phosphate dehydrogenase antibodies (anti-GAPDH) was used as an internal control for loading. A graphical depiction of the percent change in alkaline ceramidase expression is presented on the right part of panel (B). Data represent pooled results from at least three independent experiments. Values represent means ± S.D. * p < 0.05. The differences between the values for alkaline ceramidase expression were not statistically significant.
The relatively high content of LPC in blood plasma and erythrocyte ghosts of MS patients (Figure 1) focused our attention on the activity of PLA2, because this enzyme produces LPC and unsaturated fatty acids, most commonly AA, which are in constant exchange between plasma and erythrocytes (Figure 5). A tendency of slight activation of PLA2 was observed in MS plasma but the observed differences were not statistically significant (Figure 5). Fatty Acid Determination in Erythrocyte Membranes and Plasma from Multiple Sclerosis Patients Our studies confirmed the already known fact that the level of saturated fatty acids (SAT) was higher in erythrocytes and plasma of MS patients (Table 3). We compared the ratios between SAT, UNSAT and polyunsaturated fatty acids (PUFA) and tried to establish correlations between the obtained values, which could serve as markers distinguishing MS erythrocytes and plasma from controls. The calculations showed that the UNSAT/SAT ratio was higher than 1.5 in control erythrocytes and lower than 1.5 in MS, implying that the value of 1.5 could serve as a reference value. Similarly, this ratio was higher than 2.0 in control plasma and lower than 2.0 in MS plasma (Table 3). The differences between other ratios such as PUFA/SAT and MONO/SAT were less pronounced, which makes them less likely to serve as markers to distinguish between controls and MS. Thus, we suggest that the UNSAT/SAT ratio could be a useful parameter to distinguish blood of control individuals from that of MS patients. Measurement of Lipid Peroxide and Isoprostane Levels in Plasma from Multiple Sclerosis Patients SM acts as an intrinsic antioxidant which prevents the PL fatty acids from oxidative destruction due to its tight association with the acyl chains. Thus, its reduction could render the polyunsaturated acyl chains more vulnerable to oxidative attack. As already mentioned, oxidative stress is considered a pathogenic factor participating in the onset and progression of MS [18]. Studies were carried out on the oxidative destruction of the lipid molecules in plasma from MS and control individuals using lipid-derived markers. The changes in the plasma lipid peroxide (LPO) level of the MS patients are shown in Table 4. The results indicated a significant increase of about 148% in the LPO values in MS plasma compared to control values. Another specific marker that indicates the degree of lipid oxidation is the level of F2-isoprostanes, also referred to as 8-iso-PGF2α. These are prostaglandin F2α-like compounds, which are produced by free radical-catalyzed peroxidation of arachidonic acid and are regarded as markers of in vivo oxidative stress. As evident from Table 4, the level of isoprostanes was significantly higher (181%) in the plasma of the MS patients. The alterations in isoprostane levels were more pronounced compared to LPO (181% vs. 148%). Discussion Multiple sclerosis (MS) is an autoimmune, inflammatory, degenerative disease of the central nervous system, where the body's immune system attacks the nerves and the myelin sheath. These processes result in a variety of symptoms involving disturbed movement, fatigue, pain, changes in vision, cognitive difficulties, etc. [26]. The etiology of MS is still unclear but genetic, viral and lifestyle factors are largely suspected [27]. Although there is no ultimate cure for MS, some therapeutic approaches and nutritional schemes have been proposed [28] and in some cases dietary alterations focused on lipid supplementation have shown promising results [29,30].
Sphingolipids, together with glycerophospholipids, are involved in many cellular functions, including cell proliferation, signaling cascades, apoptosis, etc. Changes in lipid metabolism and in the levels of separate lipid molecular species have been suggested to play important roles in multiple sclerosis pathophysiology and progression. Sphingolipids in particular, which are highly expressed in the central nervous system (CNS), have been implicated in the pathogenesis of MS [15]. The major sphingolipids are represented by SM, CER, SPH and S1P [31]. They build up specific cascades, in which one component emerges from the other by sequential enzymatic degradation [32]. In this work we analyzed the lipid composition of erythrocyte membranes (ghosts) and plasma from MS patients and healthy controls (Figure 1A,B). Although there are reports showing such results, some of them are controversial [33][34][35], which is why we analyzed the lipid alterations induced by MS. Some have reported a decrease and others an increase of major lipids of significant physiological importance, such as SM and PC. It is possible that the observed differences between our results and the mentioned reports are due to methodological differences or to the use of whole erythrocytes instead of erythrocyte membranes. As evident from Figure 1, the choline-containing phospholipid SM was reduced, these changes being more pronounced in erythrocyte membranes. We were interested mainly in the alterations of this first member of the sphingolipid cascade, because it is a major component of erythrocyte membranes, participating in raft formation and also acting as a main source of CER, SPH and S1P. Its decrease occurs through degradation by acid sphingomyelinase (ASM), which produces phosphorylcholine and a sphingolipid of significant physiological importance, CER. There are reports demonstrating that CER regulates the function of phospholipase A2 [36] and some phosphatases [37]. Our results showed that the level of CER was elevated in both the plasma and erythrocyte ghosts (Figure 2A,B), which is in accordance with the upregulated ASM circulating in MS plasma (Figure 3A). Additional studies were performed using Western blot analysis to elucidate whether the elevated degradation of SM to ceramide was due only to activation of ASM, or whether the expression of this protein was also affected in the course of MS progression. Western blot showed that the expression of ASM was elevated by 22% in MS plasma (Figure 3B), indicating that ASM expression was also a factor underlying the stimulated degradation of SM. Further research was focused on the accumulation of SPH, which is the precursor of S1P, the latter being the most functionally active member of the sphingolipid metabolites. A large number of studies have concentrated on S1P and much less interest has been devoted to SPH as a member of the sphingolipid family. This is why we studied the MS-induced changes in the mechanism of SPH production, which occurs through degradation of CER by alkaline ceramidase (ALCER), the latter having been reported in human erythrocytes [14]. Our studies showed that the content of SPH was elevated in both erythrocytes and plasma of MS patients (Figure 2C,D). ALCER was up-regulated in MS ghosts compared to controls, and Western blot analysis showed that MS progression did not affect the expression of erythrocyte ALCER.
The major membrane phospholipid, PC, which serves as the PLA2 substrate, showed a trend for reduction in both erythrocyte membranes and plasma from MS patients, although the observed differences were not statistically significant (Figure 1A,B). In addition, LPC was elevated in MS erythrocytes and plasma. LPC, together with AA, is a product of PLA2. There is evidence that the metabolic pathway of AA is activated in the central nervous system of MS patients [38]. AA is released from the membrane PLs by PLA2, the latter being modulated by elevated concentrations of ROS [39]. In turn, released AA via cyclooxygenases and lipoxygenases produces pro-inflammatory thromboxanes and leukotrienes, which accumulate in MS patients [38]. The above-mentioned derivatives of AA are proposed to be involved in the pathogenesis of demyelination and axonal disturbance, thus contributing to progression and aggravation of motor disabilities. There are reports demonstrating that CSF and post-mortem brains of MS patients show augmented levels of the AA metabolic pathway [38]. What is more, there is evidence that prostaglandins and leukotrienes are increased in the CSF of MS patients [40,41]. As mentioned above, PUFA are products of PLA2, which circulates in human plasma. PLA2 plays critical roles in the pathogenesis of neurodegenerative diseases such as multiple sclerosis by enhancing oxidative stress and initiating inflammation. The levels of PLA2 activity in MS patients and the effect of inhibiting PLA2 on disease severity in different experimental models of neurodegenerative pathologies have not been fully elucidated. Inhibiting sPLA2 leads to lower clinical severity or no signs of experimental autoimmune encephalomyelitis (EAE) in mice, and a lower incidence of EAE lesions compared to animals without PLA2 inhibition [42]. The same authors reported that measurement of PLA2 activity in patients with MS and controls showed no significant difference between groups, except when PLA2 activity was measured in urine. Our studies showed a slight tendency for elevation of PLA2 activity in plasma of MS patients but the differences were statistically insignificant (Figure 5). This observation correlated with the slight decrease of PC (Figure 1A,B), which acts as the PLA2 substrate. The level of the other PLA2 product, AA, was lower in MS plasma, which could possibly be explained by a quick engagement of this PUFA in the synthesis of pro-inflammatory products like prostaglandins and leukotrienes in MS patients. AA reduction could also be due to its extensive oxidative degradation in MS plasma. It contains four double bonds in its molecule, which makes it an excellent target for oxidative attack. Changes in plasma FA have been reported to correlate with the progression of MS. Most epidemiological studies state that diets rich in saturated FA correlate with MS progression, provoking the development of this pathology. On the other hand, diets rich in PUFA seem to decrease the risk of MS development [6], may even ameliorate MS symptoms and are related to the mechanisms of disease development [43][44][45]. Our studies confirmed the above-mentioned observation that the level of saturated fatty acids (SAT) is higher in erythrocytes and plasma of MS patients. We compared the ratios between saturated (SAT), unsaturated (UNSAT) and PUFA and tried to establish correlations between the obtained values, which could serve as markers distinguishing erythrocytes and plasma of MS patients from controls.
The results showed that the UNSAT/SAT ratio was higher than 1.5 in control erythrocytes and lower than 1.5 in MS. Similarly, this ratio was higher than 2.0 in control plasma and lower than 2.0 in MS plasma (Table 3). The other analyzed ratios, such as PUFA/SAT and MONO/SAT, did not show such pronounced differences, which makes them less suitable markers to differentiate between blood of healthy individuals and MS patients. Reports show that not only unsaturated fatty acid products but also CERs play a role in the development of oxidative stress. There is evidence that CERs can significantly increase reactive oxygen species liberation in glial cells [46]. MS pathogenesis is closely related to oxidative stress and free oxygen radicals which, together with pro-inflammatory mediators, underlie the disease onset and progression [47,48]. A high content of ROS has been suggested to destroy the blood-brain barrier and consequently increase the migration of monocytes to the CNS, thus inducing focal inflammation and demyelination [18,49]. Augmented levels of advanced oxidation protein products and oxidized glutathione have been reported in blood plasma of MS patients [50]. The CNS contains high levels of PUFA and shows high oxygen consumption, which is a prerequisite for excessive formation of lipid peroxides that impair the brain tissue in patients with MS [51,52]. High levels of lipid peroxidation markers were reported in blood plasma of MS patients [23,53]. We measured the plasma content of lipid-derived markers indicating oxidative destruction, such as lipid peroxides and F2-isoprostanes (Table 4). The content of 8-isoprostaglandin F2α (8-iso-PGF2α) is recognized as a reliable biomarker of lipid peroxidation and oxidative stress [53]. As evident from Table 4, both markers of lipid oxidative modification were higher in MS patients compared to controls. In accordance with our results, other authors also observed elevated content of lipid peroxidation markers in plasma of patients with MS [23,51]. Also, Mir et al. [54] reported higher levels of isoprostanes in the CSF of MS patients. Isoprostanes represent a class of lipid peroxidation products that are generated upon oxidative attack on AA, which is an acyl chain component of the membrane phospholipids [55][56][57]. The significance of oxidative stress in MS progression implies that the search for adequate therapeutic approaches that reduce the overall oxidative stress is of particular importance. There are certain limitations concerning the conclusions drawn, related mainly to the small size of the tested cohort. The reported results were observed in patients with relatively high disability and long disease duration. Further studies should be performed to confirm the validity of the present results for patients with lower disability and shorter disease duration. In conclusion, multiple factors are reported to increase the risk of onset and progression of multiple sclerosis, but the etiology of this pathology remains largely unclear. As is typical for other inflammatory neurodegenerative pathologies, oxidative stress and lipid peroxidation are closely related to MS development. Alterations in the lipid profile seem to be specific for this disease, which is associated with dysregulation of lipid homeostasis and lipid metabolism, this being valid especially for sphingolipids. The lipid analysis presented in this work demonstrates the changes of lipid molecules and their metabolism in erythrocytes and plasma of MS patients.
Clinical lipidomics has the potential to be applied in MS diagnosis as well as in the evaluation of therapeutic approaches by providing a detailed analysis of the lipidome profile of MS patients. Finally, clinical lipidomic analytical approaches could become a promising contemporary tool for personalized medicine. MS Patients and Control Individuals All patients and healthy individuals gave written informed consent before being included in this study. The designed investigations were approved by the local ethics commission. Eighteen healthy non-smoking individuals (nine men and nine women) in the range of 35-57 years of age participated in the investigations. The 18 patients (nine men and nine women), aged 34-59 years, were clinically diagnosed with relapsing-remitting multiple sclerosis (the most common form of MS), according to the McDonald revised diagnostic criteria for MS (2017) and by neuroimaging. The degree of disability was determined according to Kurtzke's Expanded Disability Status Scale (EDSS) [58]. The MS patients were free of disease-modifying therapy three months prior to blood sample collection. Blood Sample Collection and Preparation of Erythrocyte Ghosts Blood samples of 10 mL were collected by venipuncture of the peripheral forearm vein around 8 AM after overnight fasting. The obtained blood was anticoagulated with sodium citrate. The erythrocytes were pelleted by centrifugation at 2000× g for 10 min at 4 °C. The supernatant from this spin was centrifuged at 10,000× g for 10 min at 4 °C to pellet any remaining cells or platelets. The supernatant thus obtained was used in the experiments listed below. Ghosts were prepared by freezing and thawing the red cells. Any intact cells which remained after three freeze-thaw cycles were pelleted by centrifugation. Erythrocyte membranes were kept frozen at −70 °C until used. Lipid Extraction and Analysis Lipids from erythrocyte ghosts and plasma were extracted with chloroform/methanol according to the procedure of Bligh and Dyer [59]. The organic phase was concentrated and analyzed by HPLC (WATERS Alliance 2695). Fatty Acid Analysis The phospholipid extracts were saponified with 0.5 N methanolic KOH and methylated with boron trifluoride-methanol complex (Merck, Darmstadt, Germany). The fatty acid methyl esters were separated by GC-MS on a 60 m × 0.25 mm ID BPX70 column with 0.25 µm film. Determination of Ceramide Separation of NBD-ceramide was performed on a disposable reverse phase column (Nova-Pack, C18) using methanol:water:85% phosphoric acid (850:150:0.125 v/v) at a flow rate of 2 mL/min. The HPLC was equipped with an automatic injector with an injection loop between 50 and 1000 µL. Under these conditions, the typical elution time for NBD-ceramide was about 10 min. The excitation wavelength was 455 nm and emission was detected at 533 nm [60]. An alternative method was also applied for determination of ceramide levels using a Ceramide ELISA kit (MyBioSource, Catalog No: 3804520) according to the manufacturer's instructions. Determination of Sphingosine Plasma sphingosine levels were determined using a sphingosine ELISA kit (Aviva Systems Biology, San Diego, CA, USA, Catalog No: OKEH02615) according to the manufacturer's instructions. Acid Sphingomyelinase (ASM) Activity Assay ASM activity in plasma was measured using NBD-SM as substrate. Before use, the substrate was dried under nitrogen, resuspended in 200 mM sodium acetate (pH 5.0) and sonicated for 10 min to obtain micelles.
Incubations were performed for 30 min in a final volume of 0.5 mL. The reaction was terminated by extraction in 0.5 mL CHCl3/CH3OH (2:1, v/v). The samples were vortexed for 10 s and centrifuged at 5000× g. An aliquot of the aqueous phase was measured for fluorescence. The hydrolysis of NBD-SM by ASM results in release of NBD-phosphocholine into the aqueous phase, whereas ceramide and the unreacted NBD-SM remain in the organic phase. Results were expressed as pmol hydrolyzed SM/min/µL plasma [60]. Phospholipase A2 Assay Phospholipase A2 activity was assayed by the method described below. Plasma samples were incubated with 100 nmol egg yolk phosphatidylcholine as substrate in 100 mM Tris-HCl, pH 8.6, with 5 mM CaCl2 and 0.1% fatty acid free bovine serum albumin. Incubation was carried out for 30 min at 37 °C with continuous shaking in a total volume of 0.5 mL. The reaction was stopped with 0.5 mL chloroform/methanol (2:1 v/v) and the liberated fatty acids were extracted. After methylation, the fatty acid methyl esters were determined by GC-MS. PLA2 activity was calculated as pmol fatty acids/min/µL plasma and the alteration was expressed as % compared to control values. Determination of Lipid Peroxides Lipid peroxidation in plasma was measured by a fluorimetric method using thiobarbituric acid (TBA). The plasma lipids containing lipid peroxides were precipitated with phosphotungstic acid and then incubated with TBA. The fluorescence of the reaction product was measured with excitation at 515 nm and emission at 553 nm. The concentration of lipid peroxides was expressed in terms of malondialdehyde (ng/mL plasma) using tetramethoxypropane as a standard. Protein Determination The content of protein was determined according to the method of Bradford [61]. Statistical Analysis Statistical processing of the data was performed by one-way analysis of variance (ANOVA), using GraphPad InStat 3.1 (GraphPad Software, San Diego, CA, USA). Conclusions 1. The first member of the sphingolipid cascade, sphingomyelin (SM), is reduced and ceramide (CER) is increased in erythrocyte membranes and plasma of MS patients. 2. The expression analyzed by Western blot and the activity of acid sphingomyelinase (ASM) were elevated in plasma of MS patients compared to controls. 3. The activity of alkaline ceramidase (ALCER) is upregulated but its expression is unchanged in erythrocyte membranes of MS patients compared to controls. 4. We suggest that the unsaturated/saturated (UNSAT/SAT) fatty acid ratio could serve as a marker to differentiate between erythrocytes and plasma of MS patients and controls. 5. Our results confirmed that the lipid-derived markers of oxidative stress, lipid peroxides and isoprostanes, are higher in the plasma of MS patients than in control individuals. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Medical Center "Relax" (protocol #1/10.01.2022). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The datasets generated during the current study are available from the corresponding author on reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
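Conclusion point 4 proposes the UNSAT/SAT fatty acid ratio as a discriminating marker, with the reference values 1.5 (erythrocytes) and 2.0 (plasma) reported in the Results. A minimal sketch of how such a screening rule could be applied to a fatty acid panel is given below; it is purely illustrative: the cut-offs come from this small cohort and are not validated diagnostics, and all function and variable names are hypothetical, not from the paper.

```python
# Illustrative sketch (not from the paper): screen samples by UNSAT/SAT ratio.
# Cut-offs follow the reference values reported above: 1.5 (erythrocytes), 2.0 (plasma).

THRESHOLDS = {"erythrocyte": 1.5, "plasma": 2.0}

def unsat_sat_ratio(fatty_acids: dict, unsat_keys: set) -> float:
    """Ratio of total unsaturated to total saturated fatty acids (mol%)."""
    unsat = sum(v for k, v in fatty_acids.items() if k in unsat_keys)
    sat = sum(v for k, v in fatty_acids.items() if k not in unsat_keys)
    return unsat / sat

def screen(ratio: float, sample_type: str) -> str:
    """A ratio below the sample-type cut-off patterns with the MS group."""
    return "MS-like" if ratio < THRESHOLDS[sample_type] else "control-like"

if __name__ == "__main__":
    # Hypothetical mol% values, not measured data from the study.
    panel = {"16:0": 22.0, "18:0": 14.0, "18:1": 18.0, "18:2": 16.0, "20:4": 12.0}
    unsat = {"18:1", "18:2", "20:4"}
    r = unsat_sat_ratio(panel, unsat)
    print(round(r, 2), screen(r, "erythrocyte"))  # 1.28 MS-like
```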
6,631.8
2022-07-01T00:00:00.000
[ "Biology" ]
Estimation of Product Life Expectancy Parameters under Interval Censored Samples Kaiwen Guo Department of Maths, Tianjin Polytechnic University, Tianjin 300160, China Tel: 86-022-2459-0532 E-mail: <EMAIL_ADDRESS> Abstract: Starting from basic concepts of reliability theory, we compute the moment and maximum likelihood estimates of product life expectancy parameters by means of interval censored data. These are feasible and efficient estimators of the life parameters. Introduction In survival analysis and reliability research, objective constraints often prevent lifetimes from being observed exactly; only the intervals containing them can be observed. Such data are generally called interval censored data. In 1972, Hole and Walburg studied and applied interval censored data in the domain of clinical medical trials. In 1991, Keiding and Walburg gave a theoretical definition of interval censored data. When the survival variable is product life, the time over which products maintain their performance is an important quality indicator; in this way reliability and product life are linked to each other. When the censoring variable is a time variable, we assume it is a continuous random variable whose probability density function has unknown parameters that need to be estimated. In this paper, by means of interval censored data, we give the moment estimates and the maximum likelihood estimates. Suppose that the survival variable and the interruption variable both obey the single parameter exponential distribution Let X be the survival variable, a continuous random variable with probability density function \(f(x) = \lambda e^{-\lambda x}\) for \(x > 0\). Let Y be the interruption variable, a continuous random variable with probability density function \(g(y) = \mu e^{-\mu y}\) for \(y > 0\). Suppose that the survival variable obeys the single parameter exponential distribution and the interruption variable obeys the uniform distribution Let X be the survival variable, a continuous random variable with probability density function \(f(x) = \lambda e^{-\lambda x}\) for \(x > 0\). Let Y be the interruption variable, a continuous random variable with a uniform probability density function. Assume X and Y are mutually independent. Now we consider the moment estimation of the interval censored data.
At time \(t_0 = 0\) we place n products on test. At times \(t_1, t_2, \dots, t_i\) we remove the products for examination: those products whose lives have ended are removed, and the remaining ones are put back to continue testing. If over this period some product lives are lost while c products have not yet failed, then, using the reliability function, we obtain their maximum likelihood estimates. Example Let the product life obey the single parameter exponential distribution. We take 12 products to carry out the experiment and stop the experiment when the lives of 8 products have finished; the ordered closure times of the product lives are 2, 10, 18, 36, 60, 180, 720, 2880. We discuss the maximum likelihood estimate and the moment estimate. Conclusion From the ordered times 2, 10, 18, 36, 60, 180, 720 and 2880 we conclude that the moment estimate and the maximum likelihood estimate of the product life can be obtained even when only a small part of the experiment can be carried out and the samples we can take are quite small; the moment estimate and the maximum likelihood estimate of the unknown parameter are thus a feasible and efficient estimation method. When we actually observe the overall Y, we can obtain the sample.
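The example stops the test after the 8th failure out of 12 units, which is type-II censoring. The paper's own formulas did not survive extraction, so as a hedged illustration, here is the standard maximum likelihood estimate for the exponential mean under type-II censoring, applied to the quoted data; whether the authors use exactly this estimator cannot be confirmed from the text.

```python
# Standard type-II censored MLE for an exponential lifetime (illustrative;
# the paper's own formulas are not recoverable from the extracted text).

def exponential_mle_type2(failure_times, n_on_test):
    """Return (theta_hat, lambda_hat), where theta = mean life = 1/lambda.

    Total time on test = sum of observed failure times plus (n - r) copies
    of the last (largest) failure time for the units still running.
    """
    times = sorted(failure_times)
    r = len(times)
    total_time_on_test = sum(times) + (n_on_test - r) * times[-1]
    theta_hat = total_time_on_test / r
    return theta_hat, 1.0 / theta_hat

if __name__ == "__main__":
    data = [2, 10, 18, 36, 60, 180, 720, 2880]  # the 8 observed failures
    theta, lam = exponential_mle_type2(data, n_on_test=12)
    print(f"theta_hat = {theta:.2f}, lambda_hat = {lam:.6f}")
    # theta_hat = 1928.25, lambda_hat = 0.000519
```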
800.8
2008-05-19T00:00:00.000
[ "Mathematics" ]
An Evolutionary Algorithmic Approach based Optimal Web Service Selection for Composition with Quality of Service Problem statement: Web service is a technology that provides flexibility and interconnection between different distributed applications over the Internet and intranets. When a client request cannot be satisfied by any individual service, existing web services can be combined into a composite web service. When there are a large number of Web services available, it is not easy to find an execution path of Web services composition that can satisfy the given request, since the search space for such a composition problem is in general exponentially increasing. Approach: In this study, we discuss and compare two algorithms, the Genetic Algorithm (GA) and the Particle Swarm Optimization (PSO) algorithm, for solving this optimization problem of optimal web service selection and composition. Results: The end results indicate that PSO performs better than GA for single and multi user service selections. Conclusion: Inferences from the results indicate that service selections from the registry of pooled services can be optimized with the use of optimization algorithms like GA and PSO. INTRODUCTION Many companies are now using the Web as a platform to communicate with their partners. The level of usability is very high, which has led to the success of the web today (Teoh et al., 2009). The Web and its technologies allow them to provide Web services to individuals as well as other businesses. Web services technology is gaining high acceptance in all fields (Amirian and Alesheikh, 2008). Web services are frequently used to build distributed systems which can be accessed over the Internet. With more and more web services appearing on the web, a web service discovery mechanism becomes essential. Web services are loosely coupled to enhance productivity, simplify use, achieve reusability and improve system extensibility. The main challenges in the Web services paradigm are their discovery, their composition and providing an optimized QoS model (Deng and Xing, 2009). If the implementation of a web service's business logic involves the invocation of other web services, it is necessary to combine the functionality of several web services. In this case, we speak of a composite service. The process of developing a composite service in turn is called service composition. In real-world scenarios, the limited life span of a particular service may make that service unavailable. In such a process of composition, selection of an appropriate web service from the large number of available alternative services is the chief task. Web service selection should be such that the services that most accurately meet the requirements are selected, which can be done based on the non-functional Quality of Service (QoS) attributes attached to each service. It is always preferred to provide services based on the expectations of the consumers (Yamin and Ramayah, 2011). Selection should be such that the overall QoS gets improved in the composite web service. In order to satisfy the multiple functional and non-functional constraints, suitable component services need to be selected for the service orchestration process (Yu et al., 2008). At the outset, designing a composite Web service has to ensure not only correct and reliable execution, but also delivery with optimal QoS (Yu et al., 2008).
This study describes how the Genetic Algorithm and the Particle Swarm Optimization algorithm can be applied to the optimization problem of optimal web service selection, and compares the performance of both algorithms, as heuristic algorithms generally perform better (Kangrang et al., 2009). The study first reviews a few related works carried out in QoS measurement and service selection. The Web Services Composition problem is then worked out considering a travel plan domain as an example. The research methodology addresses two problems for QoS-based service selection: the first is Web service selection based on the genetic algorithm, and the second is the same selection based on the PSO algorithm. A discussion of the results as a comparison between the two algorithms concludes the study. Related works: Web services can support the B2B trading life cycle (Saha, 2007). Business processes and interaction protocols are provided by the Business Process Execution Language (Wang et al., 2011). Web services are also treated as objects for database management systems (Yu et al., 2008). Web services composition is based on quality of service. The QoS of a compound service is a key factor for satisfying the user's constraints and priorities. Work has been done on analyzing the change of individual QoS attributes based on accumulated historical data and predicting their values over time periods (Li et al., 2009). Dynamic web service composition requires the user to discover service providers that satisfy given functional and non-functional requirements (Menasce, 2002). A Quality of Service broker-based process model for Dynamic Web Service Composition (DWSC) has been proposed by researchers. Several different composition strategies exist based on the composition platform and framework (Dustdar and Schreiner, 2005). One of the platforms for composing web services is the World Wide Web (Chen et al., 2010). A QoS-aware method for web service selection is presented by Bian and Xincai (2010). In a few works, various authors proposed algorithms which return a composition of services from a repository with the optimal response time and throughput. The Travelling Salesman Problem (TSP) is also a polynomial-time hard problem, and inferences can be drawn for the web service selection problem from the approaches to solving the TSP. An optimization approach is proposed as a metaheuristic framework for the QoS-aware composition problem (Rosenberg et al., 2010). A TQoS-driven approach consists of a web service selection approach supporting transactional and quality-driven WS composition (Hadad et al., 2010). Ontology-based approaches to web service selection have also been pursued by various authors. Works have been carried out so that Web Service compositions can be seen as workflows based on Web Services (Bhiri et al., 2006). This can be facilitated using BPEL tools (IBM Library, 2010). Since service selection is a combinatorial optimization problem and NP-hard, evolutionary computation techniques are adopted in similar approaches. MATERIALS AND METHODS The web services composition problem: Motivating example: Consider the travel plan domain in Fig. 1. This set of activities is needed for a person who wants to travel from Fredericton to Chicago for an international conference.
The following lists the activities they proposed: • They have some free time on Thursday, so they plan for sightseeing on Thursday • Engage a guide for the travel The task is far from trivial as it involves many services, including the booking of flights, hotels and rental cars, and it requires a lot of coordination among these services. The coordination of the services is the most difficult part of the task, because it has many options as well as many constraints. Fig. 1: Travel activity flowchart QoS modelling for concrete services: Since the meaning of QoS attributes differs by a variety of end-user factors, contextual circumstances as well as the perspective of interest, each provider must unambiguously define its QoS model before delivering its QoS-aware service. In the past few years, modelling QoS for web services has been a big concern for researchers (Ran, 2003). In this study, we consider four typical quality attributes, execution cost, response time, reliability and availability, to model the extendable quality of web services (Hadad et al., 2010). Thus, for each individual web service, the QoS vector collects these four attribute values. Computing the QoS of a services composition: The aggregate QoS of a web service composition depends on the QoS of its component services. BPEL4WS provides four control structures to compose services: sequence, flow (concurrency), switch (choice) and while (loop). The aggregation rules of QoS are different for different control structures and different quality metrics. For example, the response time is a sum measurement for the sequence structure, while the response time of concurrency is the maximal value among its sub-branches. The cost is the sum for both the sequence and concurrency control structures, but it is a multiplicative measurement for both reliability and availability. Each branch in a choice structure is assigned the probability of being chosen, and the probabilities of all branches sum to 1. Finally, a loop structure with k iterations of task t is equivalent to a sequence structure of k copies of t. Fitness evaluation: A quality attribute matrix Q = (q_{i,j}; 1 ≤ i ≤ n, 1 ≤ j ≤ m) is built to record the quality information, in which each row corresponds to a web service while each column corresponds to a quality attribute. Some of the quality attributes could be negative, i.e., the higher the value, the lower the quality. Other attributes are positive, i.e., the higher the value, the higher the quality (Eq. 1). Formula (1) is used to compute the overall quality score for each web service. End users express their preferences regarding QoS by providing values for the weights w_j. Suppose there are n tasks (t_1, t_2, …, t_n) in a composite service. For each task t_i (1 ≤ i ≤ n), there are l_i candidate services available to which task t_i can be assigned. The QoS-based service selection problem involved in service composition is how to select one service for each involved task from its corresponding group of existing candidate services, so that the overall QoS of the constructed composite service can be maximized. The problem can be formulated as Eq. 2, where F_{ij} is the QoS score of the j-th candidate service for the i-th task in the composition.
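Equations (1) and (2) did not survive in this copy. As a hedged illustration of the scoring just described, the sketch below min-max normalizes each attribute column (treating cost and response time as negative attributes), applies user weights, and aggregates a sequence composition with the rules stated above; the normalization choice and all names are assumptions, not taken verbatim from the paper.

```python
# Sketch of QoS scoring and sequence aggregation (assumed normalization;
# attribute names are illustrative). Negative attributes: cost, response time.
# Positive attributes: reliability, availability.

def normalize(values, positive):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    if positive:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]

def qos_scores(candidates, weights, positive_flags):
    """candidates: list of QoS vectors, one per service; returns one score each."""
    cols = list(zip(*candidates))
    norm_cols = [normalize(col, pos) for col, pos in zip(cols, positive_flags)]
    norm_rows = list(zip(*norm_cols))
    return [sum(w * q for w, q in zip(weights, row)) for row in norm_rows]

def aggregate_sequence(qos_list):
    """Sequence structure: time and cost add; reliability and availability multiply."""
    agg = {"time": 0.0, "cost": 0.0, "reliability": 1.0, "availability": 1.0}
    for q in qos_list:
        agg["time"] += q["time"]
        agg["cost"] += q["cost"]
        agg["reliability"] *= q["reliability"]
        agg["availability"] *= q["availability"]
    return agg

if __name__ == "__main__":
    # Three candidate services for one task: (cost, time, reliability, availability)
    cands = [(5.0, 120.0, 0.98, 0.99), (8.0, 90.0, 0.95, 0.97), (3.0, 200.0, 0.90, 0.95)]
    w = (0.25, 0.25, 0.25, 0.25)
    print(qos_scores(cands, w, positive_flags=(False, False, True, True)))
```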
Web service selection based on genetic algorithm: Genetic Algorithms (GAs) (Goldberg, 1989) are search methods which were developed by John Holland and are based on the principles of natural selection (Osman et al., 2009). The Genetic Algorithm is proposed as a search algorithm and has proven to be powerful in rapidly discovering good solutions for some difficult problems (Ismail and Irhamah, 2008), especially when the search space is large, complex and poorly understood. GA can be applied to the optimal service selection optimization problem (Yu and Lin, 2004). The basic steps involved are as follows. Definition of chromosome: We define Chromosome = [C1, C2, …, Cn], where n is the number of tasks involved in the composite Web service. The Chromosome is used to represent every possible solution that meets the user's request. In this solution string, every bit corresponds to a concrete service and its value is 0 or 1: if a concrete service is selected, the value is 1; otherwise, the value is 0. Fitness function: In order to determine the quality or performance of each chromosome in the population, the GA associates a fitness measure with each solution string. Here we adopt the QoS optimization function defined in formula 1 as the fitness function in the GA. Genetic operations: Once the chromosomes are defined, we need to reproduce them by performing genetic operations. We use crossover and mutation as genetic operations and a simple roulette wheel selection to choose the individual chromosomes. Step 1: Initialize the genetic parameters (the maximal number of iterations maxgen, probability of crossover pc, probability of mutation pm, probability of selection ps). Step 2: Randomly generate an initial population of pop_size. Step 3: While the termination criterion is not met, repeat the loop body listed at the end of this study. Web service selection based on PSO algorithm: When applying Particle Swarm Optimization (PSO), each service that exists for accomplishing a specific task can be considered as a particle (Taher and Tabei, 2008). Several particles may exist, forming a population. PSO can then be applied to find the best solution among the candidates to form an optimal composition. Li et al. (2011) propose to improve QoS in web services based on algebraic and physical perspectives to give a good service using PSO. Also, in many problems that require a meta-heuristic approach, PSO reduces the computational time significantly. Encoding scheme: The procedure of web service composition is modelled as a Services Composition Graph (SCG). At each service node, a suitable service is selected from the candidates, and all of the selected services are joined to form a service path. Each service path denotes a candidate service composition scenario. In Fig. 2, φ(i) denotes the number of the specific service which is chosen at the i-th service node. After the evolution of one particle, the new value of each dimension can reconstruct a new particle, which denotes a different service selection scenario. Fitness function: In order to determine the quality or performance of each particle in the population, a fitness measure is associated with each solution string. For the problem of Web service selection, the performance of every service selection solution is measured by its QoS attributes. Here the QoS optimization function in formula 1 can be adopted as the fitness function in PSOWSS.
Particle updating: Suppose there is a service S_id, which is the d-th specific service in the i-th service selection scenario. According to the encoding scheme, the particle's position X_id denotes the service S_id. Then the QoS metric value of S_id can be defined as the particle's velocity V_id. Firstly, the value of the service's QoS metric is calculated to get a better QoS value. Secondly, the better QoS value is compared with the QoS values of candidate services to find which is closest to the better one. Thirdly, the specific service can be found through its QoS value by the definition of an inverse function, or other matching functions; this service can be numbered S'_id. Lastly, the d-th specific service in the i-th service selection scenario can be updated from S_id to S'_id. The whole procedure is defined as particle updating. The initial particle population: The initial particle population is generated randomly. That means a number of service selection scenarios are randomly generated. Then the candidates are filtered by the constraints to get an initial particle population which meets the requirements. RESULTS AND DISCUSSION Performance comparison based on experimental results: The performance of both algorithms was tested and compared in experiments in a LAN environment. Figure 3 shows the comparison chart based on the response time of both algorithms. Figure 4 shows the comparison chart when there are multiple users. It can be inferred from the above diagrams that over multiple instances of execution of the service selections, the Genetic algorithm was outrun by Particle Swarm Optimization. Further, compared to GA, the advantages of PSO are that PSO is easy to implement, there are few parameters to adjust, and the information sharing mechanism in PSO is significantly different compared with Genetic Algorithms (GAs). CONCLUSION Automated web service composition is a very challenging task. This study explains the Web Service Composition problem by considering a travel plan domain as an example. We have presented an algorithmic approach for solving the optimal service selection optimization problem by considering two optimization algorithms, the Genetic algorithm and the PSO algorithm. Figure 3 shows that the algorithms guarantee a resulting composite web service with maximal overall QoS. Also, the comparison chart shown in Fig. 4 compares the running times of both algorithms in single and multi user environments. As a part of future work, the efficiency of the algorithms has to be improved and the user preferences have to be formulated automatically as constraints. Possibly leave on Saturday and return the next Friday (the date is flexible depending on the total price of the flight ticket and hotel) • According to the distance from the airport to the hotel, decide whether to go to the hotel by bus or taxi • Conference meetings on Monday and Tuesday • Stay five days in a hotel in Chicago • Rent a car for five days after arriving in Chicago. Fig. 2: Particle encoding instance. GA for optimal service selection: The implementation steps of the GA are as follows; a code sketch follows the step lists below.
Evaluate every individual and compute its QoS function values according to Eq. 1 • Rank individuals and assign their fitness values by the ranking selection method • Randomly select pop_size/2 individuals according to the selection probability ps to form a temporary population • For the individuals in the temporary population, adopt crossover and mutation operators to form pop_size/2 new individuals • Generate the next new population by collecting the pop_size/2 new individuals and the pop_size/2 individuals in the temporary population • The best individual in the current population is reserved into the next population. Implementation steps of the PSO algorithm: • Initialization: set up the parameters • In accordance with the set population size, randomly generate paths of service composition that meet the constraints; each path is encoded as a particle and all particles form the initial particle population • Implement the disturbance movement of the particles • Randomly select the current number of mutation particles (PN mutation) from the external population and update them • Update the values of pbest and gbest • According to the number of evaluations of the fitness function, judge whether the end conditions are met. Generating the initial particle population: Input: population size (N); Output: initial particle population (SN).
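The two step lists above translate directly into code. Below is a minimal, hedged rendering of each: the GA sketch uses an integer index per task rather than the paper's bit-string chromosome and a plain roulette wheel in place of the ranking method, and the PSO sketch fills in the unspecified matching step by snapping each updated QoS value to the closest candidate. All parameter values, helper names and the scores table are illustrative assumptions, not taken from the paper.

```python
import random

# Illustrative sketches of the GA and PSO step lists above (assumed details).
# scores[i][j] = QoS score (Eq. 1) of candidate service j for task i; a
# solution assigns one candidate index to every task.

def fitness(sel, scores):
    return sum(scores[i][j] for i, j in enumerate(sel))

def roulette(pop, fits):
    """Roulette wheel selection: pick a chromosome proportionally to fitness."""
    r = random.uniform(0, sum(fits))
    acc = 0.0
    for chrom, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return chrom
    return pop[-1]

def ga_select(scores, pop_size=30, maxgen=100, pc=0.8, pm=0.05):
    n = len(scores)
    pop = [[random.randrange(len(scores[i])) for i in range(n)] for _ in range(pop_size)]
    best = max(pop, key=lambda c: fitness(c, scores))
    for _ in range(maxgen):
        fits = [fitness(c, scores) for c in pop]
        new_pop = [best[:]]                      # elitism: best individual is reserved
        while len(new_pop) < pop_size:
            a, b = roulette(pop, fits)[:], roulette(pop, fits)[:]
            if random.random() < pc:             # one-point crossover
                cut = random.randrange(1, n)
                a = a[:cut] + b[cut:]
            for i in range(n):                   # mutation
                if random.random() < pm:
                    a[i] = random.randrange(len(scores[i]))
            new_pop.append(a)
        pop = new_pop
        best = max(pop, key=lambda c: fitness(c, scores))
    return best, fitness(best, scores)

def pso_select(scores, n_particles=20, iters=100, w=0.7, c1=1.4, c2=1.4):
    n = len(scores)
    def snap(i, v):  # match an updated QoS value back to a concrete service
        return min(range(len(scores[i])), key=lambda j: abs(scores[i][j] - v))
    swarm = [[random.randrange(len(scores[i])) for i in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [s[:] for s in swarm]
    gbest = max(swarm, key=lambda s: fitness(s, scores))[:]
    for _ in range(iters):
        for p in range(n_particles):
            for i in range(n):
                x = scores[i][swarm[p][i]]       # current QoS value in dimension i
                px = scores[i][pbest[p][i]]      # personal best QoS value
                gx = scores[i][gbest[i]]         # global best QoS value
                vel[p][i] = (w * vel[p][i]
                             + c1 * random.random() * (px - x)
                             + c2 * random.random() * (gx - x))
                swarm[p][i] = snap(i, x + vel[p][i])
            if fitness(swarm[p], scores) > fitness(pbest[p], scores):
                pbest[p] = swarm[p][:]
        gbest = max(pbest, key=lambda s: fitness(s, scores))[:]
    return gbest, fitness(gbest, scores)

if __name__ == "__main__":
    random.seed(1)
    scores = [[random.random() for _ in range(8)] for _ in range(5)]  # 5 tasks, 8 candidates
    print("GA: ", ga_select(scores))
    print("PSO:", pso_select(scores))
```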
3,823.4
2012-02-10T00:00:00.000
[ "Computer Science" ]
The actin binding protein drebrin helps to protect against the development of seizure-like events in the entorhinal cortex Alexander Klemz 1,3 , Patricia Kreis 2,3* , Britta J. Eickholt 2 & Zoltan Gerevich 1* The actin binding protein drebrin plays a key role in dendritic spine formation and synaptic plasticity. Decreased drebrin protein levels have been observed in temporal lobe epilepsy, suggesting the involvement of drebrin in the disease. Here we investigated the effect of drebrin knockout on physiological and pathophysiological neuronal network activities in mice by inducing gamma oscillations, involved in higher cognitive functions, and by analyzing pathophysiological epileptiform activity. We found that loss of drebrin increased the emergence of spontaneous gamma oscillations, suggesting an increase in neuronal excitability when drebrin is absent. Further analysis showed that although the kainate-induced hippocampal gamma oscillations were unchanged in drebrin deficient mice, seizure-like events measured in the entorhinal cortex appeared earlier and more frequently. The results suggest that while drebrin is not essential for normal physiological network activity, it helps to protect against the formation of seizure-like activities during pathological conditions. The data indicate that targeting drebrin function could potentially be a preventive or therapeutic strategy for epilepsy treatment. Epilepsy is a disease of high prevalence (~1%), with a third of the patients having pharmacoresistant seizures intractable to available treatments 1 . In addition, people with epilepsy often have long-term cognitive impairments such as memory loss, learning disabilities and behavioral disorders, frequently correlating with the frequency and severity of epilepsy 2 . Current antiepileptic drugs symptomatically suppress the seizures without affecting the underlying mechanisms of epileptogenesis and brain injury. Comorbidities of epilepsy, such as cognitive impairments, are also rarely targeted by antiepileptic strategies, although they can be as disabling as the seizures themselves 1 . Dendritic abnormalities have been increasingly observed in both epilepsy patients and animal models 3 .
After seizures, a transient beading of the dendrites occurs, followed by a more persistent loss of dendritic spines 4,5. This dendritic spine loss is also well documented in neocortex distant from the epileptic focus 6, but it is still not clear what role these dendritic abnormalities play in promoting epileptogenesis. One possibility is that spine loss is epileptogenic and enhances the probability of future seizures by disturbing the fine-tuned balance between excitatory and inhibitory circuits, especially when inhibitory inputs are more affected. However, it is also possible that a loss of spines and synapses is a consequence of the seizures and has a beneficial role in suppressing seizures, by inhibiting synaptic transmission and the propagation of seizures in the brain. The cytoskeletal protein actin, which exists in a depolymerized monomeric form (G-actin) and a stable filamentous form (F-actin), plays a major role in generating the structural support for dendrites and spines. The organization and turnover of actin filaments within dendritic spines is modulated by actin-binding proteins, such as the developmentally regulated brain protein (drebrin) 7. In its function as an actin filament modulator, drebrin stabilizes actin filaments and inhibits their depolymerisation in spines 7. Several studies have reported changes in synaptic transmission and plasticity following drebrin downregulation or drebrin loss. Downregulation of drebrin reduces dendritic spine density, alters spine morphogenesis and inhibits both glutamatergic and GABAergic synaptic transmission in cultured hippocampal neurons 8-10. Drebrin knockout mice, as well as mice depleted of a splice variant of drebrin, show LTP impairment combined with decreased spine density 11,12. Additionally, mice depleted of one of the splice variants of drebrin show impaired context-dependent fear conditioning 12. We recently generated a drebrin-deficient mouse line and observed no changes in spine morphogenesis or glutamatergic transmission in young adults, suggesting that loss of drebrin alone is not sufficient to induce glutamatergic synaptic dysfunction 13. We surmised that a specific drebrin KO phenotype only becomes evident under certain pathophysiological conditions where the loss of drebrin cannot be compensated. Decreased levels of drebrin in the brain have been shown in states with high epilepsy prevalence 14, such as Alzheimer's disease 15 and Down's syndrome 16, and a lower drebrin level in the hippocampi of temporal lobe epilepsy patients was associated with more frequent seizures 17. Decrease and reactivation of drebrin expression in post-status epilepticus models have already been described 18-20, but it remains unclear whether the changes are pro- or antiepileptogenic and whether they are a consequence of the seizures or play a role in the epileptogenesis. Here we measured the effect of drebrin ablation on increased neuronal network activity in physiological and pathological conditions. First, we induced gamma oscillations in hippocampal slices from wild type (WT) and drebrin knockout (KO) mice. Gamma oscillations represent physiological synchronization of neuronal activity at frequencies between 30 and 90 Hz and are associated with higher cognitive tasks such as sensory processing, working memory, attention, learning and memory 21,22.
Second, we induced epileptiform activity in the entorhinal cortex in vitro 23,24 and analyzed whether the loss of drebrin affects the emergence of epileptiform discharges. We found that drebrin ablation was largely compensated in neural networks when activity remained within a physiological frame; however, networks without drebrin developed seizure-like events with shorter onset latency and higher incidence.

Results

Drebrin loss does not alter gamma oscillations in hippocampal slices. Occurring in different regions of the brain, gamma oscillations are physiological rhythmic fluctuations of field potentials enabling the temporal synchronization of neuronal activity within and across groups of neurons. In the cortex and hippocampus, they are generated by recurrent rhythmic synaptic connections between pyramidal cells and perisomatic parvalbumin-containing basket cells 25. Drebrin is expressed in the dendrites of both glutamatergic and GABAergic neurons at the site of excitatory synapses 26. Given that drebrin is a regulator of synaptic transmission and that changes in synaptic morphology are associated with memory consolidation, we tested whether loss of drebrin influences the development of gamma oscillations in the CA3 region of the hippocampus, where gamma oscillations are primarily generated and have the highest amplitudes 27. We induced gamma oscillations by bath application of kainate (KA, 500 nM) onto hippocampal slices from WT and KO mice (Fig. 1). The responder rate of the slices was comparable in WT and KO animals (WT: 75.0%; KO: 68.8%; p = 0.782, not shown). The induced oscillations had similar peak power in WT (geometric mean: 1.35 µV², SD factor: 8.21, n = 24) and KO animals (1.50 µV², SD factor: 7.96, n = 22), without statistical difference between the two groups (p = 0.76; Fig. 1c). Similarly, there was no difference in peak frequency (Fig. 1d), Q factor (WT: 8.19 ± 1.21; KO: 8.57 ± 1.44; p = 0.84; not shown) or half bandwidth of the oscillations (WT: 8.6 ± 2.1 Hz; KO: 6.8 ± 1.0 Hz; p = 0.45; not shown). Similar to the induced gamma power, we observed no differences in spontaneous network activity measured prior to the induction of oscillations with KA in WT and KO mice (Fig. 1a,b,e). In a fraction of slices, spontaneous oscillations were recorded before the application of KA. Similar to the induced gamma oscillations, the peak power of these spontaneous oscillations did not differ between WT and KO (geometric mean WT: 0.08 µV², SD factor: 1.75, n = 3; KO: 0.126 µV², SD factor: 4.09, n = 10; p = 0.63, not shown). Likewise, we did not find any differences in the peak frequency, half bandwidth or Q factor of the spontaneous oscillations between WT and KO (frequency WT: 29.7 ± 1.3; KO: 33.1 ± 1.4; p = 0.24; half bandwidth WT: 19.2 ± 8.8; KO: 15.3 ± 3.0, p = 0.59; Q factor WT: 3.0 ± 1.8; KO: 3.2 ± 0.8, p = 0.91, not shown). Interestingly, however, the fraction of slices showing spontaneous gamma oscillations in drebrin KO mice was significantly higher, suggesting an increase in neuronal excitability when drebrin is absent (Fig. 1f). In summary, our results indicate that drebrin loss does not influence the amplitude of gamma oscillations in the hippocampus but increases the likelihood of spontaneous oscillation generation in brain slices.

Gamma oscillations do not alter drebrin protein levels in hippocampal slices. We previously demonstrated that regulation of drebrin protein stability is linked to increased neuronal activity and may protect from synaptic dysfunction 28,29.
We therefore investigated whether the emergence of gamma oscillations altered the protein levels of drebrin in hippocampal slices from WT mice. Quantitative assessment by western blotting showed no difference in the amount of drebrin protein following development of gamma oscillations compared to slices where no oscillations were induced. These results indicate that permanent gamma oscillations lasting for more than three hours did not influence drebrin protein expression in the hippocampus (Fig. 1g).

Drebrin loss favors the development of epileptiform activity in the medial entorhinal cortex. While drebrin KO did not affect the physiological gamma oscillations, we observed more frequent development of spontaneous oscillations before pharmacological induction, suggesting an increased excitability in hippocampal slices. In the following experiments, therefore, we investigated whether the loss of drebrin affects the development of pathophysiological epileptiform activity in the medial entorhinal cortex. Within the temporal lobe, this area is characterized by generation of seizure-like events (SLEs) with the lowest threshold 30. We induced seizure-like activity by omitting Mg2+ from the ACSF and observed that SLEs appeared within tens of minutes after induction. SLEs further converted into continuous late recurrent discharges (LRD) with continued omission of Mg2+ from the bath solution (Fig. 2). Analyzing the fraction of slices developing SLEs and LRDs among all investigated entorhinal cortex slices revealed that SLEs and LRDs appeared more often in KO compared to WT animals, although the difference was not significant (SLE: p = 0.09, LRD: p = 0.07 compared to WT; Fig. 2e). However, the onset latency of SLEs was significantly shorter (Fig. 2a,b,f) and SLEs were more frequent in KO slices compared to WT (Fig. 2g). SLE duration did not differ between WT and drebrin KO slices (Fig. 2h). In contrast to the SLEs, the development of LRDs (conversion latency of SLE into LRD) was not different between WT and KO animals (Fig. 2i). These results suggest that while physiological network activity was not affected by drebrin loss, seizure-like activity appeared earlier and more often in drebrin KO mice, indicating that neural networks without drebrin are more susceptible to developing pathophysiological epileptiform activity.

Epileptiform activity does not alter drebrin protein levels in medial entorhinal cortex slices. Previous studies describe a decrease in drebrin protein levels in response to KA- or pilocarpine-induced seizures 18-20. We thus set out to investigate whether the emergence of SLEs and LRDs has an impact on drebrin protein levels in the entorhinal cortex of WT mice. SLEs or LRDs did not alter drebrin levels compared to slices without epileptiform activity during the time window of over three hours of recordings (Fig. 3).

Discussion

We have investigated the effects of drebrin loss on physiological and pathophysiological network activities in hippocampal and entorhinal cortex slices, respectively. While drebrin KO did not alter hippocampal gamma oscillations, it enhanced the susceptibility of entorhinal cortex slices to develop SLEs. In drebrin KO mice, SLEs developed faster and with higher incidence compared to WT mice.
Although previous studies reported a decrease in drebrin protein and mRNA levels in epilepsy pathologies 18-20, we did not find altered drebrin expression following in vitro physiological or pathophysiological network activities. The results suggest that drebrin could play a role in controlling the excitation in neuronal circuits and the development of pathological network activity. The results support previous studies showing that drebrin deficiency alone is not sufficient to cause synaptic dysfunction under physiological conditions 13 and that additional burdens, such as increased excitability, may be required for a drebrin-deficient phenotype to become apparent. Our findings demonstrate that gamma oscillations do not differ in drebrin KO animals from those in WT mice. In addition, the induction of gamma oscillations in WT animals did not affect the drebrin protein amount in the hippocampal slice, suggesting that drebrin neither modifies gamma oscillations nor gets modified by the evoked oscillatory activity. Although drebrin is expressed in both excitatory and inhibitory neurons 26 and drebrin downregulation inhibits both glutamatergic and GABAergic synaptic transmission 8-10, our data suggest that loss of drebrin does not affect normal physiological neuronal activity such as gamma oscillations. The lack of change observed for gamma oscillations in drebrin-deficient mice may be due to compensatory mechanisms by other actin binding proteins, enabling activation of alternative pathways to safeguard actin cytoskeleton dynamics. In this case, an abnormal phenotype may only be observed in conditions where drebrin cannot be compensated for, such as disease or stress. Along these lines, our results further indicate that drebrin loss augmented the susceptibility of neurons to develop seizure-like activity. The results suggest that neuronal networks with no drebrin expression are more likely to develop SLEs. Pathological network events such as hypersynchronized epileptiform activity also depend on GABAergic interneurons. One function of interneurons under normal conditions is to restrain seizure activity by feed-forward inhibition 33, and failure of this restraint can lead to faster spread of seizures in the brain. GABAergic interneurons are very diverse, and at present no data are available on the expression of drebrin in different interneuron cell types in the brain. However, the loss of drebrin may alter the development of neuronal circuits, leading to an imbalance between excitation and inhibition, increased excitability and easier seizure evolution. This is corroborated by our finding that drebrin loss increases the probability that the CA3 network generates gamma oscillations without exogenous induction. Interestingly, in a recent publication, the authors detected anti-drebrin autoantibodies in patients with adult-onset epilepsy and suspected encephalitis. Exposure of hippocampal neurons to anti-drebrin autoantibodies resulted in aberrant drebrin distribution within neurons and network hyperexcitability 34. These findings are in accordance with our results and suggest that drebrin dysfunction can lead to impaired synaptic connectivity and increased seizure activity. There is increasing evidence for dendritic spine abnormalities in the epileptic neocortex and hippocampus, including changes in both the structure and the number of dendritic spines.
Dendritic spine abnormalities have been observed in neurodegenerative diseases that carry an increased risk of seizures, such as Alzheimer's disease 35 and juvenile Huntington's disease 36. Genetic disorders with dendritic spine abnormalities have also been documented to have a high epilepsy prevalence 14. Among them, Fragile X syndrome, Rett syndrome, tuberous sclerosis and Down syndrome are all characterized by altered spine morphology and, with the exception of Fragile X syndrome, a decreased spine density 16,37-39. Spine loss and swelling of dendrites have frequently been observed in neocortical and hippocampal pyramidal cells in patients with temporal lobe epilepsy 4,6. Morphological and structural changes of spines are tightly coupled to reorganization of the actin cytoskeleton mediated by actin-binding proteins 40-42. In accordance with this, epilepsy is associated with detectable changes in the expression of different actin-binding proteins or their upstream regulators. KA-induced seizures have been shown to activate the actin-depolymerizing protein cofilin and cause a loss of stable actin filaments 43. Human temporal lobe epilepsy is also associated with decreased hippocampal expression of reelin, a cofilin-phosphorylating and -inactivating protein 44. Profilin, a protein essential for actin polymerization, was also found to be reduced in the hippocampus of temporal lobe epilepsy patients 45. These findings suggest that epilepsy patients have less stable actin filaments within their spines. In line with this, a recent study found that lower drebrin levels in the hippocampi of temporal lobe epilepsy patients were associated with higher seizure frequency and less neuron survival 17. These dendritic changes may represent a trait in the pathophysiology of seizure development, a consequence of seizures, or even a compensatory response in the form of homeostatic plasticity to dampen excessive neuronal excitability 14. Our data on drebrin KO mice suggest that less stable actin filaments in the spines increase the excitability and the probability of seizure-like activity. On the contrary, we did not observe alterations in drebrin expression in the entorhinal cortex after three hours of seizure-like activity; however, we investigated drebrin levels in the whole slice and cannot exclude local or subcellular alterations of drebrin levels after seizures in the hippocampal formation. Previous studies on animal models reported decreased drebrin expression in the hippocampus two hours after systemic KA- or pilocarpine-induced seizures 18-20, possibly through ERK-mediated phosphorylation and activation of the calcium-dependent phosphatase calpain-2 20. The initial reduction was followed by recovery of drebrin levels in the chronic phase of pilocarpine-induced seizures, suggesting a crucial role of drebrin in reactive synaptic plasticity 18,19. Another recent study showed a negative correlation between drebrin expression and seizure frequency, particularly in the dentate gyrus 17. Taken together, it is difficult to conclude that there is a linear correlation between drebrin expression and seizure susceptibility. Another possibility is that drebrin may be spatially and temporally regulated in response to seizure activity. Along these lines, several studies showed changes in drebrin distribution following high-frequency stimulation in vivo, activation of NMDA receptors or induction of LTP with glutamate uncaging 40,46,47.
Interestingly, induction of LTP using glutamate uncaging led to an initial decrease in drebrin concentration in the dendritic spine as the spine head volume increased; later, drebrin re-entered the spine during the phase of actin stabilization 40. This highly dynamic distribution of drebrin during synaptic activity suggests that drebrin may also change its localization in response to seizure activity, regulating actin dynamics in the dendritic spine to help protect the synapse from dysfunction. In conclusion, our data demonstrate, in line with previous results on the same drebrin KO mice 13, that loss of drebrin alone does not disrupt physiological network activity; on the contrary, we found that drebrin loss increases the susceptibility of the neuronal network to develop epileptiform discharges. The results confirm that stable actin filaments might have a protective effect against seizures.

Experimental procedures

Animals and slice preparation. DBN KO mice were generated as previously described 13.

Western blotting. Homogenates were centrifuged at 20,000 × g, and the supernatant was collected for further protein quantification analysis using the Thermo Scientific Pierce BCA Protein Assay. 30 µg of protein was loaded on an SDS-PAGE gel, and western blot analysis was performed as previously described 49. Anti-drebrin (M2F6, Enzo) was used at a dilution of 1:1000 and anti-α-tubulin (Sigma) at 1:8000. Quantification of band densities was performed using FIJI. The area of the band and the mean grey value were measured to obtain a relative density. For relative quantifications, measurements were normalized to the tubulin loading control.

Data evaluation and statistics. This study was conducted in accordance with the ARRIVE guidelines 50. Gamma oscillations were analyzed by calculating power spectra with a 120-s window every 2 min during the whole recording. Spontaneous activity was computed as the peak power between 20 and 50 Hz during a period of 10 min before induction of the oscillations with KA. In some recordings, spontaneous oscillations were observed during this pre-induction period. Network activity was considered an oscillation when the power spectrum had a peak between 30 and 80 Hz and the Q (quality) factor (frequency/half bandwidth) of the oscillation was higher than the subcritical value of 0.5 51. The Q factor measures the periodicity of oscillations independently of the peak frequency; a high Q factor indicates an oscillation sharply distributed in the power spectrum around the peak frequency, which is more periodic, predictable and less damped 52. KA-induced gamma oscillations were analyzed 20-30 min after induction. Peak power, peak frequency, half bandwidth (at 50% of peak power) and Q factor of the oscillations were extracted using a custom-made script for the Spike2 software (Cambridge Electronic Design, Cambridge, UK) 51,53. The D'Agostino-Pearson normality test was used to test the Gaussian distribution of the data. Peak power was found to be distributed lognormally and is therefore represented as geometric mean and geometric SD factor 54. All other, normally distributed parameters are presented as (arithmetic) mean ± SEM. The calculated parameters in the WT and KO groups were compared with Student's t-test. The lognormally distributed power values were first log-transformed, and the logarithms were analyzed statistically.
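The spectral measures just described follow directly from their definitions (peak power, peak frequency, half bandwidth at 50% of peak power, Q = peak frequency / half bandwidth, and geometric mean/SD factor for lognormal data). The short Python sketch below illustrates one way to compute them from a field-potential trace; it is not the authors' Spike2 script, and the function names and the choice of a Welch spectrum are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def gamma_peak_metrics(lfp, fs, band=(30.0, 80.0)):
    """Peak power, peak frequency, half bandwidth and Q factor of a
    field-potential recording, from a power spectrum computed over a
    120-s window (as described in the text)."""
    f, pxx = welch(lfp, fs=fs, nperseg=int(120 * fs))
    in_band = (f >= band[0]) & (f <= band[1])
    fb, pb = f[in_band], pxx[in_band]
    i_peak = np.argmax(pb)
    peak_power, peak_freq = pb[i_peak], fb[i_peak]
    # half bandwidth: spectral width at 50% of the peak power
    above_half = fb[pb >= 0.5 * peak_power]
    half_bw = above_half.max() - above_half.min()
    q_factor = peak_freq / half_bw  # Q = frequency / half bandwidth
    return peak_power, peak_freq, half_bw, q_factor

def lognormal_summary(peak_powers):
    """Geometric mean and geometric SD factor for lognormally
    distributed peak powers: analyze the logs, then exponentiate."""
    logs = np.log(np.asarray(peak_powers))
    return np.exp(logs.mean()), np.exp(logs.std(ddof=1))
```

Under these assumptions, comparing WT and KO groups would amount to applying Student's t-test to the log-transformed peak powers, exactly as described in the statistics paragraph above.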
Epileptiform activity was analyzed by calculating the following parameters. Seizure onset latency was calculated as the time until the first seizure-like event (SLE) after omitting Mg2+ from the extracellular solution 55. The time until the first appearance of late recurrent discharges (LRD) after zero-Mg2+ application was used as the onset latency of LRD. After their appearance, SLEs were analyzed for their incidence (events/min) and duration 56,57. Statistical comparison of epileptiform activity parameters was done by Student's t-test. Fisher's exact test was used to compare the fractions of slices developing gamma oscillations or seizure-like activities. The significance level was set at p < 0.05.

Data availability

The data that support the findings of this study are contained within the article. Data not shown are available on request from the corresponding authors ZG and PK.
5,168.8
2021-04-21T00:00:00.000
[ "Medicine", "Biology" ]
Development of A Clinically-Oriented Expert System for Differentiating Melanocytic from Non-melanocytic Skin Lesions

Differentiating melanocytic from non-melanocytic (MnM) skin lesions is the first and most important step required by clinical experts when automatically diagnosing pigmented skin lesions (PSLs). In this paper, a new clinically-oriented expert system (COE-Deep) is presented for automatic classification of MnM skin lesions through deep-learning algorithms, without reliance on pre- or post-processing steps. For the development of the COE-Deep system, a convolutional neural network (CNN) model is employed to extract prominent features from region-of-interest (ROI) skin images. Afterward, these features are further refined through stack-based autoencoders (SAE) and classified by a softmax linear classifier into the categories of melanocytic and non-melanocytic skin lesions. The performance of the COE-Deep system is evaluated on a dataset of 5200 clinical images obtained from different public and private resources. The significance of the COE-Deep system is statistically measured in terms of sensitivity (SE), specificity (SP), accuracy (ACC) and area under the receiver operating curve (AUC) based on a 10-fold cross-validation test. On average, SE of 90%, SP of 93%, ACC of 91.5% and AUC of 0.92 are obtained, and the results of the COE-Deep system are statistically significant. These experimental results indicate that the proposed COE-Deep system outperforms state-of-the-art systems. Hence, the COE-Deep system is able to assist dermatologists during the screening process for skin cancer.

Keywords—Skin cancer; melanocytic; non-melanocytic; dermoscopy; deep learning; convolutional neural network; stack-based autoencoders

I. INTRODUCTION

Melanocytic and non-melanocytic (MnM) skin lesions [1] are the two major forms of skin cancer. According to 2016 estimates, skin cancer is rapidly increasing throughout the world, and it is very common in white-skinned populations. In the United States, skin cancer is the most common form of cancer. Clinical experts first have to decide whether a lesion belongs to the melanocytic or the non-melanocytic (MnM) class. After this step, they classify whether a melanocytic lesion is benign or malignant. In the case of non-melanocytic lesions, they further classify the lesion as basal cell carcinoma (BCC), squamous cell carcinoma (SCC) or seborrheic keratosis (SK). Examples of these lesions are shown in Fig. 1. All these classes are known as pigmented skin lesions (PSLs). Among the different types of PSLs, malignant melanoma has the highest mortality rate. Moreover, the incidence of both melanoma and non-melanoma skin cancers is increasing rapidly. Early detection of skin cancer can significantly reduce mortality from the disease. To diagnose PSLs, dermatologists widely use digital dermoscopy with automatic image analysis computer-aided diagnosis (CADx) [2] systems. In general, dermoscopy equipped with a CADx system provides a cost-effective, non-invasive technique for early detection.

Over the last few years, computer-aided diagnosis (CADx) systems have been developed for automatic classification of pigmented skin lesions (PSLs). These CADx systems provide a second opinion to dermatologists and assist them in the diagnosis of skin cancer.
Classification into melanocytic and non-melanocytic categories is crucial for a CADx system and is difficult due to the high similarity between the two classes. Compared to existing melanoma CADx systems [3], the recognition rate for non-melanoma skin lesions is below 75%. To differentiate PSLs, many state-of-the-art CADx tools [4] have been developed, because diagnosis by clinical experts is subjective, whereas a CADx system is more objective and reliable. Current CADx tools [5], [6] are built on hand-crafted features combined with machine learning algorithms such as neural networks (NN), support vector machines (SVMs), AdaBoost and deep learning, and achieve very good performance on certain skin cancers such as melanoma. However, they are unable to perform diagnosis [7] over broader classes of skin diseases, such as the melanocytic and non-melanocytic (MnM) categories.

Hand-crafted features do not provide a complete solution for developing CADx systems that automatically diagnose MnM skin lesions. In practice, hand-crafted features require considerable domain expertise and are suitable only for a limited range of skin diseases. On the other hand, deep learning algorithms have been utilized in a few studies for the development of CADx tools. With deep learning algorithms, hand-crafted features need not be defined; features are extracted automatically from an image. As a result, no domain-expert knowledge or pre- or post-processing steps are needed to recognize PSLs. Even on large-scale datasets, deep-learning algorithms have displayed high performance compared to other algorithms such as NN, SVM or AdaBoost. In this paper, a convolutional neural network (CNN), stack-based autoencoders (SAE) and a softmax linear classifier are therefore integrated to achieve high performance and large-scale applicability of CADx tools for the automatic diagnosis of PSLs.

The rest of the paper is organized as follows. Section 2 introduces the background of this research study and deep learning architectures. In Section 3, the dataset and the proposed methodology are technically described. Section 4 shows the experimental results on the performance of the deep-learning algorithms using different training settings. Conclusions and future work are given in Section 5.

II. BACKGROUND

Past studies have focused only on the classification of melanocytic lesions (benign versus melanoma) from dermoscopy images, due to the issues mentioned in the previous section. In practice, it is not easy for clinical experts to differentiate among non-melanocytic lesions [8] such as SK, BCC or SCC, compared with melanocytic lesions. For this reason, the differentiation between melanocytic and non-melanocytic (MnM) skin lesions is a first and important step that is currently ignored by many computer-aided diagnosis (CADx) systems. Because those CADx tools were trained and developed on melanocytic lesions only, their results are unreliable when non-melanocytic lesions are presented. If a CADx system is extended to work with non-melanocytic lesions, it should have the capability to recognize them as well.
To develop such CADx systems, there are four main steps: image enhancement, segmentation, feature extraction and selection, and recognition. As a result, it is very difficult to develop a CADx system without expertise in complex image processing techniques. In addition, the segmentation of non-melanocytic lesions is much more difficult than that of melanocytic lesions, due to roughness and intensity variation around the lesion border. Moreover, older CADx tools were built on classical machine learning algorithms such as artificial neural networks (ANN), support vector machines (SVMs) and AdaBoost classifiers, and recognize only melanocytic lesions. Those CADx tools required many pre- or post-processing steps and domain-expert knowledge for feature selection, and they were only applied to limited datasets. Therefore, in this paper, modern deep-learning algorithms are used to differentiate melanocytic from non-melanocytic (MnM) pigmented skin lesions in a large-scale setting. To the best of our knowledge, no study is available that classifies MnM lesions through deep-learning algorithms.

A few CADx tools based on deep-learning architectures have been developed in the past to recognize melanocytic skin lesions. Typically, a CNN model is used to extract the features, and the classification decision is made by a softmax linear classifier. As mentioned above, the CNN model can be used to select features for multiple objects. Therefore, a CNN model alone is not sufficient for differentiating MnM skin lesions. These CADx tools are reviewed in the following paragraphs.

Support vector machines (SVM) and a deep belief network (DBN) were combined in [9] to recognize a limited set of 100 dermoscopy images; such a small dataset makes the system unsuitable for a large-scale environment. In [10], a hybrid of AdaBoost-SVM and a deep neural network was used to learn hand-crafted features for classification of melanoma skin lesions. In [11], SVM was combined with deep-learning and sparse-autoencoder techniques to classify 2624 melanoma images, with a reported accuracy of 91.2%. Using deep convolutional neural networks (DC-NN), the authors of [12] developed a three-pattern-detector approach on a set of 211 images and reported accuracy below 85%. A CNN model was used in [13] to extract features with pooling techniques to recognize PSL skin lesions, achieving 85.8% accuracy. A deep neural network (DNN) has been used to classify melanoma, achieving 89.3% accuracy. Similarly, the authors of [14] applied a CNN model to dermoscopy images to classify malignant melanoma lesions.

The above-mentioned CADx tools classify only melanoma skin lesions and ignore the melanocytic/non-melanocytic differentiation that is the first step required by dermatologists. Among past approaches, only one study [15] addressed the differentiation between melanocytic and non-melanocytic skin lesions, but it required pre- and post-processing steps.
Hence, this paper focuses on both categories and develops an automatic system using deep-learning algorithms. Deep-learning algorithms are based on multilayer architectures in which layers are connected through non-linear combinations [16]. There are many variants of deep-learning algorithms, such as the convolutional neural network (CNN), deep belief network (DBN), restricted Boltzmann machine (RBM) and stack-based autoencoders (SAE). For differentiation between melanocytic and non-melanocytic (MnM) skin lesions, CNN and SAE are integrated, and the final decision is made by a softmax linear classifier [17]. The CNN model extracts features from the raw pixels of the images and converts them into edge representations through its multilayer architecture. Because the features extracted by the CNN model are not optimized, stack-based autoencoders (SAE) are employed to automatically select the most discriminative features for better classification. As a result, the deep-learning algorithms are utilized to diagnose pigmented skin lesions.

III. METHODOLOGY

The clinically-oriented expert system based on deep learning (COE-Deep) involves three main steps: extraction of deep features, optimization of deep features, and classification of these features into melanocytic and non-melanocytic skin lesions. The overall systematic diagram of the COE-Deep system is shown in Fig. 2. These phases are explained in the following sub-sections.

A. Dataset Acquisition

The COE-Deep system is tested on 5200 dermoscopy images containing equal numbers of melanocytic and non-melanocytic skin lesions. These images were obtained from several public and private sources. From the EDRA atlas [18], distributed on CD-ROM, 400 melanocytic and 400 non-melanocytic skin lesions were collected. A second dataset was collected from the Department of Dermatology, University of Auckland (DermAuck) [19], containing 600 melanocytic and 600 non-melanocytic lesions. A further 1600 melanocytic and 1600 non-melanocytic skin lesions were collected from the International Skin Imaging Collaboration (ISIC) [20]. In total, 5200 dermoscopy images of varying sizes were obtained from these three sources. All images were resized to a standard resolution of (800 × 800) pixels. Moreover, an expert dermatologist was asked to verify the images in both categories. Because the images contain the lesion together with surrounding skin, a circular region-of-interest (ROI) of size (400 × 400) pixels is automatically selected from the center of each image (a sketch of this step is given below). An example of this dataset is displayed in Fig. 1.
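The paper gives no code for the ROI step, so the following minimal Python sketch shows one plausible implementation of the resizing and center-cropping described above. The function name and the convention of zeroing pixels outside the inscribed circle are assumptions, not details from the paper.

```python
import numpy as np
from PIL import Image

def extract_roi(path, out_size=800, roi_size=400):
    """Resize a dermoscopy image to out_size x out_size pixels and cut
    a circular region-of-interest of roi_size x roi_size pixels from
    the image center, as described in the dataset section."""
    img = np.asarray(Image.open(path).convert("RGB").resize((out_size, out_size)))
    start = (out_size - roi_size) // 2
    roi = img[start:start + roi_size, start:start + roi_size].copy()
    # keep only pixels inside the inscribed circle (circular ROI);
    # zeroing the corner pixels is an assumed convention
    yy, xx = np.mgrid[:roi_size, :roi_size]
    center = (roi_size - 1) / 2.0
    mask = (yy - center) ** 2 + (xx - center) ** 2 <= (roi_size / 2.0) ** 2
    roi[~mask] = 0
    return roi
```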
B. Features Extraction

Discriminative feature extraction and selection is a difficult and challenging task, because the subsequent recognition step depends on it. As mentioned above, defining hand-crafted features requires domain-expert knowledge and many pre- and post-processing steps. Therefore, in this paper, the convolutional neural network (CNN) model [17] is used to automatically select features from the raw pixels of the image. The CNN model is used because it has been a major tool for classification problems in past studies. The CNN model is applied directly to the image pixels, so there is no need to manually define a hand-crafted feature set. When the CNN model is used to extract the features, the deep network can be trained in a sensible amount of time without overfitting.

In this article, the CNN model is employed as a three-layer deep neural network to solve the problem of feature selection from dermoscopy images. The first layer is directly linked to the image pixels and generates feature maps by convolving the layer filters with the image. In the second layer, similar feature maps are combined to generate the edges present in the dermoscopy images. Finally, the third layer selects the mean activation of the features from the edge map. In this paper, the unsupervised approach of the CNN model is employed.

The CNN model is defined on a set of $k$ filters $W_j$ ($j = 1, \ldots, k$) with elements $W_j^{(c)}$, each having $C$ channels of size $(m \times n)$, and a set of $N$ images $x_i$ with $C$ channels of size $(l_1 \times l_2)$. Based on this description, the output of the first convolutional layer for image $i$ and filter $j$ is given as

$z_i^{(j)} = \sum_{c=1}^{C} W_j^{(c)} \star x_i^{(c)} + b_j,$

and the output of the entire convolutional process is the collection of image/filter pairs

$\{ z_i^{(j)} \mid i = 1, \ldots, N;\ j = 1, \ldots, k \},$

where $\star$ represents 2D correlation and $b_j$ is the bias of filter $j$. Fig. 3 illustrates the utilization of the CNN model to extract features from the dermoscopy images.
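To make the layer equation concrete, here is a minimal NumPy/SciPy sketch of the first convolutional layer as written above. It is an illustrative reading of the formula, not the authors' implementation; the function name and the choice of the "valid" correlation mode are assumptions.

```python
import numpy as np
from scipy.signal import correlate2d

def conv_layer(images, filters, biases):
    """First convolutional layer as in the formula above: for every
    image x_i and filter W_j, sum the 2D correlations over the C
    channels and add the filter bias, z = sum_c W_j^(c) * x_i^(c) + b_j."""
    feature_maps = []
    for x in images:                       # x: array of shape (C, l1, l2)
        per_image = []
        for w, b in zip(filters, biases):  # w: array of shape (C, m, n)
            z = sum(correlate2d(xc, wc, mode="valid")
                    for xc, wc in zip(x, w)) + b
            per_image.append(z)
        feature_maps.append(np.stack(per_image))
    return feature_maps                    # k feature maps per image
```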
C. Optimization

The features produced by the CNN model are not yet optimized. To select the most discriminative deep-invariant features, stack-based autoencoders (SAEs) [17] are applied. The SAE algorithm is chosen because it mimics the behavior of the human brain. Past studies report the best results when a supervised SAE with four layers is used to optimize the deep features. In practice, the SAE hypotheses are tested through a greedy layer-wise pre-training approach on the testing dataset. The main steps of feature optimization through SAEs are presented here.

In general, the pixels of an image form the feature vectors fed to the input layer of an autoencoder. In this paper, however, the first input layer is defined on the features generated in the previous step. The second and third hidden layers transform these features into a better representation, and the final output layer matches the input layer for reconstruction. An autoencoder is considered deep if the number of hidden layers is greater than one. Moreover, in this study the dimensions of the hidden layers are kept small in order to perform feature reduction. Specifically, the autoencoders are developed with the stochastic gradient descent method and trained by backpropagation variants.

Mathematically, an autoencoder learns a code $h$ from the feature data $x$ by a mapping with weights $W$ through a sigmoid function $\sigma$. It is defined as

$h = \sigma(Wx + b),$

where $b$ represents the biases of the autoencoder. The code is then mapped back through a decoder into a reconstruction $R$ by a similar transformation:

$R = \sigma(W'h + b'),$

and the reconstruction error is measured as

$E = \lVert x - R \rVert^{2}.$

To minimize this mean-square reconstruction error, the stochastic gradient descent approach is used in the training process of the autoencoder. The minimization is performed by searching over the weights of the encoder and decoder connections, with the encoder and decoder sharing the same (tied) weights. As a result, this step reduces the number of features by half without any loss in the performance of the autoencoder. The four-layer autoencoder alone is not sufficient to make the final classification decision, due to the risk of overfitting in this deep architecture. Therefore, a softmax linear classifier is used to make the final classification decision.

D. Classification

The softmax classifier has been widely used in past studies to recognize objects or features, as a multi-class generalization of the logistic regression classifier. The softmax linear classifier [17] takes a vector of raw real-valued scores and compresses it into a vector of values between zero and one. The differentiation decision is made by the softmax classifier based on the normalized class probabilities, and the classifier is trained to reduce the cross-entropy between the estimated class probabilities and the known distribution.
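The following Keras sketch combines the two stages just described: a sigmoid autoencoder trained with SGD to minimize the reconstruction error, and a softmax linear classifier on the encoded features. It is a minimal illustration under stated assumptions, not the paper's Matlab implementation; the layer widths, the learning rate, and the joint (rather than greedy layer-wise) training are simplifications.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_sae_classifier(n_features):
    """Sigmoid autoencoder trained with SGD to minimize the
    reconstruction error E = ||x - R||^2, plus a softmax linear
    classifier on the encoded features (2 classes: MnM)."""
    inp = keras.Input(shape=(n_features,))
    # encoder halves the dimensionality, as stated in the text
    h1 = layers.Dense(n_features // 2, activation="sigmoid")(inp)  # h = sigmoid(Wx + b)
    code = layers.Dense(n_features // 4, activation="sigmoid")(h1)
    d1 = layers.Dense(n_features // 2, activation="sigmoid")(code)
    rec = layers.Dense(n_features, activation="sigmoid")(d1)       # R = sigmoid(W'h + b')
    autoencoder = keras.Model(inp, rec)
    autoencoder.compile(optimizer=keras.optimizers.SGD(0.1), loss="mse")
    encoder = keras.Model(inp, code)
    # softmax linear classifier trained with cross-entropy, as in Section D
    clf = keras.Sequential([
        keras.Input(shape=(n_features // 4,)),
        layers.Dense(2, activation="softmax"),
    ])
    clf.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
    return autoencoder, encoder, clf
```

A training loop would first fit the autoencoder on the CNN feature vectors and then fit `clf` on `encoder.predict(features)` with the melanocytic/non-melanocytic labels.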
IV. EXPERIMENTAL RESULTS

The proposed clinically-oriented deep-learning (COE-Deep) system was implemented in Matlab® 2016 and tested on a Windows 10 platform with a Core i7 CPU. The statistical analysis was performed in terms of sensitivity (SE), specificity (SP), accuracy (ACC) and area under the receiver operating curve (AUC) on the dataset of 5200 dermoscopy images collected from different resources. In this dataset, melanocytic and non-melanocytic lesions are present in equal quantity to give both classes equal importance during the testing and classification stages. For developing the COE-Deep system, the dataset is divided into 40% training and 60% testing through a 10-fold cross-validation test. Results of the proposed COE-Deep system on the 5200 melanocytic and non-melanocytic (MnM) skin lesions diagnosed from digital dermoscopy images are shown in Table 1. The table reports the statistical analysis in terms of sensitivity (SE), specificity (SP), accuracy (ACC), training error (E) and area under the receiver operating curve (AUC). As displayed in Table 1, average values of SE of 92%, SP of 94%, ACC of 93%, AUC of 0.94 and E of 0.73 are obtained for melanocytic skin lesions, whereas for non-melanocytic skin lesions SE of 88%, SP of 92%, ACC of 90%, AUC of 0.90 and E of 0.65 are achieved. These results make clear that the proposed COE-Deep system performs significantly better on melanocytic than on non-melanocytic skin lesions, reflecting the fact that non-melanocytic lesions are much harder to recognize. To the best of our knowledge, there is no other effective study for differentiation between MnM skin lesions through a deep-neural-network approach without the need for hand-crafted features and pre- or post-processing steps.

In past studies, only one paper was found [15] in which the authors utilized domain-expert knowledge of image processing and machine learning algorithms to perform this classification of MnM skin lesions, but the system required many pre- and post-processing stages. They reported classification results for melanocytic lesions on 548 lesions, with a sensitivity of 98.0% and a specificity of 86.6% using a cross-validation test. These results were obtained on a small dataset, and the classifier may overfit when applied in a large-scale environment. Therefore, the proposed system is better than [15] in terms of large-scale applicability. The above results confirm that the COE-Deep system, based on advanced deep-learning algorithms, is capable of classifying melanocytic and non-melanocytic skin lesions. This is the first and fundamentally difficult step for dermatologists, who must draw a clear line between MnM skin lesions in the diagnosis process; the proposed method assists clinical experts in drawing this line.

Comparisons were also performed with state-of-the-art deep-learning algorithms in terms of SE, SP, ACC, AUC and E on the selected dataset. As reported in Table 2, a convolutional neural network (CNN) with four layers obtained, on average, SE of 80%, SP of 84%, ACC of 82%, AUC of 0.81 and E of 0.75 when differentiating MnM skin lesions. When the CNN is integrated with the softmax linear classifier, the recognition results improve significantly: SE of 84%, SP of 88%, ACC of 86%, AUC of 0.87 and E of 0.73 are achieved. If stack-based autoencoders (SAEs) are used instead of the CNN, SE of 85%, SP of 88%, ACC of 86.5%, AUC of 0.86 and E of 0.71 are obtained on average. Better results still are obtained with SAE plus the softmax linear classifier: SE of 89%, SP of 90%, ACC of 89.5%, AUC of 0.88 and E of 0.69 on average. The highest results, however, are obtained by the proposed COE-Deep system, which combines CNN, SAE and softmax classifiers to recognize melanocytic and non-melanocytic skin lesions.

All the above-mentioned results in Tables 1 and 2 were obtained through a 10-fold cross-validation test for classifying MnM skin lesions. Fig. 3 shows the corresponding receiver operating characteristic (ROC) curves for differentiation between MnM skin lesions. The area under the curve (AUC) demonstrates the significance of the COE-Deep system, which exceeds that of the CNN and the stack-based autoencoders (SAEs). The SAE deep-learning algorithm attains a higher AUC value than the CNN model but a lower one than the proposed COE-Deep system. As displayed in Table 1, the best performance is measured for melanocytic skin lesions, i.e., AUC: 0.94. The proposed system based on deep-learning algorithms significantly improves performance, with an average AUC of 0.92, because it builds an effective classification system through advanced deep-learning concepts without relying on separate feature extraction and selection steps.
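The evaluation metrics used above are standard. The short sketch below shows how SE, SP, ACC and AUC can be computed for one cross-validation fold; the paper used Matlab, so this scikit-learn version, the function name, and the positive-class coding are illustrative assumptions.

```python
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate_fold(y_true, y_pred, y_score):
    """SE, SP, ACC and AUC for one cross-validation fold, with the
    positive class (label 1) taken to be melanocytic."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    se = tp / (tp + fn)                    # sensitivity
    sp = tn / (tn + fp)                    # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    auc = roc_auc_score(y_true, y_score)   # area under the ROC curve
    return se, sp, acc, auc
```

Averaging these values over the 10 folds reproduces the kind of summary statistics reported in Tables 1 and 2.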
V. CONCLUSIONS

A clinically-oriented expert system based on deep-learning algorithms (COE-Deep) is presented in this paper to automatically differentiate between melanocytic and non-melanocytic (MnM) skin lesions. A convolutional neural network (CNN) is employed to extract deep features, the most discriminative features are then selected by a stack-based autoencoder (SAE) model, and the final recognition decision is made by a softmax linear classifier. On 5200 clinical dermoscopy images, statistically significant results were obtained in terms of sensitivity (SE), specificity (SP), accuracy (ACC) and area under the receiver operating curve (AUC) using a 10-fold cross-validation test. On average, SE of 90%, SP of 93%, ACC of 91.5% and AUC of 0.92 were obtained. Hence, the proposed COE-Deep system is well suited for the classification of MnM skin lesions and should improve the accuracy, reliability and accessibility of pigmented skin lesion screening systems. In future work, this effort will be extended to achieve further improvements in accuracy.

Fig. 2. A systematic flow diagram of the proposed COE-Deep system for classification of melanocytic and non-melanocytic skin lesions.

Fig. 3. Performance comparisons of the proposed COE-Deep system with state-of-the-art classification systems in terms of area under the receiver operating curve.

TABLE II. a. Sensitivity, b. Specificity, c. Accuracy, d. Area under ROC curve, e. Training errors
5,047
2017-01-01T00:00:00.000
[ "Computer Science", "Medicine" ]
Interplay between Nonsense-Mediated mRNA Decay and DNA Damage Response Pathways Reveals that Stn1 and Ten1 Are the Key CST Telomere-Cap Components

Summary

A large and diverse set of proteins, including the CST complex, nonsense-mediated decay (NMD), and DNA damage response (DDR) proteins, play important roles at the telomere in mammals and yeast. Here, we report that NMD, like the DDR, affects single-stranded DNA (ssDNA) production at uncapped telomeres. Remarkably, we find that the requirement for Cdc13, one of the components of CST, can be efficiently bypassed when aspects of the DDR and NMD pathways are inactivated. However, identical genetic interventions do not bypass the need for Stn1 and Ten1, the partners of Cdc13. We show that disabling NMD alters the stoichiometry of CST components at telomeres and permits Stn1 to bind telomeres in the absence of Cdc13. Our data support a model in which Stn1 and Ten1 can function in a Cdc13-independent manner and have implications for the function of CST components across eukaryotes.

INTRODUCTION

Telomeres are complex nucleoprotein structures that protect chromosome ends from DNA damage responses (DDR). The most terminal DNA on a chromosome is typically G-rich 3′ single-stranded DNA (ssDNA), resembling a DNA double-strand break (DSB) in the process of repair by homologous recombination. In budding yeast, the CST proteins (Cdc13, Stn1, and Ten1) are proposed to form a heterotrimeric telomeric ssDNA-binding complex that helps cap telomeres and is analogous to the heterotrimeric RPA complex, which binds nuclear ssDNA during transcription, DNA replication, and repair (Gao et al., 2007; Sun et al., 2009, 2011). Cdc13 binds telomeric ssDNA strongly via an oligonucleotide/oligosaccharide-binding (OB) fold (Lewis et al., 2014). Stn1 and Ten1 also bind telomeric ssDNA, but with lower affinity than Cdc13, and are thought to be recruited to DNA via Cdc13 (Gao et al., 2007; Qian et al., 2009, 2010). So far, the budding yeast CST complex has not been purified, but recent evidence from the distant yeast Candida glabrata suggests that in this organism CST functions as a 2:4:2 or 2:6:2 complex (Lue et al., 2013). Orthologs of CST components have recently been identified in mammals, plants, and fission yeast. The human components of CST (CTC1, STN1 [OBFC1], and TEN1) can be purified as a trimeric complex (Chen et al., 2012; Giraud-Panis et al., 2010; Miyake et al., 2009; Surovtseva et al., 2009). Mutations in CTC1 are associated with human diseases and have been associated with cellular telomere defects (Chen et al., 2013; Anderson et al., 2012). Interestingly, CTC1 and STN1 were originally identified when copurified with human DNA polymerase alpha and named alpha accessory factor (AAF) (Casteel et al., 2009). The interaction of CST with DNA polymerase alpha is conserved, because budding yeast Cdc13 and Stn1 also bind to DNA polymerase alpha components (Qi and Zakian, 2000; Grossi et al., 2004). In budding yeast, where CST was first identified, there is evidence that CST subunits perform different functions. For example, Cdc13 helps recruit telomerase via interaction with the telomerase subunit Est1 (Nugent et al., 1996; Qi and Zakian, 2000; Mitton-Fry et al., 2004). In contrast, Stn1 interferes with telomerase activity: because Stn1 and Est1 have overlapping binding sites on Cdc13, Stn1 inhibits telomerase activity by competing with Est1 for Cdc13 binding (Puglisi et al., 2008; Chandra et al., 2001).
Another example is that Stn1, when overproduced, acts as a checkpoint inhibitor (Gasparyan et al., 2009). However, because Cdc13, Stn1, and Ten1 are each essential proteins in budding yeast, and there is clear homology to RPA, it has been suggested that the CST proteins function together to provide the essential function of capping the telomere (Gao et al., 2007). In yeast and human cells, nonsense-mediated mRNA decay (NMD) proteins play important roles at telomeres. NMD degrades transcripts containing premature termination codons (PTCs) to reduce the risk that potentially harmful truncated proteins (or RNAs) are made in cells (Isken and Maquat, 2008). It is estimated that about 10% of human diseases are associated with PTCs (Bidou et al., 2012). In human cells, the key NMD proteins UPF1, UPF2, and UPF3 bind to telomeres, and telomere loss occurs in UPF1- and UPF2-depleted cells (Lew et al., 1998; Azzalin et al., 2007). Consistent with the telomere effect in human cells, budding yeast nmdΔ mutants show a short telomere phenotype. Interestingly, in yeast nmdΔ mutants, overexpression of Stn1 and Ten1 is largely responsible for the short telomere length phenotype (Dahlseid et al., 2003). This is presumably because Stn1/Ten1 inhibits telomerase activity by interfering with the Est1-Cdc13 interaction. We have previously reported that disabling NMD genes (NAM7, NMD2, and UPF3) or DDR genes such as EXO1, encoding a nuclease, or RAD24, encoding the checkpoint sliding clamp loader, suppresses the temperature sensitivity of telomere-defective cdc13-1 strains to similar extents (Addinall et al., 2011). Given the important roles played by CST, NMD, and DDR proteins at mammalian and yeast telomeres, we wanted to better understand the interplay between NMD and DDR at uncapped telomeres. Remarkably, we find that deleting NMD2 together with either EXO1 or RAD24 completely bypasses the requirement for Cdc13. However, the same genetic interventions do not bypass the need for either Stn1 or Ten1. These and other molecular experiments indicate that CST does not always function as an RPA-like trimeric protein in yeast. Instead, our data show that Stn1 and Ten1 are critical for cell viability in conditions when Cdc13 is not, suggesting that Stn1 and Ten1 can cap telomeres, or perform other essential functions, in the absence of Cdc13.

RESULTS

cdc13-1 Can Be Strongly Suppressed by nmd2Δ with exo1Δ and/or rad24Δ

The Cdc13-1 protein becomes increasingly defective at capping the telomere as temperatures increase. At high temperatures, cdc13-1 cells accumulate telomeric ssDNA, activate checkpoint pathways, and arrest before anaphase (Garvik et al., 1995). To begin to systematically define the proteins and pathways that are important for telomere function, cdc13-1 was combined with the yeast genome knockout collection to identify suppressors and enhancers of the temperature-sensitive telomere defect (Addinall et al., 2011). We found that deletions of NMD genes (nam7Δ, nmd2Δ, and upf3Δ), which cause short telomere phenotypes, suppress the cdc13-1 defect strongly. The effects of nmdΔ mutations were as strong as deletions affecting aspects of the DNA damage response (DDR), including deletions of DNA damage checkpoint genes (ddc1Δ, rad9Δ, rad17Δ, and rad24Δ) or exo1Δ, affecting a nuclease that attacks uncapped telomeres (Figure 1A).
Interestingly, other deletions affecting the DDR or telomerase cause a short telomere phenotype but enhanced the cdc13-1 defect; such proteins include the Ku complex (Yku70, Yku80), the MRX complex (Mre11, Rad50, Xrs2), and telomerase (Est1 and Est3 regulatory subunits). Therefore, nmdΔ mutations are somewhat unusual in that they result in short telomeres but suppress cdc13-1. To better understand the role of NMD at telomeres, we investigated the overlap between NMD and the DDR. We generated all possible combinations of nmd2Δ, exo1Δ, and rad24Δ mutations in cdc13-1 strains. We observed strong synergistic interactions between nmd2Δ and exo1Δ or rad24Δ mutations. Specifically, deleting NMD2 in combination with exo1Δ or rad24Δ in cdc13-1 strains significantly increased strain fitness compared to each single gene deletion (Figure 1B). In contrast, exo1Δ rad24Δ double deletions only marginally improved growth compared to exo1Δ or rad24Δ single deletions. We conclude that NMD inhibits the growth of cdc13-1 mutants by a mechanism that is distinct from the effects of Exo1 and Rad24, which are more similar in effect. The nmd2Δ rad24Δ exo1Δ cdc13-1 strain was most fit, growing robustly at 36°C, demonstrating that Nmd2, Rad24, and Exo1 each perform different functions to inhibit growth of cdc13-1 mutants. The synergistic genetic interactions between NMD and the DDR indicate that NMD functions in parallel to the DDR proteins Exo1 and Rad24 to inhibit growth of cdc13-1 mutants.

nmd2Δ Affects ssDNA Accumulation in cdc13-1 Strains

Exo1 and Rad24 inhibit growth of cdc13-1 strains at least in part by generating single-stranded DNA (ssDNA) at uncapped telomeres (Zubko et al., 2004). To test the effect of Nmd2 on ssDNA, we measured ssDNA near telomeres in nmd2Δ cdc13-1 and nmd2Δ rad9Δ cdc13-1 strains. The checkpoint protein Rad9, like its mammalian ortholog 53BP1, inhibits ssDNA accumulation and was used to sensitize some strains to the accumulation of ssDNA (Lazzaro et al., 2008; Bunting et al., 2010).

Figure 1. Deletion Mutations that Suppress or Enhance cdc13-1. (A) cdc13-1 or CDC13 strains were combined with the yeast knockout collection and fitness (maximum doubling rate × maximum doubling potential) determined at 27°C (Addinall et al., 2011). Each spot corresponds to the position of a single gene deletion. cdc13-1 suppressors (red) or enhancers (green) are indicated, as are deletions known to affect telomere length (blue) or the DNA damage response (purple). (B) Saturated cultures, grown at 23°C, were serially diluted in water and spotted onto YEPD plates. Strains were grown at the temperatures indicated for 2 days before being photographed.

We used quantitative amplification of single-stranded DNA (QAOS) to measure ssDNA accumulation at the DUG1 and RET2 loci on the right arm of chromosome VI-R (Holstein and Lydall, 2012) (Figure 2A). Deleting NMD2 reduced the amount of ssDNA generated in cdc13-1 or cdc13-1 rad9Δ strains at loci 20 or 30 kb from uncapped telomeres (Figures 2B-2E). We further investigated the effect of deleting NMD2 on telomeric ssDNA by using a fluorescent native in-gel assay to measure ssDNA in the telomeric repeats of nmd2Δ cdc13-1 strains grown at a restrictive temperature. Consistent with the QAOS data, we observed reduced ssDNA accumulation in the telomeric repeats of nmd2Δ cdc13-1 strains after 4 hr at 36°C (Figure 2F).
To obtain independent evidence that NMD2 affects ssDNA, we measured the effect of nmd2Δ on the cell viability of cdc13-1 and cdc13-1 rad9Δ strains subjected to restrictive and permissive temperature cycles in an 'up-down' assay (Figures 2G and S1A).

Figure 2 (continued). (F) Cells dividing exponentially at 23°C were incubated at 36°C and ssDNA in the telomeric repeats was measured. SYBR Safe was used as a loading control. ssDNA was quantified using ImageJ and normalized relative to the loading control. The final fold change is relative to the 0 hr time point of each strain. (G) The yeast strains indicated were grown to saturation at 23°C before being spotted on two plates. One plate was incubated at 23°C for 3 days; the other plate was incubated for three 4 hr cycles at 36°C, separated by 4 hr at 23°C, before colonies were allowed to form at 23°C. See also Figure S1.

Deleting NMD2 in a cdc13-1 or cdc13-1 rad9Δ background increased cell viability, assessed by spot tests after growth at 36°C, similar to the effect of deleting EXO1 in the same backgrounds (Figure 2G). This spot test result was confirmed by determining cell viability after incubation at restrictive temperature: nmd2Δ cdc13-1 rad9Δ cultures contained nearly 8% viable cells compared to around 1% for cdc13-1 rad9Δ cultures at the 240 min time point (Figure S1B). We conclude that Nmd2, like Rad24 and Exo1, affects ssDNA levels in cdc13-1 mutants. nmd2Δ rescued the loss of viability caused by rapid accumulation of ssDNA in cdc13-1 mutants, similar to the previously reported effects of exo1Δ and rad24Δ mutations (Zubko et al., 2004). It is known that disabling NMD pathways increases the levels of many telomere-related proteins and RNAs, including the Ku complex, telomerase, Telomeric Repeat-Containing RNA (TERRA), and the Cdc13 partner proteins Stn1 and Ten1 (Guan et al., 2006; Azzalin et al., 2007; Dahlseid et al., 2003; Addinall et al., 2011). It is likely, therefore, that disabling the NMD pathway increases the levels of one or more of these telomere-related proteins or RNAs and thereby reduces resection of telomeric ssDNA. Alternatively, NMD may regulate an unidentified nuclease that attacks telomeric DNA, play a direct role in resection, or affect the stability of ssDNA generated in cdc13-1 strains.

The Requirement for CDC13 Can Be Bypassed

The robust growth of nmd2Δ rad24Δ exo1Δ cdc13-1 mutants at 36°C suggested that cells deficient in NMD and DDR might be able to divide in the absence of any Cdc13 function. To test this, we deleted CDC13 in a diploid strain that carried heterozygous deletions of UPF1, EXO1, and RAD24. We sporulated the diploid, dissected tetrads, and germinated the spores. Consistent with our hypothesis, 100% of nmd2Δ rad24Δ cdc13Δ, nmd2Δ exo1Δ cdc13Δ, and nmd2Δ rad24Δ exo1Δ cdc13Δ spores formed visible colonies, whereas all other cdc13Δ genotypes did not (Figures 3A and S2A). Inviable cdc13Δ strains formed microcolonies, and the sizes of the microcolonies were increased by nmd2Δ, exo1Δ, or rad24Δ mutations (Figures 3B and S2B), just as the deletions improved fitness of cdc13-1 cells at semipermissive temperatures (Figure 1B).

Figure 3. (A) NMD2/nmd2Δ EXO1/exo1Δ RAD24/rad24Δ CDC13/cdc13Δ diploids were sporulated. Tetrads were dissected onto YEPD plates, and spores were allowed to form colonies for 5 days at 23°C before being photographed. (B) Following germination of spores in (A), microcolonies were photographed using a 20× objective on a Microtec microscope and reproduced at the same scale. A representative subset of microcolonies is shown.
Figure 3 legend (panels C and D): (C) Strains of the genotypes indicated were repeatedly passaged by toothpick every 4 days at 23°C. At the indicated times, 2 ml liquid cultures were grown overnight, serially diluted, spotted onto YEPD plates, and incubated for 2 days before being photographed. (D) Genomic DNA was isolated from the yeast strains indicated, and telomere structures were analyzed by Southern blotting using a Y′ and TG probe. SYBR Safe was used as a loading control. See also Figures S2 and S3.

Therefore, combining disruptions affecting NMD with those affecting EXO1 and RAD24 can permit cell division in the absence of Cdc13.

Because some cdc13Δ genotypes form visible colonies, whereas other cdc13Δ genotypes form only microscopic colonies, we wondered whether cells in large cdc13Δ colonies might eventually stop dividing. To examine fitness over time, we subcultured viable cdc13Δ strains for many passages and measured fitness by spot test. The fitness of the cdc13Δ strains increased, rather than decreased, with time (Figure 3C), similar to telomerase-deficient strains (tlc1Δ), which escape senescence and maintain telomere length by mechanisms independent of telomerase (Lundblad and Blackburn, 1993; Wellinger and Zakian, 2012). Consistent with this similarity, when we examined the telomere structures of cdc13Δ strains, they were altered by passage 9 and showed rearrangements like those of telomerase-deficient survivors (Figure 3D). We conclude that cdc13Δ cells are viable indefinitely and rearrange their telomere structures like telomerase-deficient cells.

Given that cdc13Δ cells rearranged telomeres like telomerase-deficient tlc1Δ cells, we wondered if they needed functional telomerase in order to divide, as cdc13Δ pif1Δ exo1Δ cells have been demonstrated to depend on telomerase for survival (Dewar and Lydall, 2010). We germinated spores derived after introducing a tlc1Δ disruption into a diploid strain containing heterozygous deletions of CDC13, NMD2, RAD24, or EXO1. We found viable cdc13Δ tlc1Δ strains whenever nmd2Δ and exo1Δ, rad24Δ, or exo1Δ rad24Δ were present (Figure S3A). Furthermore, such strains could be cultured for many passages, showed increased fitness over time, and had altered telomere structures like those of telomerase-deficient survivors (Figures S3B and S3C). We conclude that nmd2Δ cdc13Δ strains use telomerase-independent mechanisms to maintain telomere length.

The Requirement for STN1 and TEN1 Cannot Be Bypassed
To test whether yeast cells can survive without Stn1 or Ten1, the other components of the CST complex, we introduced stn1Δ or ten1Δ disruptions into the diploid strain containing heterozygous deletions of NMD2, RAD24, or EXO1. In contrast to what was found with cdc13Δ, we could not identify any visible stn1Δ or ten1Δ colonies (Figures 4, S4A, and S4B). Interestingly, germinated stn1Δ and ten1Δ spores often formed microcolonies like some of the cdc13Δ genotypes (Figures 4 and 3B). Therefore, stn1Δ and ten1Δ cells sometimes undergo a limited number of cell divisions but cannot divide indefinitely, irrespective of the status of NMD2, RAD24, or EXO1. Similarly, cdc13Δ, nmd2Δ cdc13Δ, exo1Δ cdc13Δ, rad24Δ cdc13Δ, and exo1Δ rad24Δ cdc13Δ cells sometimes undergo a few cell divisions before stopping division (Figure 3B). We note that others have reported that stn1Δ rad24Δ microcolonies are smaller than cdc13Δ rad24Δ microcolonies, which is consistent with our data (Paschini et al., 2012). In summary, all these microcolony patterns suggest that stn1Δ and ten1Δ strains have similar, but more severe, growth defects than cdc13Δ strains.
One explanation for the fitness differences between cdc13Δ and stn1Δ or ten1Δ strains was that the differences were not due to CST defects per se but instead arose because important genes adjacent to STN1 and TEN1 were affected in the deletion strains (Ben-Shitrit et al., 2012). However, this is not the case, because the essential functions missing in stn1Δ and ten1Δ strains could be rescued by expressing the missing STN1 or TEN1 genes on plasmids (Figure S4C). Furthermore, strains relying on plasmid-borne STN1 or TEN1 could not lose these plasmids (Figure S4D). Given that several defined genetic backgrounds allow growth of cdc13Δ but not of stn1Δ or ten1Δ strains, this strongly implies that Stn1 and Ten1 are more critical for cell viability than Cdc13.

Our experiments show that budding yeast cells defective in NMD and Exo1 or Rad24 can grow indefinitely without Cdc13 and telomerase, although in such cells telomere function is compromised. However, cells with otherwise identical genetic backgrounds cannot grow in the absence of Stn1 or Ten1. The simplest explanation for these observations is that Stn1 and Ten1 play roles additional to those of Cdc13 in maintaining budding yeast cell viability. Consistent with these data, other experiments have shown that truncated and overexpressed versions of Stn1/Ten1 can bypass the need for Cdc13 (Petreaca et al., 2006, 2007; Gasparyan et al., 2009).

It has been shown that the nmd2Δ telomere phenotype is due, at least in part, to elevated Stn1 levels. Specifically, overexpression of STN1, or simultaneous overexpression of STN1 and TEN1, leads to short telomeres of a similar length to those of nmd2Δ mutants (Dahlseid et al., 2003). Therefore, we wondered whether the growth of cdc13Δ cells depended on Stn1 and/or Ten1 overproduction. To test this hypothesis, we examined a different genetic background that was not expected to affect Stn1 levels. Pif1 is a helicase that is active at telomeres, and deletion of PIF1 and EXO1 also permits deletion of CDC13 (Dewar and Lydall, 2010). We repeated previous experiments and were able to generate viable strains from germinated cdc13Δ pif1Δ exo1Δ spores. However, we were unable to generate equivalent stn1Δ or ten1Δ strains (Figures 5A-5C and S5), reproducing what was found in the other genetic backgrounds (Figure 4). These results strongly suggested that overexpression of Stn1 is not necessary to bypass Cdc13 function. However, it remained possible that pif1Δ or exo1Δ mutations caused increased Stn1 or Ten1 levels. Therefore, we measured STN1 and TEN1 RNA expression levels in pif1Δ, exo1Δ, and pif1Δ exo1Δ strains using quantitative RT-PCR (qRT-PCR). In these strains, the levels of STN1 and TEN1 RNA were not significantly different from wild-type (Figures 5D and 5E), whereas, as expected, the levels of STN1 and TEN1 RNAs were increased by an nmd2Δ mutation. Finally, it was possible that Stn1 or Ten1 might be transcriptionally induced by the response to telomere uncapping in cdc13Δ cells. However, this is not the case, because there is no significant increase in STN1 and TEN1 RNA levels in cdc13-1 strains grown at high temperatures (Greenall et al., 2008). We conclude that bypass of the requirement for Cdc13 does not depend on nmd2Δ-dependent overexpression of STN1 and/or TEN1. Instead, our data suggest that, at normal levels of expression, Stn1 and Ten1 can, in some circumstances, function without Cdc13 to maintain the viability of yeast cells.
Stn1, Ten1, and Cdc13 Can Bind Telomeric DNA at Different Ratios
Our experiments show that Stn1 and Ten1 contribute to yeast cell viability in conditions when Cdc13 is not required. To see whether the essential function provided by Stn1 or Ten1 was at telomeres, we asked whether disabling the NMD pathway affected the ratio of CST components at telomeres. To investigate this, we used a chromatin immunoprecipitation (ChIP) assay to measure the binding of Myc-tagged Stn1, Ten1, and Cdc13 to telomeric DNA in wild-type or nmd2Δ backgrounds. We observed about a 10-fold increase in the binding of Stn1 and a 5-fold increase for Ten1 at telomeres in nmd2Δ mutants, but only a 2-fold increase in the levels of Cdc13 (Figures 6A-6C). We conclude that Cdc13, Stn1, and Ten1, the components of the CST complex, can bind telomeres at different ratios.

Given that we were able to delete Cdc13, but could not delete Stn1 or Ten1, the other two components of the CST complex, this suggests that Stn1 or Ten1 might help cap the telomere in the complete absence of Cdc13. We tested this hypothesis using a ChIP assay. We found that Stn1-Myc was indeed bound to telomeric DNA in a Cdc13-independent manner in an nmd2Δ exo1Δ rad24Δ cdc13Δ strain (Figure 6D). The level of Stn1 binding to telomeres was lower in the cdc13Δ strain than in the CDC13+ strain; this could be due to the cdc13Δ cells having dramatically rearranged telomeres. We did not find evidence of Ten1 binding to telomeres in the absence of Cdc13 (Figure 6E). However, Ten1 enrichment at telomeres was also relatively weak in the nmd2Δ exo1Δ rad24Δ strain, and it may be that any binding is below our detection limit.

RPA, another ssDNA-binding protein, also binds at telomeres and is therefore likely to compete with Cdc13 as a telomeric ssDNA-binding protein. Consistent with this hypothesis, we measured more RPA bound to telomeres in the absence of Cdc13 (Figure 6F). This suggests that, in the absence of Cdc13, RPA can bind telomeric DNA and that RPA cooperates with Stn1, Ten1, and other proteins to cap the telomere. We conclude that Stn1 can bind telomeres in the absence of Cdc13.

DISCUSSION
We have shown that NMD acts in a pathway parallel to the Exo1 and Rad24 DDR proteins to inhibit the growth of yeast cells with defective telomeres. Furthermore, we show that NMD, like Exo1 and Rad24, affects the level of telomeric ssDNA. Remarkably, we find that the requirement for CDC13 can be robustly bypassed in 100% of cells with nmd2Δ and exo1Δ or rad24Δ mutations. Viable cdc13Δ strains can be cultured for many passages, and the telomeres in such cells resemble those of telomerase-deficient survivors and still bind Stn1. In contrast, none of the four genetic backgrounds that allow robust bypass of cdc13Δ allowed bypass of stn1Δ or ten1Δ.

Cdc13, along with Stn1 and Ten1, has been proposed to form an essential heterotrimeric telomeric ssDNA-binding complex analogous to RPA, the general ssDNA-binding complex (Gao et al., 2007). The CST/RPA model is attractive for many reasons, perhaps most notably because all three CST subunits are, like the RPA subunits, essential for yeast cell viability, and all three contribute to telomere protection. However, we have identified several defined genetic backgrounds that permit deletion of CDC13, but none of these permit deletion of STN1 or TEN1. The simplest explanation for these data is that Stn1 and Ten1 play Cdc13-independent roles at the telomere, or elsewhere.
We show that Stn1 and Ten1 binding to telomeric DNA increases more than that of Cdc13 in nmd2Δ strains, which suggests that Stn1 and Ten1 can bind telomeric DNA without Cdc13. Indeed, we also show that Stn1 binds to telomeric DNA in the absence of Cdc13. Consistent with our data, others have shown that C-terminal truncations of Stn1, which disrupt the Stn1-Cdc13 interaction, are sufficient to support cell viability and telomere function (Petreaca et al., 2007). Interestingly, Stn1 overproduction inactivates the S phase checkpoint in budding yeast, and, although the biochemical mechanism explaining this interaction is not known, it is tempting to speculate that some aspect of this checkpoint-inhibition function is critical for Stn1 function (Gasparyan et al., 2009). We conclude that budding yeast Cdc13, the largest component of the CST complex, contributes to a subset of the essential functions performed by its smaller partners, Stn1 and Ten1.

Ten1 was the last of the budding yeast CST components to be identified, in 2001 (Grandin et al., 2001). It was only much more recently that orthologs of CST components were identified in higher eukaryotes (Giraud-Panis et al., 2010). Our data from budding yeast, showing that STN1 and TEN1 are critical for cell viability in conditions when CDC13 is not, are consistent with data from other organisms, suggesting that this pattern might be universal among eukaryotes. For example, no ortholog of Cdc13 has yet been reported in fission yeast, whereas orthologs of both Stn1 and Ten1 have been identified (Jain and Cooper, 2010). Also, mutations in human CTC1, the ortholog of CDC13, are found in a number of diseases associated with telomere defects (Coats plus, dyskeratosis congenita and CRMCC); however, no equivalent mutations in STN1 or TEN1 have yet been identified in the same cohorts of patients (Anderson et al., 2012; Polvi et al., 2012; Walne et al., 2012). Perhaps mutations in Stn1 or Ten1 in humans cause stronger phenotypes that are not tolerated.

Figure 4 legend: (A and B) NMD2/nmd2Δ EXO1/exo1Δ RAD24/rad24Δ STN1/stn1Δ and NMD2/nmd2Δ EXO1/exo1Δ RAD24/rad24Δ TEN1/ten1Δ diploids were sporulated. Tetrads were dissected onto YEPD plates, and spores were allowed to form colonies for 5 days at 23°C before being photographed. Following germination of the spores, microcolonies were photographed using a 20× objective on a Microtec microscope and are reproduced at the same scale. We are uncertain about the genotypes of individual microcolonies, as we cannot establish which gene deletions were inherited by each spore (in contrast to Figure 3B). See also Figure S4.

Figure 5 legend: (A-C) PIF1/pif1Δ EXO1/exo1Δ CDC13/cdc13Δ, PIF1/pif1Δ EXO1/exo1Δ STN1/stn1Δ, and PIF1/pif1Δ EXO1/exo1Δ TEN1/ten1Δ diploids were sporulated. Tetrads were dissected onto YEPD plates, and spores were allowed to form colonies for 5 days at 23°C before being photographed. (D and E) qRT-PCR analysis of STN1 and TEN1 RNA expression levels in the strains indicated. A single wild-type (WT) strain was given the value of 1, and the error bar indicates the value of the other wild-type strain. All other genotypes are expressed relative to the single wild-type strain; the mean of two independent strains is shown, and error bars indicate the individual values of each strain. See also Figure S5.

We have previously shown that some cdc13Δ strains can also be deleted for STN1. These stn1Δ strains grew less well than the parental (cdc13Δ) strains, and we were unable to identify any ten1Δ strains.
These data, and those we report here, show that a functional telomere is very flexible in terms of the proteins it contains. The possibility remains that conditions will be identified that permit bypass of Stn1 and/or Ten1 but not of Cdc13. A better understanding of the functions of Cdc13, Stn1, and Ten1 at telomeres will be important to see whether this is likely. As it stands, our data suggest there is a hierarchy of CST subunit function in budding yeast, with Ten1 more critical than Stn1, which is in turn more critical than Cdc13.

If Stn1 and Ten1 function at eukaryotic telomeres in the absence of Cdc13, then how do they do so? Because Stn1 and Ten1 have low affinity for telomeric DNA (in comparison with Cdc13), one simple explanation is that Stn1 and Ten1 bind and cap the telomere via interactions with any of the numerous other telomere-binding proteins or RNAs. The idea that Stn1 interacts with proteins other than Cdc13 to perform essential functions is consistent with data showing that the Ten1 interaction domain of Stn1 is much more critical for cell viability than the Cdc13 interaction domain (Petreaca et al., 2007). Stn1/Ten1 might interact with one or more of the numerous other proteins found at budding yeast telomeres, and elsewhere, including Rap1, Rif1, Rif2, Ku, MRX, Tel1, telomerase, the Sir proteins, RPA, and DNA polymerase alpha. We tested a model in which subunits of RPA formed heterotrimers with CST subunits, but we could obtain no strong evidence for such a model (data not shown). However, we did observe increased binding of RPA to telomeres in the absence of Cdc13. Both budding yeast and mammalian CST components interact with Pol alpha primase, and in yeast this interaction has been shown to promote telomere capping (Grossi et al., 2004; Gasparyan et al., 2009; Qi and Zakian, 2000; Anderson et al., 2012). In mammalian cells, CST components facilitate the replication of telomeric lagging-strand DNA (Sun et al., 2011; Nakaoka et al., 2012). It will be interesting to determine how telomeres are capped and how replication is completed in the absence of Cdc13.

Finally, given that CST and NMD play important roles at telomeres in yeast and humans, the genetic interactions we report in yeast may identify useful avenues to pursue for developing future treatments for the human diseases in which CTC1 is affected (Gu and Chang, 2013). Premature termination codons are responsible for around 10% of inherited human diseases, and pharmaceuticals targeting NMD have been identified. If we extrapolate from the yeast experiments to human cells, it is conceivable that reducing NMD function pharmaceutically might compensate for loss of CTC1 function in patients.

Figure 6. Altered Stoichiometry of CST Components at Telomeres. (A-E) ChIP analysis of Cdc13-13Myc, Stn1-13Myc, and Ten1-13Myc binding to the VI-R telomere and the internal locus PAC2 on chromosome V. Cultures of each genotype were grown at 23°C, and cells were harvested in exponential phase. Duplicate samples were immunoprecipitated with a Myc antibody (IP) or a nonspecific IgG control (BG). ChIP samples were measured in triplicate by qPCR, and group means are shown with error bars indicating the SD. (F) ChIP analysis of RPA binding to the VI-R telomere and the internal locus PAC2 on chromosome V. ChIP was conducted as in (A)-(E) using an anti-S. cerevisiae RPA antibody (IP) or a nonspecific IgG control (BG). ChIP samples were measured in triplicate by qPCR, and group means are shown with error bars indicating the SD.
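The ChIP-qPCR scheme in the Figure 6 legend (a specific IP versus a nonspecific IgG background, measured at a telomeric locus and at an internal control locus) lends itself to a simple fold-enrichment calculation. The sketch below is a minimal illustration and not the authors' analysis pipeline: the ΔCt-based formula is a standard but assumed analysis, and all Ct values are hypothetical.

```python
# Minimal sketch of a ChIP-qPCR fold-enrichment calculation, assuming a
# delta-Ct comparison of the specific IP against the IgG background (BG),
# normalized to an internal control locus (e.g., PAC2). Values hypothetical.

def locus_enrichment(ct_ip: float, ct_bg: float) -> float:
    """Enrichment of the specific IP over background at one locus (2^-dCt)."""
    return 2.0 ** -(ct_ip - ct_bg)

def fold_enrichment(ct_ip_tel, ct_bg_tel, ct_ip_ctrl, ct_bg_ctrl):
    """Telomere enrichment relative to the internal control locus."""
    return locus_enrichment(ct_ip_tel, ct_bg_tel) / locus_enrichment(ct_ip_ctrl, ct_bg_ctrl)

# Example: hypothetical Ct values for a Myc-tagged protein at telomere VI-R vs PAC2.
print(fold_enrichment(24.0, 28.5, 26.0, 26.5))  # ~16-fold over background
```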
Yeast Strains
All strains are in the W303 background and are RAD5+ (Supplemental Experimental Procedures, list 1). Gene disruptions of CDC13, STN1, and TEN1 were created by inserting a hygromycin cassette into a diploid using one-step PCR, the primers indicated in Supplemental Experimental Procedures (list 2), and a pAG32 plasmid harboring HPHMX4 (Goldstein and McCusker, 1999) (Supplemental Experimental Procedures, list 3). Gene disruptions were confirmed by PCR. STN1 and TEN1 rescue plasmids were created by PCR-based gap repair of plasmid pDL1466 (see Supplemental Experimental Procedures, list 3, for plasmid details).

Yeast Growth Assays
Single colonies were inoculated into 2 ml of YEPD+adenine and grown in tubes at 23°C overnight until saturation. Six-fold serial dilution series of the cultures were spotted onto plates using a 48-prong replica-plating device. Plates were incubated for 2-3 days at the temperatures indicated before being photographed. For cycling-temperature assays, plates were incubated at 23°C for 4 hr and then at 36°C for 4 hr, and this was repeated three times before colonies were allowed to form at 23°C. For passage experiments, several colonies were pooled with a toothpick and restruck onto YEPD plates.

Synchronous Cultures and QAOS
Synchronous culture experiments and the viability assay were carried out in strains containing bar1Δ cdc15-2 mutations and were performed as previously described. Quantitative amplification of ssDNA was carried out as described (Holstein and Lydall, 2012).

In-Gel Assay
In-gel assays were performed as previously described (Dewar and Lydall, 2012). The Cy5-labeled oligonucleotide (M2188) was detected on a GE Healthcare Typhoon Trio imager. The agarose gel was post-stained using SYBR Safe, and total DNA was detected using a FUJI LAS-4000 imager. ssDNA was quantified using ImageJ and normalized relative to the loading control. The final fold change is relative to the 0 hr time point of each strain.

Microcolonies
After germination for 5 days at 23°C, colonies were photographed using a 20× objective on a Microtec microscope. An image was taken of each microcolony, and images are reproduced at the same scale for direct comparison.

Analysis of Telomere Structure
Southern blot analysis was performed essentially as previously described (Maringele and Lydall, 2004). Genomic DNA was cut with XhoI (New England Biolabs), run overnight on a 0.8% agarose gel, and transferred to a positively charged nylon membrane. The membrane was hybridized with a 1 kbp Y′ and TG probe, obtained by digesting pDL987 with XhoI and BamHI. The probe was labeled, and the blot was hybridized and immunologically detected, using the DIG-High Prime Labeling and Detection Kit (Roche, 11585614910). The probe was visualized using a FUJI LAS-4000 imager.

Quantitative RT-PCR
RNA isolation was performed essentially as described (Collart and Oliviero, 2001). RNA was further purified using the RNeasy Mini Kit (QIAGEN, 74104) and by DNase I digestion (Invitrogen, 18068-015). Quantitative RT-PCR was carried out using the SuperScript III Platinum SYBR Green One-Step qRT-PCR kit (Invitrogen, 11736-059). RNA samples were normalized relative to the BUD6 loading control.

All authors contributed to experimental design and writing the paper.
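Two quantification steps in the methods above reduce to simple arithmetic: the in-gel ssDNA signal (normalized to the loading control and expressed as a fold change relative to the 0 hr time point) and the qRT-PCR values (normalized to BUD6 and expressed relative to a wild-type strain). A minimal sketch of both is given below; the ΔΔCt form used for the qRT-PCR step is an assumption, as the paper states only the normalizations, and all input values are hypothetical.

```python
# Minimal sketches of the two normalizations described above; values hypothetical.

def ssdna_fold_change(band, loading, band_t0, loading_t0):
    """In-gel ssDNA signal over the loading control, relative to the 0 hr time point."""
    return (band / loading) / (band_t0 / loading_t0)

def relative_expression(ct_gene, ct_bud6, ct_gene_wt, ct_bud6_wt):
    """qRT-PCR expression normalized to BUD6 and to wild-type (delta-delta-Ct,
    an assumed but standard analysis)."""
    return 2.0 ** -((ct_gene - ct_bud6) - (ct_gene_wt - ct_bud6_wt))

print(ssdna_fold_change(1500.0, 800.0, 400.0, 850.0))  # ~4.0-fold increase in ssDNA
print(relative_expression(21.0, 18.0, 23.0, 18.5))     # ~2.8-fold over wild-type
```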
7,634.2
2014-05-15T00:00:00.000
[ "Biology" ]
Analysis of HIV quasispecies and virological outcome of an HIV D+/R+ kidney–liver transplantation

Introduction
Transplantation among HIV-positive patients may be a valuable therapeutic intervention. This study involves an HIV D+/R+ kidney–liver transplantation, in which PBMC-associated HIV quasispecies were analyzed in the donor and the transplant recipients (TR) prior to transplantation and thereafter, together with standard viral monitoring.

Methods
The donor was a 54-year-old HIV-infected woman; the kidney and liver recipients were two HIV-infected men, aged 49 and 61. The HIV quasispecies in PBMC was analyzed by ultra-deep sequencing of the V3 env region. During TR follow-up, plasma HIV-1 RNA quantification, HIV-1 DNA quantification in PBMC, analysis of proviral integration sites, and drug-resistance genotyping were performed. Other virological and immunological monitoring included CMV and EBV DNA quantification in blood and CD4 T cell counts.

Results
Donor and TR were all ART-suppressed at transplantation. Thereafter, the TR maintained nearly suppressed HIV-1 viremia, but HIV-1 RNA blips and an increase of proviral integration sites in PBMC attested to some residual HIV replication. A transient peak in HIV-1 DNA occurred in the liver recipient. No major changes of drug-resistance genotype were detected after transplantation. Transient CMV and EBV reactivations were observed only in the kidney recipient but did not require specific treatment. CD4 counts remained stable. No intermixing of quasispecies between donor and TR was observed at transplantation or thereafter. Despite signs of viral evolution in the TR, HIV genetic heterogeneity did not increase over the months of follow-up.

Conclusions
No evidence of HIV superinfection was observed in either the donor or the recipients. The immunosuppressive treatment administered to the TR did not result in clinically relevant viral reactivations.

Introduction
Kidney transplantation is a primary therapy for end-stage renal disease, just as orthotopic liver transplantation (OLT) is considered to be the best curative treatment for patients with hepatocellular carcinoma (HCC) [1,2]. HIV-positive individuals have a higher incidence of end-stage renal disease (ESRD) and face nearly threefold higher mortality on dialysis compared to their HIV-negative counterparts [3-6]. HCV/HIV or HBV/HIV co-infection is frequent in people who inject drugs (PWID) [7,8]. HCC is a relevant cause of mortality in co-infected patients [9,10], since HIV-related immunosuppression enhances viral replication in liver cells, contributing to HCC pathogenesis [11]. Advances in combination antiretroviral therapy (ART), however, have made HIV infection a manageable chronic disease. People currently living with HIV and on ART have a near-normal lifespan and are suitable candidates to receive an organ transplant, similar to the general population [12]. HIV+ donor to HIV+ recipient (HIV D+/R+) kidney transplantation was pioneered in South Africa in 2008 [13]. In Italy, HIV-infected people became suitable organ donors for HIV-positive recipients from 2018.
Although multicenter pilot studies reported that overall patient and graft survival in HIV+ donor to HIV+ recipient transplantation was excellent [13-15], the main concern about HIV/HIV transplantation is the possibility of donor-derived HIV superinfection of the recipients. Kidneys and livers are considered reservoirs of HIV infection: compartmentalized HIV replication has been demonstrated in kidneys, with site-specific viral variants in urine segregating from those present in plasma [16,17], whereas livers may harbor latently infected cells in subjects under effective antiviral treatment [18]. Ultra-deep sequencing (UDS) of viral quasispecies is a powerful tool to investigate variant mixture among infected individuals and has been used to trace transmission chains and identify clusters [19,20]. The aim of this study was to analyse donor and recipient HIV quasispecies in a D+/R+ kidney-liver transplantation, to highlight possible donor-derived superinfection and to monitor any viral reactivations as well as their clinical consequences.

Study population
The organ donor was a 54-year-old HIV-infected deceased woman who had been under suppressive ART since 1997. There was no evidence of viral failure (plasma HIV-1 RNA always under 200 cp/ml). Her treatment consisted of darunavir/cobicistat monotherapy, with no other documented chronic active viral infections. The cause of death was a spontaneous brain hemorrhage. At the time of organ risk assessment for donation, HIV-1 RNA was not detected in plasma, the CD4 count was 951 cells/mm³, and HBV/HCV markers were negative.

The kidney recipient was a 49-year-old haemophilic patient with end-stage renal disease on hemodialysis; he was infected with HIV (CDC stage B3), HBV (HBsAb+) and HCV (undetectable HCV-RNA). At the time of transplant, he had been under successful ART (raltegravir plus rilpivirine), with HIV-1 RNA ≤ 50 copies/ml for almost 10 years. Previous ART included NRTIs, NNRTIs and protease inhibitors, with occasional HIV-1 RNA viral loads up to 800 copies/ml between 2003 and 2008. HIV drug-resistance genotyping performed on PBMC DNA at the time of transplant showed the presence of both NRTI- and NNRTI-associated resistance mutations (D67D/N, T69T/N, K70R, K103K/R, K219K/Q).

The liver recipient was diagnosed with HCV-related cirrhosis and untreatable hepatocellular carcinoma within the Milan criteria. HCV infection was successfully treated with DAA in 2016. HBV markers were consistent with efficient immune control (HBcAb+, HBsAb+). Good virologic control was obtained with the last ART (darunavir/cobicistat and raltegravir). After transplant, a new ART (emtricitabine/tenofovir alafenamide and dolutegravir 50 mg BID), based on the previous GRT results, was initiated to avoid drug-drug interactions.

Virological evaluation
HIV-1 RNA in plasma was measured by the Aptima HIV-1 Quant assay (Hologic Inc., San Diego, CA, USA). PBMC-associated total HIV-1 DNA was quantified as in [21], with a limit of detection of 2.15 log copies/million PBMC. HIV-1 pol genotyping was performed on PBMC, as previously described [22,23]. CMV DNA and EBV DNA quantifications were performed on whole blood, while BKV DNA was monitored in plasma and urine, using the CMV ELITe MGB, EBV ELITe MGB and BKV ELITe MGB kits, respectively, on the ELITe InGenius instrument (ELITechGroup S.p.A., Torino, Italy).
Proviral HIV integration site analysis
Digestion of 10 µg of PBMC-extracted DNA with restriction enzymes, ligation of a double-stranded DNA linker, and semi-nested PCR using primers complementary to both the linker DNA and the long terminal repeat (LTR) end of the HIV provirus were performed as described in [24]. UDS was performed with the shotgun approach using the Ion Torrent S5 platform (Thermo Fisher Scientific, Waltham, MA, USA), following the manufacturer's protocols. High-quality reads were mapped on the HIV-1 reference sequence using BWA v0.7.

HIV env region UDS and phylogenetic analysis
Env region amplification was performed on PBMC DNA by nested PCR: the first and the second PCR were carried out with a Platinum proofreading polymerase (Invitrogen, by Life Technologies, Monza, Italy). Both PCRs comprised 30 cycles (94°C for 2 min; 94°C for 15 s; annealing at 60°C for 30 s; extension at 68°C for 1 min or 30 s; and final elongation at 68°C for 5 min) with the primers described in [20]. Sequencing was performed with the amplicon approach on the Ion S5 sequencer, following the manufacturer's protocols. The reads were corrected with an in-house pipeline described in [20]. Quasispecies complexity of the env region was evaluated by Shannon entropy, normalized for the total number of variants identified in each sample, as described in [25].
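The normalized Shannon entropy used here to summarize quasispecies complexity is straightforward to compute from variant counts. The sketch below assumes normalization by the logarithm of the number of distinct variants, following the description above; the counts are hypothetical.

```python
import numpy as np

def normalized_shannon_entropy(variant_counts):
    """Shannon entropy of a quasispecies, normalized for the number of
    distinct variants, so that 0 = homogeneous and 1 = maximally diverse."""
    counts = np.asarray(variant_counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                       # ignore variants with zero reads
    entropy = -np.sum(p * np.log(p))
    n = len(p)                         # number of distinct variants observed
    return entropy / np.log(n) if n > 1 else 0.0

print(normalized_shannon_entropy([900, 50, 30, 20]))  # low complexity, ~0.31
```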
Results and discussion
At the time of transplant (September 2019), the organ donor had no detectable HIV-1 RNA in plasma, while the kidney and liver recipients showed <30 and 59 HIV-1 RNA copies/ml in plasma, respectively. During the follow-up period (September 2019 to May 2021), the recipients were monitored for viro-immunological parameters of HIV infection and for the major pathogens able to reactivate in TR. For HIV infection monitoring, together with HIV-1 plasma viremia, PBMC-associated total HIV-1 DNA was evaluated. Quantitative determinations of CMV and EBV viremia in whole blood were performed at regular intervals in both recipients, while BKV DNA was measured in plasma and urine of the kidney-transplanted patient.

In Fig. 1, the kinetics of HIV-1 RNA and PBMC-associated HIV-1 DNA in the TR, from the time of transplant and throughout the whole follow-up period, are shown. In the kidney recipient, although plasma HIV-1 viremia always remained under the clinical threshold of 50 cp/ml, HIV-1 RNA was detected at <30 copies/ml at several time points; over the same period, the HIV cellular reservoir remained almost stable.

In order to provide additional evidence of residual HIV replication, proviral HIV integration site analysis was undertaken in PBMC collected at different times in both TR during the follow-up (Table 1). The reads containing the LTR region obtained for each sample, per time point, were median (IQR) 1858 (1113-2466). In the kidney recipient (panel A), new integration sites were observed in serial samples, together with an increase in the frequency of some integration sites already detected at baseline; other integration sites present at baseline decreased in frequency or were lost. In the liver recipient (panel B), the integration sites remained the same, with almost identical frequencies, throughout the 5 months of follow-up. In both recipients, the drug-resistance genotype determined on proviral DNA in PBMC did not change during the follow-up.

The kidney recipient experienced a small EBV reactivation, with a peak of 3,567 IU/ml of EBV DNA in blood soon after transplantation, and a transient asymptomatic CMV reactivation, with a peak value of 12,454 IU/ml of CMV DNA in blood after 12 weeks, which did not require specific treatment. BKV DNA in urine and plasma remained undetectable for the entire follow-up period. The liver recipient did not show any CMV/EBV reactivations.

In order to highlight possible transmission of HIV variants from donor to recipients, an extensive phylogenetic analysis by ultra-deep sequencing of the HIV env region was performed in donor and recipients at the time of transplant, and in the TR at different time points during the follow-up period (Fig. 2). A median (IQR) of 1109 (511-2101) corrected env sequences was obtained per patient/time point. Regarding viral tropism, all patients carried predominantly R5 virus at transplantation and thereafter. The phylogenetic tree constructed with all the corrected sequences from both donor and recipients, studied at different times with respect to the time of transplant, showed complete segregation of the HIV quasispecies between the subjects. This implies the absence of donor superinfection of the recipients and an independent genetic evolution in each TR during the follow-up, as suggested by the various sub-clustering observed among each recipient's sequences over time. However, this was associated with a decrease of viral complexity over time in both TR (see the insert in Fig. 2).

Despite the use of a powerful tool able to identify very-low-frequency minority variants in the HIV quasispecies, this study did not observe any evidence of donor-derived variants in the TR, as was also the case in a recent multicentre study [26]. Moreover, it has to be pointed out that previous studies which highlighted HIV superinfection of recipients after HIV D+/R+ transplantation mostly involved donors with detectable HIV-1 RNA in plasma [15,27,28]. In our case, the organ donor had no detectable HIV-1 RNA in circulation at the time of the organ explant, but it was not possible to rule out a priori the possibility of superinfection of the recipients through the transplanted organs, since both kidney and liver are organ reservoirs of HIV infection.

In general, people with HIV superinfection have a less favourable prognosis, displaying lower CD4+ T-cell counts, higher viral loads and a shorter time to adverse clinical events, compared to mono-infected persons [29,30]. In addition, viral recombination may occur in the cells of dually infected persons, resulting in recombinant strains that could be resistant to antiretroviral therapies [31]. In our case, although phylogenetic analysis excluded donor-derived superinfection, a low level of HIV replication persisted in both recipients, especially soon after transplantation. This was probably due to the temporary suspension of ART (2-3 days) during the stay in the intensive care unit after transplant. Indirect evidence of residual HIV replication during TR follow-up was the increase in the frequency, or the appearance, of specific integration sites, at least in one patient. Phylogenetic analysis also pointed to residual HIV replication, since in both TR some viral evolution was observed during the transplant follow-up. The immunosuppressive therapy, administered to counteract organ transplant rejection, probably played a role in favouring HIV replication.
It has been shown that persistent CMV and EBV shedding could contribute to the dynamics of the HIV-1 DNA reservoir during suppressive ART, increasing proviral genetic heterogeneity and HIV disease progression [32,33]. In this study, CMV and EBV reactivations after transplantation were not associated with an increase of HIV heterogeneity, even in the presence of viral reactivation in the kidney recipient.

Fig. 2 legend: Phylogenetic tree of donor and recipient env sequences. The phylogenetic tree was constructed with all the representative env sequences obtained from the donor (green), the kidney recipient (red/orange shades) and the liver recipient (blue shades) at all time points. Bootstrap values > 85% were considered statistically significant (*). In the insert, the complexity (normalized Shannon entropy) associated with each sample, from each patient, at the indicated time of collection, is shown.
3,026.8
2022-01-06T00:00:00.000
[ "Medicine", "Biology" ]
Simulation of absorption processes in nanoparticle catalysts

In this contribution, we present a novel modeling approach for mass transport problems that connects the microscale with the macroscale. It is based on a proper investigation of the diffusion process in the catalytic pellets, from which, after semi-analytic considerations, a source term for the macroscopic advection-diffusion process can be identified. For the special case of a spherical catalyst pellet, the parabolic partial differential equation at the microscale can be reduced to a single ordinary differential equation in time through the proposed semi-analytic approach. After the presentation of our model, we show results for its calibration against the macroscopic response of a mass transport experiment. Based thereon, the effective diffusion parameters of the catalyst pellet can be identified. Furthermore, we test the model's robustness by applying significant noise to virtual experimental datasets.

Introduction
Heterogeneous catalysis plays a key role in chemistry. In recent developments, the catalyst's size has shrunk down to the micro- or even nanoscale. While this leads to improved catalytic properties, such as the activity, it becomes at the same time harder to describe the catalysts' behavior a priori. Hence, the microscale effects need to be captured properly in numerical simulations. In this contribution, a computational description of the mass transport experiment of [1] with the nanocatalysts of [2] is presented.

Modeling and Numerics
Under the continuum assumption, the macroscale obeys the advection-diffusion-reaction (ADR) equation

    ∂c/∂t + ∇·(w c) − ∇·(D ∇c) = R,    (1)

where c, w, D and R represent the macroscopic concentration, velocity, diffusion tensor and reaction term, respectively. The semi-discrete finite element formulation is obtained if the strong form (1) is multiplied with a test function v(x) ∈ H¹₀(Ω) =: V, Green's formula is applied and, finally, the infinite-dimensional space V is substituted by the closed subspace V_h ⊂ H¹₀(Ω) of piecewise linear polynomials, with initial condition c(x, 0) = 0 and final time T. The Dirichlet boundary data c_exp(t) are the concentration measurements of the experiment, shown in Figure 1a with the label "Exp. Input", and ∂Ω_D is the top boundary of the computational domain. Due to the sharp advective flux in the second term of the weak form on the left-hand side, standard Galerkin-type discretizations suffer from instability. Hence, a suitable stabilization scheme needs to be added. In this contribution, a Streamline-Upwind Petrov-Galerkin (SUPG) method is used [3], where the test function v is substituted by v + δ_T w·∇v.

The reaction term R serves in this article as the connection between the macro- and the microscale. Two assumptions about the microscale are made: first, a linear distribution of the internal catalyst concentration c̄ is assumed, which results in the right part of equation (4); second, a spherical shape. Note that this can be extended to all regularly shaped and homogeneous catalyst morphologies. Here, c_Γ is the concentration at the catalyst boundary, and the volume-averaged internal concentration c̄ evolves according to a volume-averaged microscopic transient diffusion equation, which is similar to Eq. (1) but without the advective part (Eq. (5)). In the second part of Eq. (5), the right-hand side expresses the volume-averaged outgoing concentration flux on the catalyst boundary and, due to the assumptions about the morphology and the concentration distribution, can be simplified to the interfacial form

    ∂c̄/∂n |_Γ ≈ (c − c_Γ)/h,    (6)

where the fraction between c − c_Γ and h expresses the gradient at the boundary and h represents the interface thickness. At both scales, isotropic diffusion behavior is assumed. Hence, the diffusion tensors D and D̄ can be expressed as D = D_iso I and D̄ = D̄_iso I, respectively.
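Because the spherical-pellet problem reduces to a single ODE in time, the two-scale coupling can be illustrated in a few lines. The sketch below is not the paper's calibrated model: it assumes a first-order surrogate dc̄/dt = k_Γ(c − c̄) for the reduced microscale equation of Eqs (4)-(6), an implicit Euler step, and the macroscopic concentration frozen over each step; k_Γ, the step size and all values are placeholders.

```python
# Minimal sketch of the micro-macro coupling for one catalyst pellet, assuming
# the reduced microscale ODE d(c_bar)/dt = k_gamma * (c - c_bar) and implicit
# Euler in time. All parameter values are placeholders.

def micro_step(c_bar, c_macro, dt, k_gamma):
    """One implicit-Euler step of the volume-averaged pellet concentration."""
    return (c_bar + dt * k_gamma * c_macro) / (1.0 + dt * k_gamma)

c_bar, dt, k_gamma = 0.0, 0.1, 2.5
for step in range(50):
    c_macro = 1.0                        # macroscopic concentration at the pellet
    c_bar = micro_step(c_bar, c_macro, dt, k_gamma)
    R = k_gamma * (c_bar - c_macro)      # sink term fed back into the macroscopic ADR equation
```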
Results and Discussion
The model was calibrated by means of the CMA-ES algorithm [4], which yielded an almost perfect fit, visualized in Figure 1a. Three parameters were held fixed, namely the macroscopic diffusion coefficient D_iso, the velocity w and the interface thickness h. For the first fixed parameter, a characteristic value for acetone was taken. The velocity was described by a constant value computed from the experimental setup, and the interface thickness h was set to one, so that, if the true interface thickness is known, the optimized value of k_Γ can be recomputed. Both time derivatives were discretized by an implicit Euler scheme. The obtained optimized values can be found in Table 1.

After that, the optimized set of parameters was taken and the concentration at two different spatial points was recorded over time. The two spatial points are the middle and the end of the domain, at 2.5 cm and 5 cm, respectively. Then, Gaussian noise was applied to the simulated signal and the calibration was repeated, this time with the distorted simulation signal as the target. For a consistent and sufficiently robust model, approximately the same optimized parameters should be recovered. This is indeed the case, as can be seen in Table 2, and Figure 1b visualizes the fit together with the corresponding distorted signal.
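The calibration and noise-robustness test described above can be reproduced in miniature with the publicly available `cma` package. The sketch below assumes a least-squares misfit and replaces the forward FEM model with a hypothetical analytic surrogate (`run_simulation`); the target data are synthetic and noisy, mimicking the virtual experiments.

```python
import numpy as np
import cma  # reference CMA-ES implementation (pip install cma)

t = np.linspace(0.0, 10.0, 101)

def run_simulation(k_gamma, amplitude):
    """Hypothetical surrogate for the forward model: a saturating
    breakthrough curve parameterized by the two calibration parameters."""
    return amplitude * (1.0 - np.exp(-k_gamma * t))

# Virtual noisy "experiment", analogous to the distorted simulation signal.
measured = run_simulation(2.0, 1.0) + 0.01 * np.random.randn(t.size)

def misfit(params):
    return float(np.sum((run_simulation(*params) - measured) ** 2))

es = cma.CMAEvolutionStrategy([1.0, 0.5], 0.3)  # initial guess, initial step size
es.optimize(misfit)
print(es.result.xbest)  # should recover approximately (2.0, 1.0)
```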
1,155
2021-01-01T00:00:00.000
[ "Mathematics" ]
Heating of Jupiter's upper atmosphere above the Great Red Spot

The temperatures of giant-planet upper atmospheres at mid- to low latitudes are measured to be hundreds of degrees warmer than simulations based on solar heating alone can explain. Modelling studies that focus on additional sources of heating have been unable to resolve this major discrepancy. Equatorward transport of energy from the hot auroral regions was expected to heat the low latitudes, but models have demonstrated that auroral energy is trapped at high latitudes, a consequence of the strong Coriolis forces on rapidly rotating planets. Wave heating, driven from below, represents another potential source of upper-atmospheric heating, though initial calculations have proven inconclusive for Jupiter, largely owing to a lack of observational constraints on wave parameters. Here we report that the upper atmosphere above Jupiter's Great Red Spot, the largest storm in the Solar System, is hundreds of degrees hotter than anywhere else on the planet. This hotspot, by process of elimination, must be heated from below, and this detection is therefore strong evidence for coupling between Jupiter's lower and upper atmospheres, probably the result of upwardly propagating acoustic or gravity waves.

The spectrum in Fig. 1b shows strong emission features at six wavelengths, which appear prominently in the auroral regions and wane towards the equator. These are discrete ro-vibrational emission lines from H3+, a major ion in Jupiter's ionosphere, the charged (plasma) component of the upper atmosphere. The colour contours highlight the weaker emissions from this ion across the body of the planet. Far from showing a uniform intensity at low latitudes, there is a substantial intensity enhancement in all of the emission lines within the −13° to −27° planetocentric latitude range occupied by the GRS (ref. 9). As seen in the coloured contours of Fig. 1b, the H3+ emissions are isolated in wavelength, indicating that there is no continuum reflection of sunlight at the latitudes of the GRS.

The ratio between two or more emission lines can be used to derive the temperature of the emitting ions (refs 10, 11). With the observing geometry used here, such temperatures are altitudinally averaged 'column temperatures' of H3+; the majority of H3+ at Jupiter has been observed to be located at altitudes between 600 km and 1,000 km above the 1-bar pressure level (ref. 12). H3+ has been demonstrated to be in quasi-local thermodynamic equilibrium throughout the majority of Jupiter's upper atmosphere, meaning that derived temperatures are representative of the co-located ionosphere and (mostly H2) thermosphere (ref. 13). In the Methods section we detail the data reduction techniques and temperature-model fitting procedures, and in Fig. 2 we show example fits.
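The line-ratio thermometry described above follows directly from the Boltzmann distribution of the upper-state populations in quasi-LTE. The sketch below solves the two-line case in closed form; the spectroscopic constants shown are hypothetical placeholders, not the actual H3+ line list used in this work.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def line_ratio_temperature(i1, i2, line1, line2):
    """Column temperature from the ratio of two optically thin H3+ lines in
    quasi-LTE, assuming I ~ g * A * E_photon * exp(-E_upper / (k_B * T))."""
    c1 = line1["g"] * line1["A"] * line1["E_photon"]
    c2 = line2["g"] * line2["A"] * line2["E_photon"]
    return (line1["E_upper"] - line2["E_upper"]) / (K_B * np.log((c1 * i2) / (c2 * i1)))

# Hypothetical line parameters (degeneracy g, Einstein A in 1/s, energies in J).
line_a = {"g": 4, "A": 128.0, "E_photon": 5.0e-20, "E_upper": 5.6e-20}
line_b = {"g": 2, "A": 89.0,  "E_photon": 4.6e-20, "E_upper": 4.0e-20}
print(line_ratio_temperature(1.0, 0.8, line_a, line_b))  # ~1,260 K for this ratio
```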
The difficulty in explaining the observed upper-atmospheric temperatures of the giant planets was realized more than 40 years ago (ref. 1), and has since been termed the giant-planet "energy crisis" (refs 2, 4). For Jupiter, only the observed temperatures within the auroral regions have been adequately explained, as the 1,000-1,400 K temperatures observed there (ref. 14) result from auroral heating mechanisms that impart 200 GW of power per hemisphere through ion-neutral collisions and Joule heating (refs 15, 16). The low to mid-latitudes do not have such a heat source, and yet are measured to be near 800 K, which is 600 K warmer than can be accounted for by solar heating (refs 15, 17, 18).

If heating does not come from above (solar heating), and cannot be produced in situ via magnetospheric interactions, then a solution is likely to be found below. Gravity waves, generated in the lower atmosphere and breaking in the thermosphere, represent a potentially viable source of upper-atmospheric heating. Previous modelling studies, however, have led to inconclusive results for Jupiter: while viscous dissipation of gravity waves in Jupiter's upper atmosphere can lead to warming of the order of 10 K, sensible heat flux divergence can also lead to cooling by a similar amount, depending on the properties of the wave (refs 6, 7). Recent re-analysis of Galileo Probe data has shown that gravity waves impart a negligible amount of heating vertically to the stratosphere (gravity-wave motion is primarily longitudinal and latitudinal) and that heating near the thermosphere is less than 1 K per Jovian day (ref. 19). A more likely energy source is acoustic waves that heat from below (also via viscous dissipation); this form of heating requires vertical propagation of disturbances in the low-altitude atmosphere. Acoustic waves are produced above thunderstorms, and the subsequent waves have been modelled to heat the Jovian upper atmosphere by 10 K per day (ref. 20); on Earth, such waves have been observed to heat the thermosphere over the Andes mountains (refs 20, 21). On Jupiter, acoustic-wave heating has been modelled to potentially impart hundreds of degrees of heating to the upper atmosphere (ref. 22). However, to the best of our knowledge, no such coupling between the lower and upper atmosphere has been directly observed for the outer planets, so vertical coupling has not been seriously considered as a solution to the giant-planet energy crisis.

Jupiter's GRS is the largest storm in the Solar System, spanning 22,000 km by 12,000 km in longitude and latitude, respectively. The GRS lies within the troposphere, with cloud tops reaching altitudes of 50 km, around 800 km below the H3+ layer (ref. 9). In Fig. 3 we show (red circles) that the pattern of H3+ intensity seen above the GRS, when fitted to our model, gives column-averaged H3+ temperatures of over 1,600 K, higher than anywhere else on the planet, even in the auroral region. We also fitted temperatures to a swath of longitudes away from the GRS in order to illustrate that the enhancement in temperature occurs only within this longitude band. The latitudinal variation of temperatures away from the GRS is similar to the ranges previously observed (ref. 17), indicating that the high temperature above the GRS is localized in both latitude and longitude.

The high temperature in the northern part of the GRS provides direct observational evidence of a localized heating process. We interpret the cause of this heating to be storm-enhanced atmospheric turbulence, which arises owing to the flow shear between the storm and the surrounding atmosphere. Some of these waves must then propagate vertically upwards, depositing their energy as heat through viscous dissipation. It is unknown, at present, why the two red data points at GRS latitudes (grey shaded region in Fig. 3) differ.

Figure 3 legend: Red circle symbols correspond to the co-addition of GRS-related spectra (that is, from the spectral image in Fig. 1b) between 239° and 253° in Jovian System III Central Meridian Longitude (CML). The GRS latitudes are indicated by the grey shading.
Blue triangle symbols were derived from exposures taken in the ranges 293°-359° and 0°-82° CML, that is, longitudes well separated from the GRS, representing the 'ordinary' background conditions based on solar heating alone. The modelled temperature of the upper atmosphere for these non-auroral regions is 203 K (ref. 1). Uncertainties are standard errors of the mean.

The temperature of the southern portion of the GRS may be much higher than derived, but only if methane is preferentially brighter in the south. However, as the H3+ and CH4 lines at 3.454 μm are not separated spectrally in this work, it is not possible to conclude whether or not such contamination is present. An alternative physical explanation may relate to the relative velocities between the zonal wind and the GRS being greatest on the equatorward side of the storm: relative velocities are 75 m s⁻¹ in the north, 15 m s⁻¹ in the storm core, and 25 m s⁻¹ at the poleward edge (ref. 9). The largest relative velocities would induce the strongest flow shear, leading to the greatest turbulence and therefore the largest contribution to heating above.

It is possible that evidence of such energy transfer from the lower to the upper atmosphere would be deposited en route in the intervening troposphere and upper stratosphere (0-150 km altitude), as there is a temperature enhancement of 10 K encircling the GRS at these altitudes (refs 23, 24). However, this enhancement could also be due to the upwelling of material in the centre of the GRS, followed by increased adiabatic heating when the material downwells around the edges (ref. 24).

The only previous map of Jovian H3+ temperatures that contains the GRS was made using ground-based data obtained in 1993 (ref. 17). The authors of ref. 17 did not mention the GRS, as no obvious signature was present in their temperature map. However, on the basis of their temperature contours and the expected location of the GRS at the time, we estimate that there was a measured temperature enhancement of 50 K above the GRS. Such a minor temperature increase may indicate that the GRS-driven heating of Jupiter's upper atmosphere is transient, but the spatial resolution of the 1993 observations was 9,800 km per pixel (at the equator), compared with 500 km per pixel in this study. The previous data therefore had much cruder resolution in latitude and longitude, and any localized temperature enhancements would have been smoothed out.

In this work, the high-temperature region above the GRS is localized in latitude and longitude, indicating a large temperature gradient and perhaps confinement by currently unknown upper-atmospheric dynamics. If wave heating driven from below is responsible for the temperatures observed in Jupiter's non-auroral upper atmosphere, then we might expect a relatively smooth temperature profile with latitude, punctuated by temperature enhancements above active storms. The GRS may then simply be the 'smoking gun' that dramatically illustrates this atmospheric coupling process, and provides the clue to solving the giant-planet energy crisis.
2,298.8
2016-07-27T00:00:00.000
[ "Environmental Science", "Physics" ]
Magnetic imaging by x-ray holography using extended references

We demonstrate magnetic lensless imaging by Fourier transform holography using extended references. A narrow slit milled through an opaque gold mask is used as a holographic reference, and magnetic contrast is obtained by x-ray magnetic circular dichroism. We present images of magnetic domains in a Co/Pt multilayer thin film with perpendicular magnetic anisotropy. This technique holds advantages over standard Fourier transform holography, where small holes are used to define the reference beam. The increased intensity through the extended reference reduces the counting time needed to record the far-field diffraction pattern. Additionally, it was found that manufacturing narrow slits is less technologically demanding than the same procedure for holes. We achieve a spatial resolution of ∼30 nm, which was found to be limited by the sampling period of the chosen experimental setup. © 2011 Optical Society of America

OCIS codes: (090.1995) Digital holography; (340.7440) X-ray imaging; (260.6048) Soft x-rays; (310.6870) Thin films, other properties.

References and links
1. S. Eisebitt, J. Lüning, W. F. Schlotter, M. Lörgen, O. Hellwig, W. Eberhardt, and J. Stöhr, "Lensless imaging of magnetic nanostructures by x-ray spectro-holography," Nature 432, 885–888 (2004).
2. C. Tieg, R. Frömter, D. Stickler, S. Hankemeier, A. Kobs, S. Streit-Nierobisch, C. Gutt, G. Grübel, and H. P. Oepen, "Imaging the in-plane magnetization in a Co microstructure by Fourier transform holography," Opt. Express 18, 27251–27256 (2010).
3. O. Hellwig, S. Eisebitt, W. Eberhardt, W. F. Schlotter, J. Lüning, and J. Stöhr, "Magnetic imaging with soft x-ray spectroholography," J. Appl. Phys. 99, 08H307 (2006).
4. S. Streit-Nierobisch, D. Stickler, C. Gutt, L.-M. Stadler, H. Stillrich, C. Menk, R. Frömter, C. Tieg, O. Leupold, H. P. Oepen, and G. Grübel, "Magnetic soft x-ray holography study of focused ion beam-patterned Co/Pt multilayers," J. Appl. Phys. 106, 083909 (2009).
5. C. Tieg, E. Jimenez, J. Camerero, J. Vogel, C. Arm, B. Rodmacq, E. Gautier, S. Auffert, B. Delaup, G. Gaudin, B. Dieny, and R. Miranda, "Imaging and quantifying perpendicular exchange biased systems by soft x-ray holography and spectroholography," Appl. Phys. Lett. 96, 072503 (2010).
6. H. Hopster and H. P. Oepen, Magnetic Microscopy of Nanostructures (Springer, 2005).
7. S. G. Podorov, K. M. Pavlov, and D. M. Paganin, "A non-iterative reconstruction method for direct and unambiguous coherent diffractive imaging," Opt. Express 15, 9954–9962 (2007).
8. M. Guizar-Sicairos and J. R. Fienup, "Holography with extended reference by autocorrelation linear differential operation," Opt. Express 15, 17592–17612 (2007).
9. M. Guizar-Sicairos and J. R. Fienup, "Direct image reconstruction from a Fourier intensity pattern using HERALDO," Opt. Lett. 33, 2668–2670 (2008).
10. D. Zhu, M. Guizar-Sicairos, B. Wu, A. Scherz, Y. Acremann, T. Tyliszczak, P. Fischer, N. Friedenberger, K. Ollefs, M. Farle, J. R. Fienup, and J. Stöhr, "High-resolution x-ray lensless imaging by differential holographic encoding," Phys. Rev. Lett. 105, 043901 (2010).
11. D. Gauthier, M. Guizar-Sicairos, X. Ge, W. Boutu, B. Carré, J. R. Fienup, and H. Merdji, "Single-shot femtosecond x-ray holography using extended references," Phys. Rev. Lett. 105, 093901 (2010).
12. G. van der Laan, "Soft x-ray resonant magnetic scattering of magnetic nanostructures," C. R. Physique 9, 570–584 (2008).
13. G. Beutier, A. Marty, F. Livet, G. van der Laan, S. Stanescu, and P. Bencok, "Soft x-ray coherent scattering: instrument and methods at ESRF ID08," Rev. Sci. Instrum. 78, 093901 (2007).
14. H. He, U. Weierstall, J. C. H. Spence, M. Howells, H. A. Padmore, S. Marchesini, and H. N. Chapman, "Use of extended and prepared reference objects in experimental Fourier transform x-ray holography," Appl. Phys. Lett. 85, 2454–2456 (2004).
15. S. Eisebitt, M. Lörgen, W. Eberhardt, J. Lüning, J. Stöhr, C. T. Rettner, O. Hellwig, E. E. Fullerton, and G. Denbeaux, "Polarization effects in coherent scattering from magnetic specimen: implications for x-ray holography, lensless imaging, and correlation spectroscopy," Phys. Rev. B 68, 104419 (2003).
16. A. Scherz, W. F. Schlotter, K. Chen, R. Rick, J. Stöhr, J. Lüning, I. McNulty, Ch. Günther, F. Radu, W. Eberhardt, O. Hellwig, and S. Eisebitt, "Phase imaging of magnetic nanostructures using resonant soft x-ray holography," Phys. Rev. B 76, 214410 (2007).
17. S. Hashimoto and Y. Ochiai, "Co/Pt and Co/Pd multilayers as magneto-optical recording materials," J. Magn. Magn. Mater. 88, 211–226 (1990).
18. V. Baltz, A. Marty, B. Rodmacq, and B. Dieny, "Magnetic domain replication in interacting bilayers with out-of-plane anisotropy: application to Co/Pt multilayers," Phys. Rev. B 75, 014406 (2007).
19. A. Hubert and R. Schäfer, Magnetic Domains: The Analysis of Magnetic Microstructures (Springer Verlag, 1998).
20. W. F. Schlotter, R. Rick, K. Chen, A. Scherz, J. Stöhr, J. Lüning, S. Eisebitt, C. Gunther, W. Eberhardt, O. Hellwig, and I. McNulty, "Multiple reference Fourier transform holography with soft x rays," Appl. Phys. Lett. 89, 163112 (2006).
21. J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. 21, 2758–2769 (1982).

Introduction
Characterization of magnetic states on the submicron scale is a challenging task but holds promise for rapid advances in understanding and utilizing the properties of new materials for spintronic devices. Fourier transform holography (FTH) is a well-established lensless technique for imaging the perpendicular component of magnetic domains [1] and, more recently, systems with in-plane magnetization [2]. Holographic imaging is suitable not only for the remanent state [1] (as typically probed by, e.g., photoemission electron microscopy (PEEM) or magnetic force microscopy (MFM)), but can also be used under applied electric and magnetic fields [3-5], and it shows great potential for studying the dynamics of multiferroic thin films. Like other photon-in photon-out techniques, x-ray holography offers the opportunity to follow the evolution of magnetic domain structures and to study the intrinsic properties of materials. Complementary to MFM or Lorentz microscopy [6], x-ray holography has the specific advantage of being sensitive to the three-dimensional magnetization profile. Furthermore, its element specificity is beneficial to the study of heterogeneous systems.
A recent development in the field of FTH has reduced the restrictions on the reference size, allowing a much wider range of possibilities. The technique, known as holography with extended reference by autocorrelation linear differential operator (HERALDO) [7,8], permits the use of larger objects as references without compromising the spatial resolution. Since its first experimental demonstration in 2008 [9], reports of HERALDO have revealed its lensless imaging capabilities in coherent soft x-ray scattering [10] and ultrafast single-shot imaging [11]. A comparison with other microscopy techniques, such as the full-field transmission x-ray microscope (TXM) and the scanning transmission x-ray microscope (STXM), where high-resolution zone plate optics are used, can be found in Zhu et al. [10]. Unlike standard FTH using holes as point references, in HERALDO the reference emerges from boundary waves produced by sharp corners or edges of an extended object. In principle, the highest achievable resolution is no longer limited by the size of the reference, but rather by the quality of its sharpest features. In our experiment we have chosen an extended reference in which boundary waves emerge from both edges of a narrow slit. The far-field diffraction pattern formed by the interference between the waves from the object and the extended reference produces a hologram. The hologram is multiplied digitally with a differential filter, and after a simple Fourier transform of this result, a complex-valued reconstruction is retrieved. In this paper we demonstrate the use of HERALDO as a means of identifying the magnetic domain patterns which form in the remanent state of a Co/Pt multilayer film. A magnetic multilayer of [Co(5 Å)/Pt(10 Å)]×30 was deposited onto the front side of a 100 nm thick Si3N4 membrane. A 600 nm Au film was deposited on the reverse side of the membrane to form an x-ray opaque mask. Focused ion beam (FIB) milling was used to fabricate a reference slit (2 μm × 30 nm) and a viewing aperture (1.5 μm diameter) in the mask. A schematic cross section through the sample is shown in Fig. 1(a), and a scanning electron micrograph of the viewing aperture and reference slit is shown in Fig. 1(b). The slit length and its distance from the viewing aperture were chosen such that the separation conditions [8] prevented real-space objects in the reconstruction from overlapping. Results and Discussion The polar magneto-optical Kerr effect (MOKE) hysteresis loop for the Co/Pt multilayer film is shown in Fig. 1(c), which indicates that the sample possesses perpendicular magnetic anisotropy. This is supported by the MFM image, shown in Fig. 1(d), which reveals the typical maze pattern of antiferromagnetically coupled up and down domains formed in the multilayer at remanence. In our experiment we exploit x-ray magnetic circular dichroism (XMCD) to achieve magnetic contrast. The tunability of the energy and polarization of the x-rays from a synchrotron allows for an element-specific enhancement of the magnetic scattering [12]. Measurements were performed at the Co L3 absorption edge (hν = 778 eV or λ = 15.9 Å) using circularly polarized x-rays from beamline I06 at Diamond Light Source. The experimental set-up is depicted in Fig. 2. A pinhole, 20 μm in diameter, was placed upstream from the sample [see Fig.
2(a)] to skim the x-ray beam, extracting a small fraction with nearly full transverse coherence [13] and defining the spot size. Holograms were recorded using a charge-coupled device (CCD) camera (Princeton Instruments, 2048×2048 pixels of size 13.5 μm) placed 450 mm downstream from the sample. A beamstop, consisting of a ∼600 μm diameter disk on two ∼50 μm diameter crosswires, was used to block the direct x-rays and so prevent overexposure of the CCD. A drawback of standard FTH is the low contrast of the interference fringes, particularly at larger scattering angles where the higher-resolution information is encoded. The relative photon flux from the object and reference determines the fringe visibility. Improving the contrast typically compromises the spatial resolution, as the reference would need to be enlarged to obtain a higher throughput [14]. This problem is significantly alleviated in HERALDO, as the flux passing through the extended reference can be far greater than that through a reference hole. Apart from the pixel size and acceptance angle of the CCD, the ultimate resolution in the reconstruction is limited in the direction across the slit by the slit width and, along the slit, by the sharpness of the edges. Moreover, the manufacture of extended objects proves to be less demanding than that of reference pinholes due to the nature of FIB milling [10]. Interestingly, we found that slits could be manufactured with a width smaller than the size of a reference hole. Analysis of the point spread function (PSF) of the two approaches shows that a reference hole gives a broader response than the derivative of a slit edge. The differential filter to be multiplied with the hologram is defined by the directional derivative along the edges of the extended reference. Prior knowledge of the slit orientation is, however, not required, as it is readily determined from the intense streak that the slit forms in the recorded hologram, as can be seen in Fig. 2(b). Accumulations of 10 images, each frame with an exposure time of 40 s on the full area of the CCD, were recorded for the two opposite x-ray helicities under attenuated beam intensity. The difference between the images for the two helicities gives the magnetic contrast [15]. The 800 s total exposure time used to achieve the reconstruction should not be taken as representative of the required image acquisition time, as we did not optimize this aspect of the measurement. The difference image multiplied by the differential filter [see Fig. 2(c)] is shown in Fig. 3(a), with the corresponding Fourier transform shown in Fig. 3(b). The reference provides a Dirac delta function at either end of the slit. The reconstruction reveals four object-reference cross-correlations, which are two-by-two conjugated. Each conjugate image shows the same domain structure, similar to the case of traditional FTH. The image reconstructions at opposite ends of the slit also display the same structure, but as seen in Fig. 3(b) they show an unexpected difference in contrast and a reversal in color. The image reconstructions can indeed have different global amplitude and phase if the slit is not uniformly illuminated. Whilst the origin of the color reversal remains less clear, it has been reported by Scherz et al.
[16] that a phase change may arise from an inappropriate determination of the centre of symmetry, loss of high-q information due to the limited size of the CCD, or loss of low-q information due to the beamstop. The shadowing from the beamstop can be reduced by a simple modification of the design. Furthermore, faint rings can be seen in the reconstruction between the cross-correlations, which we believe are due to irregularities along the slit edge. We found that the magnetic domain structure in the images is completely reproducible, with the same details observed after the sample had been removed and placed back in the set-up several weeks later. Figure 3(c) shows an enlargement of the cross-correlation image with the best contrast. The orthogonal lines in the image indicate where vertical and horizontal line scans were made across a domain wall, and these are plotted in Fig. 3(d). The resolution of the imaging can be estimated by taking into account the typical width of a domain wall in Co/Pt multilayer films (<10 nm) [16,17]. By fitting the measured intensity profile with a hyperbolic tangent of the form I = I0 tanh[2(x − x0)/w] + IC [18], where x0 is the center position of the domain wall, w is its width, and I0 and IC are intensity offset values, we obtain a width of the domain boundary w ≈ 30 ± 10 nm in both directions. This value is of the order of the sample period (∼30 nm), and is thus limited by the maximal acceptance angle of the CCD camera. Conclusion In conclusion, we demonstrate holographic magnetic imaging using HERALDO. The total resolution achieved in our experimental set-up is limited by the sample period (∼30 nm), which is larger than the physical domain wall width. Within the error bar, the resolution is the same in the vertical and horizontal directions (the latter corresponds to the HERALDO filtering). Compared to the conventional FTH method, the clear benefits are expected to be found in a reduction of the image acquisition time, due to the increased intensity through the extended reference, and in the easier manufacturing of reference slits compared to holes. The method as described here has significant scope for improvement, and a resolution of ∼15 nm [10] seems achievable in the near future. Spatial multiplexing could be applied with additional extended references to improve the signal-to-noise ratio [20]. Furthermore, HERALDO is a non-iterative approach, which offers a fast and simple reconstruction method that could be used as the starting point for further iterative phase retrieval algorithms [21]. The technique is robust, similar to previous FTH methods, and is equally suitable for imaging under extreme conditions, such as high magnetic fields or low temperatures. We recognize HERALDO as a promising approach for future studies of magnetic systems. Its ability to directly image magnetic configurations within an applied field could greatly benefit the advance of magnetic logic devices and magnetic racetrack memory devices, where understanding the propagation and controlled pinning of magnetic domain walls along nanowires is desired. Fig. 1. (Color online) (a) Schematic of the sample design cross section. (b) Scanning electron micrograph (SEM) of the object aperture and the reference slit viewed from the side of the Au mask (scale bar = 1 μm). (c) Polar MOKE hysteresis loop of the Co/Pt multilayer. (d) Typical MFM image showing the maze domain structure that forms in a [Co(5 Å)/Pt(10 Å)]×30 multilayer at remanence (scale bar = 300 nm). Fig. 2.
(Color online) (a) The object and reference slit are illuminated by the coherent x-ray beam. (b) The hologram formed by the interference of the scattered x-rays from the object and reference slit is recorded on the CCD camera in the far field. (c) A linear differential filter, defined by the derivative along the slit direction, is multiplied by the hologram before performing a Fourier transform to retrieve the reconstructed image. Fig. 3. (Color online) (a) Difference between the images taken with the two opposite x-ray helicities after applying a differential filter (only the central 1000×1000 pixels are shown; scale bar = 20 μm⁻¹). (b) Fourier transform of (a), showing the real part of the image. The cross-correlations between the reference slit edges and the object, and their twin images, can be seen (only the central 410×410 pixels are shown; scale bar = 1.5 μm). (c) Magnified view of one of the cross-correlations from (b); the thin lines across the domains indicate the locations where the profile scans in (d) were taken. (d) The horizontal line scan (orange square symbols) corresponds to the resolution along the slit direction. The intensity profile across the domain vertically (blue triangle symbols) corresponds to the resolution across the width of the slit. The data are fitted (red lines) using a hyperbolic tangent, and the resolved width of the domain wall is obtained as w ≈ 30 ± 10 nm in both directions. For clarity, both curves have been shifted arbitrarily along the x-axis.
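As a practical companion to the reconstruction procedure described above (multiplication by a linear differential filter followed by a single Fourier transform), the following Python sketch may be useful. It is our illustration, not code from the experiment: the function name, the synthetic input, and the exact filter convention are assumptions.

```python
import numpy as np

def heraldo_reconstruct(hologram_diff, slit_angle_deg):
    """Reconstruct a HERALDO difference hologram.

    hologram_diff  : 2-D array, difference of the holograms recorded with
                     the two opposite x-ray helicities (detector plane).
    slit_angle_deg : orientation of the reference slit, read off from the
                     intense streak it forms in the recorded hologram.
    """
    ny, nx = hologram_diff.shape
    fy, fx = np.meshgrid(np.arange(ny) - ny // 2,
                         np.arange(nx) - nx // 2, indexing="ij")
    theta = np.deg2rad(slit_angle_deg)
    # The directional derivative along the slit corresponds, in the
    # conjugate plane, to multiplication by a linear ramp i*2*pi*(q . n).
    ramp = 1j * 2.0 * np.pi * (fx * np.cos(theta) + fy * np.sin(theta))
    filtered = hologram_diff * ramp
    # A single Fourier transform of the filtered hologram retrieves the
    # complex-valued reconstruction; the object-reference cross-correlations
    # appear at the positions of the two slit ends.
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(filtered)))

# Synthetic placeholder; a measured frame would come from the CCD.
holo = np.random.rand(2048, 2048)
recon = heraldo_reconstruct(holo, slit_angle_deg=30.0)
print(recon.shape, recon.dtype)
```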
3,974.6
2011-08-15T00:00:00.000
[ "Physics" ]
Alpha Centauri System and Meteorites Origin We propose a mathematical model for determining the probability of the origin of meteorites impacting the earth. Our method is based on axioms similar to those of both complex networks and emergent gravity. As a consequence, we are able to derive a link between complex networks and Newton's gravity law, and as a possible application of our model we discuss several aspects of the Bacubirito meteorite. In particular, we analyze the possibility that the origin of this meteorite may be the alpha Centauri system. Moreover, we find that in order for the Bacubirito meteorite to come from alpha Cen and be injected into our Solar System, its velocity must be reduced by one order of magnitude from its ejection escape velocity from alpha Cen. There are several ways in which this could have happened, for example through collisions with Oort cloud objects (located outside the boundary of our Solar System), and/or through collisions within the asteroid belt (located between Mars and Jupiter). We also argue that it may be interesting to study the Bacubirito meteorite from the perspective of the recently discovered Oumuamua object. Introduction In this work we are interested in answering the question of whether meteorites found on the earth actually came from the closest star system, Alpha Centauri (alpha Cen) [1]. Typically, when one thinks about a meteorite found on earth, one associates its origin with a number of possibilities connected with the solar system, including the asteroid belt, the Kuiper belt or the Oort cloud (see Refs. [1] and [2]). In a sense, this is because one assumes that the probability of a given meteorite origin decreases with distance from the sun; thus, one determines that the most probable meteorite origin lies within the Solar System (see Ref. [15]). However, other studies have shown that the later stage of accretion needed to produce lunar-mass objects is reduced in efficiency due to orbital rephrasing by the binary companion. This inhibits collisional growth around alpha Cen A to regions within 0.75 AU. Moreover, visual double stars are very interesting objects for astrophysics: it is known that determining the physical parameters of their orbits can help to determine the luminosity of such stars [16]. If the circumstellar discs of the alpha Cen system are capable of forming planets, it is natural to assume that there are also analogues of the asteroid belt, the Kuiper belt or the Oort cloud. From this point of view, one must allow for the possibility that some meteorites found on earth come from the alpha Cen system. Of course, the process can also work in reverse, in the sense that there may have been an interchange of asteroids between the alpha Cen system and the solar system. At present, roughly speaking, the Oort cloud starts approximately at a distance of 1 light-year and ends at 2 light-years from the sun. So one may assume that the analogous Oort cloud of the alpha Cen system also starts at a distance of 1 light-year and ends at 2 light-years from the alpha Centauri system. Considering that the average distance between the sun and the alpha Centauri system is about 4 light-years, it seems reasonable to assume that both Oort clouds are in interaction. If this is the case at present, then presumably in the past the joint evolution of both systems was of great importance, in the sense that part of the matter of both systems must have been interchanged. So, it is expected that some asteroids trapped in the solar system actually come from the alpha Cen system.
Consequently, although the origin probability for a meteorite found on earth is higher for structures near the Earth, such as the asteroid belt, the probability is not necessarily zero if one assumes an alpha Cen origin for some meteorites. In this work we propose a mathematical model that could help answer this question. Such a model corresponds to a complex network adapted to gravitational phenomena, and as a possible application of our gravitational complex network model, we consider the Bacubirito meteorite [17] [18] [19]. Finally, it is interesting to mention that a future interstellar spacecraft may eventually visit the alpha Cen system. This work is organized as follows. In Section 2 we comment on complex networks. In Section 3, we obtain an expression for gravitational complex networks. In Section 4, we explore whether our formalism can be applied to the case of the Bacubirito meteorite. In Section 5 we mention the possibility that some meteorites found on earth may have an interstellar origin, like the Oumuamua object [20] [21], and that perhaps this may be the case for the Bacubirito meteorite. For more details on the alpha Cen system see Refs. [22]-[45]. Comments on Complex Networks It is known that random networks with complex topology describe a wide range of systems in Nature. Surprisingly, recent advances in this scenario show that most large networks can be described by a mean-field method applied to a system with scale-free features (see Refs. [3] [4] for details). In fact, it is found that in the case of scale-free random networks, the observed degree distribution follows the power law

$P(k) \sim k^{-\gamma}$, (1)

where $P(k)$ is the probability that a vertex in the network is connected to $k$ other vertices and $\gamma$ is a numerical scale-free parameter, the so-called "connectivity distribution exponent". Random networks with complex topology are based on two principles: 1) Growth: starting with a small number of vertices $v_0$, at every time step $t$ one adds a new vertex with $e < v_0$ edges that will be connected to the vertices already present in the system. 2) Preferential attachment: when choosing the vertices to which the new vertex connects, one assumes that the probability $\Pi$ that the new vertex will be connected to vertex $i$ depends on the connectivity (node degree) $k_i$ of that vertex and is given by

$\Pi(k_i) = \frac{k_i}{\sum_j k_j}$. (2)

(The richer become richer.) Observe that the sum in (2) goes over all vertices in the system except the new one. Assuming that $k_i$ is a continuous parameter, one can take the variation of $k_i$ with respect to time to be proportional to this probability,

$\frac{\partial k_i}{\partial t} = e\,\Pi(k_i) = \frac{e\,k_i}{\sum_j k_j}$,

where $e$ is a proportionality constant. It is possible to show that $\sum_j k_j \approx 2et$. Therefore, one gets the equation

$\frac{\partial k_i}{\partial t} = \frac{k_i}{2t}$,

which has the following solution (given the condition $k_i(t_i) = e$):

$k_i(t) = e\left(\frac{t}{t_i}\right)^{1/2}$.

Then, using this expression, the probability that a vertex has connectivity $k_i$ smaller than $k$ can be written as

$P\big(k_i(t) < k\big) = P\!\left(t_i > \frac{e^2 t}{k^2}\right) = 1 - \frac{e^2 t}{k^2 (t + v_0)}$,

where we have assumed that the probability density for $t_i$ is uniform, $P(t_i) = 1/(v_0 + t)$. Differentiating this expression with respect to $k$, one obtains the probability that a vertex in the network is connected to $k$ other vertices,

$P(k) = \frac{\partial P\big(k_i(t) < k\big)}{\partial k} = \frac{2 e^2 t}{k^3 (t + v_0)} \sim k^{-3}$.

Comparing this expression with (1), one sees that in this model the free-scaling parameter becomes $\gamma = 3$.
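The result $\gamma = 3$ can be checked numerically. The following Python sketch is our illustration of the growth and preferential-attachment rules (1) and (2), not code from the paper; for simplicity it tolerates occasional multi-edges, and the exponent estimate from a log-log histogram fit is only approximate.

```python
import numpy as np

def grow_scale_free_network(n_vertices, e=2, v0=3, seed=0):
    """Grow a network by preferential attachment; return vertex degrees."""
    rng = np.random.default_rng(seed)
    # Repeated-vertex list: vertex i appears k_i times, so a uniform draw
    # from it realizes Pi(k_i) = k_i / sum_j k_j from Equation (2).
    targets = [i for i in range(v0) for _ in range(v0 - 1)]
    degree = np.zeros(n_vertices, dtype=np.int64)
    degree[:v0] = v0 - 1
    for new in range(v0, n_vertices):
        chosen = rng.choice(targets, size=e, replace=False)
        for t in chosen:            # multi-edges tolerated in this sketch
            degree[t] += 1
            targets.append(int(t))
        degree[new] = e
        targets.extend([new] * e)
    return degree

deg = grow_scale_free_network(20_000)
k = np.arange(deg.min(), deg.max() + 1)
counts = np.bincount(deg)[deg.min():]
mask = counts > 0
slope = np.polyfit(np.log(k[mask]), np.log(counts[mask]), 1)[0]
print(f"estimated exponent gamma ~ {-slope:.2f}")  # should be roughly 3
```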
Complex Network; Gravitational Information Theory The idea of connecting gravity with networks has been of great interest through the years (see Ref. [5] for details). In Ref. [5] it was shown that, by relying on a connection between information theory and scale-free random networks, one can obtain the Newton gravitational theory (see Appendix). In Ref. [6] the identification $P \leftrightarrow F$ of the expressions

$P \sim \frac{1}{k^3}$ (10)

and

$F \sim \frac{1}{r^2}$ (11)

was considered. Consequently, a possible relation between the radius $r$ and the connectivity $k$ was established,

$r \sim k^{3/2}$. (12)

In fact, the expression (10) can be generalized in the form

$P \sim \frac{1}{k^\gamma}$, (13)

where, as mentioned in Section 2, $\gamma$ is just a free-scale parameter called the connectivity distribution exponent. It turns out that the scale-free parameter $\gamma$ is model-dependent. For instance, in observed networks its values lie in the range $2 \leq \gamma \leq 3$ [6]. For gravitational theory the most interesting possibility is $\gamma = 2$. In this case, because of the above expression, $P(k)$ becomes

$P \sim \frac{1}{k^2}$, (14)

and therefore one can make the identification $r \sim k$ to obtain

$P \sim G\,\frac{Mm}{r^2}$, (15)

where the constant of proportionality must have units of inverse force. This expression can be interpreted as saying that "the probability that an object of mass m is connected to another object of mass M is inversely proportional to the square of the distance between the two masses". Thus, from the point of view of complex networks, the Newton gravitational law is the emergent probabilistic expression (A7) (see Appendix), which can be used to estimate the probability for a meteorite to impact the Earth from a given location, that is, to determine the origin of a meteorite impacting the Earth. Bacubirito Meteorite The Bacubirito meteorite [17] is a notorious and famous anomalous iron meteorite which was found at 25°42'05''N, 107°54'19''W in 1889 at a small village called "el Camichn", about 10 km away from the town of Bacubirito, located in the northern mountains of Sinaloa, México. It is worth mentioning that this location has been verified by a recent expedition. At the time of its finding, it was considered the largest meteorite in the world. Nowadays, at 4.1 meters, it still ranks as the world's fifth largest meteorite [19]. At present, the origin of the Bacubirito meteorite is not known. Of course, one should expect that its most probable origin is the asteroid belt, the Kuiper belt or the Oort cloud. However, since it is considered an anomalous iron meteorite, we would like to leave open the possibility that its origin is the alpha Cen system. According to Ref. [13], the alpha Cen system contains a high concentration of metallic substances. So if one is interested in checking whether a meteorite origin is the alpha Cen system, one needs to look for anomalous iron meteorites, which turns out to be the case for the Bacubirito meteorite. Furthermore, just by looking at expression (15), one realizes that the probability that a meteorite originated in the alpha Cen system is small compared with an asteroid belt, Kuiper belt or Oort cloud origin, because of the huge difference in distance. However, it is not zero. Spectroscopic studies indicate that the metal abundance of alpha Cen A is greater than the solar abundance; in fact, alpha Cen A may be classified as an anomalously metal-rich star. In this sense, the metal anomaly of alpha Cen A may be related to the metal anomaly of the Bacubirito meteorite. But of course, to make a complete identification one needs, among other studies, to compare the full chemical composition of alpha Cen A and the Bacubirito meteorite.
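To make the distance argument concrete, the short sketch below (ours, not from the paper) evaluates the relative origin probabilities implied by expression (15) for several candidate source regions, keeping the masses fixed so that only the 1/r² factor varies; the representative distances are nominal values.

```python
# Relative origin probabilities from P ~ G*M*m / r^2, expression (15).
# With M and m held fixed, only the geometric factor 1/r^2 matters here.
AU_PER_LY = 63_241.0  # astronomical units per light-year

distances_au = {
    "asteroid belt": 2.7,               # typical heliocentric distance
    "Kuiper belt":   45.0,
    "Oort cloud":    1.0 * AU_PER_LY,   # inner edge, about 1 light-year
    "alpha Cen":     4.37 * AU_PER_LY,  # about 4.37 light-years
}

p_belt = distances_au["asteroid belt"] ** -2
for region, r in distances_au.items():
    print(f"{region:13s}: P/P_belt = {r ** -2 / p_belt:.3e}")
```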
It is known that in order to understand the evolution of the Milky Way one uses the chemical composition of stars. Using such data, one concludes that neighboring stars are not isolated systems; rather, stars in the vicinity of one another are affected by the same astrophysical events. So it is likely that the alpha Cen system and the sun interacted in different ways during their formation and evolution. In particular, one should expect that matter interchange between both systems was a very possible scenario: in different time periods, some asteroids of the alpha Cen system could have reached the solar system, and vice versa. In this context, the comparison of chemical abundances in both neighboring stars and in different meteorites found on earth can be of great importance. We would also like to analyze the possibility that the escape velocity plays an important physical role in our search for a meteorite from alpha Cen entering the solar system. As mentioned, the alpha Cen system is mainly made up of alpha Cen A, alpha Cen B and Proxima Centauri, which is a red dwarf. Given their masses, the escape velocity from either system follows from $v_{esc} = \sqrt{2GM/r}$, where $G$ is the gravitational constant. Therefore, one can form the ratio of escape velocities between alpha Cen and our Solar System, where it has been assumed that the relevant distances are comparable, $r_{\alpha\,Cen} \approx r_\odot$. One can guess that such an object in alpha Centauri must have been located between alpha Cen A and alpha Cen B, where nonlinear resonances can produce instabilities in orbiting objects, which can then be ejected from the system; the distance from alpha Cen A to alpha Cen B is about the distance from our Sun to Pluto, between 4.4 and 7.4 billion km. In this way, any object escaping from Alpha Centauri in the direction of our Solar System will have a speed such that the Solar System will not be able to freely capture it, unless it directly hits the Sun or a planet, in particular the earth. Thus, a direct hit by a meteorite of the size of the Bacubirito meteorite coming from alpha Cen would produce a crater bigger than the one left by the meteorite which made the Arizona crater (about one km in diameter). However, there is no such crater at the place where Bacubirito was found. Therefore, its entry energy into the Earth's atmosphere had to be much smaller.
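A numerical companion to the escape-velocity argument is sketched below. It is our illustration: the combined alpha Cen AB mass and the reference distance are nominal values, not figures quoted from the paper.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def escape_velocity(mass_kg, r_m):
    """v_esc = sqrt(2 G M / r)."""
    return math.sqrt(2.0 * G * mass_kg / r_m)

# Nominal masses: alpha Cen A ~1.1 M_sun and alpha Cen B ~0.9 M_sun.
m_alpha_ab = (1.1 + 0.9) * M_SUN
r = 30.0 * AU      # a point between A and B, roughly Sun-Pluto scale

v_alpha = escape_velocity(m_alpha_ab, r)
v_sun = escape_velocity(M_SUN, r)
print(f"escape speed from alpha Cen AB at 30 AU: {v_alpha / 1e3:.2f} km/s")
print(f"escape speed from the Sun at 30 AU:      {v_sun / 1e3:.2f} km/s")
print(f"ratio: {v_alpha / v_sun:.2f}")  # ~sqrt(2), since M_AB ~ 2 M_sun
```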
Final Remarks While we were preparing and refining the present article for publication, we became aware of the surprising discovery, on 2017 October 19, of the first interstellar object, called 1I/2017 U1 (Oumuamua) (see Refs. [21] and [46]). This discovery raises the possibility that many interstellar objects have passed through our solar system in the past [20]. Thus, one may consider the possibility that some of these interstellar objects have reached the earth. In particular, one may assume that the Bacubirito meteorite has an interstellar origin, like the Oumuamua object. Previously, we considered the possibility that the Bacubirito meteorite came from the alpha Cen system, but the fact that the Oumuamua object has other possible origins opens other scenarios for the origin of the Bacubirito meteorite. It is worth mentioning that the surface reflectivity of the Oumuamua object is spectrally red, suggesting, among other possibilities, a surface containing minerals with nanoscale iron [47]. In fact, it is interesting to mention that the chemical composition of the Bacubirito meteorite (Fe 88.94%, Ni 6.98%, Co 0.21%) indicates that it is very unusual with respect to most of the meteorites that are assumed to come from the asteroid belt of our solar system. Thus, it will be interesting for further research to study the Bacubirito meteorite from the perspective of the Oumuamua object. Appendix Following Ref. [5], one considers a spherical holographic screen of area $A = 4\pi r^2$ carrying $N = A/l_p^2$ bits of information, where $l_p$ is the Planck length, $l_p = (G\hbar/c^3)^{1/2}$. In addition, one assumes the following two basic conditions for a small displacement $\Delta x$:

1) $\Delta S = 2\pi k_B \frac{mc}{\hbar}\,\Delta x$;

2) $F\,\Delta x = T\,\Delta S$;

and two further conditions:

3) the equipartition rule for the energy, $E = \frac{1}{2} N k_B T$;

4) the rest mass expression, $E = M c^2$.

From 1) and 2) one has $F = 2\pi k_B T\,\frac{mc}{\hbar}$. Combining this with 3) and 4), and using $N = A/l_p^2$ with $A = 4\pi r^2$, this expression can also be written as

$F = G\,\frac{Mm}{r^2}$, (A7)

which is the familiar Newton's gravitation law. Here, $M$ denotes the mass enclosed by the spherical screen $S^2$ (see also Ref. [5] for details).
3,406.2
2018-11-07T00:00:00.000
[ "Geology", "Physics" ]
Design of an FPGA-Based High-Quality Real-Time Autonomous Dehazing System : Image dehazing, as a common solution to weather-related degradation, holds great promise for photography, computer vision, and remote sensing applications. Diverse approaches have been proposed throughout decades of development, and deep-learning-based methods are currently predominant. Despite their excellent performance, such computationally intensive methods amount to overkill, because image dehazing is solely a preprocessing step. In this paper, we utilize an autonomous image dehazing algorithm to analyze a non-deep dehazing approach. After that, we present a corresponding FPGA design for high-quality real-time vision systems. We also conduct extensive experiments to verify the efficacy of the proposed design across different facets. Finally, we introduce a method for synthesizing cloudy images (loosely referred to as hazy images) to facilitate future aerial surveillance research. Introduction Image acquisition, for example in outdoor imaging and remote sensing, is highly problematic owing to numerous natural factors, notably bad weather conditions. Under these adverse effects, acquired images are subject to various types of degradation, ranging from color distortion to visibility reduction. Consequently, high-level computer vision algorithms (which generally assume clean input images) may incur a sharp drop in performance, creating great demand for visibility restoration, as can be seen from the rapid development of myriad algorithms for image dehazing, deraining, and desnowing over the past two decades. In this paper, we restrict the discussion to image dehazing because haze (or equivalently fog) appears to be more prevalent than rain and snow. Furthermore, as haze and cloud originate from atmospheric scattering and absorption, image dehazing algorithms also find applications in remote sensing. Image Dehazing in Remote Sensing Remote sensing applications such as aerial surveillance, battlefield monitoring, and resource management fundamentally impact many aspects of modern society, including transportation, security, agriculture, and so on. Despite their crucial importance, these applications are prone to failure in areas of cloud cover, because light waves are subject to atmospheric scattering and absorption when traversing cloud banks. As a result, remotely sensed images become unfavorable for subsequent high-level applications, rendering image dehazing highly relevant for visibility restoration. For example, Figure 1 demonstrates the negative effects of cloud and the beneficial effects of image dehazing on an aerial surveillance application. Specifically, Figure 1a is a clean image from the Aerial Image Dataset (AID) [1], and Figure 1b is its corresponding synthetic cloudy image. Cloud is synthesized herein due to the sheer impracticality of remotely sensing the same area in two different weather conditions. We will discuss synthetic cloud generation in detail in Section 4.2.2. Figure 1c is the result of dehazing Figure 1b using a recent algorithm developed by Cho et al.
[2]. The three images on the second row are the final outcomes of processing Figure 1a–c with a YOLOv4-based object recognition algorithm [3]. In addition, it is noteworthy that the haziness degree evaluator (HDE) [4] serves as the basis for discriminating Figure 1a as a clean image. It can be observed that the recognition algorithm detected nine airplanes in the clean image in Figure 1a. In contrast, the number of detected airplanes in Figure 1e was significantly lower: the detection rate dropped 66.67%, from nine to three detected airplanes. This observation implies that bad weather conditions such as cloud and haze have a negative impact on high-level remote sensing applications. To address this problem, we preprocessed the synthetic cloudy image using the dehazing algorithm developed by Cho et al. [2]. As Figure 1c shows, the visibility improved; however, the airplane under the dense veil of cloud remains obscured. The corresponding detection result in Figure 1f demonstrates a considerable increase (133.33%) in detection rate, from three (in Figure 1e) to seven detected airplanes. This observation, in turn, implies the crucial importance of image dehazing in remote sensing applications. However, another issue arises regarding whether to apply image dehazing at all, because cloud occurs only occasionally, while most image dehazing algorithms assume a hazy/cloudy input. Obviously, dehazing a clean image results in untoward degradation, as Figure 2 demonstrates. Although the dehazed image in Figure 2b appears to be passable, without any noticeable distortion, its corresponding detection results in Figure 2d exhibit a sharp drop (66.67%) in detection rate, from nine to three detected airplanes. The algorithm also misrecognized two airplanes as birds, compared to only one misrecognition in Figure 2c. This example, coupled with the previous one, emphasizes the need for an autonomous image dehazing algorithm. Real-Time Processing Remotely sensed images usually possess high resolution, imposing a computationally heavy burden on subsequent algorithms. For example, the S-65A35 camera of the SAPPHIRE series, widely available on aerial surveillance systems, can deliver a superb resolution of 9344 × 7000 pixels at 35.00 frames per second (fps) [5]. As a result, virtually every embedded surveillance system downscales the acquired image sequence to a reasonable size before supplying the sequence to other algorithms, for computational efficiency and to enable real-time processing. A good example of this is an aerial surveillance system known as ShuffleDet [6], which downscales the input image to a resolution of 512 × 512 to achieve a processing speed of 14.00 fps. Regarding the implementation of image dehazing, the software implementation per se usually fails to meet the real-time processing requirement. To support this claim, we adopt Table 1 from Ngo et al. [7]. The authors measured the processing time of nine algorithms [2,7–14] whose source code is publicly available, for different image resolutions. The simulation environment in this study was MATLAB R2019a, and the host computer was equipped with an Intel Core i9-9900K (3.6 GHz) CPU, 64 GB RAM, and an Nvidia TITAN RTX graphics processing unit (GPU). The run-time evaluation in Table 1 demonstrates that none of the nine algorithms could deliver real-time processing. Even with such a small resolution as 640 × 480, the fastest algorithm, developed by Zhu et al.
[11], exhibited a processing speed of 4.55 fps (≈ 1/0.22), approximately one fifth of the required speed of 25.00 fps. Hence, there are currently two main approaches toward real-time processing. The first approach aims to reduce the development time by focusing on flexibility, portability, and programming abstraction. Under this approach, the embedded system usually needs to be equipped with powerful computing platforms such as GPUs and low-power GPUs. In the previous example of ShuffleDet, Azimi [6] presented an implementation on the Nvidia Jetson TX2 board, which includes a low-power GPU named Tegra X2 [15]. Although this approach can meet the growing demand for high computing performance, it is not the best choice compared with field-programmable gate arrays (FPGAs), which are at the center of the second approach toward real-time processing. Supporting this claim, Wielage et al. [16] verified that a Xilinx Virtex UltraScale+ FPGA was 6.5× faster and consumed 4.3× less power than the Tegra X2 GPU. For this reason, we present herein an FPGA implementation of an autonomous dehazing system for aerial surveillance, whose principal merits are autonomy, real-time throughput, and support for synthetic data generation. The first is attributed to self-calibration on haze conditions, which results from the utilization of the HDE. The second is achieved through a pipelined architecture for improving throughput and a number of design techniques for reducing propagation delay. The third is the desired result of simulating haze/cloud using the low-frequency parts of a random distribution, with the density of the synthetic haze/cloud controlled by the HDE. Thus far, it can be observed that the HDE plays an essential role in the proposed system, and therein lies the cause of its limitations, as discussed later in Section 4.3. Literature Review Image dehazing is a fundamental problem in computer vision, rooted in studies on atmospheric scattering and absorption phenomena. As witnessed by the work of Vincent [17] and Chavez [18], early research on image dehazing started five decades ago. Throughout this long history of development, there have been various approaches to restoring the scene radiance; polarimetric dehazing [19,20], image fusion [21,22], and image enhancement [7,10] are cases in point. It is also noteworthy that each approach has resulted in hundreds of papers, and therein lies the sheer impracticality of reviewing them all. Consequently, we focus our discussion on the single-image approach that relies on an acquired red-green-blue (RGB) image. To facilitate understanding of the review, we first briefly formalize the image dehazing problem. Given a hazy RGB image I ∈ R^{H×W×3} of size H × W, the atmospheric scattering model (ASM) [23] decomposes it into two terms, known as the direct attenuation and the airlight:

$I(x) = J(x)\,t(x) + A\,[1 - t(x)]$, (1)

where J ∈ R^{H×W×3} is the scene radiance, t ∈ [0, 1]^{H×W} is the transmission map, A ∈ R^{1×1×3} is the global atmospheric light, and x represents the spatial coordinates of pixels. Direct attenuation and airlight correspond to Jt and A(1 − t), respectively. The former signifies the multiplicative attenuation of reflected light waves in the transmission medium, while the latter represents the additive influence of the illumination. Based on the ASM, most image dehazing algorithms develop two mapping functions f_A : R^{H×W×3} → R^{1×1×3} and f_t : R^{H×W×3} → R^{H×W} that estimate the global atmospheric light and the transmission map, given the input image I.
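Before turning to how these two estimates are obtained, Equation (1) is easy to exercise numerically. The sketch below (ours; the constant transmission and near-white atmospheric light are illustrative assumptions) hazes a clean image with the ASM and then inverts the model with the true parameters, the idealized version of what the estimators f_A and f_t enable.

```python
import numpy as np

def apply_asm(J, t, A):
    """Forward model, Equation (1): I = J*t + A*(1 - t)."""
    return J * t[..., None] + A * (1.0 - t[..., None])

def invert_asm(I, t, A, t0=0.1):
    """Scene radiance recovery: J = (I - A) / max(t, t0) + A."""
    tc = np.maximum(t, t0)[..., None]
    return (I - A) / tc + A

rng = np.random.default_rng(0)
J = rng.random((480, 640, 3))       # stand-in for a clean RGB image in [0, 1]
t = np.full((480, 640), 0.6)        # constant transmission (illustrative)
A = np.array([0.95, 0.95, 0.95])    # near-white global atmospheric light

I = apply_asm(J, t, A)
J_rec = invert_asm(I, t, A)
print("max reconstruction error:", np.abs(J - J_rec).max())  # ~0 here
```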
Researchers usually denote these two estimates as Â and t̂, and they restore the scene radiance J by rearranging Equation (1) as follows:

$J(x) = \frac{I(x) - \hat{A}}{\max[\hat{t}(x),\, t_0]} + \hat{A}$, (2)

where a small positive t₀ helps avoid division by zero. Recently, deep learning models have also found application in image dehazing. Some early models [13,14] also learned the mapping functions f_A : R^{H×W×3} → R^{1×1×3} and f_t : R^{H×W×3} → R^{H×W}, whereas recently developed models [24,25] learn an end-to-end mapping function f_J : R^{H×W×3} → R^{H×W×3}. Although image dehazing is achievable in various ways, it is worth recalling that this operation is a preprocessing step, and this imposes strict requirements on its implementation. A crucial requirement is real-time processing, as discussed in Section 1.2. According to a recent systematic review [26], image dehazing algorithms in the literature fall into three categories: image processing, machine learning, and deep learning. Table 2 summarizes essential information on each category, and we exemplify them with one or two representative methods in the following sections.

• Image processing: uses traditional computer vision techniques and only the input RGB image [7–10].
• Machine learning: additionally uses machine learning techniques to exploit the hidden regularities in relevant image datasets [11,12,27,28].
• Deep learning: uses deep neural networks with powerful representation capability to learn relevant mapping functions [13,14,24,25].

Representative Single-Image Dehazing Algorithms The categorization in Table 2, based on the primary technique employed to restore the scene radiance and on how the algorithm exploits image data, can give an early indication of the real-time processing capability of an image dehazing method. Generally, the first two categories, image processing and machine learning, can handle the input image sequence or video in real time. Conversely, the third category, deep learning, suffers from practical difficulties in achieving real-time processing. Image Processing Image dehazing methods founded on traditional computer vision techniques usually favor human perception [29], because they are rooted in hand-engineered image features such as contrast and saturation, which greatly influence perceptual image quality. Perhaps the most well-known research in this category is the dark channel prior of He et al. [9], inspired by the dark-object subtraction method of Chavez [18]. He et al. [9] developed f_t : R^{H×W×3} → R^{H×W} from the following two assumptions: • The scene radiance J exhibits an extremely dark channel whose intensities approach zero in non-sky patches; • The transmission map t is locally homogeneous. The first is based on the colorfulness of objects, i.e., one of the color channels should be very low for the color to manifest itself. The second is based on the depth-dependent characteristic of the transmission map: depth information is mostly smooth except at discontinuities in an image, and so is the transmission map. Mathematically, the equivalent expressions are: • $\min_{y\in\Omega(x)}\{\min_{c\in\{R,G,B\}}[J^c(y)]\} = 0$, where Ω(x) denotes an image patch centered at x, and c denotes a color channel; • $\min_{y\in\Omega(x)}[t(y)] = t(x)$. A transmission map estimate resulting from these two assumptions suffers from block artifacts, rendering a refinement step essential. Accordingly, He et al. [9] utilized soft matting [30]. Despite an excellent dehazing performance, the method of He et al.
[9] has two main drawbacks: failures in sky regions and high computational cost. These shortcomings have resulted in a series of follow-up studies [31–33]. Regarding the mapping function f_A : R^{H×W×3} → R^{1×1×3}, He et al. [9] developed a robust approach that remains widely used. Under this approach, the top 0.1% of the brightest pixels in the dark channel of the input image serve as candidates for singling out the atmospheric light. From among these, the pixel with the highest intensity in the RGB color space is chosen. Consequently, this approach is fairly robust against the problem of incorrectly selecting white objects as the atmospheric light. Machine Learning Because image dehazing methods from the first category are based on hand-engineered features, they may fail in particular circumstances. A prime example is the fact that the dark channel prior proposed by He et al. [9] does not hold for sky regions. Therefore, hidden regularities learned from relevant image datasets can improve performance in those cases. Zhu et al. [11] developed the color attenuation prior in that manner. Through extensive observations of outdoor images, they discovered that the scene depth correlates with saturation and brightness. They then assumed that a linear model sufficed to express that correlation and devised a simple expression for f_t : R^{H×W×3} → R^{H×W}. Next, they utilized maximum likelihood estimation to find the model's parameters. The input data consisted of a synthetic dataset with haze-free and corresponding synthesized hazy images. The dehazing method of Zhu et al. [11] was relatively fast and efficient, as were the methods in some of the follow-up studies [28,34,35]. Another notable approach is the learning framework proposed by Tang et al. [27]. This framework comprises two main steps: feature extraction and transmission map inference. Tang et al. [27] implemented the former in a multi-scale manner, and they utilized random forest regression to realize the latter. Many deep learning models developed thereafter bear a fundamental similarity to this framework. Despite an excellent dehazing performance, the implementation of Tang et al. [27] incurs a heavy computational burden, hindering its broad application in practice. Deep Learning An early attempt at applying deep learning models to image dehazing can be traced back to the DehazeNet developed by Cai et al. [13]. They adopted an approach similar to that of He et al. [9] to devise the mapping function f_A : R^{H×W×3} → R^{1×1×3}. To estimate the transmission map, they utilized a convolutional neural network (CNN). The CNN's functionality is similar to that of the learning framework of Tang et al. [27]. The main steps include: (i) feature extraction, (ii) feature augmentation, and (iii) transmission map inference, corresponding to: (i) the feature extraction and multi-scale mapping, (ii) the local extrema, and (iii) the nonlinear regression presented by Cai et al. [13].
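Since several of the methods above build on the dark channel, a compact sketch of the transmission estimate and the atmospheric-light selection may be helpful. This is a simplified rendering of He et al.'s procedure without the soft-matting refinement; the patch size and the constant omega are conventional choices, not values taken from this paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """Minimum over color channels, then over a patch Omega(x)."""
    return minimum_filter(I.min(axis=2), size=patch)

def atmospheric_light(I, dark):
    """Brightest pixel among the top 0.1% of the dark channel."""
    n = max(1, int(0.001 * dark.size))
    idx = np.argsort(dark.ravel())[-n:]               # candidate pixels
    flat = I.reshape(-1, 3)
    return flat[idx][flat[idx].sum(axis=1).argmax()]  # highest RGB intensity

def transmission(I, A, omega=0.95, patch=15):
    """t(x) = 1 - omega * dark_channel(I / A); omega keeps a trace of haze."""
    return 1.0 - omega * dark_channel(I / A, patch)

I = np.random.rand(480, 640, 3)     # placeholder for a hazy image in [0, 1]
A = atmospheric_light(I, dark_channel(I))
t = transmission(I, A)
print("A =", A, "| t in [%.3f, %.3f]" % (t.min(), t.max()))
```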
Recently, end-to-end networks that learn the mapping function f_J : R^{H×W×3} → R^{H×W×3} have been gaining popularity. These networks are usually based on the encoder-decoder architecture, which has proven highly efficient due to its ability to learn robust representations of image features from a low to a high level of abstraction. The FAMED-Net approach developed by Zhang and Tao [24] is a prime example. FAMED-Net is a densely connected CNN whose architecture is designed based upon multi-scale encoders and image fusion. It is also one of the few deep models that can fulfill the real-time processing requirement: Zhang and Tao [24] realized FAMED-Net on a powerful Nvidia Titan Xp, yielding a processing speed of 35.00 fps at 620 × 460 image resolution. Summary Image dehazing has a long development history, dating back to the early 1970s. As a result, hundreds of studies have been recorded in the literature. However, it is fortunately unnecessary to review all of them. A recent systematic review [26] collated information from influential studies and categorized the results into image processing, machine learning, and deep learning approaches. This categorization can serve as an early indication of the real-time processing capability of image dehazing algorithms: the first two categories are generally capable, whereas the last one rarely is. Moreover, most image dehazing methods assume a hazy input image, but this assumption is uncertain in practice, rendering an autonomous dehazing method highly relevant. Therefore, we present herein an FPGA-based autonomous dehazing system to fulfill the two aforementioned requirements: real-time processing and autonomy. Autonomous Dehazing System To achieve autonomous dehazing, it is necessary to answer the following questions: • How can the haze condition be determined from a single input image? • How can an input image be dehazed according to its haze condition? Regarding the first question, a practical solution is to use a metric such as the HDE. This no-reference metric proportionally quantifies the haze density of the input image and can be considered as the mapping function f_HDE : R^{H×W×3} → R. Because the HDE yields a normalized score between zero and unity, it is highly appropriate for controlling the dehazing process. Hence, an elegant answer to the second question is to exploit the HDE score to adjust the dehazing power in proportion to the haze condition of the input image. This idea is the underlying principle of the autonomous dehazing algorithm in [7], which nevertheless fails to meet the real-time processing requirement, as Table 1 demonstrates. Based on this algorithm, the following first introduces the autonomous dehazing process and then discusses the major hindrances to real-time processing. After that, Section 3.2 describes in detail the proposed FPGA implementation for surmounting those hindrances, enabling real-time processing even for high-quality (DCI 4K) images.
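The control idea, mapping a normalized haze score to a blending weight so that clean inputs pass through untouched, can be sketched in a few lines. The piecewise mapping below is our illustrative stand-in for the self-calibrating factor of Equations (7) and (8), which are not fully reproduced in the source; only the two thresholds ρ1 and ρ2 are taken from the paper, and the direction of the score (which end denotes haze-free) is an assumption.

```python
import numpy as np

RHO1, RHO2 = 0.8811, 0.9344   # haze-condition thresholds from the paper

def blending_weight(rho, alpha=1.0):
    """Map an HDE score rho in [0, 1] to the input-image weight w.

    Assumption: larger rho means denser haze; if the score runs the
    other way, mirror the mapping. w = 1 keeps the input intact
    (haze-free); w = 0 applies full dehazing power.
    """
    if rho <= RHO1:
        return 1.0
    if rho >= RHO2:
        return 0.0
    return ((RHO2 - rho) / (RHO2 - RHO1)) ** alpha

def autonomous_blend(I, J, rho):
    """B = w*I + (1 - w)*J, with w self-calibrated from the HDE score."""
    w = blending_weight(rho)
    return w * I + (1.0 - w) * J

I = np.random.rand(480, 640, 3)    # input image (placeholder)
J = np.clip(1.2 * I - 0.1, 0, 1)   # stand-in for a dehazed result
print(autonomous_blend(I, J, rho=0.5).shape)  # rho <= rho1: output == I
```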
Base Algorithm Figure 3 illustrates the main steps constituting the autonomous dehazing algorithm, which accepts and handles arbitrary images. The fundamental idea is to combine the input image with its corresponding dehazed result according to the HDE score. More specifically, the algorithm first senses the haze condition of the input image and then adjusts the dehazing power correspondingly. If the condition is haze-free, the dehazing power becomes zero to keep the input image intact, because it is unnecessary to dehaze a haze-free image. Otherwise, the dehazing power varies in proportion to the sensed haze condition (thin, moderate, or dense haze). This haze-condition-appropriate processing scheme is robust against the image distortion caused by excessive dehazing, as the evaluation results in [7] demonstrated. According to [4], Equation (3) gives the HDE score ρ_I of an RGB image I as an average over the whole image domain Ψ, so that |Ψ| denotes the total number of pixels. The variable B keeps Equation (3) from growing too lengthy; its expression is given in Equation (4), where κ is a user-defined parameter that was set to −1 in [7], I_mc is the difference between the two extremum channels, and σ_I is the standard deviation of the image luminance. Finally, I_mΩ and Â denote the dark channel and the global atmospheric light estimate discussed earlier in Section 2. Based on the HDE score ρ_I, the self-calibrating factor calculation block utilizes four additional user-defined parameters (ρ₁, ρ₂, α, and θ) to compute a weighting factor for the image blending and adaptive tone remapping blocks. The self-calibrating factor calculation follows Equations (7) and (8), where ω weights the contribution of the input image I in the image blending block, and ρ̃_I is the result of applying a mapping function f : R → R to ρ_I. Provided that J is the dehazed result of I, Equation (9) shows the restored image R, which is the output of the adaptive tone remapping block:

$R = P_\omega\{\omega I + (1 - \omega) J\}$, (9)

where the post-processing operator P_ω{•} first enhances the luminance and then emphasizes the chrominance accordingly, lest color distortion occur; the subscript indicates that this block is also guided by the self-calibrating factor. The algorithm in [7] computes the dehazed result J based on multi-scale image fusion. This image dehazing approach belongs to the image processing category and is based on underexposure. Because this phenomenon occurs when inadequate incoming light hits the camera sensor, it has been postulated in the literature that underexposure can alleviate the negative effects of atmospheric scattering and absorption [36]. Therefore, fusing images at different exposure degrees is analogous to image dehazing. Additionally, to adapt this idea to the single-image approach, researchers have widely utilized gamma correction to artificially underexpose an input image. Readers interested in a detailed treatment of this dehazing approach are referred to [7,36]. Meanwhile, Algorithm 1 below provides a corresponding formal description.
Algorithm 1 Multi-scale image dehazing. Input: an RGB image I ∈ R^{H×W×3}, the number of artificially underexposed images K ∈ Z⁺₀, and the corresponding gamma values. 1) Generate the K artificially underexposed versions of I by gamma correction. 2) Compute the Laplacian pyramids {L^k_n} of these images over the scales n = 1, …, N, from the first scale to the last. 3) Compute the corresponding guidance pyramids {G^k_n} from the dark channel. 4) Normalize the guidance pyramids. 5) Fuse the Laplacian pyramids under the guidance pyramids, accumulating the temporary results scale by scale from the last scale back to the first; the output is the restored image J. The input data for multi-scale image dehazing thus include an RGB image I ∈ R^{H×W×3} of size H × W, the number of artificially underexposed images K ∈ Z⁺₀, and the corresponding gamma values. After the underexposed images are generated, there follows the computation of the Laplacian and guidance pyramids ({L^k_n} and {G^k_n}). It is noteworthy that Algorithm 1 computes the guidance pyramid according to the dark channel prior [9], due to its strong correlation with haze density. Before performing the multi-scale fusion, it is essential to normalize the guidance pyramid to prevent the out-of-range problem. Finally, the fifth step performs the multi-scale fusion, beginning at the last scale and finishing at the first, whose result is the restored image J. Figure 4 depicts an example where K = 3 and N = 3. Substituting the restored image J into Equation (9) yields the final result R. Despite its excellent performance, the autonomous dehazing algorithm in [7] fails to deliver real-time processing, as shown by the run-time comparison in Table 1. A major reason is the multi-scale fusion scheme, because this algorithm sets $N = \lfloor \log_2 \min(H, W) \rfloor$. This setting benefits the quality of the restored image, but it carries a heavy memory burden, thus prolonging the processing time. The problem worsens from the perspective of hardware implementation, because multi-scale fusion requires multiple frame buffers for upsampling and downsampling. Furthermore, the minimum filtering operation is also at the root of the failure to achieve real-time processing. From the perspective of software implementation, the ideal complexity of filtering operations is O(H × W), corresponding to two nested for loops over an H × W image. Consequently, the processing time increases in proportion to the image size, hindering high-quality real-time processing. The following presents an FPGA implementation whose computing capability suffices for handling DCI 4K images in real time, surmounting the aforementioned challenges.
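A compact software sketch of Algorithm 1's fusion scheme follows. It is our simplification, not the reference implementation: a single fixed gamma set, OpenCV pyramids, and dark-channel-derived weights with a plain normalization in place of the exact guidance computation of [7].

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    pyr, cur = [], img
    for _ in range(levels - 1):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)          # band-pass detail at this scale
        cur = down
    pyr.append(cur)                   # low-frequency residual (last scale)
    return pyr

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def fuse_dehaze(I, gammas=(1.0, 2.0, 3.0), levels=3, eps=1e-6):
    """Multi-scale fusion of artificially underexposed images."""
    inputs = [np.power(I, g) for g in gammas]        # gamma underexposure
    # Dark-channel-derived weights, normalized so they sum to one.
    weights = [1.0 - u.min(axis=2) + eps for u in inputs]
    norm = sum(weights)
    weights = [w / norm for w in weights]
    fused = None
    for u, w in zip(inputs, weights):
        lp = laplacian_pyramid(u.astype(np.float32), levels)
        gp = gaussian_pyramid(w.astype(np.float32), levels)
        terms = [l * g[..., None] for l, g in zip(lp, gp)]
        fused = terms if fused is None else [f + t for f, t in zip(fused, terms)]
    # Collapse the fused pyramid from the last scale back to the first.
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lvl.shape[1], lvl.shape[0])) + lvl
    return np.clip(out, 0.0, 1.0)

J = fuse_dehaze(np.random.rand(240, 320, 3))   # placeholder hazy image
print(J.shape)
```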
FPGA Implementation The challenges of improving computing performance are rooted in software implementation, and parallelization is often a practical solution. In parallel computing, a task is divided into several sub-tasks, which central processors can execute independently, combining the results upon completion. For example, Figure 5 illustrates a naive parallelization of the autonomous dehazing algorithm discussed above, in which the multi-scale image dehazing and the haziness degree evaluator run simultaneously. In contrast, the self-calibrating factor calculation, image blending, and adaptive tone remapping are dependent and thus occur sequentially. This computation flow consists of four stages, and the first accounts for most of the heavy computation. Accordingly, we assume that it is responsible for nine tenths of the entire algorithm and that it, fortunately, supports parallelization. Following Amdahl's law [37], it is theoretically possible to achieve at most a 10× speedup in processing time [= 1/(1 − 0.9)]. The run-time comparison results in Table 1 demonstrate that it took 0.65 s to handle a 640 × 480 image. Hence, even if we applied parallelization with the maximum 10× speedup, the corresponding processing speed of 15.38 fps (≈ 1/0.065) would still be less than required. Consequently, FPGA implementation is essential for real-time processing, and the following elements play key roles in the proposed design. Pipelined Architecture Figure 6 illustrates the pipelined architecture for a real-time FPGA implementation of the base algorithm. The three primary components are the main logic, the arithmetic macros, and the memories. The first realizes the computation flow depicted in Figure 5, in which computation-intensive operations (such as multiplication, division, and taking square roots) are offloaded onto the second. Meanwhile, the third is analogous to a cache, consisting of SPRAMs for the temporary storage of data. The input data include an RGB image I and timing signals, namely clock, reset, and horizontal and vertical active video (denoted as clk, rstb, hav, and vav in Figure 6). The image I simultaneously undergoes the following three blocks: stalling, single-scale image dehazing, and the haziness degree evaluator. It is noteworthy that single-scale image dehazing is a special case of Algorithm 1 in which N = 1 and K = 5. We restricted the proposed FPGA implementation to single-scale dehazing to circumvent the heavy burden of frame buffers. In addition, to avoid race conditions when combining the input I and its dehazed result J, we utilized stalling to delay I until J is available. After that, image blending combines I and J to produce the blended image B, which, in turn, undergoes adaptive tone remapping for luminance enhancement and chrominance emphasis. The proposed FPGA implementation then outputs the restored image R, together with its corresponding horizontal and vertical active video signals. As briefly mentioned, the arithmetic macros are responsible for the heavy computations. Thus, the design of all modules in the main logic becomes straightforward, because they only account for lightweight operations (such as addition, subtraction, and data routing). However, to avoid digression, we defer the discussion of the arithmetic macros to Appendices A and B, except for the split multipliers. These circuits aim to reduce the propagation delay of large multiplications, and we explain their operating principle in Section 3.2.3.
Regarding the haziness degree evaluator, Equation (3) demonstrates that its calculation involves global average pooling. Therefore, we exploited the high similarity between consecutive video frames to design this block. As a result, its output ρ_I becomes available during the vertical blank period, and the calculation of the self-calibrating factor ω takes place immediately thereafter. Hence, the ω value of one frame self-calibrates the next frame, thus enabling real-time processing of video data. Meanwhile, for processing still images, the proposed FPGA implementation needs a rerun to correctly self-calibrate the image blending and adaptive tone remapping blocks. To implement this hardware architecture, we utilized the Verilog hardware description language (IEEE Standard 1364-2005) [38] and the register-transfer level (RTL) design abstraction. The former supports generality, portability, and plug-and-play capability, while the latter eases the hardware design burden. For example, as the RTL methodology focuses on modeling the signal flow, it is simple and convenient to describe all modules in the main logic following the description in Section 3.1. In particular, the plug-and-play capability allows the reuse of existing RTL designs, and the adaptive tone remapping is a case in point: Cho et al. [39] implemented and packaged this module as intellectual property, facilitating its integration into the proposed implementation. The pipelined architecture in Figure 6 improves the system's throughput, whereas the processing speed depends on the propagation delay of the combinational logic circuits (CLCs). Accordingly, the following describes two techniques for reducing the propagation delay: • Fixed-point design, for minimizing the signals' word lengths to reduce the size of the CLCs; • Split multiplying, for breaking large multiplications (represented by a large CLC) into smaller ones and inserting pipeline registers (PRs) between them, thus reducing the propagation delay. Fixed-Point Design Fixed-point representation is a concept in computing that represents fractional numbers using only a fixed number of digits. Consequently, it sacrifices accuracy to reduce the representational burden. The fixed-point representation Q_f of a real number Q is given by

$Q_f = \mathrm{round}(Q \cdot 2^U)$,

where U denotes the number of fractional digits (or fractional bits when dealing with binary numbers). Fixed-point design refers to a method of finding the optimal fixed-point representation of all system signals, and an error tolerance ∆ is a prerequisite for that purpose. Specifically, given Q, its integer part determines the number of integer bits. Meanwhile, the absolute difference |Q_f − Q · 2^U| is compared with ∆ to determine and adjust the number of fractional bits. Herein, given the eight-bit input image data, we determined the word lengths of the signals in Figure 6 based on an error tolerance of ±1 least significant bit. The results were {12, 13, 13, 12, 12} bits for {J, ρ_I, ω, B, R}, respectively.
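The word-length search just described can be mimicked in software. The helper below is our illustration; it recasts the |Q_f − Q·2^U| comparison as a value-domain tolerance check and finds the smallest number of fractional bits meeting that tolerance for a set of sample values.

```python
def to_fixed(q, frac_bits):
    """Fixed-point quantization: Q_f = round(Q * 2^U)."""
    return round(q * (1 << frac_bits))

def min_fractional_bits(samples, tol, max_bits=24):
    """Smallest U such that |Q_f / 2^U - Q| <= tol for every sample."""
    for u in range(max_bits + 1):
        if all(abs(to_fixed(q, u) / (1 << u) - q) <= tol for q in samples):
            return u
    raise ValueError("tolerance not met within max_bits")

# A tolerance of +/-1 LSB of eight-bit image data corresponds to 1/256.
samples = [0.8811, 0.9344, 0.5, 0.123456]
print(min_fractional_bits(samples, tol=1 / 256))  # 7 fractional bits here
```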
Customized Split Multiplier
Split multiplying is analogous to the grid method often taught at primary school. Under this approach, the S_M-bit multiplicand M and the S_E-bit multiplier E are each arbitrarily divided into an upper and a lower part:
M = M_1 · 2^(s_m) + M_2 and E = E_1 · 2^(s_e) + E_2,
where s_m and s_e denote the bit widths of the lower parts M_2 and E_2. The product P can then be expressed as follows:
P = M · E = M_1 E_1 · 2^(s_m + s_e) + M_1 E_2 · 2^(s_m) + M_2 E_1 · 2^(s_e) + M_2 E_2.
Hence, a large multiplication M · E divides into four smaller ones: M_1 E_1, M_1 E_2, M_2 E_1, and M_2 E_2. By inserting four additional PRs to store the results of these multiplications, the latency increases by one clock cycle. However, the propagation delay incurred for computing each of M_1 E_1, M_1 E_2, M_2 E_1, and M_2 E_2 is significantly smaller than that for computing the original multiplication M · E.

As described thus far, the proposed FPGA implementation is the final result of a sophisticated design process. We adopted pipelining and fixed-point design to improve the throughput and processing speed, respectively. In addition, we utilized split multiplying to break large multiplications into smaller ones, further reducing the propagation delay until real-time processing at DCI 4K resolution was achieved.

Evaluation
This section provides the hardware implementation results and compares the proposed FPGA implementation with existing benchmark designs to verify its efficacy. A performance evaluation then follows to demonstrate the autonomous dehazing capability on outdoor and aerial images.

Implementation Results
Table 3 summarizes the implementation results of the proposed autonomous dehazing system. Given the total hardware resources available in the mid-size FPGA device mentioned above, less than one third was required to realize the proposed system. More precisely, it took 53,216 slice registers, 49,799 slice look-up tables (LUTs), 45 RAM36E1s, and 22 RAM18E1s out of the corresponding 437,200, 218,600, 545, and 1090. The minimum period reported in Table 3 is equivalent to the maximum propagation delay among all CLCs of the system. This specifies the minimum interval at which the system produces new output data; thus, its reciprocal is the maximum frequency. As reported, the proposed system can handle at most 271.37 Mpixels per second.

Let f_max denote that maximum frequency. The maximum processing speed (MPS) in fps is then
MPS = f_max / [(H + B_ver)(W + B_hor)],
where H and W are the image height and width, and B_ver and B_hor denote the vertical and horizontal blank periods. Herein, the three variables f_max, B_ver, and B_hor are design-dependent. Accordingly, if hardware designers fail to consider the blank periods, a design with an impressive f_max may deliver a slow MPS. In this study, we implemented the proposed system to operate correctly with minimum blank periods of one clock cycle (B_hor = 1) and one image line (B_ver = 1). Table 4 summarizes the MPS values for different image resolutions, ranging from Full HD to DCI 4K. The proposed FPGA implementation can handle DCI 4K images/videos at 30.65 fps, which satisfies the real-time processing requirement. In the literature on image dehazing, a few real-time implementations exist, and those developed by Park and Kim [43] and Ngo et al. [35,42] are cases in point. The first design realizes the well-known algorithm of He et al. [9], in which Park and Kim [43] improve the atmospheric light estimation for video processing. The second design [42] improves the dehazing method of Tarel and Hautiere [8] by devising an excellent edge-preserving smoothing filter to replace the standard median one. Finally, the third design [35] is an improved version of the method of Zhu et al. [11].
It has remedied several visually unpleasant problems such as background noise, color distortion, and post-dehazing false enlargement of bright objects.

Table 5 below summarizes the implementation results of the four designs. A conspicuous observation is that the proposed autonomous dehazing system requires the least hardware resources. Despite its compact size, its processing speed is virtually the same as that of the fastest implementation in [35]. Finally, the proposed system is equipped with the unique feature of autonomous dehazing, as demonstrated in the following.

Performance
This section evaluates the dehazing performance of the proposed system against five state-of-the-art methods, namely those proposed by He et al. [9], Zhu et al. [11], Cai et al. [13], Berman et al. [12], and Cho et al. [2]. The evaluation is performed on two types of images, outdoor and aerial, to demonstrate the breadth of applications of the proposed system. An essential difference between these two is the area of inspection. Outdoor images depict an area close to the camera and serve as data for understanding the environment within which the camera operates. In contrast, aerial images depict a larger inspection area and serve as data for monitoring a changing situation.

Outdoor Images
Because the aforementioned methods usually deliver satisfactory performance, the images presented hereinafter are those for which dehazing-related artifacts are easily noticeable. Figure 7 shows four representative outdoor images and the corresponding results of applying six dehazing methods, in which the haze condition is determined based on the HDE score. Following [7], we adopt two thresholds {ρ1, ρ2} = {0.8811, 0.9344} to discriminate the haze condition. Let ρI be the input image's HDE score; its haze condition is then determined by comparing ρI with these thresholds.

What emerges from Figure 7 is that the five benchmark methods could not handle haze-free images correctly, as can be seen from the severe color distortion (dark-blue sky). The exception is the method of Cai et al. [13], whose powerful CNN is versatile enough to adapt to various haze conditions, although slight degradation is still noticeable in the near-field plants. The proposed system, in contrast, successfully discriminates this image as haze-free and zeroes the dehazing power through ω = 1 in Equation (9). Consequently, it leaves the haze-free image intact and thus free of any visually unpleasant artifacts.

In addition, except for the deep CNN of Cai et al. [13], the benchmark methods exhibit post-dehazing artifacts in thin, moderate, and dense haze. Their dehazing power is too strong and not well adapted to the local content of images, as can be seen in the excess haze removal in the upper half and the persistence of haze in the lower half. For the same reason as mentioned above, the results of Cai et al. [13] demonstrate a less severe problem.
The proposed system takes a step forward and displays more satisfactory results than the benchmark methods. It automatically adjusts the dehazing power so that excess haze removal does not occur. This desirable behavior is attributed to the elegant use of HDE scores to guide the image blending and adaptive tone remapping blocks. Furthermore, we utilized three full-reference metrics, namely, mean squared error (MSE), structural similarity (SSIM) [44], and feature similarity extended to color images (FSIMc) [45], to assess the dehazing performance quantitatively. For MSE, smaller is better, whereas the opposite applies to SSIM and FSIMc. In addition, as these are full-reference metrics, we employed the following fully annotated datasets: FRIDA2 [46], D-HAZY [47], O-HAZE [48], I-HAZE [49], and Dense-Haze [50]. FRIDA2 consists of 66 graphics-generated images of road scenes, based on which Tarel et al. [46] synthesized four hazy image groups (in total, 66 haze-free and 264 hazy images). Similarly, D-HAZY is composed of 1472 indoor images whose corresponding hazy images are synthesized with scene depths captured by a Microsoft Kinect camera. In contrast, O-HAZE, I-HAZE, and Dense-Haze comprise 45, 30, and 55 pairs of real hazy/haze-free images depicting outdoor, indoor, and both indoor and outdoor scenes, respectively. Another facet to consider is that input images to a dehazing system are not necessarily hazy. Hence, we employed both the haze-free and hazy images of those datasets, plus an additional 500IMG dataset [35] consisting of 500 haze-free images collected in our previous work.

Table 6 summarizes the quantitative evaluation results, where the top three results are boldfaced in red, green, and blue, respectively, for ease of interpretation. The proposed system clearly demonstrates the best performance regardless of haze conditions. In particular, it attains virtually perfect scores for haze-free images, attributable to the excellent performance of HDE in haze condition discrimination. In addition, even the results on hazy images per se show a clear gap between this and the second-best method.

Overall, the methods of He et al. [9] and Cai et al. [13] share the following two positions. Table 6 shows that the former is situational. On the one hand, it exhibits the top scores on D-HAZY owing to its well-known excellence in indoor dehazing. On the other hand, its inherent failure to handle sky regions results in poor performance on FRIDA2. Conversely, the latter is versatile, performing relatively well on all datasets. It is also noteworthy that SSIM does not account for chrominance information; hence, the method of He et al. [9] is ranked second overall under this metric. However, under FSIMc, which does account for chrominance, the DehazeNet of Cai et al. [13] is ranked second, consistent with the qualitative evaluation results in Figure 7.

The remaining three methods of Berman et al. [12], Cho et al. [2], and Zhu et al. [11] occupy the last three positions. Quantitative results on Dense-Haze demonstrate that the methods of Berman et al. [12] and Cho et al. [2] are effective for haze removal. However, as the qualitative evaluation shows, they are susceptible to severe post-dehazing artifacts. The method of Zhu et al. [11] suffers from several problems such as color distortion and background noise (as pointed out by Ngo et al. [34]), resulting in its poor performance.
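For reference, the two of these metrics with standard implementations can be computed as below. This is a minimal sketch assuming scikit-image (version 0.19 or later for the channel_axis argument); FSIMc has no widely standardized reference implementation and is omitted here.

```python
# Minimal full-reference evaluation sketch: MSE (lower is better) and
# SSIM (higher is better). Inputs are assumed to be H x W x 3 uint8 arrays:
# the dehazed output and its haze-free ground-truth reference.
import numpy as np
from skimage.metrics import structural_similarity

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def evaluate(dehazed: np.ndarray, reference: np.ndarray) -> dict:
    return {
        "MSE": mse(dehazed, reference),
        "SSIM": structural_similarity(
            dehazed, reference, data_range=255, channel_axis=-1),
    }
```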
Aerial Images
In the aerial surveillance literature, no real datasets exist comprising pairs of hazy (or cloudy) images and their corresponding ground-truth references, owing to the sheer impracticality of capturing the same area under different weather conditions. Therefore, we propose a method to synthesize hazy images for evaluating image dehazing algorithms in aerial surveillance.

According to Equation (1), the global atmospheric light A and the transmission map t are prerequisites for hazy image synthesis. As A remains constant across the entire image domain, it is common practice to draw A from the uniform distribution. In contrast, synthesizing t is a difficult task. On the one hand, Zhu et al. [11] proposed creating a pixel-wise random transmission map whose values were uniformly distributed. On the other hand, Jiang et al. [28] added a constant haze layer to a clean image by utilizing a scene-wise random transmission map. These two approaches are unrealistic because they do not reflect the true distribution of haze. To address this problem, we propose synthesizing haze/cloud as a set of low-frequency, randomly distributed values, as shown in Algorithm 2.

Using this random haze/cloud distribution, we synthesized hazy/cloudy images from their clean counterparts based on Equation (1), as shown in Algorithm 3. For customization, we exploited the HDE [4] to guide the generation toward an image possessing a desirable HDE score. In Algorithm 3, the haze density control D_ρ ∈ R≥0 and its step δ are responsible for varying the haze density to meet the predetermined HDE score. In addition, to avoid an infinite loop, we adopted the HDE tolerance ∆_ρ and a maximum number of iterations M_I. An example of this synthetic hazy/cloudy image generation is shown in Figure 1b.

Figures 9 and 10 demonstrate the dehazing performance of the proposed system and the benchmark methods on synthetic aerial hazy images, whose corresponding haze-free images are from AID [1]. As with the assessment of outdoor images, the benchmark methods suffered from color distortion and halo artifacts, causing a marked difference between their results and the corresponding haze-free reference at the top left. Table 7 summarizes the MSE, SSIM, and FSIMc scores on the synthetic aerial images in Figures 9 and 10. The proposed system shares the top performance with the two methods of Cai et al. [13] and He et al. [9]. More specifically, its performance is within the top two for images with thin and moderate haze as well as for haze-free images. However, for densely hazy images, its performance is slightly worse than that of the aforementioned two benchmark methods. This is because the benchmark methods often suffer from severe color distortion in the sky, whereas aerial images generally cover territorial areas; the reduced performance for aerial images with dense haze is therefore explicable.

Finally, we assessed the performance of a YOLOv4-based high-level object recognition algorithm (mentioned in Section 1.1) on the dehazed results depicted in Figure 9. Table 8 summarizes the detection results, while Figure 11 illustrates them visually. The term Failure in Table 8 denotes the number of incorrectly detected objects. It is also noteworthy that the detection results reported in the table were aggregated based on the confidence level. The results for the method of Zhu et al. [11] for a moderately hazy image in Figure 11 can be taken as an example.
The recognition algorithm yielded two detection results for the airplane near the center of the image: bird with 40% confidence and airplane with 31% confidence. Therefore, the final result for that airplane was the label with the higher confidence level, i.e., bird. Obviously, the algorithm incurred a Failure in this case, and the underlying reason was probably color distortion caused by excess haze removal.

Based on Table 8 and Figure 11, the proposed system is clearly superior to the benchmark methods because it does not cause any additional Failures compared with the input image. The two Failures for the haze-free and thin-haze images are inherent in the input images themselves. In contrast, the benchmark methods are prone to excess haze removal, and therein lies the cause of many Failures.

Conclusions
This paper presented an FPGA-based autonomous dehazing system that can handle DCI 4K images/videos in real time. Starting from the position that the currently predominant deep approach represents overkill, we analyzed a non-deep approach for autonomous image dehazing. Under this approach, the fundamental idea is to combine the input image and its dehazed result according to the haze condition. We then adopted pipelining, fixed-point design, and split multiplying to devise a 4K-capable FPGA implementation, and conducted a comparative evaluation with other benchmark hardware designs to verify its efficacy. In addition, we presented a performance evaluation on outdoor and aerial images to demonstrate its effectiveness in various circumstances, rendering the proposed implementation highly relevant to real-life systems (such as autonomous driving vehicles and aerial surveillance).

Furthermore, we pointed out two inherent limitations of the proposed system: handling haze-free images with a broad and homogeneous background, and handling hazy night-time images. Since the adopted HDE discriminates the haze condition of these images incorrectly, the self-calibration feature does not function as intended. Such limitations notwithstanding, the proposed system is deemed reliable owing to the HDE's high reliability for haze condition discrimination.

Appendix A
This appendix discusses the design of the serial and parallel dividers in the arithmetic macros. Figure A1 depicts the datapath and state machine realizing the former type, which is appropriate for dividing user-defined parameters. The datapath consists of three main registers: the (M + N)-bit holder, the N-bit divisor, and the Q-bit quotient. There is also an implicit counter to signify the completion of division. Upon the transition from IDLE to OPERATION, the holder is loaded with an M-bit dividend at the least significant positions and zero-padded to (M + N) bits.
According to the state machine, the operation is relatively straightforward. Upon reset, the serial divider is in the IDLE state. When the start signal occurs, the divider changes to the OPERATION state and loads the dividend and divisor into the holder and divisor registers. In this state, if the divisor equals zero, the divider changes to the ERROR state, produces a flag to signify division by zero, and then returns to the IDLE state. Otherwise, it starts the implicit counter and compares the divisor against the dividend bit by bit, beginning with the most significant bit and proceeding according to the comparison result. It also generates quotient bits, shifting them into the quotient register at the least significant position. When the quotient register has captured all Q bits, the counter produces a signal to trigger a transition to the DONE state. The divider then returns to the IDLE state and waits for the next call.
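A behavioral model of this state machine may clarify the datapath. The Python sketch below assumes a textbook restoring (shift-and-subtract) scheme, which is consistent with, but not confirmed by, the description above; register widths are parameters, and the exact register-transfer details in Figure A1 may differ.

```python
# Behavioral model of the serial divider: IDLE -> OPERATION -> DONE,
# or ERROR on a zero divisor. The dividend is consumed one bit at a
# time from the most significant bit, as the state machine describes.

def serial_divide(dividend: int, divisor: int, width: int = 16):
    state = "OPERATION"                       # 'start' received while IDLE
    if divisor == 0:
        return None, None, "ERROR"            # divide-by-zero flag, then IDLE
    remainder, quotient = 0, 0
    for i in range(width - 1, -1, -1):        # implicit counter: one bit/cycle
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        quotient <<= 1
        if remainder >= divisor:              # comparison drives the next bit
            remainder -= divisor              # restoring subtraction
            quotient |= 1                     # shift result into quotient reg
    return quotient, remainder, "DONE"        # then back to IDLE

print(serial_divide(200, 3, width=8))         # (66, 2, 'DONE')
```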
Figure and table captions consolidated from the original layout:
Figure 1. Illustration of the negative effects of cloud and the beneficial effects of image dehazing on an aerial surveillance application. First row: (a) a clean image and its corresponding (b) synthetic cloudy image and (c) dehazed result. Second row: (d-f) results obtained after processing (a-c) using a YOLOv4-based high-level object recognition algorithm. Notes: cyan labels represent airplanes, and navy-blue labels represent birds.
Figure 2. Illustration of the negative effects of image dehazing on an aerial surveillance application when the input image is clean. (a,b) A clean image and its corresponding dehazed result. (c,d) Detection results obtained after processing of (a,b) by a YOLOv4-based high-level object recognition algorithm. Notes: (a,c) were adopted from Figure 1a,d. Cyan labels represent airplanes, and navy-blue labels represent birds.
Algorithm 1 (header fragment). Input: the kernel size K and the number of scales N ∈ Z+0, with N ≤ log2[min(H, W)]; the representation Z+0 denotes the set of non-negative integers, so k ∈ Z+0 ∩ [1, K] means that k is a non-negative integer lying between 1 and K. Output: the restored image J ∈ R^(H×W×3). Auxiliary functions: u2(·) and d2(·) denote upsampling and downsampling by a factor of two. The first step is to create an input pyramid.
Figure 4. Illustration of the multi-scale image dehazing in Algorithm 1 with K = 3 and N = 3.
Figure 5. Illustration of a naive parallelization of the autonomous dehazing algorithm.
Figure 6. Pipelined architecture of the proposed FPGA implementation.
Table 1. Processing time in seconds of different image dehazing methods for different image resolutions.
Contributions listed in the original text: • An FPGA-based implementation of an autonomous dehazing algorithm that can satisfactorily handle high-quality clean and hazy/cloudy images in real time. • An in-depth discussion of FPGA implementation techniques to achieve real-time processing of high-resolution images (DCI 4K in particular). • An efficient method for synthesizing cloudy images from a clean dataset (AID).
Table 2. Summary of image dehazing categories.
Table 3. Hardware implementation results for the proposed autonomous dehazing system. LUT stands for look-up table, and the symbol # denotes quantities.
Table 4. Maximum processing speeds in frames per second for different image resolutions. The symbol # denotes quantities.
Table 5. Comparison with existing benchmark designs. The symbol # denotes quantities.
Table 6. Average mean squared error (MSE), structural similarity (SSIM), and feature similarity extended to color images (FSIMc) scores on different datasets. Top three results are boldfaced in red, green, and blue.
Algorithm 2 (header fragment). Input: image size H, W ∈ Z+0 and cut-off frequency F_c ∈ [0, π]. Output: transmission map t ∈ [0, 1]^(H×W). Auxiliary functions: N(H, W) generates an H × W image of random Gaussian noise, {F(·), I(·)} denote the forward and inverse Fourier transforms, and L(X, F_c) denotes low-pass filtering the image X with the cut-off frequency F_c.
Table 7. Average MSE, SSIM, and FSIMc scores on synthetic aerial hazy images. Top three results for each image are boldfaced in red, green, and blue.
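As a companion to the Algorithm 2 header above, the following Python sketch generates a low-frequency random transmission map by low-pass filtering Gaussian noise in the Fourier domain, then applies the atmospheric scattering model of Equation (1), I = J·t + A(1 − t). The circular Fourier-domain mask and the uniform range for A are assumptions for illustration.

```python
# Sketch of the transmission-map synthesis outlined in Algorithm 2.
import numpy as np

def random_transmission(h: int, w: int, f_c: float,
                        rng=np.random.default_rng()) -> np.ndarray:
    noise = rng.standard_normal((h, w))               # N(H, W)
    fy = np.fft.fftfreq(h) * 2 * np.pi                # angular freqs in [-pi, pi)
    fx = np.fft.fftfreq(w) * 2 * np.pi
    mask = np.add.outer(fy**2, fx**2) <= f_c**2       # keep |f| <= F_c
    low = np.real(np.fft.ifft2(np.fft.fft2(noise) * mask))   # L(X, F_c)
    low -= low.min()
    return low / max(low.max(), 1e-12)                # t in [0, 1]

def synthesize_hazy(clean: np.ndarray, f_c: float = 0.2) -> np.ndarray:
    """clean: H x W x 3 float array in [0, 1]; returns a hazy counterpart."""
    h, w, _ = clean.shape
    t = random_transmission(h, w, f_c)[..., None]
    a = np.random.default_rng().uniform(0.7, 1.0)     # atmospheric light A
    return clean * t + a * (1.0 - t)                  # Equation (1)
```

Algorithm 3 would then wrap this generator in a loop that adjusts the haze density D_ρ by steps of δ until the HDE score of the synthesized image falls within ∆_ρ of the target, stopping after at most M_I iterations.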
10,913
2022-04-12T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Driver Attention Area Extraction Method Based on Deep Network Feature Visualization Featured Application: The method proposed in our paper is mainly applied to intelligent driving, driving training and other fields. Our method greatly alleviates the problems of complex information processing and massive consumption of computing resources in the field of intelligent driving. We can find the area that drivers are most interested in among many items of complicated information. The application of our method can reduce the cost of driverless driving, thereby promoting the early realization of unmanned driving. The method can also be applied to the field of driving training; for example, a novice driver can use our method to judge whether the driver's attention area is correct and ensure driving safety. Abstract: Intelligent driving technology based on image data is being widely used. However, the analysis of traffic accidents involving intelligent driving vehicles shows that there is an explanatory difference between the intelligent driving system based on image data and the driver's understanding of the target information in the image. In addition, driving behavior is the driver's response based on the analysis of road information, which is not available in current intelligent driving systems. To solve this problem, our paper proposes a driver attention area extraction method based on deep network feature visualization. In our method, we construct a Driving Behavior Information Network (DBIN) to map the relation between image information and driving behavior. Then we use the Deep Network Feature Visualization method (DNFV) to determine the driver's attention area. The experimental results show that our method can extract effective road information from a real traffic scene picture and obtain the driver's attention area. Our method can provide a useful theoretical basis and related technology of visual perception for future intelligent driving systems, driving training and assisted driving systems. Introduction Traffic driving scenes are extremely complex, characterized by three-dimensional, diversified and rapidly changing information. In the current field of intelligent driving, target detection algorithms based on YOLO [1,2] and on SSD [3] both detect all targets, which not only increases the computational cost through processing a lot of useless information, but also cannot extract the effective information from the outside world and make corresponding driving behaviors like a real driver. The human visual selective attention mechanism is an important neural mechanism by which the visual system extracts key scene information and filters redundant information. The combination of human visual selective attention mechanisms and intelligent driving technology can greatly reduce the cost of intelligent driving and promote the popularization of intelligent driving technology. Furthermore, it can also promote the interpretability of artificial intelligence in the field of intelligent driving, which is helpful for developing safer and more intelligent driverless vehicles. Therefore, methods for extracting the driver's attention area in traffic driving scenes have gradually become a research hotspot for intelligent driving vehicles, and many experts and scholars have carried out extensive research on the subject. Yun S.K. et al.
[4] obtained a series of physiological parameters such as electroencephalogram (EEG), electrooculogram (EOG) and electrocardiogram (ECG) through the driver wearing various types of medical monitoring equipment, which were used to detect the driver's attention. By analyzing those detected physiological parameters, it can be concluded that the ECG will change significantly when the driver is accelerating, braking and steering. When the driver is tired, their heart rate will decrease significantly. Obviously, when the driver is tired or distracted, their physical parameters will change significantly. Bibhukalyan Prasad Nayak et al. [5] concluded the following from their experiment: when the driver is in a state of severe fatigue and the concentration is significantly reduced, the high-frequency ECG component will drop sharply. Qun Wu et al. [6] used principal component analysis method based on kernel function to analyze ECG signals, and separated fatigue state from normal state, thus detecting driver's distraction. Li-Wei Ko [7] developed a single-channel wireless EEG solution for mobile phone platform, which can detect driver's fatigue state in real time. Moreover, Degui Xiao et al. [8] suggested that it is the driver's distraction during driving that is the main cause of traffic accidents, so they proposed an algorithm to detect whether a driver is distracted. The algorithm can track the driver's gaze direction and detects moving objects on the road through motion compensation. Most of the above methods focus on the detection of whether the driver's attention is focused, and there is no discussion about the driver's attention area. Meng-Che Chuang et al. [9] used the driver's gaze direction as an indicator of the driver's attention, and defined a feature descriptor for SVM gaze classifier training, which takes eight common gaze directions as the output. Francisco Vicente et al. [10] proposed a low-cost vision-based driver sight detection and tracking system, which can track the driver's facial features, and can use the tracked landmarks and three-dimensional face model to calculate the head position and gaze direction. Sumit Jha and Carlos Busso [11] constructed a regression models to estimate the driver's line of sight based on the head position and direction from the data in the natural driving record to determine the driver's area of interest. Tawari et al. [12] recorded the eye movement data of the driver using head-mounted cameras and google glasses. Then they used eye tracking technology to detect the target of interest to the driver, and finally determined whether the target was located in the center of the driver's attention. However, the above methods all require complicated instruments and equipment, so that experiments cannot be carried out on real roads. Moreover, they ignore the complex traffic scenes and have certain limitations. In recent years, there have been few studies on the driver attention area based on real traffic scenes. Lex Fridman et al. [13] focus on a driver's head, detecting facial landmarks to predict driver attention area. Nian Liu et al. [14] put forward a novel computational framework, which uses a multiresolution convolutional neural network (Mr-CNN) to predict eye gaze. Zhao, S. et al. [15] proposed a driver visual attention network (DVAN), which can extract the key information affecting the driver's operation by predicting the driver's attention points. 
The above method provides a new idea for the driver's attention area extraction but there is no certainty about the adherence of predictions to the true gaze during the driving task. Andrea Palazzi et al. [16] published the data set of DR(eye)VE, which is a traffic scene video database for predicting the attention position of drivers. The DR (eye)VE data set contains 74 traffic driving videos, each of which lasts 5 min, and records the eye movement data of eight drivers during real driving. The data set is not only composed of more than 500,000 images, but also records the driver's gaze information and its geographic location information, driving speed and driving route information; this information is not recorded in other data sets. In the follow-up work, they used the ready-made Convolutional Neural Network (CNN) algorithm to train on their database to predict the location of the driver's attention area in the driving scene [17,18]. Tawari and Kang [19] further improved the prediction results of driver's attention area on DR(eye)VE data set based on Bayesian theory. However, for the study of predicting driver's attention area in the driving scenes, each video only includes the eye movement data of a single driver, which not only makes the eye movement experimental data too limited, but also is easily affected by individual differences, resulting in some important traffic scene information being ignored. In our paper, we propose a driver attention area extraction method based on deep network feature visualization. This method mainly includes the Driving Behavior Information Network (DBIN) and the Deep Network Feature Visualization Method (DNFV). Firstly, we use the DBIN and DBNet data set [20] to construct the relationship between driver's horizon information and driving behavior. Then, we use the DNFV to obtain the driver's attention area. Finally, we analyzed the predicted results based on the real traffic scene and driving behavior. Driver Attention Area Extraction Method To solve the problems that the current intelligent driving field cannot effectively locate and identify the driver's attention area during the target information extraction process, and the fact that the information processing process is complicated and expensive, we propose a driver attention area extraction method based on deep network feature visualization. This method mainly includes the Driving Behavior Information Network (DBIN) and the Deep Network Feature Visualization Method (DNFV). Among them, the role of DBIN is to determines the correspondence between driver's horizon information and driving behavior, and the role of DNFV is to determine the driver's attention area. Figure 1 shows the overall structure. Driving Behavior Information Network (DBIN) Our paper proposes a Driving Behavior Information Network (DBIN) and uses DBIN to train the DBNet data set, taking the driver's horizon information (video frames captured by the driving recorder) as input, and driving behavior (steering wheel angle and speed) for output. In this way, the one-to-one correspondence between the driver's horizon information and driving behavior is determined. The driver's horizon information first passes through a 7 × 7 Convolutional Layer (CON), a BatchNorm layer (BN), and a 3 × 3 Maximum Pooling Layer (MP). The output of the MP will enter an inception block [21] which contains four parallel lines. 
The first three lines use CONs with window sizes of 1 × 1, 3 × 3, and 5 × 5, respectively, to extract information at different spatial scales, which makes the extracted information more complete while reducing the model parameters. The two middle lines use a 1 × 1 CON to reduce the number of input channels, thereby reducing the complexity of the model. The fourth line uses a 3 × 3 MP followed by a 1 × 1 CON to change the number of channels. Appropriate padding is selected for all four lines to keep the height and width of the input and output consistent. Finally, the outputs of the four lines are concatenated along the channel dimension to obtain the output layer (MO). The output of MO first passes through six Information Extraction Blocks (IEBs) and one Conversion Layer (CL), then through twelve IEBs, and finally through an MP and the fully connected layer (FCL) to obtain the final output. The CL consists of two neural network layers, a BN followed by a 1 × 1 CON; this controls the number of output channels and prevents it from becoming too large, as shown in Figure 2.

An IEB module includes five neural network layers, as shown in Figure 2, where ACT and DRO represent the Activation Layer and the Dropout Layer, respectively. We adopt a dense connection mode between the IEBs, connecting any layer with all subsequent layers, so that information is retained to the greatest extent without losing the key information concerning the driver's attention. The dense connection mode is shown in Figure 3. The l_t layer receives the feature maps of all previous layers, expressed by Formula (1):

x_t = H_t([x_0, x_1, ..., x_(t−1)]),   (1)

where [x_0, x_1, ..., x_(t−1)] represents the feature maps from layers l_0 to l_(t−1), x_t is the output of l_t, and H_t denotes concatenating the information of the previous layers along the channel dimension. The output of the final IEB passes through Global Average Pooling (GAP) and the FCL to produce the final output.

The Deep Network Feature Visualization Method (DNFV)
After using DBIN to accurately construct the one-to-one correspondence between the driver's horizon information and driving behavior, we use the Deep Network Feature Visualization Method (DNFV) to determine the driver's attention area. After the final IEB, all feature maps generated by the convolutions are mapped through GAP, and the mapping results are sent to the FCL. The final output driving behavior is determined by the weight matrix W of the FCL. After DBIN training is completed and high accuracy is obtained, we project the W of the output layer onto the convolutional feature maps, weight the feature maps with W, and superimpose the result on the original image frame to display the driver's attention area. This process is expressed by Formula (2):

I_out^t = I^t + [W] · [F_P],   (2)

where I_out^t represents the output image superimposed with the feature maps at time t, I^t represents the input image at time t, [W] is the weight matrix of the last output layer, and [F_P] represents the feature maps of the last IEB output. As shown in Figure 4, when the model starts training, the W matrix has just been initialized. At this time, guided by W, the model may choose, for example, the No. 12/15/19 feature maps as the basis for determining the output. A driving behavior is still produced, but the prediction loss is very large at this stage, so W is constantly updated during the subsequent back propagation, and the No. 10/20/50 feature maps are gradually used as the basis for judgment instead.
As the loss decreases and the accuracy increases, the model chooses more suitable feature maps as the judgment basis, and the driver's attention area becomes more and more accurate.

Dataset Description
The DBNet (DB is the abbreviation of driving behavior) data set was jointly released by the SCSC Lab of Xiamen University and the MVIG Lab of Shanghai Jiao Tong University, and is specifically designed to study strategy learning for driving behavior. DBNet records video, lidar point clouds, and the actual driving behavior of a corresponding senior driver (over 10 years of driving experience). It also addresses the lack of data sets for the end-to-end method proposed by Nvidia researchers [22] in 2015. The data scale of DBNet is about 10 times that of KITTI [23,24]. DBNet can not only provide training data for learning the driving model of senior drivers, but also evaluate the difference between the driving behavior predicted by a model and the real driving behavior of senior drivers. In our paper, we select a part of the DBNet training set to remake the training, validation and test sets used in our research, and remove the point cloud data, so that training set : validation set : test set = 6:1:1. Part of the data set is shown in Figure 5.

Experimental Details
The input of the Driving Behavior Information Network (DBIN) is the video frames of DBNet, resized to 224 × 224 × 3. The number of epochs is set to 100. The labels of the training data are driving behaviors (speed and steering wheel angle), in which positive and negative steering wheel angles indicate turning right and turning left, respectively. The loss function is the Mean Square Error (MSE), which evaluates how well the predictions track the data; the smaller the MSE value, the better the prediction model describes the experimental data. The MSE is given by Equation (3):

MSE = (1/k) Σ (y_t − y_p)²,   (3)

where the sum runs over the k dimensions of the data, y_t represents the label of the training data (driving behavior), and y_p represents the value predicted by the DBIN. In order to prevent the value of the loss function from being too large and to improve data fitting, we rescale the driving behavior data in DBNet according to Equation (4), where v and Ang represent the actually collected speed and steering wheel angle, and V_r and Ang_r represent the processed speed and steering wheel angle. The hardware configuration of the experimental environment is an NVIDIA GTX1080 graphics card and 16 GB of memory; the programming environment is TensorFlow.

Experimental Results and Analysis
We constructed the Driving Behavior Information Network (DBIN) to establish the one-to-one correspondence between driver horizon information and driving behavior. The change of the loss function during training is shown in Figure 6a. The accuracies are measured within 6° or 5 km/h biases. In addition, the accuracy on the test set is compared with that of classical convolutional networks, such as DenseNet169 [25], Inception v3 [26] and VGG16 [27]. The results are shown in Figure 6b. It can be seen from Figure 6a that, as the training progresses, the value of the loss function continues to decrease and eventually stabilizes after about 100 epochs.
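As a small companion to this evaluation protocol, the sketch below computes the MSE of Equation (3) and the accuracy-within-tolerance measure (6° for steering, 5 km/h for speed) over toy arrays; the numerical values are made up for illustration.

```python
# MSE (Equation (3)) and tolerance-based accuracy over (angle, speed) labels.
import numpy as np

def mse(y_t: np.ndarray, y_p: np.ndarray) -> float:
    return float(np.mean((y_t - y_p) ** 2))          # Equation (3)

def tolerance_accuracy(y_t: np.ndarray, y_p: np.ndarray, tol: float) -> float:
    """Fraction of predictions within +/- tol of the ground truth."""
    return float(np.mean(np.abs(y_t - y_p) <= tol))

angles_t, angles_p = np.array([30.0, -5.0]), np.array([27.0, -12.0])
speeds_t, speeds_p = np.array([4.0, 20.0]), np.array([6.0, 24.9])
print("angle accuracy:", tolerance_accuracy(angles_t, angles_p, tol=6.0))  # 0.5
print("speed accuracy:", tolerance_accuracy(speeds_t, speeds_p, tol=5.0))  # 1.0
```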
Moreover, we can clearly see from Figure 6b that DBIN has higher accuracy than several other models, and obtains good results. In Figure 7, the red area represents the driver's main attention area and it can be seen that the current traffic scene depicts our car following the white vehicle through the zebra crossing. At this moment, the driver will pay more attention to the distance from the vehicle and observe whether there is a pedestrian on the zebra crossing. After analysis, we can see that the driver's main attention area obtained by our method accords with the driver's selective attention mechanism. Because the accuracy of DBIN is higher, the display effect in Figure 7d is the best, and the determined driver's main attention area is also the most accurate. Figure 8a-c represent the early, middle and last three stages of training, respectively. It can be clearly seen from the figure that the driver's main attention area is incomplete and inaccurate in the early stage of training. Some images show that the attention area is only a tiny part of the front windshield, and some show the full screen as the attention area, which is obviously abnormal. However, with the training, the driver's attention area gradually changes and eventually becomes accurate and complete. In Figure 8, there are three traffic scenarios from top to bottom. The first and second traffic scenarios are similar in that our car passes on the road where the vehicle stops on the right side, but the difference is that our car in the first scenario is closer to the parked vehicle on the right side. For the first traffic scene, the driver's speed is 4 km/h, and the steering wheel turns 30 • to the left. Obviously, the driver is slowly moving the vehicle to the left to prevent the collision with the vehicle on the right. Therefore, the driver's main attention area will be in the right front of the vehicle. While in the second traffic scene, our car is far away from the vehicle on the right and the front view is wide. At a speed of 20 km/h, the driver turns the steering wheel 5 • to the left, and it is obvious that the driver is crossing the street at a low speed. Therefore, the driver puts the main attention area in front of the car. The third traffic scene is that our car passes through the bridge, which is dangerous to some extent. In this scene, there are no vehicles around, and the driver has a wide field of vision. At the moment, the speed of the car is 57 km/h, and the steering wheel turns 7 • to the left. It can be seen that the driver is crossing the bridge at normal speed. Therefore, the driver will only focus on the lane ahead and the distance from the vehicle ahead. Validation In order to better validate that our method is also effective in different traffic scenarios, we show that our method extracts the driver's attention area information in various traffic scenarios in Figure 9. In Figure 9a, the vehicle speed is 8 km/h, and the steering wheel does not turn left and right. It can be clearly seen from Figure 9a that our car is driving forward on the road, and a white car is coming from the left at the intersection of the road in front. If the driver does not handle it properly, it is very easy to cause a traffic accident. Therefore, the driver's main attention area will be placed on the upcoming white vehicle. At the same time, the driver will reduce the speed to prevent traffic accidents. The behavior in Figure 9b is that the vehicle speed is 28 km/h, and the steering wheel turns 5 • to the left. 
Although there are parked vehicles on the right side of the road, it is more important that our car is slowly approaching to the left, which is very close to the white vehicle in the adjacent reverse lane and thus, to the white vehicle and the fence on the left. Figure 9c shows that the vehicle speed is 34 km/h, and the steering wheel turns 5 • to the right. It can be clearly seen from Figure 9c that our car turns right and will cross the crosswalk, but there is also a white vehicle in front to the right. In order to avoid traffic accidents, the driver's main attention area will be on the crosswalk and the white vehicles. Compared with the above three traffic scenes, the traffic scenes in Figure 9d-f are relatively simple with fewer vehicles, but they are often encountered in real life. The behavior shown in Figure 9d is that the vehicle speed is 0 km/h, and the steering wheel does not turn left and right. It can be clearly seen from Figure 9d that our car is parked waiting for pedestrians to pass the crosswalk. Therefore, the driver will put the main attention area on the crosswalk to prevent traffic accidents with pedestrians. The behavior in Figure 9e is that the vehicle speed is 28 km/h, and the steering wheel turns 10 • to the left. It can be clearly seen from Figure 9e that our car turns to the left, and the surrounding view is wide. There is only a black car parked in front of the left. At this time, the driver will put the main attention area on the left, observe the distance from the left road and the distance from the black car to avoid traffic accidents. In addition, in Figure 9f the vehicle speed is 25 km/h, and the steering wheel turns 20 • to the left. It can be clearly seen that there is a white car driving in the same direction directly in front of our car, and a bus starting to move in the front right. This is a very common traffic scene in daily life. At this time, the driver will keep the distance of the vehicle ahead and approach slowly to the left. In order to avoid the occurrence of traffic accidents, the driver's main attention area will be in the area between their own vehicle and the vehicle ahead to maintain a safe distance. Conclusions At present, there is a problem that the driver's attention area cannot be determined in the field of intelligent driving. To solve this problem, our paper proposes Driver Attention Area Extraction Method Based on Deep Network Feature Visualization. In our paper, we first determine the correspondence between driver's horizon information and driving behavior by building a Driving Behavior Information Network (DBIN), and then use the Deep Network Feature Visualization Method (DNFV) to determine the driver's attention area. In the experimental part, we first use the DBNet data set for training, and conduct a comparative analysis with a variety of classic convolutional neural networks. Finally, we combine the current driving behavior and traffic scenarios to analyze our experimental results; the experimental results show that our method can accurately determine the driver's attention area no matter if it is in a complex or simple traffic scene. Our research can provide a useful theoretical basis and related technical means of visual perception for future intelligent driving vehicles, driving training and assisted driving systems.
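As a closing illustration of the DNFV weighting in Formula (2), the following PyTorch sketch extracts a class-activation-style attention map from a miniature stand-in network. The tiny backbone is an assumption for brevity and does not reproduce the inception and dense IEB structure of the real DBIN.

```python
# Minimal DNFV-style sketch: weight the last feature maps F_P with the
# output layer's weights W and superimpose the result on the input frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDBIN(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.head = nn.Linear(channels, 2)        # (steering angle, speed)

    def forward(self, x):
        fmap = self.features(x)                   # [B, C, h, w]
        pooled = fmap.mean(dim=(2, 3))            # global average pooling
        return self.head(pooled), fmap

def attention_map(model: TinyDBIN, frame: torch.Tensor, output_idx: int = 0):
    """Formula (2): I_out = I + sum over channels of W_c * F_c, resized."""
    _, fmap = model(frame)
    w = model.head.weight[output_idx]             # [C], weights of one output
    cam = torch.einsum("c,bchw->bhw", w, fmap)    # weighted feature-map sum
    cam = F.interpolate(cam.unsqueeze(1), size=frame.shape[-2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return frame + cam                            # superimpose on the frame

frame = torch.rand(1, 3, 224, 224)                # a DBNet-sized input frame
overlay = attention_map(TinyDBIN(), frame)
```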
5,583.2
2020-08-07T00:00:00.000
[ "Computer Science", "Engineering" ]
Search for decays of stopped exotic long-lived particles produced in proton-proton collisions at $\sqrt{s}=$ 13 TeV A search is presented for the decays of heavy exotic long-lived particles (LLPs) that are produced in proton-proton collisions at a center-of-mass energy of 13 TeV at the CERN LHC and come to rest in the CMS detector. Their decays would be visible during periods of time well separated from proton-proton collisions. Two decay scenarios of stopped LLPs are explored: a hadronic decay detected in the calorimeter and a decay into muons detected in the muon system. The calorimeter (muon) search covers a period of sensitivity totaling 721 (744) hours in 38.6 (39.0) fb$^{-1}$ of data collected by the CMS detector in 2015 and 2016. The results are interpreted in several scenarios that predict LLPs. Production cross section limits are set as a function of the mean proper lifetime and the mass of the LLPs, for lifetimes between 100 ns and 10 days. These are the most stringent limits to date on the mass of hadronically decaying stopped LLPs, and this is the first search at the LHC for stopped LLPs that decay to muons. Introduction Heavy long-lived particles (LLPs) with masses on the order of 100 GeV are not present in the standard model (SM); therefore, any sign of them would be an indication of new physics. Many extensions of the SM predict the existence of LLPs [1-8]. At the CERN LHC, LLPs will stop inside the detector material if they lose all of their kinetic energy while traversing the detector, which typically occurs for particles with initial velocities less than about 0.5c [9]. This energy loss can occur via nuclear interactions if they are strongly interacting and/or through ionization if they are charged. The observation of a stopped particle decay signature would not only indicate new physics but also help measure the lifetime of LLPs, giving insights into various beyond the standard model (BSM) theories. If these stopped LLPs have lifetimes longer than tens of nanoseconds, most of their decays would be reconstructed as separate events unrelated to their production [10]. Owing to the difficulty of differentiating between the LLP decay products and SM particles from LHC proton-proton (pp) collisions, these subsequent decays are most easily identified when there are no proton bunches in the detector. The detector is quiet during these out-of-collision time periods with the exception of rare noncollision backgrounds, such as cosmic rays, beam halo particles, and detector noise. If LLPs come to a stop in the detector, they are most likely to do so in the densest detector materials, which in the CMS detector are the electromagnetic calorimeter (ECAL), the hadron calorimeter (HCAL), and the steel yoke in the muon system. If the stopped LLPs decay in the calorimeters, relatively large energy deposits occurring in the intervals between collisions could be observed. Furthermore, if the stopped LLPs decay into muons, displaced muon tracks out of time with the collisions could be detected. In this paper we present two searches for stopped LLPs that decay out of time with respect to the presence of proton bunches in the detector. One search targets hadronic decays detected in the calorimeters, and the other looks for decays to muon pairs in the muon system. These two search channels are analyzed independently, using data collected by the CMS experiment in 2015 and 2016 with separate dedicated triggers.
The calorimeter (muon) search uses √s = 13 TeV data corresponding to an integrated luminosity of 38.6 (39.0) fb−1, collected with LHC pp collisions separated by 25 ns during a search interval totaling 721 (744) hours. The size of the search sample is further reduced by applying a series of offline selection criteria to decrease the number of events that most likely come from the primary sources of background. The calorimeter search presented here improves upon previous searches performed by the CMS Collaboration, the most recent of which used √s = 8 TeV pp collision data corresponding to an integrated luminosity of 18.6 fb−1 collected in 2012 [11]. This search excluded long-lived gluinos ( g) with masses below 880 GeV and long-lived top squarks ( t) with masses below 470 GeV, for lifetimes between 10 µs and 1000 s. The results of earlier, similar searches have been reported by the D0 Collaboration at the Tevatron [12] and by the CMS [13,14] and ATLAS Collaborations [15,16]. The displaced muon search is newly added to investigate different models with leptonic decays of stopped LLPs, such as those of gluinos [9] and multiply charged massive particles (MCHAMPs) [17-20]. Searches for decays of stopped LLPs are complementary to searches for heavy stable charged particles (HSCPs) that pass through the detector and can be identified by their energy loss and time-of-flight (TOF) information [21-34]. The searches presented here would allow the study of the decay of such heavy particles, whereas dedicated HSCP searches typically look for the particle itself, before it decays. However, both the searches for decays of stopped LLPs and for HSCPs are sensitive to a similar range of lifetimes. The CMS detector The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal ECAL, and a brass and scintillator HCAL, each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity (η) coverage provided by the barrel and endcap detectors. In the region |η| < 1.74, the HCAL cells have widths of 0.087 in η and 0.087 radians in azimuth (φ). In the η-φ plane, and for |η| < 1.48, the HCAL cells map onto 5 × 5 arrays of ECAL crystals to form calorimeter towers projecting radially outwards from close to the nominal pp collision interaction point (IP). For |η| > 1.74, the coverage of the towers increases progressively to a maximum of 0.174 in ∆η and ∆φ. Within each tower, the energy deposits in ECAL and HCAL cells are summed to define the calorimeter tower energies, which are subsequently used to provide the energies and directions of hadronic jets. In the HCAL barrel (HB) and endcap, scintillation light is detected by hybrid photodiodes (HPDs), and each HPD collects signals from 18 different HCAL channels. Signals from four HPDs are then digitized by analog-to-digital converters within a single readout box (RBX). Muons are measured in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid, in the range |η| < 2.4, with detection planes made using three technologies: drift tubes (DTs) in the barrel, cathode strip chambers (CSCs) in the endcaps, and resistive plate chambers (RPCs) in both the barrel and the endcaps. All these technologies provide both position and timing information.
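For readers unfamiliar with the coordinate conventions, the small Python sketch below evaluates the standard pseudorapidity definition η = −ln tan(θ/2) and a naive (η, φ) to cell-index mapping using the 0.087 granularity quoted above; real CMS tower numbering follows conventions this sketch deliberately ignores.

```python
# Pseudorapidity and a toy (eta, phi) -> HCAL-cell index, for |eta| < 1.74
# where the 0.087 x 0.087 granularity quoted above applies.
import math

def pseudorapidity(theta: float) -> float:
    """theta: polar angle with respect to the beam axis, in radians."""
    return -math.log(math.tan(theta / 2.0))

def cell_index(eta: float, phi: float, cell: float = 0.087):
    if abs(eta) >= 1.74:
        raise ValueError("constant 0.087 granularity only holds for |eta| < 1.74")
    return (int(math.floor(eta / cell)),
            int(math.floor((phi % (2 * math.pi)) / cell)))

print(pseudorapidity(1.0))      # ~0.60 for a track at theta = 1 rad
print(cell_index(0.5, 1.0))     # (5, 11)
```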
Hits within each DT or CSC chamber are matched to form a reconstructed DT or CSC segment. The first level (L1) of the CMS trigger system, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most interesting events in a fixed time interval of less than 4 µs. The high-level trigger processor farm further decreases the event rate from around 100 kHz to less than 1 kHz, before data storage. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [35]. Data samples The LHC accelerates two proton beams in opposite directions such that the protons collide at several points along the LHC ring, including one at the CMS detector. Each LHC beam consists of a number of proton bunches arranged into an irregular pattern of "trains" [36]. Within a train, the proton bunches are nominally spaced 25 ns apart, with a larger spacing between trains to account for the needs of the injection process. In an LHC orbit there are 3564 bunch slots (BXs), which are 25 ns long. Each BX could be filled with proton bunches, which usually occupy the first 2.5 ns of the BX, or could be empty. The trains may be spaced such that there could be multiple empty BXs between filled BXs. To search for LLP decays during these empty BXs, dedicated triggers select events at least two BXs away from any proton bunches. Thus these triggers are live only during these specific time windows. This distance of two BXs is chosen so that we maximize the search time window while suppressing most of the events from secondary pp interactions and from "beam halo", which are mostly muons traveling outside the LHC beam that are produced by LHC beam-collimator scattering. The search is performed with √ s = 13 TeV pp collision run data collected by the CMS experiment in 2015 and 2016. The 2015 calorimeter (muon) search sample, taken between August and November 2015, corresponds to an integrated luminosity of 2.7 (2.8) fb −1 and spans a trigger livetime, which is the amount of time the triggers are live in between collisions, of 135 (155) hours. The 2016 calorimeter (muon) search sample was taken between May and October 2016, during which a data sample corresponding to an integrated luminosity of 35.9 (36.2) fb −1 was recorded, spanning a trigger livetime of 586 (589) hours. We do not consider the possibility of LLPs that were produced in 2015 but decayed in 2016. In both the 2015 and 2016 searches, we use cosmic run data collected by dedicated triggers as a control sample. These dedicated cosmic run data were recorded during LHC machine technical stops, several days after collision runs. A negligible amount of long-lived signal produced during collisions could have decayed during these cosmic runs for the lifetimes considered in this analysis. The instrumental noise background estimate is extrapolated from the instrumental noise measured in these control samples. Most of the other sources of background are estimated from sideband regions of the main data sample, except for the cosmic ray muon background in the calorimeter search, which is estimated from MC simulation. Benchmark models Several simplified models are considered in this search, and samples are generated for each using Monte Carlo (MC) simulation. 
In the calorimeter search, we interpret the results in the context of two-body ( g → g χ 0 ) and three-body ( g → qq χ 0 ) decays of a gluino into the lightest supersymmetric (SUSY) particle (LSP), the neutralino ( χ 0 ). Long-lived gluinos are predicted by "split SUSY" [37,38], in which gauginos have relatively small masses with respect to sfermions, which could be massive, since SUSY is broken at a scale much higher than the weak scale. This large mass splitting causes the long lifetime of the gluinos, since gluinos can only decay via a virtual squark. We also consider the decay of a long-lived top squark ( t → t χ 0 ) that can be the next-to-LSP particle (NLSP) in various dark matter scenarios [39][40][41]. Here the LSP should be loosely interpreted as any new, neutral, non-interacting fermion, and not necessarily as a SUSY neutralino. In the muon search, we consider a different model for a three-body decay of the gluino ( g → qq χ 0 2 , χ 0 2 → µ + µ − χ 0 ), which is complementary to the calorimeter search. In this model, the mass of the LSP neutralino ( χ 0 ) is chosen to be 0.25 times the gluino mass, and the mass of the NLSP neutralino ( χ 0 2 ) is chosen to be 2.5 times the LSP neutralino mass. A second simplified model used in the muon search predicts exotic particles called MCHAMPs, whose charges are multiples of the elementary charge e and which are predicted by several BSM theories [20]. We assume an MCHAMP with charge |Q| = 2e decays into two same-sign muons (MCHAMP → µ ± µ ± ). Signal generation The signal generation process is divided into three major stages. In Stage 1, the LLPs for each signal process are generated from pp collisions with PYTHIA [42,43] and propagated through the detector with GEANT4 v9.2 [44,45]. For the MCHAMP signal, PYTHIA v6.4 is used, while for the gluino and top squark signals, PYTHIA v8.205 is used. If the LLPs are strongly interacting, as in the case of the gluinos and top squarks, they hadronize into R-hadrons [46][47][48] upon production, whose interaction with the CMS detector in the simulation is described by the cloud model [49,50]. In this model, R-hadrons are treated as SUSY particles surrounded by a cloud of loosely bound quarks and gluons. The fraction of produced R-hadrons that contain a gluino and a valence gluon is set to 10%, a convention used in previous analyses [11,21]. However, because the R-hadrons interact an average of ten times in the calorimeter, their flavor is effectively randomized. Some fraction of these R-hadrons are sufficiently slow moving to come to a stop in the detector material. Because they are doubly charged, MCHAMPs ionize heavily and thus a significant number also stop in the detector. In Stage 2, the parent LLP or R-hadron is constrained to decay at the stopping position defined in Stage 1. The LLP decay is simulated by a second GEANT4 step, and the decay products are propagated through the detector. Finally, in Stage 3, a pseudo-experiment MC simulation is conducted to estimate the probability for stopped particle decays to occur in the time window between collisions when data is being collected. The Stage 3 MC simulation determines an effective integrated luminosity by using the good data-taking periods and the LHC filling scheme to calculate the fraction of stopped particle decays that occur when the trigger is live. 
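A toy version of the Stage 3 pseudo-experiment may make the livetime bookkeeping concrete. The Python sketch below assumes a made-up repeating fill pattern and an exponential decay time; the real calculation additionally folds in the good data-taking periods and the actual LHC filling scheme.

```python
# Toy Stage 3: the trigger is live in empty bunch slots at least two BXs
# from any filled slot; count how often an exponentially delayed decay of a
# particle produced at a random collision lands in a live BX.
import random

BX_NS, N_BX = 25.0, 3564                      # slot length and slots per orbit
filled = {bx for bx in range(N_BX) if bx % 8 < 3}      # illustrative trains
filled_list = sorted(filled)
live = {bx for bx in range(N_BX)
        if all((bx + d) % N_BX not in filled for d in range(-2, 3))}

def live_fraction(lifetime_ns: float, n_trials: int = 200_000) -> float:
    hits = 0
    for _ in range(n_trials):
        t0 = random.choice(filled_list) * BX_NS        # production time
        t = t0 + random.expovariate(1.0 / lifetime_ns) # decay time
        if int(t / BX_NS) % N_BX in live:
            hits += 1
    return hits / n_trials

for tau in (100.0, 1e4, 1e6):                 # 100 ns up to 1 ms
    print(f"tau = {tau:>9.0f} ns: live decay fraction = {live_fraction(tau):.3f}")
```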
Event selection

The calorimeter search and the muon search employ different search strategies and thus different selection criteria, which are described in turn below.

Calorimeter search

In the calorimeter search, we look for hadronic decays of LLPs in the calorimeter that produce energy deposits that could be reconstructed as at least one high-energy jet. We trigger on calorimeter jets with energy greater than 50 GeV and |η| < 3 that are at least two BXs away from pp collisions. The major background sources are cosmic rays, beam halo, and HCAL noise. Cosmic ray and beam halo muons can emit a shower of photons via bremsstrahlung, which could be reconstructed as a jet and mistaken for signal. HCAL noise [51] can give rise to spurious signals, which in the barrel could appear in one or several HPDs within a single RBX, and thus be incorrectly reconstructed as a jet. We observe that the rate of each of these background sources drops exponentially as a function of the jet energy. We thus require the events to have a leading (highest energy) calorimeter-based jet with energy greater than 70 GeV. The calorimeter-based jets are reconstructed using an anti-kT clustering algorithm [52,53] with a distance parameter of 0.4. To increase the sensitivity of the search, we require that the leading jet in each event is located within |η| < 1.0, where R-hadrons are more likely to stop and where there is relatively less background from beam halo. Secondary background sources include out-of-time collisions from remnant protons between bunches, and beam-gas interactions in the detector. The rate of these secondary background events becomes negligible after we require that there are no reconstructed collision vertices in the events. Cosmic ray muon events usually feature a large number of reconstructed DT segments and RPC hits, whereas signal events in the calorimeter search would not. We exploit this difference to distinguish signal events from cosmic ray muons. While it is possible for the hadronic shower of an R-hadron decay to pass through the first layers of the iron yoke and induce reconstructed DT segments, these DT segments are located only in the inner layers of the muon chambers (r < 560 cm, where r is the transverse distance to the IP) and cluster near the leading jet. On the other hand, cosmic ray muons are equally likely to leave DT segments in all layers in both the upper and lower hemispheres of the muon system, and the angle between the jet and DT segments in φ is more evenly distributed.
As a result, we are able to substantially reduce the cosmic ray muon background contamination in the signal region by rejecting: events that have at least two DT segments in the outermost barrel layer of the muon system; events that have any DT segments in the second outermost barrel layer; events that have two DT segments with a large separation in φ (|∆φ| > π/2); events that have DT segments in the three innermost layers that are separated in φ from the leading jet by at least 1.0 radian; and events that have close-by RPC hits in different layers (∆R = √((∆φ)² + (∆η)²) < 0.2 and ∆r > 0.5 m). We make looser DT segment requirements in the outermost layer than in the second outermost layer because signals are very likely to coincide with standalone DT segments that come not from cosmic ray muons but from particles produced in the pp collision. Most of these standalone DT segments from the pp collision are located in the outermost muon barrel layer. With these selection criteria, we avoid incorrectly rejecting signal events, thus increasing the signal efficiency, while still rejecting most of the cosmic ray muon events. Beam halo muons travel closely along the beam pipe, typically traversing both sides of the muon endcap systems and resulting in a few reconstructed CSC segments. Therefore, we veto events with any CSC segments having at least five reconstructed hits. As will be discussed in Section 5, since signal events may include some CSC segments, requiring a minimum number of CSC hits in the veto avoids a loss of signal efficiency. Random electronic noise in the HCAL gives rise to events in which the time response of the HCAL readout is very different from the well-defined response from particles showering in the calorimeter. This HCAL noise creates spurious clustered energy deposits that can be reconstructed as a jet, which would contaminate the signal region and therefore should be removed. Analog signal pulses produced by the HCAL electronics are read out over ten BXs centered around the pulse maximum. The pulse shape from showering particles consists of a peak at the collision BX and an exponential decay over the subsequent BXs. Particle showers create clustered energy deposits spread over several neighboring calorimeter towers in z and φ, while noise produces deposits in just one or two towers, or in several towers in a single HPD or RBX. In addition to the standard HCAL noise filter [51], we use a series of offline selection criteria that exploit these timing and topological characteristics to remove the HCAL noise events. These criteria are described in detail in Ref. [14].
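A schematic rendering of the cosmic ray muon vetoes listed above may help; the event fields and layer numbering (1 = innermost barrel layer, 4 = outermost) are assumptions for illustration, not the CMS data format, and the RPC pair criterion is omitted for brevity:

```python
import math

# Schematic version of the cosmic ray muon vetoes listed above.
def passes_cosmic_veto(event):
    dt = event["dt_segments"]     # list of {"layer": 1..4, "phi": radians}
    if len([s for s in dt if s["layer"] == 4]) >= 2:   # outermost layer
        return False
    if any(s["layer"] == 3 for s in dt):               # second outermost layer
        return False
    phis = [s["phi"] for s in dt]
    for i in range(len(phis)):
        for j in range(i + 1, len(phis)):
            dphi = abs(math.remainder(phis[i] - phis[j], 2 * math.pi))
            if dphi > math.pi / 2:                     # large phi separation
                return False
    jet_phi = event["leading_jet_phi"]
    for s in dt:                                       # inner segments far from jet
        if s["layer"] <= 3 and \
           abs(math.remainder(s["phi"] - jet_phi, 2 * math.pi)) >= 1.0:
            return False
    return True

print(passes_cosmic_veto({"dt_segments": [], "leading_jet_phi": 0.0}))  # True
```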
Muon search

In the muon search, we look for LLPs whose decay products include two muons. We expect the signal to look like a pair of muons originating anywhere in the detector material, but displaced from the IP. The muons would be back-to-back in the two-body MCHAMP decay, but not in the three-body gluino decay. The primary background sources in the muon search include cosmic ray muons, beam halo, and muon detector noise. The latter two background sources are negligible after we apply the full selection. The trigger used in the muon search selects events at least two BXs away from the pp collision time with at least one muon reconstructed in the muon system, whose transverse momentum pT is at least 40 GeV. As in the calorimeter search, we select events offline that have no reconstructed collision vertices. Tracks that are reconstructed using only hits in the muon system are called standalone muon tracks [54]. However, the standard standalone track reconstruction assumes that muons originate from the IP, which is inappropriate for displaced muon searches. As a result, a new muon reconstruction algorithm was developed for this analysis, which produces displaced standalone (DSA) muon tracks [55]. The DSA tracks are reconstructed using only hits in the muon detector, with no constraints to the IP; thus, DSA tracks truly use only the muon system. We require events to have exactly one good DSA track in the upper hemisphere of the detector and exactly one good DSA track in the lower hemisphere. Both DSA tracks must have pT > 50 GeV, at least three DT chambers with valid hits, and at least three valid RPC hits. To reduce the background from beam halo, the DSA tracks must also have zero valid CSC hits. Timing information in the DTs and RPCs, indicating whether the muon is incoming toward the detector center or outgoing away from the detector center, is used to distinguish muons from a signal event from the cosmic ray muon background. Cosmic ray muons are predominantly incoming when traversing the upper hemisphere and outgoing when traversing the lower hemisphere, as they come in from above the detector and continue to move downwards. Muons from a signal event, on the other hand, would be outgoing in both hemispheres. We place selection criteria on both the upper and lower hemisphere DSA tracks in order to obtain a good time measurement. We require at least eight independent time measurements for the time-of-flight (TOF) computation. We require that the uncertainty in the time measured at the IP for DSA tracks, assuming the muon is outgoing, is less than 5.0 ns. Next, we require the time measurement to be signal-like. We require that the direction of the lower hemisphere DSA track, as determined by a least-squares fit to the timing in each DT layer where the fit is not constrained to the IP, is consistent with being in the downward direction. We define tDT as the time at the point of closest approach to the IP as measured by the DTs, assuming the muon is outgoing. Since cosmic ray muons are incoming in the upper hemisphere and outgoing in the lower hemisphere, the tDT of the upper hemisphere track is expected to be 40 to 50 ns earlier than that of the lower hemisphere track. For the signal, since both muons are outgoing, they are reconstructed to have similar times as measured at the IP. Thus, we require that ∆tDT, defined as ∆tDT = tDT(upper) − tDT(lower), is greater than −20 ns, which greatly reduces the cosmic ray muon background. In addition to these DT timing variables, we use a timing measurement from the RPCs that assigns a BX to each hit. For each of the six layers of the RPCs, the hit is given a BX assignment. A typical prompt muon created at the IP has a BX assignment of 0 for each of its RPC hits. The BX assignments of cosmic ray muons are especially useful in the lower hemisphere of the detector: the incoming cosmic ray muons will typically trigger the event and thus be assigned BX values of 0 in each RPC layer, but the outgoing cosmic ray muons are often assigned positive BX values. For example, a lower hemisphere cosmic ray muon typically has a BX assignment of 2 for each of its good RPC hits. For the signal, each RPC BX assignment for each muon is typically 0. Given the BX assignments in each RPC layer for a muon, we can compute the average RPC hit BX assignment multiplied by 25 ns as the RPC time for a track (tRPC) and use this as a discriminating variable.
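The timing variables just defined reduce to a few lines of arithmetic. A minimal sketch, using the ∆tDT threshold above and the ∆tRPC threshold introduced below (all event quantities are hypothetical):

```python
# Minimal sketch of the DT/RPC timing selection (all inputs hypothetical).
def t_rpc(bx_assignments):
    """RPC time: average hit BX assignment times 25 ns."""
    return 25.0 * sum(bx_assignments) / len(bx_assignments)

def passes_timing(t_dt_upper, t_dt_lower, bx_upper, bx_lower):
    dt_dt = t_dt_upper - t_dt_lower                  # ns
    dt_rpc = t_rpc(bx_upper) - t_rpc(bx_lower)       # ns
    return dt_dt > -20.0 and dt_rpc > -7.5

# Typical cosmic ray muon: upper track ~45 ns earlier in the DTs, and lower
# hemisphere RPC hits assigned BX = 2 (t_rpc = 50 ns), so both cuts fail.
print(passes_timing(-45.0, 0.0, [0, 0, 0], [2, 2, 2]))   # False
# Typical signal: similar DT times, RPC BX assignments of 0 on both tracks.
print(passes_timing(1.0, 0.0, [0, 0, 0], [0, 0, 0]))     # True
```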
A typical muon from the benchmark decays has a tRPC of 0 ns for both upper and lower hemisphere DSA muon tracks. On the other hand, the tRPC of a cosmic ray muon is typically 25 or 50 ns in the lower hemisphere and 0 ns in the upper hemisphere. We define ∆tRPC = tRPC(upper) − tRPC(lower), and we require ∆tRPC > −7.5 ns to further select signal-like events. Figure 1 shows ∆tDT (left) and ∆tRPC (right) for data and MC simulation. The events shown here contain good-quality DSA muon tracks, but they are dominated by the cosmic ray muon background; they are selected with a subset of the criteria described above. This selection is defined by the same trigger and reconstructed-vertex requirements as above. Additionally, exactly one DSA track in the upper hemisphere and exactly one DSA track in the lower hemisphere are required. Looser requirements than in the full selection are placed on the DSA track pT (>10 GeV), the number of DT chambers with valid hits (greater than one), and the number of valid RPC hits (greater than one). We require the same number of DT hits with good timing measurements per DSA track and the same number of valid CSC hits as above for this selection. None of the remaining criteria from the main selection described above are used to select the events in Fig. 1. As can be seen in Fig. 1, the number of cosmic ray muon background events is greatly reduced when the full selection is applied, as we require ∆tDT > −20 ns and ∆tRPC > −7.5 ns. Since ∆tDT and ∆tRPC correspond to independent measurements of essentially the same quantity, a mismeasured cosmic ray muon is much less likely to pass both selections than just one; adding the second requirement improves the rejection of simulated cosmic ray muons by a factor of approximately 350.

Figure 1: ∆tDT (left) and ∆tRPC (right) for data and MC simulation. The events plotted pass a subset of the full analysis selection that is designed to select good-quality DSA muon tracks but does not reject the cosmic ray muon background. The gray bands indicate the statistical uncertainty in the simulation. The histograms are normalized to unit area.

Signal efficiency

In this section, we describe the calculation of the signal efficiency ε_signal, which is the product of several efficiencies. In the calorimeter search, the stopping efficiency ε_stopping is the probability that the R-hadron stops in the HB or ECAL barrel (EB), while in the muon search, ε_stopping is the probability of each LLP to stop in any region of the detector. The Stage 1 simulation determines ε_stopping. The reconstruction efficiency ε_reco is the efficiency of an event to pass all of the selection criteria, including the trigger, and it is computed independently of ε_stopping. In addition, ε_reco is calculated assuming that the LLP decay occurs when the trigger is live in between collisions, and assuming a branching fraction (B) of 100% to the decays in the signal models described above. The Stage 2 simulation determines ε_reco. The efficiency ε_signal is defined as the product of ε_stopping and ε_reco for the muon search.

Table 1: Summary of the values of ε_stopping, ε_CSCveto, ε_DTveto, and the plateau value of ε_reco for different signals, for the calorimeter search. The efficiency ε_stopping is constant for the range of signal masses considered. The efficiency ε_reco is given on the E_g or E_t plateau for each signal.
For the calorimeter search, ε_signal is the product of ε_stopping, ε_reco, and two additional factors, ε_CSCveto and ε_DTveto, which are defined in the next subsection.

Calorimeter search

For the calorimeter search, ε_stopping is constant at about 0.054 for gluinos and 0.045 for top squarks, for the range of masses considered. The ε_stopping value is larger for gluinos than for top squarks of the same mass because gluinos are more likely to produce doubly charged R-hadrons. The value of ε_reco depends primarily on the energy of the visible daughter particle(s) of the R-hadron decay, denoted by E_g (E_t) if the daughter is a gluon (top quark). When E_g > 130 GeV (E_t > 170 GeV), ε_reco becomes approximately constant, as shown in Fig. 2. For the three-body gluino decay, ε_reco depends approximately on the mass difference between g̃ and χ̃0, becoming constant when m_g̃ − m_χ̃0 > 160 GeV. Some physical effects that are not modeled in simulation can cause reconstructed CSC or DT segments that are out of time with respect to a collision. For example, thermal neutrons can take up to a tenth of a second after being produced in pp collisions before they arrive at the muon detectors and induce a signal in the CSCs or DTs. Since these segments can occur when the trigger is live, it is possible that some of the events in the search sample could contain such segments. These events would be rejected by the selection criteria, thus decreasing the probability for a signal to be observed. The terms ε_CSCveto and ε_DTveto measure this decrease in efficiency due to these sources. We define ε_CSCveto (ε_DTveto) as the conditional probability that a signal passes the beam halo (cosmic ray muon) rejection criteria assuming the potential occurrence of coincident CSC (DT) segments, given that the signal itself passes the full selection criteria. HCAL noise events that are collected by the trigger are used to estimate these two efficiencies from data, since this noise is independent of any muon detector activity and should pass both the beam halo rejection and cosmic ray muon rejection criteria. These events are selected by inverting some of the noise rejection criteria. Then ε_CSCveto (ε_DTveto) is simply the fraction of noise events that survive the beam halo (cosmic ray muon) vetoes among all selected noise events. Table 1 summarizes the values of ε_stopping, ε_CSCveto, ε_DTveto, and the plateau value of ε_reco.

Figure 2: The ε_reco values as a function of E_g or E_t (left), and of m_g̃ − m_χ̃0 (right), for g̃ and t̃ R-hadrons that stop in the EB or HB, in the MC simulation, for the calorimeter search. The ε_reco values are plotted for the two-body gluino and top squark decays (left) and for the three-body gluino decay (right). The shaded bands correspond to the systematic uncertainties, which are described in Section 7.

Muon search

Tables 2 and 3 show ε_stopping and ε_reco for each assumed signal mass in the muon search. The ε_signal value is the product of these two efficiencies. The ε_stopping value is larger for MCHAMPs than for gluinos because the MCHAMPs considered have |Q| = 2e and the gluinos sometimes produce singly charged R-hadrons. We lose signal efficiency because the L1 muon trigger is designed to identify muons coming from the IP, whereas the muons from the signal can be very displaced. A further loss in signal efficiency is due to the very strict requirements on the quality of the DSA muon track.
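As a numerical illustration of how these factors combine for the calorimeter search, using the stopping efficiency quoted above (the remaining values below are placeholders, since they are given only in Table 1 and Fig. 2):

```python
# Worked example of the efficiency product for the calorimeter search.
eps_stopping = 0.054    # gluino stopping probability (quoted in the text)
eps_reco     = 0.30     # hypothetical plateau reconstruction efficiency
eps_cscveto  = 0.95     # hypothetical (actual values are in Table 1)
eps_dtveto   = 0.90     # hypothetical (actual values are in Table 1)

eps_signal = eps_stopping * eps_reco * eps_cscveto * eps_dtveto
print(f"eps_signal = {eps_signal:.4f}")
```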
Similarly, the requirement to have exactly one DSA track traversing the upper hemisphere and exactly one DSA track traversing the lower hemisphere further reduces the geometrical acceptance, particularly for the gluino decay, which, unlike the MCHAMP decay, does not produce back-to-back muons. The numbers in Tables 2 and 3 represent the maximum number of signal events that can be measured before applying the different search windows, which depend on the lifetime of the stopped particle.

Background estimation

Since the background sources in both the calorimeter and the muon searches are not well modeled in simulation, we use control samples in data to estimate their contributions after the full selection.

Calorimeter search

After applying the selection criteria in the calorimeter search, some background from cosmic ray muons, beam halo, and calorimeter noise remains in the data. We quantify the probability of background events escaping the background vetoes and thus being observed by this search. These inefficiencies are calculated as follows. We generate a sample of cosmic ray muon events to estimate the rate of such events escaping the cosmic ray muon rejection criteria. The events are generated using CMSCGEN [56], a generator based on the air shower program CORSIKA [57] and validated in a CMS analysis [58]. We require that the events pass the preselection criteria, namely that they have substantial energy deposits in the calorimeter and no CSC segments in the muon endcap system. The cosmic ray muon veto inefficiency is defined as the fraction of preselected simulated cosmic ray muon events that are not rejected by the cosmic ray muon rejection criteria. It is found to be 1 × 10−3. To account for the small difference in occupancy between the cosmic ray muon events in data and MC simulation, we first bin the simulated events in the number of DT and outer barrel RPC hits and calculate the inefficiency bin by bin. Then, we apply the halo veto and the noise veto to a sample of events in data, and bin these data events in the same way as the simulated events. For each bin, we multiply the inefficiency by the number of events in data, giving the binned cosmic ray muon prediction. The nominal cosmic ray muon background prediction is then the sum of the events in each bin. The uncertainty in the cosmic ray muon background is due to the uncertainty in the estimate of muons that escape detection by passing through uninstrumented regions of the CMS detector, which is necessarily estimated from simulation. Since data in the uninstrumented regions are by definition not available to compare to simulation, we define equivalent fiducial volumes of instrumented regions of the muon system. Using these as a proxy for the uninstrumented regions, we assess the reliability of the simulation by comparing data and simulation. We find the average discrepancy between cosmic ray muon data and simulation in the number of detected muons traveling through various fiducial regions in the detector to be about 32%, and we assign this as the systematic uncertainty in the cosmic ray muon background estimate. Thus, we estimate the cosmic ray muon background to be 2.6 ± 0.9 (8.8 ± 3.1) events in the 2015 (2016) data. Because there was a high rate of beam halo production in the 2015 and 2016 data, and because it is possible for halo muons to escape the acceptance of the endcap muon system, the halo background is nonnegligible.
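The binned prediction described above amounts to a weighted sum. A sketch with made-up bin contents:

```python
# Schematic binned cosmic ray muon background prediction (numbers invented).
# Bins are in the number of DT and outer-barrel RPC hits.
sim_pass    = {"0-10": 2, "10-20": 5, "20+": 9}        # simulated events escaping the veto
sim_total   = {"0-10": 2000, "10-20": 4000, "20+": 3000}
data_counts = {"0-10": 1200, "10-20": 2500, "20+": 1800}

prediction = sum(
    (sim_pass[b] / sim_total[b]) * data_counts[b]      # inefficiency x data yield
    for b in sim_total
)
print(f"predicted cosmic ray muon background: {prediction:.2f} events")
```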
We estimate the halo veto inefficiency using a tag-and-probe method [59] applied to a high-purity sample of halo events, selected by requiring one calorimeter jet with |η| < 1.0 and CSC segments in at least two endcap layers of the muon system. Since the rates of beam halo in each beam are not the same, the events are first classified according to whether they originated in the clockwise (−z direction) or the counterclockwise (+z direction) beam. Then, for each class, depending on whether these events have CSC segments in only one endcap or both endcaps of the muon system, they are categorized into events that have only the incoming portion of a halo muon track, events that have only the outgoing portion, and events that have both portions. We define N_IncomingOnly (N_OutgoingOnly) as the number of events that have only an incoming (outgoing) portion of a halo muon track, and N_Both as the number of events that have both. The number of events that escape detection is then N_IncomingOnly × N_OutgoingOnly / N_Both. After binning halo events in their x and y coordinates and performing the classification and calculation discussed above, we estimate the halo veto inefficiency to be 1 × 10−4. We then multiply this inefficiency by the number of halo events vetoed in the search region. To account for the possibility that the x-y binning does not reproduce the actual shape of the inactive or uninstrumented regions of the detector, thus biasing the estimate, we repeat the calculation above, but binning events in φ and r instead. The systematic uncertainty is then defined as the difference between the results from the two binning schemes. We find a halo background estimate of 1.1 ± 0.1 (2.6 ± 0.2) events in the 2015 (2016) data. Finally, the background estimation of instrumental noise is performed using control data in dedicated cosmic runs with no beams in the LHC, which include only cosmic ray muon and noise events. We select cosmic runs taken several days after pp collision runs so that there would be little chance for the signal to appear. After applying all selection criteria to the control data, we observe 2 events in each of the 2015 and 2016 control data. We then subtract the expected cosmic ray muon background from the total event yield, obtaining a noise background estimate of 0.3 +2.4 events. The background estimates are summarized in Table 4.

Muon search

In the muon search, a small number of cosmic ray muon background events remains after applying the full event selection to the data. The cosmic ray muon background is estimated by extrapolating the data from a background-dominated region into the signal region. We apply the full event selection to the data except the ∆tDT criterion and invert the ∆tRPC criterion. We then fit the ∆tDT distribution with the sum of two Gaussian distributions and a Crystal Ball function [60], since ∆tDT is relatively Gaussian with a long asymmetrical tail. Next, we compute the integral of the fit function for ∆tDT > −20 ns. Then, we compute the same integral after having tightened the selection criteria on ∆tRPC to −50 < ∆tRPC < −7.5 ns, then −45 < ∆tRPC < −7.5 ns, and so on in steps of 5 ns up to −10 < ∆tRPC < −7.5 ns. Finally, we plot each integral as a function of the lower selection on ∆tRPC, and fit this with an error function to extrapolate to the ∆tRPC > −7.5 ns region (see Fig. 3). We use an error function fit in order to make a conservative background estimate.
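The sideband extrapolation can be sketched as follows; the yields are invented, but the fit follows the procedure described above (an error function fitted to the integrals as a function of the lower ∆tRPC edge, evaluated in the signal region):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Sketch of the extrapolation described above (all data points illustrative).
def erf_model(x, a, mu, sigma):
    return a * 0.5 * (1.0 + erf((x - mu) / (np.sqrt(2) * sigma)))

# Lower edge of the dt_RPC sideband window (ns) vs. integral of the dt_DT fit
# for dt_DT > -20 ns (hypothetical event yields).
lower_edges = np.array([-50., -45., -40., -35., -30., -25., -20., -15., -10.])
yields      = np.array([0.02, 0.05, 0.09, 0.15, 0.22, 0.30, 0.37, 0.42, 0.45])

popt, _ = curve_fit(erf_model, lower_edges, yields, p0=[0.5, -30.0, 10.0])
print("extrapolated background at dt_RPC > -7.5 ns:", erf_model(-7.5, *popt))
```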
Given this extrapolation, we predict 0.04 background events in the 2015 data, with a negligible statistical uncertainty, and 0.50 ± 0.02 background events in the 2016 data, where the uncertainty given is statistical only. The statistical uncertainty in the background prediction derives from the uncertainty in the error function fit parameters. We checked the background prediction method by repeating the procedure with nonoverlapping ∆tRPC regions and found that the numbers of background events predicted are consistent with the nominal values. The systematic uncertainty in the background prediction is evaluated by repeating the steps above, except changing the fit of the ∆tDT distribution to the sum of two Gaussian distributions and a Landau function [61]. Using the error function fits to extrapolate to ∆tRPC > −7.5 ns gives a prediction of 0.07 ± 0.06 (0.10 ± 0.01) background events in 2015 (2016), where the uncertainty given is statistical only. Thus, the background prediction is 0.04 ± 0.03 (syst) background events in the 2015 data, with a negligible statistical uncertainty, and 0.50 ± 0.02 (stat) ± 0.40 (syst) background events in the 2016 data. Despite the fact that we require exactly one upper hemisphere DSA track and exactly one lower hemisphere DSA track, there could still be some background from two coincident cosmic ray muons; this could occur if the upper hemisphere DSA track of one cosmic ray muon and the lower hemisphere DSA track of the other are both reconstructed. We estimate this contribution from data by finding the rate of events with exactly one reconstructed DSA track in one hemisphere satisfying all of the selection criteria except for the ∆tDT and ∆tRPC criteria, and no tracks in the other hemisphere. Then, making simple assumptions about when the two coincident cosmic ray muons could occur and about the DSA track reconstruction efficiency as a function of BX, we calculate the number of accidentally coincident cosmic ray muons and find it to be negligible.

Systematic uncertainties in the signal efficiency

While the GEANT4 simulation used to derive the stopping probability accurately models both the electromagnetic and nuclear interaction energy loss mechanisms, the relative contributions of these energy loss mechanisms to the stopping probability depend significantly on unknown R-hadron spectroscopy. We do not consider this dependence to be a source of uncertainty for either the calorimeter or the muon search, however, since for any given model the resultant uncertainty in the stopping probability is small. Nevertheless, there are several sources of uncertainty in the signal efficiency measurement.

Calorimeter search

In the calorimeter search, the systematic uncertainty due to the trigger efficiency is negligible since the offline jet energy criterion ensures the data analyzed are well above the turn-on region, so ε_reco is constant. We consider possible systematic uncertainties in ε_CSCveto and ε_DTveto by varying the criteria used to select the HCAL noise events described in Section 5.1. We compare the efficiency of data events to pass these new HCAL noise criteria with that of the nominal HCAL noise selection criteria, and we find that the relative change in the efficiencies is less than 0.2% for both ε_CSCveto and ε_DTveto, and therefore negligible. The uncertainty in the integrated luminosity is estimated as 2.3 (2.5)% for the 2015 (2016) data [62,63].
The relative uncertainty in ε_reco is estimated to be 7.7 (5.2)% for g̃ (t̃) in the 2015 analysis, and 7.5 (5.2)% for g̃ (t̃) in the 2016 analysis. This uncertainty, which is shown by the shaded bands in Fig. 2, is determined by computing the maximal relative difference among points on the plateau. Jets in this analysis are not formed by particles originating from the center of the detector, so the standard uncertainty in the jet energy scale does not apply. Instead, we refer to a study performed on the HCAL during cosmic data taking in 2008 [64]. This study compares the energy of the reconstructed jets in simulated cosmic ray muon events and cosmic ray muon events in data, concluding that the uncertainty in the jet energy in the simulation is about 2%. Moreover, a study conducted with 2012 data [65] compares the data and simulation for dijets originating from the interaction point. The comparison leads to an estimate of <2% for jets striking the HCAL barrel with angles of incidence from 0 to π/3. After rescaling the jet energy by 2%, the signal efficiency varies by 2%. This estimate is conservative since only the yield of signals with jet energy near the offline threshold is affected by the variation of the jet energy, and as a result the uncertainty decreases rapidly as E_g (E_t) increases. We have also considered the uncertainty associated with the jet energy resolution. Studies have shown that the signal yield is insensitive to variations in this uncertainty, and thus that the systematic uncertainty associated with the jet energy resolution is negligible. The total systematic uncertainty in the signal yield is 8.3 (8.2)% in the 2015 (2016) search. The systematic uncertainties are summarized in Table 5.

Muon search

The muon search also has several sources of systematic uncertainty. We consider the systematic uncertainty associated with the MC simulation modeling of the charge divided by the pT (Q/pT) resolution by comparing this resolution in cosmic ray muon data and cosmic ray muon MC simulation. The resolution compares Q/pT of the upper and lower hemisphere tracks: we plot the standard deviation of Gaussian fits of the resolution, as a function of the lower hemisphere track pT, for both cosmic ray muon data and MC simulation. A fit of the ratio between data and MC simulation in this plot for muon tracks in the lower hemisphere with pT > 50 GeV gives a difference between cosmic ray muon data and simulation of 9.0 (5.3)% in the 2015 (2016) analysis. We propagate this resolution uncertainty to an uncertainty in the signal efficiency by smearing the momentum distribution of muons in the signal and observing the corresponding variation in the signal yields. We take the largest variation in the signal yield, namely 13 (7.0)% in the 2015 (2016) analysis, as the systematic uncertainty in the modeling of the Q/pT resolution. There is also a systematic uncertainty associated with the trigger acceptance. Since the largest difference between data and MC simulation in the plateau of the trigger turn-on curves is 13 (2.8)% in the 2015 (2016) analysis, we take these values as the systematic uncertainty in the trigger acceptance. The total systematic uncertainty in the signal yield is 19 (7.9)% in the 2015 (2016) search. The systematic uncertainties are summarized in Table 6.

Results

In the calorimeter search, we predict 4.1 +3.0 −1.0 (11.4 +10.3 −3.1) background events in the 2015 (2016) data.
Four events that pass all of the selection criteria are observed in 2015 data, while 13 events are observed in 2016 data. Both observed numbers of events are consistent with the predicted backgrounds. The observed events are most likely cosmic ray muon or beam halo events, as they each consist of a single reconstructed jet. In the muon search, we predict 0.04 ± 0.03 (0.50 ± 0.40) background events in 2015 (2016). There are zero observed events in both 2015 and 2016 data that pass all of the selection criteria. In both the calorimeter and muon searches, we count the number of observed events in equally spaced log 10 (time) bins of signal lifetime hypotheses from 10 −7 to 10 6 s. For lifetime hypotheses shorter than one LHC orbit of 89 µs, we search within a sensitivity-optimized time window of 1.3 times the stopped particle's lifetime, where the window starts after each pp collision, to avoid the addition of backgrounds for time intervals during which a signal with a given lifetime has a large probability to have already decayed. We assume that the cosmic ray muon background (and noise background in the calorimeter search) is uniformly distributed in time. In the calorimeter search, we estimate the halo background for each lifetime hypothesis by finding the ratio of halo events in the search time window to the total number of halo events, then multiplying this ratio by the halo background estimate for the full trigger livetime. We select the halo events by requiring events to pass all of the selection criteria except the CSC segment veto described above, and then requiring the events to have at least one CSC segment. Then, we determine if these halo events are within the search window by observing how long after the most recent filled BX they occurred. For lifetimes longer than one orbit, the trigger livetime, the expected background, and the number of observed events are independent of the lifetime. The effective integrated luminosity decreases with lifetime for lifetimes longer than one LHC orbit, and the analysis sensitivity degrades with lifetimes longer than one LHC fill because any signal that decays between fills will have few chances to be observed. For lifetime hypotheses shorter than one orbit, both the number of observed events and the expected background depend on the time window considered, which is a fraction of the total trigger livetime. Similarly, the effective integrated luminosity is reduced for short lifetimes. As we gradually increase the lifetime in the hypothesis from the minimal value, we include more observed events in the search window. When the lifetime is shorter than one orbit, to explicitly show the discontinuous changes of the upper limits whenever the expanding search window covers a new observed event, we test two lifetime hypotheses in addition to the equally spaced log 10 (time) ones, for each observed event in these counting experiments. These two additional lifetime hypotheses are the largest lifetime hypothesis for which the event lies outside the time window, and the smallest lifetime hypothesis for which the event is contained within the time window. Tables 7 and 8 show the results of the counting experiment for the 2016 data. The data show no excess over background, and we set upper limits on the signal production cross section (σ) using a hybrid method with the CL s criterion [66,67] to incorporate the systematic uncertainties [68], in both the calorimeter and muon searches. 
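The lifetime-dependent counting can be illustrated with a few lines of code; the inputs are hypothetical, and the window logic follows the description above (a window of 1.3 times the lifetime after each collision, with uniform-in-time backgrounds scaled by the window fraction):

```python
# Sketch of the lifetime-dependent counting experiment (inputs hypothetical).
def counting_experiment(tau, event_times, bkg_total, livetime_window_max):
    """event_times: time since the most recent filled BX for each observed event."""
    window = min(1.3 * tau, livetime_window_max)       # sensitivity-optimized window
    n_obs = sum(1 for t in event_times if t <= window)
    bkg = bkg_total * window / livetime_window_max     # uniform-in-time background
    return n_obs, bkg

# Lifetime hypothesis of 5 us, two observed events, total background of 11.4
# events over a maximum in-orbit live window of 85 us (all values invented).
print(counting_experiment(5e-6, [2e-6, 9e-6], bkg_total=11.4,
                          livetime_window_max=85e-6))
```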
By combining the likelihoods of the search results from the 2015 and 2016 analyses, we set combined upper limits on Bσ for the benchmark signal models. In the calorimeter search, the 95% confidence level (CL) upper limits on Bσ for g̃ (t̃) pair production for the combined 2015 and 2016 data, as a function of the particle's lifetime τ, are shown in Fig. 4, assuming E_g > 130 GeV (m_g̃ − m_χ̃0 > 160 GeV or E_t > 170 GeV). In Fig. 5, the gluino and top squark mass limits are shown, assuming B(g̃ → g χ̃0) = B(g̃ → qq χ̃0) = B(t̃ → t χ̃0) = 100%. We exclude gluinos with m_g̃ < 1385 (1393) GeV that decay via g̃ → g χ̃0 (g̃ → qq χ̃0) and top squarks with m_t̃ < 744 GeV at 95% CL for 10 µs < τ < 1000 s. The corresponding muon search limits on Bσ as a function of lifetime are shown in Fig. 7 for the combined 2015 and 2016 data. The combined 2015 and 2016 95% CL upper limits on Bσ of gluino and MCHAMP pair production as a function of mass are shown in Fig. 8, for lifetimes between 10 µs and 1000 s. Gluinos with masses between 400 and 980 GeV are excluded for lifetimes between 10 µs and 1000 s, assuming B(g̃ → qq χ̃02, χ̃02 → µ+µ− χ̃0) = 100%. MCHAMPs with masses between 100 and 440 GeV and |Q| = 2e are excluded for lifetimes between 10 µs and 1000 s, assuming B(MCHAMP → µ±µ±) = 100%.

Summary

A search has been presented for long-lived particles that stopped in the CMS detector after being produced in proton-proton collisions at a center-of-mass energy of 13 TeV at the CERN LHC. The subsequent decays of these particles to produce calorimeter deposits or muon pairs were looked for during gaps between proton bunches in the LHC beams. In the calorimeter (muon) search, with collision data corresponding to an integrated luminosity of 2.7 (2.8) fb−1 recorded in 2015 and 35.9 (36.2) fb−1 recorded in 2016, spanning a trigger livetime of 135 (155) hours in 2015 and 586 (589) hours in 2016, no excess above the estimated background has been observed. Cross section (σ) and mass limits have been presented at 95% confidence level (CL) on gluino (g̃), top squark (t̃), and multiply charged massive particle (MCHAMP) production over 13 orders of magnitude in the mean proper lifetime of the stopped particle. In the calorimeter search, combining the results from the 2015 and 2016 analyses and assuming a branching fraction (B) of 100% for g̃ → g χ̃0 (g̃ → qq χ̃0), where χ̃0 is the lightest neutralino, gluinos with lifetimes from 10 µs to 1000 s and m_g̃ < 1385 (1393) GeV have been excluded, for a cloud model of R-hadron interactions and for the daughter gluon energy E_g > 130 GeV (m_g̃ − m_χ̃0 > 160 GeV). Under similar assumptions, for the daughter top quark energy E_t > 170 GeV and B(t̃ → t χ̃0) = 100%, long-lived top squarks with lifetimes from 10 µs to 1000 s and m_t̃ < 744 GeV have been excluded. These are the first limits on stopped long-lived particles at 13 TeV and the strongest limits to date.
Distributed Strategies Made Easy

Distributed/concurrent strategies have been introduced as special maps of event structures. As such they factor through their "rigid images", themselves strategies. By concentrating on such "rigid image" strategies we are able to give an elementary account of distributed strategies and their composition, resulting in a category of games and strategies. This is in contrast to the usual development, where composition involves the pullback of event structures explicitly and results in a bicategory. It is shown how, in this simpler setting, to extend strategies to probabilistic strategies; and indicated how through probability we can track nondeterministic branching behaviour that one might otherwise think lost irrevocably in restricting attention to "rigid image" strategies.

Introduction

Traditionally, in understanding and analysing a large system, whether it be in computer science, physics, biology or economics, the system's behaviour is thought of as going through a sequence of actions as time progresses. This is bound up with our experience of the world as individuals; in our conscious understanding of the world we experience and narrate our individual history as a sequence, or total order, of events, one after the other. However, a complex system is often much more than an individual agent. It is better thought of as several or many agents interacting together, distributed over various locations. In this case it can be fruitful to abandon the view of its behaviour as caught by a total order of events and instead think of the events of the system as comprising a partial order. The partial order expresses the causal dependency between events, how an event depends on possibly several previous events. The view that causal dependency should be paramount over an often incidental temporal order has been discovered and rediscovered in many disciplines: in physics, in the understanding of the causal structure of space-time; in biology and chemistry, in the description of biochemical pathways; in computer science, originally in the work of Petri on Petri nets, and later in the often more mathematically amenable event structures.

Interacting systems are often represented mathematically via games. A system operates in an unknown environment, so a prescription for its intended behaviour can often be expressed as a strategy in which the system is Player against (an unpredictable) Opponent, standing for the environment. Games and their strategies are ubiquitous. They appear in logic (proof theory, set theory, ...), computer science (semantics, algorithmics, ...), biology, economics, etc. They codify the mathematics of interacting systems. But they almost always follow the traditional line of representing the history of a play of the game as a sequence of moves, most often alternating between Player and Opponent. Until recently there was no mathematical theory of games based on partial orders of causal dependency between move occurrences. This handicapped their use in modelling and analysing a system of distributed agents.
What was lacking was a mathematical theory of distributed games in which Player and Opponent are more accurately thought of as teams of players, distributed over different locations, able to move and communicate with each other. Although there are glimpses of such a mathematical theory of distributed games in earlier work of Abramsky, Melliès and Mimram [1,13], Faggian and Piccolo [8], and others, a breakthrough occurred with the systematic use of event structures to formalise distributed games and strategies [14]. This meant that we could harness the mathematical techniques developed around event structures in an early mathematical foundation for work on synchronising processes [18]; the move from total to partial orders brings in its wake a lot of technical difficulty and potential for undue complexity unless it is done artfully.

But here we meet an obstacle for many people. Distributed/concurrent strategies have been based on maps of event structures, and composition on pullback, which in the case of event structures has to be defined rather indirectly. Then one obtains not a category but a bicategory of games and strategies. At what seems like an increasingly slight cost, a more elementary treatment can be given. Its presentation is the purpose of this article. The maps and pullbacks are still there of course, but pushed into the background.

The realisation that a more elementary presentation will often suffice has been a gradual one. It is based on the fact that a strategy, presented as a map of event structures, has a "rigid image" in the game, and that in many cases this image can stand as a proxy for the original strategy [25]. True, some branching behaviour is lost, just as it, and possible deadlock and divergence, can be lost in the composition of strategies. But extra structure on strategies generally remedies this. For example, the introduction of probability to strategies allows the detection of divergence in composition, or hidden branching, through leaks of probability. One can go far with rigid images of strategies. They permit the elementary development presented here.

In their CONCUR'16 paper [2] Castellan and Clairambault used the simple presentation of "rigid image" strategies given here. Meanwhile rigid images of strategies had come to play an increasing role in Winskel's ECSYM notes [25]. Before this, Nathan Bowler recognised essentially the same subcategory of games and "rigid image" strategies within the bicategory of concurrent games and strategies. (At the time, Winskel thought that too much of the nondeterministic branching behaviour would be lost irrecoverably to be very enthusiastic.) Finally, an apology: we obtained the results here by specialising more general results on strategies to their rigid images [25]; elementary proofs of the results would be desirable for a fully self-contained presentation, and should be written up shortly.

Event structures

An event structure comprises (E, ≤, Con), consisting of a set E of events partially ordered by ≤, the relation of causal dependency, together with a nonempty consistency relation Con consisting of finite subsets of E; the relations satisfy several axioms:

[e] =def {e′ | e′ ≤ e} is finite for all e ∈ E,
{e} ∈ Con for all e ∈ E,
Y ⊆ X ∈ Con implies Y ∈ Con, and
X ∈ Con & e ≤ e′ ∈ X implies X ∪ {e} ∈ Con.

There is an accompanying notion of state, or history, those events that may occur up to some stage in the behaviour of the process described. A configuration is a, possibly infinite, set of events x ⊆ E which is: consistent, X ⊆ x and X is finite implies X ∈ Con; and down-closed, e′ ≤ e ∈ x implies e′ ∈ x.
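For readers who prefer something executable, a toy event structure and its finite configurations can be enumerated directly from the definitions above; the encoding (covering pairs plus maximal consistent sets) is one choice among many:

```python
from itertools import combinations

# Toy finite event structure: events, a causal order given as pairs, and a
# consistency relation listed by its maximal consistent sets (subsets of
# consistent sets are consistent).
events = {"a", "b", "c"}
leq = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b")}   # a <= b; c concurrent
maximal_consistent = [{"a", "b"}, {"a", "c"}]            # b and c conflict

def consistent(xs):
    return any(set(xs) <= m for m in maximal_consistent)

def down_closed(xs):
    return all(e1 in xs for (e1, e2) in leq if e2 in xs)

configurations = [
    set(xs)
    for r in range(len(events) + 1)
    for xs in combinations(sorted(events), r)
    if consistent(xs) and down_closed(xs)
]
print(configurations)   # [set(), {'a'}, {'c'}, {'a', 'b'}, {'a', 'c'}]
```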
Two events e, e′ are considered to be causally independent, and called concurrent, if the set {e, e′} is in Con and neither event is causally dependent on the other; then we write e co e′. In games the relation of immediate dependency e ⋖ e′, meaning e and e′ are distinct with e ≤ e′ and no event in between, plays a very important role. We write [X] for the down-closure of a subset of events X. Write C∞(E) for the configurations of E and C(E) for its finite configurations. (Sometimes we shall need to distinguish the precise event structure to which a relation is associated and write, for instance, ≤_E, ⋖_E or co_E.)

We can describe a computation path by an elementary event structure, which is a partial order p = (|p|, ≤_p) for which the set {e′ ∈ |p| | e′ ≤_p e} is finite for all e ∈ |p|. We can regard an elementary event structure as an event structure in which the consistency relation consists of all finite subsets of events. There is a useful subpath order of rigid inclusion of one elementary event structure in another. Let p = (|p|, ≤_p) and q = (|q|, ≤_q) be elementary event structures. Write p ↪ q iff |p| ⊆ |q| & ∀e ∈ |p|, e′ ∈ |q|. e′ ≤_p e ⟺ e′ ≤_q e. We shall often view a configuration x of E as an elementary event structure, viz. a partial order with underlying set x and partial order the causal dependency of E restricted to x.

In an interactive context a configuration x may be subject to causal dependencies beyond those of E. It will become an elementary event structure p = (|p|, ≤_p) comprising an underlying set |p| = x with a partial order ≤_p which augments that from E: e′ ≤_E e implies e′ ≤_p e, for all e, e′ ∈ x. Write Aug(E) for the set of such augmentations associated with E. The order of rigid inclusion of one augmentation in another expresses when one augmentation is a sub-behaviour of another.

It will be useful to combine augmentations, in effect subjecting a configuration simultaneously to the causal dependencies of the two augmentations, provided this does not lead to causal loops. Define a key partial operation: for p, q ∈ Aug(E) with a common underlying set, their combination p ⋎ q is the augmentation with the same underlying set and with order the transitive closure of ≤_p ∪ ≤_q, defined whenever this closure is a partial order.

In fact we can see Aug(E) as an event structure in its own right. Its events are those augmentations with a top event, their causal dependency and consistency induced by rigid inclusion [20]. The remark is an instance of a general fact:

Proposition 2. A rigid family R comprises a non-empty subset of finite elementary event structures which is down-closed w.r.t. rigid inclusion, i.e. p ↪ q ∈ R implies p ∈ R. A rigid family determines an event structure Pr(R) whose order of finite configurations is isomorphic to (R, ↪). The event structure Pr(R) has events those elements of R with a top event; its causal dependency is given by rigid inclusion; and its consistency by compatibility w.r.t. rigid inclusion. The order isomorphism θ_R : C(Pr(R)) ≅ R is given by θ_R(x) = ⋃x, the union of (the consistent) augmentations in x ∈ C(Pr(R)).

Event structures with polarity

An event structure with polarity comprises (A, pol) where A is an event structure with a polarity function pol_A : A → {+, −, 0} ascribing a polarity + (Player), − (Opponent) or 0 (neutral) to its events. The events correspond to (occurrences of) moves. It will be technically useful to allow events of neutral polarity; they arise, for example, in a play between a strategy and a counterstrategy. A game shall be represented by an event structure with polarity in which no moves are neutral.

Notation 3.
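The partial combination of augmentations is, computationally, an acyclicity check on the union of the two orders. A small sketch, with the relations given as sets of pairs:

```python
# Sketch of the partial combination of augmentations described above: take the
# union of the two causal orders; the result is defined only if the union has
# no causal loops (so its transitive closure is a partial order).
def combine(order_p, order_q, universe):
    edges = set(order_p) | set(order_q)
    remaining = set(universe)
    while remaining:               # Kahn-style acyclicity check
        minimal = [e for e in remaining
                   if not any(a in remaining and a != e
                              for (a, b) in edges if b == e)]
        if not minimal:
            return None            # causal loop: combination undefined
        remaining -= set(minimal)
    return edges                   # defined; transitive closure gives the order

p = {("a", "b")}                       # a before b
q = {("b", "a")}                       # b before a: loops with p
print(combine(p, q, {"a", "b"}))       # None
print(combine(p, set(), {"a", "b"}))   # {('a', 'b')}
```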
In an event structure with polarity (A, pol), with configurations x and y, write x ⊆− y to mean inclusion in which all the intervening events are moves of Opponent. Write x ⊆+ y for inclusion in which the intervening events are neutral or moves of Player.

Operations

We introduce two fundamental operations on event structures with polarity. We shall adopt the same operations for elementary event structures, and also for configurations, regarding a configuration as an elementary event structure with the order of the ambient event structure.

Dual

The dual, A⊥, of A, an event structure with polarity, comprises the same underlying event structure A but with a reversal of polarities, events of neutral polarity remaining neutral. We shall implicitly adopt the view of Player and understand a strategy in a game A as a strategy for Player. A counterstrategy in a game A is a strategy for Opponent in the game A, i.e. a strategy (for Player) in the game A⊥.

Simple parallel composition

This operation simply juxtaposes two event structures with polarity. Let (A, ≤_A, Con_A, pol_A) and (B, ≤_B, Con_B, pol_B) be event structures with polarity. The events of A∥B are ({1} × A) ∪ ({2} × B), their polarities unchanged, with the only relations of causal dependency given by (1, a) ≤ (1, a′) iff a ≤_A a′ and (2, b) ≤ (2, b′) iff b ≤_B b′; a finite subset of events is consistent when its components in A and B are. The empty event structure with polarity, written ∅, is the unit w.r.t. ∥.

Strategies

A strategy in a game will be a (special) subset of plays in the game.

Definition 4. A play in A, an event structure with polarity, comprises an augmentation: a finite elementary event structure p = (|p|, ≤_p) with underlying set |p| ∈ C(A), which may augment the causal dependency of A with extra dependencies provided it does so courteously: any immediate dependency e ⋖_p e′ not already present in A goes from an Opponent or neutral move to a Player or neutral move. Write Plays(A) for the set of plays in A. Note A, and so p, may involve neutral moves. If A is a game, so with no neutral moves, the only augmentations allowed of a play p to the immediate causal dependency of A are those of the form ⊖ ⋖ ⊕.

The order of rigid inclusion between plays, p ↪ q, expresses that p is a subplay of q. We shall write p ↪+ q iff p ↪ q & |p| ⊆+ |q|, so when the extension only involves neutral or Player moves, and similarly p ↪− q when only Opponent moves are involved.

Definition 5. A bare strategy in A, an event structure with polarity, is a rigid family of plays: a nonempty subset σ ⊆ Plays(A) satisfying

(down-closure) q ↪ p ∈ σ implies q ∈ σ, and
(receptivity) p ∈ σ and |p| ⊆− y in C(A) implies there is a q ∈ σ with p ↪ q and |q| = y.

(Note that q is unique by courtesy.) Write σ : A when σ is a bare strategy of A. When A is a game, so an event structure with polarity without neutral moves, we say σ is a strategy. One simple example of a strategy σ : A in a game A is got by taking σ to consist of all the finite configurations of A, regarded as elementary event structures in which their order of causal dependency is inherited from A. (Bare strategies, with neutral events, have been called "partial strategies" in [25] and "uncovered strategies" in [16].) We shall regard a strategy in the compound game A⊥∥B, where A and B are games, as a strategy from the game A to the game B [7,12].

Copycat

We shall shortly define the composition of strategies. Identities w.r.t. composition are given by copycat strategies. Let A be a game. The copycat strategy cc_A : A⊥∥A is an instance of a strategy. We obtain copycat from the finite configurations of an event structure CC_A based on the idea that Player moves, of +ve polarity, in one component of the game A⊥∥A always copy previous corresponding moves of Opponent, of −ve polarity, in the other component.
Let A be an event structure with polarity. Then CC_A is the event structure with polarity with underlying set A⊥∥A, polarities inherited, and causal dependency the least partial order extending that of A⊥∥A in which each Player move causally depends on its corresponding Opponent copy in the other component; a finite set of its events is consistent when its down-closure has consistent components in A⊥∥A. Moreover, the finite configurations of CC_A, with the inherited order, are plays in A⊥∥A. The copycat strategy cc_A : A⊥∥A is defined by taking cc_A = {x | x ∈ C(CC_A)}. In other words, cc_A consists of all the finite configurations of CC_A, each understood as a finite partial order through inheriting the causal dependency of CC_A.

Composition of strategies

A play of a strategy σ in a game A⊥∥B and a play of a strategy τ in a game B⊥∥C can interact at the common game B, where the two strategies adopt complementary views, in which one sees a move of Player the other sees a move of Opponent, and vice versa. In effect, the two plays synchronise at common moves in B, one strategy being receptive to the Player moves of the other. Together they produce a play in the event structure with polarity A⊥∥B0∥C; here the event structure with polarity B0 has the same underlying event structure as B but where all events now carry neutral polarity. This is because the interaction over the game B produces moves which are no longer open to Player or Opponent. We can express this interaction through a partial operation defined as follows. Let p ∈ Plays(A⊥∥B) and q ∈ Plays(B⊥∥C) with |p| = x_{A⊥}∥x_B and |q| = y_{B⊥}∥y_C. Take q ⊛ p to be the elementary event structure, when it exists, with underlying set x_{A⊥}∥x_B∥y_C (the events of x_B now neutral) and with order the transitive closure of the orders of p and q, where we understand the configurations y_C and x_{A⊥} to inherit the partial order of their ambient event structures. Notice that q ⊛ p is defined only if x_B = y_{B⊥}, and then only if no causal loops are introduced.

Define the projection of a play p in A⊥∥B0∥C, with |p| = x_{A⊥}∥x_B∥x_C, to a play p↓ in A⊥∥C, to be the restriction of the order on p to the set x_{A⊥}∥x_C.

Let σ : A⊥∥B and τ : B⊥∥C be strategies. Define their composition τ⊙σ = {(q ⊛ p)↓ | p ∈ σ, q ∈ τ, q ⊛ p is defined}. It is sometimes useful to consider their composition without hiding, the interaction τ ⊛ σ = {q ⊛ p | p ∈ σ, q ∈ τ, q ⊛ p is defined}, which is like the strategy τ⊙σ, but before hiding the neutral moves over the game B.

Lemma 10. The interaction of strategies σ : A⊥∥B and τ : B⊥∥C yields a bare strategy τ ⊛ σ : A⊥∥B0∥C.

Theorem 11. The composition of strategies σ : A⊥∥B and τ : B⊥∥C yields a strategy τ⊙σ : A⊥∥C. Taking objects to be games and arrows from a game A to a game B to be strategies in the game A⊥∥B, with composition as above, yields a category in which copycat is identity. (This is in contrast to the bicategory of [14].)

Deterministic strategies

Let A be an event structure with polarity. A bare strategy σ : A is deterministic iff whenever p ↪+ q and p ↪ r, with q, r ∈ σ, the combination q ⋎ r is defined and belongs to σ. The interaction of deterministic bare strategies is deterministic. Similarly, the composition of deterministic strategies is deterministic. However, for general games A, the copycat strategy need not be deterministic. It will be deterministic iff A is race-free, i.e. whenever x ∪ {a} and x ∪ {a′} are configurations of A, with a and a′ distinct moves of opposite polarity, then x ∪ {a, a′} is also a configuration. Restricting to race-free games as objects and deterministic strategies as arrows we obtain a category. Deterministic strategies coincide with the receptive ingenuous strategies of Melliès and Mimram [13] and are closely related to the strategies of Faggian and Piccolo [8], and to Abramsky and Melliès' strategies as closure operators [1].

The subcategory of deterministic strategies on games which are countable and purely positive, i.e. for which there are no Opponent moves, is isomorphic to that of Berry's dI-domains and stable functions. If we restrict the subcategory further to objects in which causal dependency is simply the identity relation we obtain Girard's qualitative domains with linear maps, and if yet further we insist that consistency Con is determined in a binary fashion, i.e. X ∈ Con ⟺ ∀a1, a2 ∈ X.
{a1, a2} ∈ Con, we obtain his coherence spaces. In this sense we can see strategies as extending the world of stable domain theory. The relationship with the broader world of traditional domain theory, following in the footsteps of Scott, is more subtle. In [23], it is shown how a strategy determines a presheaf and a strategy between games a profunctor, giving a relationship with a form of generalised domain theory [10,4].

Strategies as maps of event structures

A strategy σ in a game A is a rigid family and so, by Proposition 2, determines an event structure S whose events are those plays in σ which have a top element. Each top element is an event of the game A, so there is a function from the events of S to those of A; this function is a total map of event structures and indeed a concurrent strategy in the sense of [14]. Not all the concurrent strategies of [14] are obtained this way. But any concurrent strategy of [14] has a rigid image [25] which corresponds to a strategy as presented here.

Though not essential to the rest of the paper, we now explain this summary of the relation with the concurrent strategies of [14] in more detail. Recall a (total) map of event structures f : E → E′ is a function f from E to E′ such that the image of a configuration x is a configuration f x and any event of f x arises as the image of a unique event of x. Maps compose as functions. Write E for the ensuing category.

A map f : E → E′ reflects causal dependency locally, in the sense that if e, e′ are events in a configuration x of E for which f(e′) ≤ f(e) in E′, then e′ ≤ e also in E; the event structure E inherits causal dependencies from the event structure E′ via the map f. Consequently, a map f : E → E′ preserves concurrency: if two events are concurrent, e1 co_E e2, then their images are also concurrent, f(e1) co_{E′} f(e2). In general a map of event structures need not preserve causal dependency; when it does we say it is rigid. Write E_r for the subcategory of rigid maps.

The inclusion functor E_r ↪ E has a right adjoint ([20], Proposition 2.3). There is an obvious map of event structures ε_B : Pr(Aug(B)) → B taking an event of Pr(Aug(B)) to its top element. Post-composition by ε_B yields a bijection furnishing the data required for an adjunction. Hence Pr(Aug(_)) extends to a right adjoint to the inclusion E_r ↪ E. From the bijection of the adjunction, we have a correspondence between maps f : A → B and rigid maps f̄ : A → Pr(Aug(B)). The adjunction is unchanged by the addition of polarity to event structures; maps are assumed to preserve polarity.

A strategy determines a map, and indeed a "concurrent strategy" as in [14]:

Proposition 12. Let σ : A be a strategy in a game A. The function f_σ : Pr(σ) → A, taking an event of Pr(σ) to its top element, is a map of event structures with polarity. It is a concurrent strategy in the sense of [14], viz.
a map which is courteous: s′ ⋖ s with pol(s′) = + or pol(s) = − implies f_σ(s′) ⋖ f_σ(s) (courtesy is called innocence in [14]); and receptive: f_σ x ⊆− y in C(A), for x ∈ C(Pr(σ)), implies there is a unique x′ ∈ C(Pr(σ)) such that x ⊆ x′ and f_σ x′ = y.

Not all the concurrent strategies of [14] are obtained in the manner of Proposition 12. However, from any concurrent strategy f : S → A in a game A there is a strategy σ : A obtained as the image of the finite configurations of S as augmentations of A; recall from Proposition 2 the order isomorphism θ : C(Pr(Aug(A))) ≅ Aug(A). From the definition of σ, the rigid map f̄ : S → Pr(Aug(A)) cuts down to a rigid map f̄ : S → Pr(σ). The concurrent strategy f factors through its "rigid image": f = f_σ ∘ f̄, where the rigid image f_σ is itself a concurrent strategy. The simple strategies of this article correspond to such rigid image strategies. The determination of a strategy, call it σ_f, from a concurrent strategy f is functorial: identity, copycat, strategies are preserved, and if concurrent strategies f and g are composable then σ_{g⊙f} = σ_g ⊙ σ_f. Often extra structure on a concurrent strategy f can be pushed forward along the rigid map f̄ to its rigid image, so to a simple strategy of this article. For example, probabilistic structure (in the form of a valuation; see the next section) making a concurrent strategy probabilistic can be pushed forward along the rigid map f̄ from S to Pr(σ_f), and so to σ_f [25]. As a consequence, in the next section, we are able to develop probabilistic strategies in the simpler framework of this paper.

A major result of [14] is that receptivity and courtesy (called innocence there) are necessary and sufficient conditions in order for copycat to behave as identity w.r.t. composition; this motivated the definition of concurrent strategy there. That article directly spawned work on games with winning conditions and payoff [5,6], imperfect information [21], probabilistic strategies [24], "stopping configurations" [3] and "essential events" [16], the latter two concerned with capturing the liveness behaviour of concurrent strategies viewed as processes. (Concurrent strategies are currently being extended to cope with quantum computation of the kind addressed in the quantum lambda calculus [15].) As an indication of how much of the work ensuing from [14] could be reformulated in terms of the simple strategies on which this article concentrates, we next address the issue of how to make strategies probabilistic. Probabilistic strategies developed in this simpler framework, instead of that of concurrent strategies [14], do not suffer from any loss of information, e.g. with regard to expected payoff.

Probabilistic strategies

As a first step we describe how to make event structures probabilistic, in itself an issue, as event structures lie outside the models of probabilistic processes most commonly considered.

Probabilistic event structures

A probabilistic event structure essentially comprises an event structure together with a continuous valuation on the Scott-open sets of its domain of configurations.
The continuous valuation assigns a probability to each open set and can then be extended to a probability measure on the Borel sets [11]. However open sets are several levels removed from the events of an event structure, and an equivalent but more workable definition is obtained by considering the probabilities of sub-basic open sets, generated by single finite configurations; for each finite configuration x this specifies Prob(x), the probability of obtaining the events x, so as a result a configuration which extends the finite configuration x. Such valuations on configurations determine the continuous valuations from which they arise, and can be characterised through the device of "drop functions" which measure the drop in probability across certain generalised intervals. The characterisation yields a workable general definition of probabilistic event structure as event structures with configuration-valuations, viz. functions from finite configurations to the unit interval for which the drop functions are always nonnegative [22].

In detail, a probabilistic event structure comprises an event structure E with a configuration-valuation, a function v from the finite configurations of E to the unit interval which is

(normalized) v(∅) = 1, and satisfies the

(drop condition) d_v[y; x₁, ⋯, xₙ] ≥ 0 when y ⊆ x₁, ⋯, xₙ for finite configurations y, x₁, ⋯, xₙ,

where the "drop" across the generalized interval starting at y and ending at one of the x₁, ⋯, xₙ is given by

d_v[y; x₁, ⋯, xₙ] := v(y) − Σ_I (−1)^(|I|+1) v(⋃_{i∈I} xᵢ),

the index I ranging over nonempty I ⊆ {1, ⋯, n} such that the union ⋃_{i∈I} xᵢ is a configuration. The "drop" d_v[y; x₁, ⋯, xₙ] gives the probability of the result being a configuration which includes the configuration y and does not include any of the configurations x₁, ⋯, xₙ.

If x ⊆ y in C(E), then, provided v(x) ≠ 0, the conditional probability Prob(y ∣ x) is v(y)/v(x); this is the probability that the resulting configuration includes the events y conditional on it including the events x.

Probability with an Opponent

This prepares the ground for a definition of probabilistic distributed strategies. Firstly though, we should restrict to race-free games, in particular because without copycat being deterministic there would be no probabilistic identity strategies. A probabilistic strategy in a game A is a strategy σ : A in which we endow σ with probability, while taking account of the fact that in the strategy Player can't be aware of the probabilities assigned by Opponent. To this end we notice that σ, being a rigid family, has the form of a family of configurations. We can't just regard σ as a probabilistic event structure, however. This is because Player is oblivious to the probabilities of Opponent moves beyond those determined by causal dependencies of σ. An appropriate valuation for σ needs to take account of Opponent moves. It turns out to be useful to extend the concept of valuation to bare strategies, which may also have neutral moves.

Let σ : A be a bare strategy in A, an event structure with polarity; so both A and σ may involve neutral moves. A valuation on σ is a function v, from σ to the unit interval, which is

(normalized) v(∅) = 1,

(oblivious) v(p) = v(q) when p ↪⁻ q for p, q ∈ σ, and satisfies the

(drop condition) d_v[q; p₁, ⋯, pₙ] ≥ 0 when q ↪⁺ p₁, ⋯, pₙ for elements of σ.

When p ↪⁺ q in σ, we can still express Prob(q ∣ p), the conditional probability of the additional neutral or Player moves making the play q given p, as v(q)/v(p), provided v(p) ≠ 0.
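To see the drop condition at work, here is a small worked instance (our own unwinding of the definition, not an example from the paper). For two extensions x₁, x₂ of a finite configuration y whose union is a configuration, the drop is the inclusion-exclusion quantity

d_v[y; x₁, x₂] = v(y) − v(x₁) − v(x₂) + v(x₁ ∪ x₂),

and requiring d_v ≥ 0 is exactly the condition for v to assign a nonnegative probability to the result including y but neither x₁ nor x₂. The same reading applies to the drop condition on valuations of bare strategies just defined, restricted to positive extensions.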
The game being race-free and the valuation being oblivious ensure the probabilistic independence of Player or neutral moves from the Opponent moves with which they are concurrent. For a race-free game A, the copycat strategy is deterministic and we obtain a valuation on cc_A by taking v_{cc_A} to be the function which is constantly 1.

Composing probabilistic strategies

Let A, B and C be race-free games. Assume σ : A⊥∥B, with valuation v_σ, and τ : B⊥∥C, with valuation v_τ, are probabilistic strategies. To define their interaction and composition we must define the valuations v_τ ⊛ v_σ on τ ⊛ σ and v_τ ⊙ v_σ on τ ⊙ σ, respectively.

Lemma 13. For r ∈ τ ⊛ σ, defining (v_τ ⊛ v_σ)(r) = v_σ(r_σ) · v_τ(r_τ), where r_σ and r_τ are the components of r over σ and τ respectively, yields a valuation on τ ⊛ σ.

Theorem 15. For race-free games A, B and C, we define the composition of probabilistic strategies σ from A to B, with valuation v_σ, and τ from B to C, with valuation v_τ, to be τ ⊙ σ, with valuation v_τ ⊙ v_σ. Taking objects to be games and arrows from a game A to a game B to be probabilistic strategies in the game A⊥∥B, with composition as above, yields a category in which copycat, with the constantly-1 valuation, is identity.

The next example illustrates how through probability leaks we can track deadlocks, or divergences, that can arise in the composition of strategies. (Such branching behaviour might otherwise be lost in the composition of strategies and through concentrating on rigid images.)

Example 16. Let B be the game consisting of two concurrent Player events b₁ and b₂, and C the game with a single Player event c. We illustrate the composition of two probabilistic strategies σ from the empty game ∅ to B and τ from B to C. The strategy σ : ∅⊥∥B plays b₁ with probability 2/3 and b₂ with probability 1/3 (and plays both with probability 0). The strategy τ : B⊥∥C does nothing if just b₁ is played and plays the single Player event c of C with certainty, probability 1, if b₂ is played. Their composition yields the strategy τ ⊙ σ : ∅⊥∥C which plays c with probability 1/3, so has a 2/3 chance of doing nothing.

One way in which the probabilistic interaction of strategies is important is in calculating the expected outcome of the competition between a probabilistic strategy and a counterstrategy, the subject of the following example.

Example 17. Given a probabilistic strategy σ : A, with valuation v_σ, and a counterstrategy τ : A⊥, with valuation v_τ, we obtain a valuation v_τ ⊛ v_σ on their interaction τ ⊛ σ : A⁰, where now all the events of the interaction are neutral. Via the order isomorphism θ : C(Pr(τ ⊛ σ)) ≅ τ ⊛ σ we obtain a configuration-valuation (v_τ ⊛ v_σ) ∘ θ, making Pr(τ ⊛ σ) a probabilistic event structure. As such we get a probability measure μ_{σ,τ} on the Borel sets of its configurations. Assuming a payoff given as a Borel measurable function X from C^∞(A) to the real numbers, the expected payoff is obtained as the Lebesgue integral

E_{σ,τ}(X) := ∫_{x ∈ C^∞(Pr(τ⊛σ))} X(x̄) dμ_{σ,τ}(x),

where x̄ ∈ C^∞(A) is the configuration of A over which x ∈ C^∞(Pr(τ ⊛ σ)) lies.
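As a sanity check on Example 16 (our own arithmetic, using the product form of the valuation as reconstructed in Lemma 13 above), the composite valuation can be computed directly:

v_{τ⊙σ}({c}) = v_σ({b₂}) · v_τ({b₂, c}) = (1/3) · 1 = 1/3,

while the play in which σ outputs b₁ carries weight 2/3 and produces no move of C; that 2/3 is the probability "leak" recording the deadlock. With a payoff X on C^∞(C) taking value 1 if c is played and 0 otherwise, the expected payoff in the sense of Example 17 is then E = (1/3)·1 + (2/3)·0 = 1/3.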
Conclusion

We have provided an elementary account of a form of distributed strategies by choosing only to represent the rigid images of concurrent strategies. Is anything irredeemably lost through this simplification? (In the sense that it can't be regained through adding extra structure, in the way that probabilistic structure recovers hidden branching.) Not obviously. Though, for instance, we couldn't exactly reproduce the result of [3], establishing a bijection between events of a strategy and derivations in an operational semantics. Though an elementary account is more accessible, a more abstract, categorical account can be helpful too. As often, there are pros and cons. To some extent, one pays for the elementary treatment in not seeing the abstract picture, the wood for the trees. On another tack, the account of strategies here reveals an alternative way to develop strategies while capturing nondeterministic branching explicitly, viz. as (pre)sheaves over plays rather than subsets, in the form of rigid families. For instance, we can recover the concurrent strategies of [14] as certain separated presheaves in the manner of [19]; this brings us close to the developments of Hirschowitz and Pous [9] and Ong and Tsukada [17].

For c ∈ A⊥∥A we use c̄ to mean the corresponding copy of c, of opposite polarity, in the alternative component, i.e. (1, a)‾ = (2, a) and (2, a)‾ = (1, a). Define CC_A to comprise the event structure with polarity A⊥∥A together with extra causal dependencies c̄ ≤_{CC_A} c for all events c with pol_{A⊥∥A}(c) = +. Take a finite subset to be consistent in CC_A iff its down-closure w.r.t. the relation ≤_{CC_A} is consistent in A⊥∥A.

Example 6. We illustrate the construction of CC_A for the event structure A comprising the single immediate dependency a₁ ⇀ a₂ from an Opponent move a₁ to a Player move a₂. The event structure CC_A is obtained from A⊥∥A by adjoining the additional immediate dependencies shown (figure omitted).

¹ A Scott-open subset of configurations is upwards-closed w.r.t. inclusion and such that if it contains the union of a directed subset S of configurations then it contains an element of S. A continuous valuation is a function w from the Scott-open subsets of C^∞(E) to [0, 1] which is (normalized) w(C^∞(E)) = 1; (strict) w(∅) = 0; (monotone) U ⊆ V ⇒ w(U) ≤ w(V); (modular) w(U ∪ V) + w(U ∩ V) = w(U) + w(V); and (continuous) w(⋃_{i∈I} Uᵢ) = sup_{i∈I} w(Uᵢ), for directed unions.
Size Effect on Recycled Concrete Strength and Its Prediction Model Using Standard Neutrosophic Number

School of Civil and Transportation Engineering, Ningbo University of Technology, Ningbo 315211, China
Engineering Research Center of Industrial Construction in Civil Engineering of Zhejiang, Ningbo University of Technology, Chongqing, China
Key Laboratory of New Technology for Construction of Cities in Mountain Area, Ministry of Education, School of Civil Engineering, Chongqing University, Chongqing 400045, China

Introduction

Driven by the demands of environmental protection, research on recycled aggregate concrete (RAC) has been carried out in many countries in recent years. At present, the research on recycled aggregate concrete mainly focuses on its mechanical properties or durability. Barhmaiah et al. [1] investigated the effect of recycled aggregate on the strength of concrete and compared the results with virgin aggregate concrete. Wu and Jin [2] studied the compressive fatigue behavior of compound concrete containing demolished concrete lumps and recycled aggregate concrete. It was found that satisfactory compressive strength can be attained when the total waste content in RLAC reaches 54.6%. Akono et al. [3] investigated the basic creep and fracture response of fine recycled aggregate concrete using nanoscale mechanical characterization modules integrated with nonlinear micromechanical modelling and machine learning methods. It has been shown that the fracture toughness of fine recycled aggregate concrete is 8% lower than that of plain concrete. Sasanipour et al. [4] investigated the effects of a surface pretreatment method, soaking recycled concrete aggregates in silica fume slurry, on the mechanical and durability properties of recycled aggregate concrete. Results revealed that using pretreated recycled aggregates significantly improved the durability properties of mixes, especially chloride ion penetration and electrical resistivity. Zhu et al. [5] investigated the long-term performance of recycled aggregate concrete beams for a period of 3045 days and the bending behavior of test beams after the sustained load is removed. It was found that the RAC beams exhibit more significant stiffness degradation characteristics in the flexural test. Mi et al. [6] studied the influences of the compressive strength ratio between original concrete and recycled aggregate concrete on the slump, compressive strength, and carbonation resistance of recycled aggregate concrete. Results revealed that adjusting the compressive strength ratio can furnish different slump, compressive strength, and carbonation depth values, while also reducing mortar inhomogeneities. Wang et al. [7] studied the influences and mechanisms of the single and coupled effects of carbonation, dry-wet cycles, and freeze-thaw cycles on the durability of three types of recycled aggregate concrete. The results showed that carbonation and dry-wet cycles can improve the pore tortuosity and reduce the connectivity of pores. As is well known, the strength of quasibrittle materials like concrete and rock is size dependent due to heterogeneity [8-11]. Generally, geometrically similar samples will not behave similarly at different sizes; this is known as the size effect (or scale effect). In the past few decades, the size effect problem has been widely investigated by many scientists and engineers.
Overall, the existing scale effect laws can be divided into three types: (1) statistical size effect [12,13], (2) energetic size effect [14-18], and (3) fractal size effect [19,20]. Additionally, some researchers have used the artificial neural network (ANN) technique to forecast the size effect of concrete strength [21,22]. Among these laws, the energetic size effect proposed by Bazant [11,14-18,23-29] has been shown to be very effective and promising for quasibrittle materials. Although much progress has been made on the size effect mechanism, much further research is needed for new types of concrete and for recycled aggregate concrete. On the other hand, it is well known that most physical quantities in engineering practice cannot be correctly expressed using crisp numbers, due to the limitation of experimental test techniques and the complexity of objective things. Apparently, the tested concrete strength is just such a physical quantity, one that always fluctuates within a certain range. It is difficult to express these strength parameters only by using determined values. As a result, it is very necessary to extend the existing size effect law to tackle the indeterminacy in the tested strength data. To handle indeterminate information in practice, Smarandache [30-32] presented the concept of a neutrosophic number for the first time. The neutrosophic number, which consists of a determinate part and an indeterminate part, is very suitable for the expression of data with indeterminacy. However, little progress has been made in handling indeterminate problems by neutrosophic numbers in scientific and engineering areas in the past two decades. Recently, Ye [33,34] used the neutrosophic number as a tool for solving group decision-making and fault diagnosis problems, respectively. It has been shown that the neutrosophic number can effectively deal with real problems with indeterminacy. In this paper, a standard neutrosophic number is first proposed to improve the multiplication of neutrosophic numbers to a certain extent. Then the standard neutrosophic number is used to modify the size effect laws on the compressive and tensile strengths of recycled aggregate concrete. The proposed size effect law based on the standard neutrosophic number provides a simple and effective way to tackle the indeterminacy in the strength parameters. The presentation of this work is organized as follows: Section 2 presents the size effect experimental scheme and material properties used in the recycled aggregate concrete. Section 3 gives the testing results of the compressive and tensile strengths for the cube specimens. In Section 4, the neutrosophic number is briefly reviewed and a standard neutrosophic number is developed and used to improve the size effect law for reflecting the indeterminacy in data. Finally, the conclusions of this work are summarized in Section 5.

Experimental Scheme and Material Properties

As shown in Figure 1, four sets of recycled concrete cube specimens with different sizes are designed to investigate the strength-size effect. Each group has six test blocks (three for the compressive test, three for the splitting tensile test) and the total number of specimens is 24. The side lengths of these cube specimens are 70 mm, 100 mm, 150 mm, and 200 mm, respectively. Tables 1-3 present the main material properties of the cement, fine aggregate, and recycled coarse aggregate used in this experiment, respectively.
The cement is ordinary PM32.5 Portland cement and the fine aggregate is natural river sand. The recycled coarse aggregate, as shown in Figure 2, is manufactured from the waste concrete generated in the process of dismantling old buildings. Table 4 gives the mixture ratio, water-cement ratio, and replacement ratio of recycled coarse aggregate used in the experiment. Among all these factors, the mixture ratio design is the key factor determining the strength grade of the concrete. Finally, these recycled concrete cube specimens, as shown in Figure 3, are produced in the laboratory. After curing in water for 28 days, the compressive and splitting tensile strength tests are carried out. Figure 4 presents the experiment equipment used for the compressive and splitting tensile strengths, respectively. The experiment equipment is the STYE-3000E automatic pressure testing machine. The detailed testing steps strictly complied with the norms of "Standard for test methods of mechanical properties of ordinary concrete (GB/T 50081-2002)" [35]. Table 5 and Figure 5 present the test results of the compressive strengths, and Table 6 and Figure 6 present the test results of the splitting tensile strengths.

Experiment Test and Result Analysis

From Tables 5 and 6, it can be seen that the compressive and tensile strengths of recycled concrete both have obvious size effects. In general, the mean value of strength decreases gradually with the increase of specimen size. Taking the specimens with a side length of 70 mm as the reference group, the degrees of size effect for the other groups can be computed as in equation (1).

Recycled coarse aggregate properties (table fragment): bulk density 1211 kg/m³; water ratio 6.8%; gradation III; fineness modulus μ_f 1.83; water absorption 2.9%.

From Tables 5 and 6, it was found that the discreteness of the compressive strength data is far greater than that of the splitting tensile strength. The possible reasons for this phenomenon are as follows: (1) The compressive strength of the test block is closely related to that of the recycled aggregate. It is known that the compressive strengths of recycled aggregates in different specimens fluctuate greatly. This leads to the great discreteness of the compressive strength data. (2) The splitting tensile strength is mainly affected by the cohesive force between cement and aggregate. At this point, there is no obvious difference between different recycled aggregates.

The Neutrosophic Number and Its Standard Form. As stated before, the above strength values always contain some randomness due to the limitation of experimental techniques and the complexity of objective things. In order to better describe the randomness in data, the neutrosophic number is introduced in this work to describe strength, since it is a powerful tool for the expression of data with indeterminacy. Smarandache defined a neutrosophic number for the first time in neutrosophic probability [30-32]. A neutrosophic number, which can be divided into a determinate part and an indeterminate part, is expressed as N = x + yI, in which x and y are real numbers and I is indeterminacy, such that I² = I, 0·I = 0, and I/I is undefined. (1) The neutrosophic number highlights the determinate part, which is the commonly concerned point in engineering applications. (2) The neutrosophic number is similar to the imaginary number in form. Thus, the operational rules of the neutrosophic number are convenient to implement.
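As a small illustration of these operational rules (our own worked example, not taken from the paper), the idempotence I² = I keeps polynomials in a neutrosophic number within the same two-part form; squaring N = x + yI gives

N² = (x + yI)² = x² + 2xyI + y²I² = x² + (2xy + y²)I,

so the determinate and indeterminate parts can always be tracked separately.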
Letting N₁ = x₁ + y₁I and N₂ = x₂ + y₂I be two neutrosophic numbers, the operational relations of neutrosophic numbers are given by Smarandache [30-32] as

N₁ + N₂ = (x₁ + x₂) + (y₁ + y₂)I, N₁ − N₂ = (x₁ − x₂) + (y₁ − y₂)I, N₁ × N₂ = x₁x₂ + (x₁y₂ + x₂y₁ + y₁y₂)I.

However, the above operations may have potential conflicts with the operations for interval numbers. For example, assume that two neutrosophic numbers are N₁ = 3 + 2I and N₂ = 4 + 3I, where I ∈ [2, 3]. Then, they are equivalent to the two interval numbers N₁ = [7, 9] and N₂ = [10, 13], respectively. According to the multiplication of neutrosophic numbers, one has

N₁ × N₂ = 3 × 4 + (3 × 3 + 4 × 2 + 2 × 3)I = 12 + 23I = [58, 81]. (4)

On the other hand, the following result can be obtained by using the multiplication of interval numbers:

N₁ × N₂ = [7, 9] × [10, 13] = [70, 117]. (5)

Obviously, the results in equations (4) and (5) are different. In order to eliminate such conflicts, we propose some improvement on the multiplication of neutrosophic numbers in this section. In the first place, we define the standard form of a neutrosophic number as

N^s = x^s + y^s I^s, I^s ∈ [0, 1]. (6)

Accordingly, the multiplication includes the following steps. First, we transform the arbitrary neutrosophic numbers into their standard forms N^s = x^s + y^s I^s, where I^s ∈ [0, 1]. For example, the above two neutrosophic numbers N₁ = 3 + 2I and N₂ = 4 + 3I (I ∈ [2, 3]) can be rewritten as N₁^s = 7 + 2I^s and N₂^s = 10 + 3I^s, respectively. Then, one can obtain

N₁ × N₂ = N₁^s × N₂^s = 7 × 10 + (7 × 3 + 2 × 10 + 2 × 3)I^s = 70 + 47I^s = [70, 117]. (7)

Apparently, the same result is achieved by equations (5) and (7). Therefore, it is important to transform an arbitrary neutrosophic number into its standard form in practice. For an arbitrary neutrosophic number N = x + yI, I ∈ [z₁, z₂], the conversion formula is given as

N^s = (x + y z₁) + y(z₂ − z₁)I^s. (8)

Similarly, the conversion formula between an arbitrary interval number [x₁, x₂] and the standard neutrosophic number is expressed as

[x₁, x₂] = x₁ + (x₂ − x₁)I^s. (9)

Next, we use the proposed standard neutrosophic number to describe the strength data shown in Tables 5 and 6. As is well known, the standard deviation δ in Tables 5 and 6 is a commonly used measure of the degree to which a variable is dispersed around its mean value. Then, the strength data of these recycled concrete cubic specimens can be considered as sets of interval numbers [S_c − δ, S_c + δ] and [S_t − δ, S_t + δ]. Using equation (9), these interval numbers can be transformed into the standard neutrosophic numbers

N_c^s = (S_c − δ) + 2δ I^s, N_t^s = (S_t − δ) + 2δ I^s. (10)

For example, the cubic compressive strength for the specimens with a side length of 100 mm in Table 5 can be expressed as the interval number [33.51 − 0.905, 33.51 + 0.905] or the standard neutrosophic number 32.605 + 1.81I^s. Table 7 shows all the compressive and tensile strengths of these cubic specimens in the form of standard neutrosophic numbers.

Improved Size Effect Law. The size effect law denotes the strength-size functional relationship. In this section, the existing size effect laws are improved in two areas: one is using a new ridge estimation method to compute the fitting coefficients of the formula for the size effect law; the other is using the standard neutrosophic number to reflect the indeterminacy. For the compressive strength, the commonly used size effect law is

S_c = ε₀ + ε₁ / d^0.4, (11)

where d is the side length of the concrete cube block and ε₀ and ε₁ are two constants which can be determined by providing fits to experimental data. For the splitting tensile strength, the common formula of the size effect law is

S_t = S_∞ (1 + d₀ / d), (12)

where S_∞ denotes the nominal strength when the specimen size tends to infinity and d₀ denotes the characteristic size. Theoretically, S_∞ is independent of the specimen size.
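Before moving on, here is a minimal sketch (our own illustration, not the authors' code) of the standard-form conversion and multiplication described above; function names are our own.

# A number N = x + y*I with I in [z1, z2] is first converted to standard
# form N^s = x_s + y_s * I^s with I^s in [0, 1], after which multiplication
# agrees with interval multiplication, as in equations (7)-(9).

def to_standard(x, y, z1, z2):
    """Convert N = x + y*I, I in [z1, z2], to standard form (x_s, y_s)."""
    return x + y * z1, y * (z2 - z1)

def as_interval(x_s, y_s):
    """A standard neutrosophic number x_s + y_s*I^s equals [x_s, x_s + y_s]."""
    return x_s, x_s + y_s

def multiply_standard(n1, n2):
    """Multiply two standard neutrosophic numbers given as (x_s, y_s) pairs."""
    x1, y1 = n1
    x2, y2 = n2
    return x1 * x2, x1 * y2 + y1 * x2 + y1 * y2

if __name__ == "__main__":
    n1 = to_standard(3, 2, 2, 3)   # N1 = 3 + 2I, I in [2, 3]  ->  7 + 2I^s
    n2 = to_standard(4, 3, 2, 3)   # N2 = 4 + 3I, I in [2, 3]  -> 10 + 3I^s
    prod = multiply_standard(n1, n2)
    print(n1, n2, prod, as_interval(*prod))  # expect (70, 47) -> [70, 117]

Running this reproduces the worked example: the product is 70 + 47I^s, i.e. the interval [70, 117] of equation (7).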
From the point of view of dimensional homogeneity, it is more reasonable to replace the side length d with the side length ratio r, where r is the ratio of the cube specimen size to the minimum size in the experiment, that is, r = d/d_min. That is because r is a dimensionless parameter. Then, the size effect law for the compressive strength can be rewritten as

S_c = ε₀ + ε₁ / r^0.4. (13)

Similarly, the size effect law for the splitting tensile strength is revised as

S_t = S_∞ (1 + r₀ / r). (14)

For a particular size ratio r, a unique strength value can be calculated by either of equations (13) and (14). Generally, the least squares estimate (LSE) [36] is used to obtain the fitting coefficients in equations (13) and (14). However, ill-conditioning of the equations may lead to serious distortion of the fitting results. To solve this problem, a new ridge estimation method is used to compute the fitting coefficients. Taking equation (13) as an example, the following linear regression model can be obtained from equation (13) with the test data of the specimens:

{S} = [B]{ε}, (15)

where [B] is the coefficient matrix assembled from the size ratios and {ε} = {ε₀, ε₁}ᵀ. As stated before, LSE is often used for solving equation (15) to obtain the fitting coefficients; that is,

{ε̂} = ([B]ᵀ[B])⁻¹ [B]ᵀ {S}. (16)

As is well known, LSE is very inaccurate if the coefficient matrix of the equation is ill-conditioned. The ridge estimation (RE) [37-40] method is often used to solve the ill-conditioned equation. For equation (15), the RE solution is

{ε̂} = ([B]ᵀ[B] + λE)⁻¹ [B]ᵀ {S}, (17)

where E is the identity matrix and λ is the ridge parameter, which can be determined by the L-curve method [37-40]. In general, the determination of the ridge parameter requires complex calculation and it is difficult to obtain the optimal ridge parameter. Thus, a new ridge estimation (NRE) method is proposed in this section to solve the ill-posed least squares problem. The main formulas of NRE are derived as follows. Letting b_ij denote the (i, j)th element in the matrix B, one obtains the diagonal entries b_ii of B (equation (18)). Assume b_max is the maximum value among all the diagonal elements of B, that is, b_max = max_i b_ii (equation (19)). Then, a new regularization matrix R used in NRE is designed as in equation (20), where ς is an adjustable parameter (ς ∈ [0, 0.2]) which can be adjusted according to the condition number of the coefficient matrix B; ς = 0.1 is used in this work. Finally, the NRE of {ε} can be obtained as in equation (21). Compared with RE, the advantages of NRE lie in the following aspects: (1) the complex operation of ridge parameter selection is avoided; (2) the calculation accuracy is further improved by automatically adjusting the diagonal elements as shown in equation (21). Using NRE and the data in Tables 5 and 6, the size effect laws for the compressive and tensile strengths in this experiment are obtained by data fitting; the fitted laws are given in equations (23) and (24). As stated before, the standard neutrosophic number N^s = x^s + y^s I^s, which consists of a determinate part x^s and an indeterminate part y^s I^s (I^s ∈ [0, 1]), is very suitable for expressing parameters with indeterminacy in practice. In view of this, we further improve the above size effect laws by using the standard neutrosophic number in order to reflect the indeterminacy in data.
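The following sketch illustrates a ridge-regularized fit of the compressive law S = ε₀ + ε₁/r^0.4 in the spirit of equations (13) and (15)-(21). It is our own illustration: the strength values are hypothetical, and the regularization choice (scaling by the largest diagonal entry of the normal-equations matrix with ς = 0.1) is an assumption standing in for the paper's exact NRE formulas, which are not reproduced here.

import numpy as np

# Hypothetical mean strengths for size ratios r = d / d_min (not the paper's data).
r = np.array([1.0, 100 / 70, 150 / 70, 200 / 70])
S = np.array([36.0, 33.5, 32.0, 31.0])

B = np.column_stack([np.ones_like(r), r ** -0.4])  # design matrix of eq. (15)
G = B.T @ B                                        # normal-equations matrix

# Simple ridge regularization scaled by the largest diagonal entry of G,
# with an adjustable parameter sigma in [0, 0.2] as in the text (sigma = 0.1).
sigma = 0.1
R = sigma * G.diagonal().max() * np.eye(2)

eps = np.linalg.solve(G + R, B.T @ S)  # regularized estimate of (eps0, eps1)
print("eps0, eps1 =", eps)
print("fitted strengths:", B @ eps)

The point of the regularization term is that it bounds the condition number of G + R even when G is nearly singular, which is the failure mode of plain LSE that the NRE method is designed to avoid.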
For the compressive strength, letting ε₀ = x₀ + y₀I^s and ε₁ = x₁ + y₁I^s be two neutrosophic numbers for x₀, y₀, x₁, y₁ ∈ [0, +∞), equation (13) can be rewritten as

S_c = (x₀ + y₀I^s) + (x₁ + y₁I^s) / r^0.4. (25)

For the splitting tensile strength, letting S_∞ = x₂ + y₂I^s and r₀ = x₃ + y₃I^s be two neutrosophic numbers for x₂, y₂, x₃, y₃ ∈ [0, +∞), equation (14) can be rewritten as

S_t = (x₂ + y₂I^s)(1 + (x₃ + y₃I^s) / r). (26)

In equations (25) and (26), the positive constants x₀, y₀, x₁, y₁, x₂, y₂, x₃, y₃ can be determined by the proposed NRE method; the fitted results are given in equations (27) and (28). For comparison, Figures 7 and 8 present the fitting curves of the size effect laws for the compressive and tensile strengths, respectively. In Figure 7, the black curve and the scatter points indicated by "*" are obtained by equation (23) and the data in the fifth column of Table 5. The red curve and the green curve in Figure 7 indicate the lower boundary and the upper boundary obtained from equation (27). In Figure 8, the black curve and the scatter points indicated by "*" are obtained by equation (24) and the data in the fifth column of Table 6. The red curve and the green curve in Figure 8 indicate the lower boundary and the upper boundary obtained from equation (28). One can see from Figures 7 and 8 that the size effect laws in the form of the standard neutrosophic number can provide a certain range of the strength value for a particular size. Obviously, the size effect law based on the standard neutrosophic number is more realistic than the existing size effect law. Comparing equations (12) and (11), one can find that the two size effect formulas for the compressive and tensile strengths are similar in form. It may therefore be valuable to propose a unified formula of the size effect. In this paper, a unified formula for the size effect law is proposed as

S = δ₀ + δ₁ / r^{δ₃}, (29)

where S denotes the physical quantity such as the compressive or tensile strength and δ₀ and δ₁ are two constants which can be determined by fitting to experimental data. δ₃ is called the fractal dimension, which is mainly determined by the characteristics of the material itself; it can also be obtained from statistics over a large number of test data. In this work, δ₃ = 0.4 is used for the compressive strength and δ₃ = 1 is used for the splitting tensile strength. Equation (29) can be used for other types of size effect laws.

Conclusion

In this study, four sets of recycled concrete cube specimens with different sizes are produced in the laboratory. The experiments on compressive and tensile strengths are carried out to obtain the rules of the strength value with the change of the specimen size. According to the experimental results, it was found that the compressive and tensile strengths of recycled concrete both have obvious size effects. In general, the strength value decreases gradually with the increase of specimen size. To reflect the uncertainty in the data, a standard neutrosophic number is proposed to improve the multiplication of neutrosophic numbers to a certain degree. Subsequently, the proposed standard neutrosophic number is used for modifying the size effect law on the compressive and tensile strengths. It has been shown that the size effect law based on the neutrosophic number is more realistic than the existing size effect law. The proposed method in this paper provides a simple and effective way to handle the indeterminacy in the testing data and can be extended to other types of size effect laws, which are our future research directions.
Data Availability

The data used to support the findings of this study are included within the article and are also available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
The Spectrin Cytoskeleton Is Crucial for Adherent and Invasive Bacterial Pathogenesis

Various enteric bacterial pathogens target the host cell cytoskeletal machinery as a crucial event in their pathogenesis. Despite thorough studies detailing strategies microbes use to exploit these components of the host cell, the role of the spectrin-based cytoskeleton has been largely overlooked. Here we show that the spectrin cytoskeleton is a host system that is hijacked by adherent (enteropathogenic Escherichia coli [EPEC]), invasive triggering (Salmonella enterica serovar Typhimurium [S. Typhimurium]) and invasive zippering (Listeria monocytogenes) bacteria. We demonstrate that spectrin cytoskeletal proteins are recruited to EPEC pedestals, S. Typhimurium membrane ruffles and Salmonella-containing vacuoles (SCVs), as well as sites of invasion and comet tail initiation by L. monocytogenes. Spectrin was often seen co-localizing with actin filaments at the cell periphery; however, a disconnect between the actin and spectrin cytoskeletons was also observed. During infections with S. Typhimurium ΔsipA, actin-rich membrane ruffles at characteristic sites of bacterial invasion often occurred in the absence of spectrin cytoskeletal proteins. Additionally, early in the formation of L. monocytogenes comet tails, spectrin cytoskeletal elements were recruited to the surface of the internalized bacteria independent of actin filaments. Further studies revealed the presence of the spectrin cytoskeleton during SCV and Listeria comet tail formation, highlighting novel cytoplasmic roles for the spectrin cytoskeleton. siRNA targeted against spectrin and the spectrin-associated proteins severely diminished EPEC pedestal formation as well as S. Typhimurium and L. monocytogenes invasion. Ultimately, these findings identify the spectrin cytoskeleton as a ubiquitous target of enteric bacterial pathogens and indicate that this cytoskeletal system is critical for these infections to progress.

Introduction

The manipulation of the host cytoskeleton is a crucial step during infections caused by a variety of enteric bacterial pathogens including EPEC, S. Typhimurium and L. monocytogenes. EPEC attach to host intestinal epithelial cells and remain primarily extracellular during their infections [1]. These microbes utilize a type III secretion system (T3SS) to inject bacterially-derived effector proteins from the bacterial cytosol directly into the host cell cytoplasm [2]. One such effector, the translocated intimin receptor (Tir), is instrumental in anchoring EPEC to the host cell through its extracellular domains. Intracellularly, Tir recruits actin filaments through the binding of actin-related proteins to its cytosolic tail domains. The abundant polymerization of actin filaments beneath EPEC results in the bacteria rising off the natural surface of the cell on actin-rich membrane protrusions called "pedestals", which are hallmarks of the disease [3,4]. S. Typhimurium also utilize T3SSs as part of their pathogenesis. These invasive pathogens inject a variety of effector proteins, including SopB (SigD), SopE, SopE2 and SipA, which cause the host cells to generate intense actin-based membrane ruffles at sites of bacterial invasion [5,6,7,8]. The membrane ruffling engulfs the bacteria into the host cell, resulting in their encasement in a vacuole called a Salmonella-containing vacuole (SCV) and providing these microbes a protective niche for replication [9,10].
L. monocytogenes, another invasive pathogen, does not utilize a T3SS but rather deposits its effector proteins on its surface. These bacteria utilize a number of internalin proteins to efficiently enter non-phagocytic host cells; two well-characterized invasion proteins are internalin A (InlA) and internalin B (InlB) [11]. Both proteins recruit clathrin and the clathrin-associated endocytic machinery to sites of bacterial attachment [12,13]. This collection of proteins initially internalizes the bacterium into a vacuole within the host cytoplasm [14,15]. Once within the host cell, L. monocytogenes quickly disrupts the vacuole that encapsulates it, then initiates the up-regulation and polarized distribution of the ActA effector on the bacterial plasma membrane [16]. ActA mimics N-WASp, thus recruiting the Arp2/3 complex and causing an actin-based comet tail to be generated at one end of the bacterium [16]. This comet tail propels the bacterium within the host cytosol and enables the microbe to disseminate to neighbouring cells [17].

The spectrin cytoskeleton is a well characterised, ubiquitously expressed sub-membranous cytoskeletal system that was first discovered in erythrocytes and has since been identified in a variety of epithelial cells [18,19,20]. The cornerstone of this cytoskeletal system is the filamentous polymer spectrin. Unlike other cytoskeletal systems, the spectrin cytoskeleton is thought to be restricted to membranous regions of the cell. Spectrin filaments provide stability and mechanical support to the plasma membrane as well as the Golgi, Golgi-associated vesicles, ER and lysosomal membranes of the cell [21,22,23]. Spectrin interacts directly with actin filaments as well as the spectrin-associated proteins adducin, protein 4.1 (p4.1) and ankyrin, which provide a bridge between the spectrin-actin cytoskeletal network and the plasma membrane [24]. Additionally, the spectrin cytoskeleton co-localizes with actin accessory proteins, acting as a "membrane protein-sorting machine" [22] at specific sub-membranous regions of the cell during dynamic membrane remodelling events such as cell migration [19,22,25]. The sub-membranous localization and known actin associations of the spectrin cytoskeleton, together with the dramatic reorganization of the host cell plasma membrane and related cytoskeletal networks during various enteric bacterial infections, suggest that the spectrin cytoskeletal system may also be a target of these pathogens. To examine this, we investigated the role of the spectrin cytoskeleton during EPEC, S. Typhimurium and L. monocytogenes infections. Our findings show that a set of spectrin cytoskeletal components are targeted by these pathogens and that the involvement of this cytoskeletal system is crucial for their pathogenesis.

Results

The EPEC effector Tir recruits spectrin, p4.1 and adducin to pedestals

To examine the role of the spectrin cytoskeleton during bacterial infections, we initially infected cultured cells with EPEC and immunolocalized β₂-spectrin. We found that spectrin was distinctly recruited to EPEC pedestals, while primary antibody controls showed non-specific staining and no localization at pedestals (Figures 1a HeLa cells, S1 polar Caco cells and S2 controls). To determine whether proteins that are known to interact with spectrin were also present at these sites, we immunolocalized the spectrin-associated proteins α-adducin and p4.1 and found that they were also present at EPEC pedestals (Figure 1a adducin/p4.1 and S2 controls).
When their organization within these structures was analyzed, a slight separation between the bacteria and the spectrin cytoskeleton was observed. Although spectrin-associated proteins co-localized with the actin filaments at certain parts of the pedestals, they were primarily positioned at the basal regions of these structures (Figure 2 and S3). To determine whether bacterial contact or effector translocation was responsible for spectrin cytoskeletal proteins being concentrated beneath EPEC, we used an EPEC T3SS mutant (EPEC ΔescN), mutated in a crucial ATPase needed for effector translocation [26]. Host cells infected with EPEC ΔescN did not recruit any components of the spectrin cytoskeleton to sites of bacterial attachment, suggesting that an effector was required (Figures 1b, S4 and S5). Because the EPEC effector Tir is needed for pedestal formation, we examined whether Tir mutants of EPEC concentrated any spectrin-associated proteins at sites of bacterial contact. Infections using EPEC Δtir did not recruit any components of the spectrin cytoskeleton beneath the bacteria, whereas complemented bacteria (EPEC Δtir:tir) restored the wild-type phenotype (Figure 1b spectrin, S4 adducin and S5 p4.1). Although there are a variety of phosphorylation sites on the EPEC Tir protein that are involved in pedestal formation to varying degrees, by far the most crucial is the tyrosine 474 (Y474) phosphorylation site [27,28]. To determine whether this site was needed for spectrin cytoskeletal recruitment we used an EPEC Δtir strain complemented with tir containing a point mutation at that site (Y474F) (EPEC Δtir:tirY474F) and examined the localization of spectrin during those infections. Here we again observed a lack of spectrin/adducin/p4.1 recruitment, demonstrating that Tir Y474 phosphorylation is crucial for their positioning during these infections (Figure 1b spectrin, S4 adducin and S5 p4.1). As other EPEC effectors such as EspH, EspZ, Map, EspG and EspF are also proposed to be involved in pedestal formation [29], we examined the recruitment of spectrin/adducin/p4.1 beneath the bacteria during infections with EPEC mutated in each of those effectors and found that in all cases, all three spectrin cytoskeletal proteins were present at pedestals (Figures S6 spectrin, S7 adducin and S8 p4.1).

Depletion of spectrin cytoskeletal proteins severely impairs EPEC infections

Because the spectrin cytoskeleton appeared to be a significant component of EPEC pedestals, we sought to functionally perturb individual host components to examine their roles in pedestal generation. To accomplish this, we separately transfected HeLa cells with siRNA targeted against β₂-spectrin, α-adducin and p4.1. Knockdowns were confirmed by western blot analysis (Figures 1c spectrin, S9a adducin and c p4.1). siRNA pre-treated cells were then infected with wild-type EPEC to examine pedestal formation. In cells with undetectable levels of β₂-spectrin or p4.1, attached EPEC were unable to form pedestals (Figures 1c, d spectrin, S9d and e p4.1). Despite this, the ability of the bacteria to attach to the host cells was not significantly altered by these treatments (Figure S10). Interestingly, adducin knockdowns resulted in an inability of EPEC to attach to the host cell, thus subsequent pedestal presence was not observed (Figure S9b).
To ensure the siRNA treatments were not having adverse effects on the cells, we performed cell viability assays and found no difference in the viability of cells treated with control pool siRNA when compared to spectrin, adducin or p4.1 siRNA treated cells (Figure S11). Furthermore, the actin cytoskeleton of spectrin knocked-down cells was morphologically similar to that of untreated cells, with cortical actin and stress fibers present (Figure S12).

S. Typhimurium usurp the spectrin cytoskeleton during multiple stages of infection

Based on our findings with EPEC, we investigated a potential role for the spectrin cytoskeleton during the pathogenesis of another T3SS-dependent microbe, S. Typhimurium. We found that spectrin was recruited to the actin-rich membranous ruffles at sites of S. Typhimurium invasion, but only partially colocalized with actin when examined in detail (Figure 3a HeLa cells, S13 Caco cells and S14 another HeLa cell example). This lack of complete colocalization suggests that the presence of spectrin at these sites was not merely a byproduct of actin recruitment (Figures S13 and S14). The disconnect of the actin and spectrin cytoskeletons was confirmed in uninfected cells, which showed a lack of spectrin recruitment to a number of stress fibers (Figure S15). In addition to spectrin, the same spectrin-associated proteins that were identified at EPEC pedestals (adducin and p4.1) were also recruited to invasion sites (Figure 3a). To investigate the bacterial factors responsible for this recruitment, we utilized a S. Typhimurium ΔsopE/sopE2/sopB mutant, deficient in the effectors primarily responsible for membrane ruffling and bacterial invasion during these infections [30]. Infections with this mutant did not generate actin-mediated membrane ruffling, and concomitantly the recruitment of the spectrin cytoskeleton to sites of bacterial contact was absent (Figure S16). S. Typhimurium contains the bacterial effector SipA, which is known to bundle actin and increase the efficiency of invasion [31]. To determine if this effector influenced spectrin cytoskeletal protein recruitment to sites of invasion, we immunolocalized spectrin, adducin and p4.1 together with actin during infections with a S. Typhimurium sipA mutant. Infections with S. Typhimurium ΔsipA showed that the spectrin and actin cytoskeletons were independently recruited, as actin-rich membrane ruffles remained present but often did not concentrate spectrin or adducin at sites of invasion (Figure 3b spectrin, S17 adducin, and S18 enhanced images). When compared to WT S. Typhimurium, S. Typhimurium ΔsipA invasion sites showed a significantly decreased ability to recruit spectrin and adducin to invasion sites [43% and 89% reduced, respectively] (Figure S19). S. Typhimurium ΔsipA complemented with sipA restored the recruitment of spectrin and adducin to the membrane ruffles (Figures 3b spectrin, S17 adducin). P4.1 remained at membrane ruffles irrespective of the presence or absence of SipA (Figure S20). To investigate the potential involvement of the spectrin cytoskeleton at later time points of infection, when S. Typhimurium reside within the SCVs [10], we immunolocalized the spectrin cytoskeletal proteins at 90 minutes post invasion. We found that spectrin, but not adducin or p4.1, was recruited to SCVs (Figure 3c spectrin and S21 adducin/p4.1). We observed distinct localization of spectrin surrounding multiple bacteria within the protective vacuole (Figure 3c).
Spectrin, adducin and p4.1 were not observed localizing to bacteria at earlier time points during the intracellular stage of the infections (data not shown).

Depletion of spectrin cytoskeletal proteins impairs S. Typhimurium invasion

To determine the role of spectrin cytoskeletal components during S. Typhimurium invasion, we knocked down individual components of this cytoskeletal system in cultured cells and studied the effects on invasion. Knockdown of spectrin, adducin, or p4.1 proteins in host cells resulted in the near complete cessation of S. Typhimurium invasion (Figures 3d spectrin and S22 adducin/p4.1). Quantification of S. Typhimurium invasion was assessed by immunofluorescent imaging in which cells were first identified that had undetectable levels of the targeted protein, then the number of bacteria that had infected those cells was counted. Microscopy counts of cells with undetectable levels of each of the three proteins showed an average of 8% invasion compared to control treatments (Figure 3d spectrin and S22 adducin/p4.1). We then quantified invasion efficiencies using classical invasion assay methods. Invasion assays with siRNA pre-treated cells resulted in a significant decrease in invasion, with an average of 35%/65%/60% (spectrin/adducin/p4.1 RNAi treated) invasion as compared to controls (Figure 3d spectrin and S22 adducin/p4.1). As expected, microscopic analysis showed that our siRNA transfection efficiencies were not 100%, with some cells having incomplete knockdown of the targeted protein. The observed increase in invasion efficiencies using the classical invasion assay method as compared to the microscopy-based counts can be attributed to the invasion of unsuccessfully transfected cells and those with only partial knockdowns being present in these assays.

Listeria monocytogenes requires the spectrin cytoskeleton for efficient invasion

We further characterized the role of the spectrin cytoskeleton during bacterial invasion by studying L. monocytogenes infections. Infections of cultured cells, which allow only the InlB invasion pathway to ensue [12], showed spectrin/adducin/p4.1 lining the characteristic actin-rich sites of L. monocytogenes internalization (Figure 4a) [32]. Individual siRNA-based depletion of spectrin/adducin/p4.1 nearly abolished the ability of L. monocytogenes to invade the host cell (Figure 4b and S24). Microscopy counts of cells

L. monocytogenes recruits spectrin and p4.1 to initial stages of comet tail formation using the ActA effector

Following entry into host cells, L. monocytogenes up-regulate the ActA effector to initiate the formation of the characteristic actin-rich comet tails [33]. We found that spectrin and p4.1 were recruited to the initial stages of comet tail formation, whereas adducin was not (Figure 4d spectrin in HeLa cells, S23 spectrin in polar Caco cells and S25 adducin/p4.1 in HeLa cells). Detailed analysis revealed that in some instances spectrin was localized to the bacteria independent of actin (Figure 5 HeLa cells and S23 Caco cells). At 30 minutes post infection, 70% of the bacteria had spectrin lining the membrane in the absence of actin, whereas after 90 minutes of infection only 7% of internalized bacteria were associated with spectrin alone (Figure S26). Infections with L. monocytogenes ActA mutants (L. monocytogenes ΔactA) resulted in the absence of spectrin and p4.1 association with the internalized bacteria, suggesting that ActA is needed for their recruitment (Figures 4d spectrin and S25 p4.1).
Upon mature, full-length comet tail formation, spectrin as well as adducin and p4.1 were absent (Figure S27).

Discussion

In this study we have shown that a set of spectrin cytoskeletal proteins are co-opted during a variety of enteric bacterial infections. We have demonstrated that spectrin, adducin and p4.1 are crucial proteins involved in EPEC pedestal formation, S. Typhimurium and L. monocytogenes epithelial cell invasion and subsequent stages of their intracellular life cycles. By functionally perturbing these host proteins, infections were efficiently halted, demonstrating that this cytoskeletal system is integral to the pathogenesis of these bacteria. During our examination of EPEC infections we showed that spectrin was specifically concentrated at the base of pedestals, partially colocalizing with actin. This basal localization resembles that of other membrane-protruding structures, namely microvilli and filopodia. Such structures contain a spectrin-based scaffold that provides a secure foundation for the localization of protein machinery, thus enabling the remodeling of the plasma membrane [25,34]. Consequently, spectrin may be providing a similar function during EPEC pedestal formation, by providing a substratum for membrane protrusion and pedestal formation. This is supported by evidence demonstrating that attached EPEC were unable to recruit actin beneath the bacteria when any of the spectrin cytoskeletal components were knocked down. S. Typhimurium internalization is heavily dependent on actin-based membrane ruffles; however, evidence presented here demonstrates that spectrin cytoskeletal components are also needed for maximal invasion. When any of the three spectrin cytoskeletal proteins were knocked down, we observed ~8% invasion efficiencies when invaded bacteria were counted by microscopy in cells with undetectable levels of those cytoskeletal components. S. Typhimurium use a multitude of effector proteins to efficiently invade non-phagocytic cells. During infections with S. Typhimurium mutated in SipA, an effector involved in actin bundling that is known to aid in invasion [8], we found that actin-rich membrane ruffles remained present but often lacked spectrin or adducin. Those results suggested that the presence of SipA was required for the efficient targeting of those two components to the ruffles. Others have shown that infections using S. Typhimurium ΔsipA resulted in ~60% invasion efficiency compared to wild-type infections [8]. Our classical invasion assay results demonstrated similar invasion efficiencies when spectrin cytoskeletal components were knocked down. Taken together, these results support an important role for the SipA effector in spectrin/adducin recruitment and suggest that S. Typhimurium possess strategies to control the spectrin cytoskeleton independently of the actin cytoskeleton. L. monocytogenes utilize clathrin-mediated endocytosis (CME) to gain entry into non-phagocytic cells [12,13]. The involvement of the spectrin cytoskeleton during CME has been examined by others and shown to be excluded from clathrin-coated pits to encourage budding from the plasma membrane [23,35,36]. Based on this, we expected that spectrin would be absent from L. monocytogenes invasion sites in a similar fashion to classical CME. However, we found that spectrin was recruited to sites of L. monocytogenes invasion. Furthermore, when we knocked down spectrin using siRNA, infections were inhibited, demonstrating that spectrin is needed for clathrin-mediated L. monocytogenes uptake.
Although entry of L. monocytogenes into epithelial cells involves the internalization of a structure that is large in comparison to a classically formed endocytic particle [12,13], our results contradict the traditional views of spectrin's role in CME and require further scrutiny. The spectrin cytoskeleton has been extensively characterized as a network restricted to the eukaryotic plasma membrane and membrane domains of the Golgi, Golgi-associated vesicles, ER and lysosomes [19,37]. Accordingly, we anticipated that internalized bacteria found within the host cell cytosol would not associate with the spectrin cytoskeleton. However, we observed that after internalization, L. monocytogenes were able to recruit spectrin and p4.1 to sites of initial comet tail formation, suggesting that this cytoskeletal system is not restricted to membranous regions of eukaryotic cells as previously thought. Clues to understanding the function of spectrin during L. monocytogenes comet tail formation may lie in other systems. During cell migration, spectrin associates with actin machinery to facilitate actin polymerization for subsequent motility [22,25]. Although this potential function provides a likely role for spectrin during L. monocytogenes infections, we were unable to directly investigate bacterial motility in the absence of spectrin cytoskeletal components due to the severe defects of L. monocytogenes invasion in cells knocked down in any of the spectrin cytoskeletal proteins. Despite this, we were able to determine whether spectrin cytoskeletal components required any bacterial surface protein for their recruitment to the bacteria. During infections with L. monocytogenes ΔactA, the bacteria were able to invade cells but were unable to recruit spectrin and p4.1 to internalized bacteria, suggesting that the spectrin cytoskeleton was not simply recruited to the bacterial membrane, but required the presence of the ActA effector to initiate its recruitment at the bacteria for subsequent comet tail formation. Although our findings have demonstrated an integral role for the spectrin cytoskeleton during a variety of pathogenic infections, they have opened the door to many important questions that will require future examination. First will be to investigate the crucial domains of spectrin, adducin and p4.1 that are responsible for their recruitment to sites of infection. In addition to this, further exploration into the dynamics of spectrin cytoskeletal protein recruitment in relation to actin cytoskeletal components during these infections is required. Finally, understanding how the depression of adducin expression interferes with EPEC binding to host cells and mechanistically elucidating the precise influence that SipA has on the spectrin cytoskeleton during S. Typhimurium infections will require further scrutiny. Ultimately, our identification of the spectrin cytoskeleton as a target during key stages of adherent, triggering and zippering enteric bacterial pathogenesis demonstrates that this previously overlooked cytoskeletal system is integral to a variety of infections. This recruitment, coupled with the demonstration that the depletion of spectrin cytoskeletal proteins from host cells during these infections results in the inhibition of bacterial attachment and invasion, highlights the importance of this cytoskeletal system in disease progression.
Accordingly, the broad involvement of the spectrin cytoskeleton with enteric microbial pathogens reveals a new potential target for therapeutic treatments of these infections.

Caco-2 human colon epithelial cells were polarized using the BioCoat HTS Caco-2 Assay System as per the manufacturer's instructions (BD Biosciences). Briefly, cells were grown to 100% confluency and maintained for 2 days prior to seeding on 1.0 µm pore, fibrillar collagen-coated PET membranes. Seeding was performed in the seeding basal medium provided, which was replaced 24 hours later by the Entero-STIM medium provided. All media were supplemented with the provided MITO+ serum extender. After 48 hours the cells established a polarized monolayer [41]. At this point, the media was replaced with DMEM (with 10% FBS), and the cells were used for experiments.

Infections

For HeLa cell infections, cells were grown to approximately 70% confluency, whereas Caco-2 cells were fully confluent. Following overnight cultures, EPEC was used to infect host cells at a multiplicity of infection (MOI) of 10:1 for 6 hours, following procedures previously described [42]. For S. Typhimurium studies of initial invasion, subcultures of overnight bacteria were back-diluted 30X in fresh LB and grown at 37 °C (shaking) for 3 hours to activate the Salmonella; cells were infected at an MOI of 100:1 and the infections were carried out for 15 minutes. For L. monocytogenes studies, overnight bacterial cultures were diluted 10X, then cultured until A600 = 0.8. The cells were then infected at an MOI of 50:1. For initial invasion studies, we infected the cells for 15 minutes prior to fixation, whereas for comet tail studies infections persisted for 30 minutes, at which point the media was swapped with warm media containing gentamicin for 1 hour (initial comet tail formation) or 4 hours (established comet tail studies).

Invasion Assays

To perform invasion assays, L. monocytogenes or S. Typhimurium were incubated on host cells for 30 minutes. This was followed by a 1-hour incubation in media containing 50 µg/ml gentamicin (to kill external bacteria). Cells were then washed 5 times in PBS (supplemented with magnesium and calcium; Hyclone), and then permeabilized with 1% Triton for 5 minutes. Serial dilutions were then prepared, spread on LB plates and incubated for 24 hours at 37 °C prior to enumeration.

Antibodies and Reagents

Antibodies used in this study included a mouse monoclonal anti-β-spectrin II antibody (used at 2.5 µg/ml for immunofluorescence and 0.25 µg/ml for western blots) (Becton Dickinson), rabbit anti-α-adducin (used at 2 µg/ml for immunofluorescence and 0.2 µg/ml for western blots) (Santa Cruz), rabbit anti-EPB41 (protein 4.1) (used at 1.7 µg/ml for immunofluorescence and 0.17 µg/ml for western blots) (Sigma), and rabbit anti-calnexin (Becton Dickinson) (used at 1:2000). Secondary antibodies included goat anti-mouse (or rabbit) antibodies conjugated to AlexaFluor 568/594 (used at 0.02 mg/ml) (or HRP, used at 1 mg/ml for western blotting) (Invitrogen). For F-actin staining, AlexaFluor 488-conjugated phalloidin (Invitrogen) was used according to the manufacturer's instructions.

Immunofluorescent Localizations

Cells were fixed on cover slips with 3% paraformaldehyde for 15 minutes at room temperature, permeabilized using 0.1% Triton for 5 minutes at room temperature, then washed 3 times (10 minutes each) with PBS -/- (Hyclone). Samples were blocked in 5% normal goat serum in TPBS/0.1% BSA (0.05% Tween-20 and 0.1% BSA in PBS) for 20 minutes.
Antibodies were then incubated on the cover slips overnight at 4°C. The next day the cover slips were washed three times (10 minutes each) with TPBS/0.1% BSA. After the final wash, secondary antibodies were applied for 1 hour at 37°C. This was followed by three additional washes (10 minutes each) with TPBS/0.1% BSA. The cover slips were then mounted on slides using Prolong Gold with DAPI (Invitrogen). Transfection of siRNA β-Spectrin II, protein 4.1, α-adducin and control pool siRNAs (Dharmacon) were transfected using the InterferIN transfection reagent (PolyPlus Transfection) according to the manufacturer's instructions. Transfections were incubated for 48 hours. The media was changed prior to the infections. Western Blots for RNAi confirmation Infections were performed as described above. Following the infections, the samples were placed on ice and 120 μl of ice-cold RIPA lysis buffer (150 mM NaCl, 1 M Tris pH 7.4, 0.5 M EDTA, 1% Nonidet P-40, 1% deoxycholic acid, 0.1% SDS) with EDTA-free COMPLETE protease inhibitors (Roche) was added. Protein lysate concentrations were determined using a bicinchoninic acid assay. The samples were processed and loaded into 6% (or 10%, for adducin and protein 4.1) polyacrylamide gels and were run at 100 V. The proteins were then transferred to nitrocellulose membranes (Trans-Blot transfer medium, Bio-Rad). Membranes were blocked with 5% Blotto (Santa Cruz Biotechnology) for 20 minutes prior to incubation with primary antibodies (for concentrations see the 'Antibodies and Reagents' section) overnight at 4°C. Blots were then washed three times with TPBS-BSA (1% Tween-20 in PBS, with 0.1% BSA), then incubated with the HRP-conjugated secondary antibody (at 1 μg/ml) for five minutes and visualized by chemiluminescence on BioMax film (Kodak). Blots were then stripped (with 2% SDS, 12.5% Tris pH 6.8, 0.8% β-mercaptoethanol) for 45 minutes at 50°C, re-probed with antibodies used for loading controls and visualized by chemiluminescence. Quantifying bacterial pathogenic events during siRNA knockdowns using microscopy Quantification of EPEC, S. Typhimurium and L. monocytogenes experiments in which specific proteins were knocked down by siRNA in host cells was performed by initially identifying cells with undetectable levels of the knocked-down proteins (spectrin, adducin or p4.1). After identifying these cells, we manually counted the number of bacteria that had successfully generated pedestals (EPEC) or invaded (S. Typhimurium and L. monocytogenes) those cells. Cell viability assays for siRNA treated cells Cell viability assays on siRNA-treated (or untreated) cells were performed using the LIVE/DEAD Cell Viability Assay kit (Invitrogen), as per the manufacturer's instructions. Controls Primary antibody controls were performed by replacing the primary antibody with normal mouse IgG (Jackson ImmunoResearch) at a concentration identical to that at which the primary antibody was used. Secondary antibody controls were performed by replacing the primary antibody with TPBS/0.1% BSA (the carrier buffer for the primary antibodies), while all other procedures remained unchanged. We tested for autofluorescence in cells and bacteria by replacing the primary and secondary antibodies with buffer and then mounting the cover slips with Prolong Gold (with DAPI). Statistics Statistical analyses comparing the means of two samples comprised unpaired, one-tailed Student's t-tests, with P values as indicated. Figure S1 Spectrin is recruited to EPEC pedestals on polarized Caco-2 cells.
Polarized Caco-2 monolayers were infected with EPEC and stained for spectrin, actin and DAPI. The arrow points to an area of actin and spectrin recruitment that is magnified within the inset. Scale bars are 5 μm. (TIF) Figure S2 Primary antibody controls show no specific staining at EPEC pedestals. HeLa cells were infected with EPEC for 6 hours. Cells were treated with antibodies specific to spectrin or p4.1 and compared to cells stained with normal mouse IgG (NMsIgG) or normal rabbit IgG (NRbIgG), at identical concentrations to the spectrin and p4.1 antibodies, respectively. Primary antibodies or non-specific IgG were co-localized with probes for DAPI and actin to identify attached EPEC and their pedestals. Scale bars are 5 μm. (TIF) Figure S3 Spectrin localizes to the basal region of EPEC pedestals. HeLa cells were infected with EPEC and stained for spectrin and actin. Arrows indicate a concentration of spectrin at the pedestal that is not recruited to areas of actin filament concentration. (TIF) Figure S4 The role of EPEC effectors in adducin recruitment to pedestals. HeLa cells were infected with EPEC or EPEC effector mutants, and immunolocalized with adducin antibodies, as well as actin and DAPI. Arrows indicate areas of interest that are shown in the insets. Images examine adducin localization in uninfected (UI) cells or during infections with WT EPEC, EPEC ΔescN, EPEC Δtir, EPEC Δtir:tir, and EPEC Δtir:tirY474F. Scale bars are 5 μm. Figure S12 Actin cytoskeleton morphology is unaltered during spectrin knockdown. HeLa cells were treated with spectrin siRNA for 48 hours. Cells were stained for actin, spectrin and DAPI. The actin cytoskeleton morphology appears normal, with characteristic cortical actin and stress fibers present in the cells. Scale bar is 5 μm. (TIF) Figure S13 Spectrin is recruited to membrane ruffles during S. Typhimurium invasion of Caco-2 cell monolayers. Polarized Caco-2 cells were infected with S. Typhimurium for 15 minutes and immunolocalized with spectrin, actin and DAPI. Arrows indicate regions where spectrin is present peripheral to actin at the membrane ruffles. Scale bar is 5 μm.
6,834.8
2011-05-16T00:00:00.000
[ "Biology" ]
Analytic and numerical solutions for linear and nonlinear multidimensional wave equations Abstract We develop three reliable iterative methods for solving the nonlinear 1D, 2D and 3D second-order wave equation and compare the results with a discretization-based solver. The iterative Tamimi–Ansari method (TAM), Daftardar–Jafari method (DJM) and the Banach contraction method (BCM) are used to obtain the exact solution for linear equations. For nonlinear equations and practical problems, however, one obtains approximate solutions that converge to the exact solution, if one exists. The convergence analysis of the three methods is shown using the fixed-point theorem. The methods prove to be quite efficient and well suited to solving this kind of problem. We present several examples that demonstrate the accuracy and efficiency of the methods. We also compare the methods with a method based on discretization, the Boundary Domain Integral Method (BDIM). The BDIM uses a standard domain grid and discretizes the integral form of the governing equations. The iterative methods were developed with Mathematica® 10, while the BDIM is a proprietary development. The main objective of this article is to implement the three iterative methods, TAM, DJM and BCM, to find an approximate solution of the wave equation. The iterative methods proposed in this article can be considered as alternatives to the established discretization approaches, such as finite differences, finite elements or the boundary-domain integral method. In this study, we compare the results of the iterative methods with the Boundary Domain Integral Method (Ravnik & Tibaut, 2018) to assess their accuracy. There are also many analytical and numerical techniques that have proven to be effective and efficient in solving such problems (Bhatter, Mathur, Kumar & Singh, 2020; Goswami, Singh & Kumar, 2019; Gupta, Kumar & Singh, 2019; Kumar, Singh & Baleanu, 2018; Kumar, Singh, Purohit & Swroop, 2019). This article is organized as follows: In Section 2 the standard formulation of the wave equation is presented. In Section 3 the basic concepts of the proposed methods are shown. In Section 4 the convergence of the proposed methods is examined. In Section 5 the methods are demonstrated using several test cases. The conclusions are presented in the last section. The formulation of the wave equations, approximate and numerical methods Wave phenomena and the wave equation are extensively studied because of their importance for technical applications and for the understanding of many natural phenomena. Linear and nonlinear wave equations are studied by engineers, physicists and mathematicians (Biazar & Ghazvini, 2008; Keskin & Oturanc, 2010). In our study, we consider one-dimensional (1D), two-dimensional (2D) and three-dimensional (3D) nonlinear wave equations, which can be expressed for the 3D problem as $u_{tt} = u_{xx} + u_{yy} + u_{zz} + F(u)$, with initial conditions and appropriate Dirichlet-type boundary conditions; $F(u)$ can be linear or nonlinear. In this section, we introduce the basic concepts of the iterative methods TAM, DJM and BCM, as well as of the discretization-type Boundary Domain Integral Method (BDIM).
The basic idea of the TAM Let us introduce the following nonlinear partial differential equation (Al-Jawary, Azeez et al., 2018), $L(u(x,t)) + N(u(x,t)) + g(x,t) = 0$, with the boundary conditions $B(u, \partial u/\partial t) = 0$, where $x$ is the independent variable, $t$ is time, $u(x,t)$ is an unknown function, $g(x,t)$ is the inhomogeneous term, $L$ is a linear operator, $N$ is a nonlinear operator and $B(\cdot)$ is the boundary operator. We begin by assuming that $u_0(x,t)$ is an initial guess for the solution $u(x,t)$, and the solution algorithm starts by solving the initial value problem $L(u_0) + g = 0$ with $B(u_0, \partial u_0/\partial t) = 0$. Next, an iterative procedure is set up to evaluate subsequent approximations $u_n(x,t)$ by solving the problem $L(u_{n+1}) + N(u_n) + g = 0$ with $B(u_{n+1}, \partial u_{n+1}/\partial t) = 0$. Then, the solution of Equation (4) is given by the limit $u(x,t) = \lim_{n\to\infty} u_n$. The basic idea of the DJM In this section, consider the following general functional equation (Yaseen et al., 2012), $u = g + N(u)$, where $N$ is a nonlinear operator and $g$ is a known function. A solution $u(x,t)$ of Equation (5) is given by the series $u = \sum_{i=0}^{\infty} v_i$. The nonlinear operator $N$ can be decomposed as $N\left(\sum_{i=0}^{\infty} v_i\right) = N(v_0) + \sum_{i=1}^{\infty}\left\{N\left(\sum_{j=0}^{i} v_j\right) - N\left(\sum_{j=0}^{i-1} v_j\right)\right\}$. Considering Equations (6) and (7), we observe that Equation (5) is equivalent to this decomposed form. We define the recurrence relation $v_0 = g$, $v_1 = N(v_0)$ and $v_{m+1} = N(v_0 + \cdots + v_m) - N(v_0 + \cdots + v_{m-1})$ for $m \ge 1$. Finally, the solution is recovered by taking the sum $u(x,t) = \sum_{i=0}^{\infty} v_i$. The basic idea of the BCM Consider again the equation $u = g + N(u)$, where $u(x,t)$ is an unknown function, $N$ is a nonlinear operator and $g(x,t)$ is a known function. We define the successive approximations $u_0 = g$ and $u_{n+1} = g + N(u_n)$ for $n \ge 0$. 2.4. The basic idea of the BDIM The BDIM (Ravnik & Tibaut, 2018) is based on the fact that the fundamental solution of the problem is used to derive an integral formulation of the problem. The main advantage of the BDIM is the use of the fundamental solution of the underlying physical problem as a weighting function in the derived integral formulation of the governing equations. Standard discretization methods such as FEM use shape functions to facilitate the derivation of the integral formulation and therefore do not take into account the physics of the phenomena. The BDIM uses the fundamental solution and is able to detect physical effects on coarser meshes in comparison to FEM. The wave equation (Equation (1)) has a diffusive (Laplacian) operator and can be rewritten as a Poisson-type equation, where $f$ is in general a nonlinear forcing term on the right-hand side. The second-order derivative over time is approximated using the second-order finite difference $u_{tt} \approx (u^{+1} - 2u + u^{-1})/\Delta t^2$, where the superscripts denote the next, current and previous time levels, and this term is included in the forcing term. We assume that initial conditions and mixed Dirichlet/Neumann boundary conditions are known. A time step $\Delta t$ is introduced. Such a Poisson-type equation can be written in integral form using a source point $h$ and the fundamental solution of the Laplace equation, $u^{*} = 1/(4\pi|\mathbf{r} - h|)$, as Equation (20). The free coefficient $c(h)$ is determined using the solid angle at the source point position. To write a discrete version of Equation (20), we have to interpolate the unknown function $u$ and its flux $\nabla u$ over boundary and domain elements. In the BDIM, the integral equation contains the boundary flux and the domain function. In our implementation of the BDIM, we use quadratic interpolation of the function and linear interpolation of the boundary flux to achieve higher accuracy for simulation problems with high gradients in the solution. We use hexahedral domain elements and quadratic boundary elements. Finally, the discrete version of Equation (20) can be written. A Gaussian quadrature algorithm is used to calculate the integrals.
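To make the DJM recurrence introduced above concrete before turning to the solver details, here is a minimal symbolic sketch in Python/SymPy (the article's own implementations were written in Mathematica, so this illustrates the recurrence, not the authors' code). The operator N and the function g below are a deliberately simple toy choice, u = t + ∫₀ᵗ u(s) ds with exact solution eᵗ − 1, chosen only so the terms stay readable; for the wave equation, N would be the double time-integral of the spatial and nonlinear parts.

```python
import sympy as sp

t, s = sp.symbols("t s")

# Toy instance of u = g + N(u): g = t and N(u) = int_0^t u(s) ds,
# whose exact solution is exp(t) - 1 (hypothetical example, not from the paper).
def N(u):
    return sp.integrate(u.subs(t, s), (s, 0, t))

v = [t]                                    # v_0 = g
for m in range(4):
    # DJM recurrence: v_{m+1} = N(v_0 + ... + v_m) - N(v_0 + ... + v_{m-1})
    new = N(sum(v)) - (N(sum(v[:-1])) if m > 0 else 0)
    v.append(sp.expand(new))

print(sum(v))   # t + t**2/2 + t**3/6 + ... : partial sum of exp(t) - 1
```

Each DJM term here reproduces one Taylor term of the exact solution, which is exactly the behaviour the convergence analysis in the next section quantifies through the ratios of successive terms.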
A collocation scheme is used to write a system of linear equations for the unknown values of the function and the flux. The source point $h$ is placed in boundary and inner nodes. Since the boundary domain integral method requires domain discretization, and since the matrix of domain integrals is full, we avoid excessive memory and computational time consumption by using a domain decomposition technique. Domain decomposition results in a sparse system of equations. In this work, we consider the subdomains to be the domain mesh elements. The connection between the subdomains is made through the fact that the function and the flux must be continuous across the boundaries of the subdomains. The described procedure leads to a sparse and overdetermined system of linear equations. We use a least squares solver with diagonal preconditioning to find the solution. Since the problems considered in this article are nonlinear, we have set up an iteration procedure in which we estimate the forcing using function values from the previous iteration. An under-relaxation of 0.1 was used to achieve convergence. Since the problems considered are 1D, 2D and 3D and the BDIM code is written in 3D, we also used appropriate (zero flux) boundary conditions on the sidewalls. Further details on the BDIM can be found in the work by Ravnik and Tibaut (2018) and references therein. The convergence of the proposed iterative methods In this section, we demonstrate the convergence of the proposed methods for the linear and nonlinear wave equation. We define new iterations $v_{k+1} = F[v_k]$, where $F$ is the operator defined by Equation (22). The term $S_k$ represents the solution of the corresponding problem using the given conditions of the problem. In this way, the solution of the problem can be represented, using Equations (19) and (20), by the series $u(x,t) = \sum_{i=0}^{\infty} v_i$. According to this procedure, sufficient conditions for the convergence of our proposed iterative methods are presented below. The main results are stated in the following theorems. Theorem 3.1. Let $F$ be an operator defined as in Equation (22) from a Hilbert space $H$ to $H$. The series solution $u_n(x,t) = \sum_{i=0}^{n} v_i(x,t)$ converges if there exists $0 < r < 1$ such that $\|v_{i+1}\| \le r\|v_i\|$ for every $i$. This theorem is a special case of the Banach fixed-point theorem and gives a sufficient condition for studying the convergence. Theorem 3.2. If the series solution $\sum_{i=0}^{\infty} v_i(x,t)$ is convergent, then this series represents the exact solution of the current nonlinear problem. Proof. See (Odibat, 2010). Theorem 3.3. Suppose that the truncated series solution is used as an approximation to the solution of the current problem; then the maximum error $E_n(x,t)$ is estimated by $E_n(x,t) \le \frac{r^{n+1}}{1-r}\|v_0\|$. Proof. See (Odibat, 2010). Theorems 3.1 and 3.2 state that a solution obtained by one of the presented methods, i.e. by relation (4) (for the TAM), relation (11) (for the DJM), relation (20) (for the BCM) or (21), converges to the exact solution under the condition that there exists $0 < r < 1$ such that $\|v_{i+1}\| \le r\|v_i\|$ for $i = 0, 1, 2, \dots$. In other words, for each $i$, if we define the parameters $\beta_i = \|v_{i+1}\|/\|v_i\|$, then the series solution converges to the exact solution when $\beta_i < 1$ for $i = 0, 1, 2, \dots$. Furthermore, as shown in Theorem 3.3, the maximum truncation error is estimated by $\|u(x,t) - \sum_{i=0}^{n} v_i\| \le \frac{r^{n+1}}{1-r}\|v_0\|$. Numerical examples In this section, we apply the proposed methods to solve several examples of the 1D, 2D and 3D linear and nonlinear wave equations.
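Before the examples, the sparse overdetermined least-squares step described above for the BDIM can be sketched numerically as follows. This is not the BDIM code itself (which is proprietary); the matrix here is a random sparse stand-in for the subdomain collocation matrix, and the diagonal preconditioner is implemented as simple column scaling ahead of SciPy's LSQR solver.

```python
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
m, n = 500, 200                                    # overdetermined: m > n rows
A = sparse.random(m, n, density=0.05, format="csr", random_state=rng)
A = A + sparse.eye(m, n)                           # keep every column nonzero
x_true = rng.standard_normal(n)
b = A @ x_true                                     # consistent right-hand side

# Diagonal preconditioning: scale each column to unit 2-norm,
# then solve the scaled least-squares problem with LSQR.
col_norms = np.sqrt(A.multiply(A).sum(axis=0)).A1
D = sparse.diags(1.0 / col_norms)
y = spla.lsqr(A @ D, b, atol=1e-12, btol=1e-12)[0]
x = D @ y                                          # undo the column scaling

print("residual norm:", np.linalg.norm(A @ x - b))
```

Column scaling of this kind is one common reading of "diagonal preconditioning" for least-squares problems; it equilibrates the columns so that LSQR's convergence does not suffer from widely differing magnitudes of the boundary and domain integrals.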
Example 1. Consider the 1D linear wave equation given by Wazwaz (2010), with the initial conditions $u(x,0) = x^2$ and $u_t(x,0) = \sin x$. Solution of Example 1 by the TAM: We first begin by solving the initial problem; the primary problem can be written in the form of Equation (4), and the subsequent problems follow from the generalized iterative relationship. Integrating both sides of Equation (31) twice from 0 to $t$, starting from $u_0$, the first iteration is obtained; the remaining iterations can be evaluated in the same way. Then, the solution of Equation (32) follows. We find the second iteration $u_2(x,t)$ by solving the corresponding problem, Equation (33). Similarly, the third iteration $u_3(x,t)$ can be obtained by solving the analogous equation, and in a similar way we obtain the subsequent iterations. Finally, by taking the limit $u(x,t) = \lim_{n\to\infty} u_n(x,t)$, we arrive at the exact solution of the problem. Solution of Example 1 by the DJM: We integrate both sides of Equation (27) twice from 0 to $t$ using the given initial conditions and obtain Equation (35). By reducing the integration in Equation (35) from double to single (Wazwaz, 2015), via the identity $\int_0^t\!\int_0^s h(\tau)\,d\tau\,ds = \int_0^t (t-s)\,h(s)\,ds$, we obtain Equation (36). The DJM algorithm then gives the iterations, and we find the rest of the iterations in the same way; the resulting partial sum is the same as the fifth iteration $u_5$ of the TAM solution. The exact solution can be obtained as the limit of the series. Solving Example 1 by the BCM: We consider Equation (27) with the initial conditions $u(x,0) = x^2$, $u_t(x,0) = \sin x$, and integrate both sides of Equation (27) twice from 0 to $t$ using the given initial conditions. Reducing the integration in Equation (37) from double to single (Wazwaz, 2015), we find Equation (38). Applying the BCM, we obtain a sequence whose fifth member is the same as the fifth iteration $u_5$ in the TAM. The exact solution is obtained by taking the limit. Example 2. Let us consider the 1D nonlinear wave equation (Wazwaz, 2007) with the given initial conditions. In order to solve Equation (40) by the TAM with the initial conditions given, we first solve the initial problem and then make use of the generalized iterative formula. By solving Equation (42) we get $u_0$, the first iteration $u_1(x,t)$ is evaluated by solving the corresponding problem, and applying the same process for $u_2$ yields a series that converges to the exact solution. Solving Example 2 by the DJM: Consider Equation (40) with the given initial conditions. Integrating both sides of Equation (40) twice from 0 to $t$, we get Equation (44), and reducing the integration in Equation (44) from double to single (Wazwaz, 2015), we find Equation (45). Therefore, we have the corresponding recurrence relation, and by applying the DJM we find the iterations for $n = 2, 3, \dots$; this is the same as the approximate solution in Equation (43), which converges to the exact solution. Solving Example 2 by the BCM: Consider Equation (43); following the same route as in the DJM, we again arrive at Equation (45). By applying the BCM, we obtain a sequence which is the same as the approximate solution in Equation (43). We see that the approximate solutions obtained from the three proposed techniques are the same.
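The display equations for Example 1 are garbled in the source, so as an illustration we assume the underlying PDE is the homogeneous 1D wave equation $u_{tt} = u_{xx}$ (an assumption on our part, consistent with the stated data $u(x,0) = x^2$, $u_t(x,0) = \sin x$). Under that assumption the TAM iterates can be generated symbolically, and they build up the Taylor partial sums of a closed-form candidate solution:

```python
import sympy as sp

x, t, s = sp.symbols("x t s")

# TAM, assuming the PDE of Example 1 is u_tt = u_xx (our assumption; the
# equation itself is lost in the source).  Each iterate solves
# (u_{n+1})_tt = (u_n)_xx with u(x,0) = x**2, u_t(x,0) = sin(x), i.e.
# u_{n+1} = x**2 + t*sin(x) + int_0^t (t - s)*(u_n)_xx(x, s) ds.
u = x**2 + t * sp.sin(x)                  # u_0 solves the initial problem
for n in range(4):
    rhs = sp.diff(u, x, 2).subs(t, s)
    u = sp.expand(x**2 + t * sp.sin(x)
                  + sp.integrate((t - s) * rhs, (s, 0, t)))

print(u)   # x**2 + t**2 + sin(x)*(t - t**3/6 + t**5/120 - ...)
```

The iterates accumulate the Taylor partial sums of $u = x^2 + t^2 + \sin x \sin t$, and a quick check with sp.diff confirms that this candidate satisfies both the assumed PDE and the stated initial conditions, consistent with the observation above that all three methods return the same iterates.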
To prove the convergence of the proposed methods, we use the process given in Equations (21)–(24), and the iterative scheme for Equation (43) can be formulated accordingly. Applying the TAM, the operator $F[v_k]$ is defined as in Equation (22), with the term $S_k$ being the solution of the associated problem. Also, when applying the BCM, $S_k$ represents the solution of the corresponding problem. Iterative approximations can be used directly when applying the DJM; therefore, we have the corresponding terms. We use the above iterates in computing the values of $\beta_i$ as in Equation (26), and we obtain that the $\beta_i$ values for $i \ge 0$ and for all $(x,t)$ with $x \in \mathbb{R}$ and $0 < x, t \le 1$ are less than 1, so the proposed iterative methods satisfy the convergence condition. We calculate the absolute error $\mathrm{Absr}_n = \mathrm{N}[\mathrm{Abs}[w - u_n]]$ (in Mathematica notation, i.e. $|w - u_n|$) to check the accuracy of the approximate solution $u_n$, where $w = xt$ is the exact solution. Figures 1 and 2 show the 3D plotted graphs of $\mathrm{Absr}_n$ for the approximate solutions obtained by the suggested iterative methods and by the BDIM. The results show that the BDIM accuracy grows as the time step is shortened. This kind of behaviour is expected, since a shorter time step enables better time resolution and captures the solution development more accurately. Similarly, by increasing the number of iterations of the iterative methods, the errors decrease and the precision of the approximate solution increases. Solving Example 3 by the TAM: By applying the TAM, we obtain the corresponding iterations with the given initial conditions. Equation (48) will be solved by the three iterative methods with the given initial conditions. Solving Example 4 by the TAM: the iterations contain terms such as $x^3y^3 + \cdots$; continuing until $n = 4$, this is the same as the approximate solution in Equation (49), and it converges to the exact solution. Solving Example 4 by the BCM: the iterations contain terms such as $-\frac{t^{14}x^5y^3}{4762800} - \frac{t^{14}y^4}{5896800} + \cdots$, and the result is the same approximate solution as in Equation (49). To establish the convergence for the proposed methods, we can find the $\beta_i$ values for the problem as in Equation (48). Hence, for the terms of the series $\sum_{i=0}^{\infty} v_i(x,y,t)$ given in Equation (24), the $\beta_i$ values for $i \ge 0$ and for all $(x,y) \in \mathbb{R}^2$ with $0 < x, y, t \le 1$ are less than 1, so the proposed iterative methods satisfy the convergence condition. In order to test the accuracy of the approximate solution, we calculate $\mathrm{Absr}_n$, where $w = xyt$ is the exact solution. Figures 3 and 4 show the absolute error $\mathrm{Absr}_n$ for the approximate solutions obtained by the iterative methods and by the BDIM. It can be seen clearly that by increasing the number of iterations the error of the iterative methods is reduced and the solution becomes more accurate. The same conclusion can be drawn for the BDIM when the computational mesh density is increased (Figure 3). Example 5. Let us take the 3D linear wave equation given in (Wazwaz, 2010), with the given initial conditions. Equation (50) will be solved by the three proposed iterative methods. Solving Example 5 by the TAM: the iterations contain terms such as $t^5\sin z$, and the limit of the sequence is the exact solution. Solving Example 5 by the DJM: for $n = 2, 3, 4, \dots$ the result is the same as the solution in Equation (51), and the exact solution is obtained in the limit. Solving Example 5 by the BCM: the same solution is obtained with the given initial conditions. Solving Example 6 by the TAM: the iterations contain terms such as $t^6y^2z^2 - \frac{1}{252}t^7x^3y^3z^3 - \frac{t^{10}x^4y^4z^4}{12960}$ and $x^2z^2 + \cdots$.
This is the same as the approximate solution in Equation (53) and converges to the exact solution. Solving Example 6 by the BCM: this is again the same as the approximate solution in Equation (53) and converges to the exact solution. To establish convergence, we find the values of $\beta_i$ for the problem. Hence, for the terms of the series $\sum_{i=0}^{\infty} v_i(x,y,z,t)$ given in Equation (24), we get that the $\beta_i$ values for $i \ge 0$ and for all $(x,y,z,t)$ with $(x,y,z) \in \mathbb{R}^3$ and $0 < x, y, z, t \le 1$ are less than 1, so the proposed iterative methods satisfy the convergence condition. To examine the accuracy of the approximate solutions for this example, we calculate the absolute error of the approximate solution, where the exact solution is $u = txyz$. The results are presented in Figures 5 and 6. The figures show the absolute error $\mathrm{Absr}_n$ for the approximate solutions obtained by the proposed iterative methods and by the BDIM. We note that by increasing the number of iterations, the error decreases and the accuracy of the approximate solutions increases. Shortening the time step has a similar effect for the BDIM. In order to study the accuracy of the proposed methods, we measure the difference between the exact and numerical solutions in terms of the RMS norm, defined as $\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(e_i - n_i)^2}$, where $e_i$ is the exact solution in node $i$ and $n_i$ is the numerical solution at the same node at a certain time. This allows us to display RMS time diagrams for Examples 2, 4 and 6 in Figures 7–9. The iterative methods BCM, DJM and TAM have similar RMS difference properties. The accuracy is very high at the beginning of the simulation; for long periods of time the accuracy deteriorates. Since these approaches lead to an expansion of the solution, increasing the time takes us further away from the initial point; therefore the accuracy decreases, as with a Taylor expansion. At the beginning of the simulation, when the accuracy is better than $10^{-15}$, we notice some oscillations in the accuracy of the TAM. The accuracy of the BDIM is not dependent on time, but is defined by the mesh size and the length of the time steps. The best results are obtained with a short time step and a dense mesh. Because of these properties, the BDIM is more accurate than the iterative methods for long periods of time. It is worth mentioning that the main advantage of using the TAM, DJM and BCM compared to other numerical methods is that no linearization or discretization is required, thus avoiding large computational effort and rounding errors. The implementation does not include a restrictive assumption for the nonlinear terms, and it overcomes the difficulties encountered in the calculation of the Adomian polynomials used to handle the nonlinear terms, which is a disadvantage of the Adomian Decomposition Method (ADM). It does not require calculation of the Lagrange multiplier, as in the Variational Iteration Method (VIM), where the terms of the sequence become complex after several iterations, so that the analytical evaluation of the terms becomes very difficult or impossible. There is also no need to construct a homotopy, as in the Homotopy Perturbation Method, and to solve the corresponding algebraic equations.
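The RMS comparison above is straightforward to reproduce; a minimal NumPy helper is shown below. The 1/N normalization is the conventional reading of the RMS difference (the radical in the source formula is garbled), and the test values are hypothetical:

```python
import numpy as np

def rms_difference(exact, numerical):
    """RMS difference sqrt(sum((e_i - n_i)**2) / N) between the exact
    solution e_i and the numerical solution n_i over the mesh nodes."""
    e = np.asarray(exact, dtype=float)
    n = np.asarray(numerical, dtype=float)
    return np.sqrt(np.mean((e - n) ** 2))

# Hypothetical check against Example 2, whose exact solution is w = x*t:
x = np.linspace(0.0, 1.0, 17)
t = 0.1
exact = x * t
numerical = exact + 1e-6 * np.random.default_rng(1).standard_normal(x.size)
print(rms_difference(exact, numerical))        # ~1e-6, as expected
```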
Summary and conclusion In this work, we developed three iterative methods, the TAM, DJM and BCM, and a discretization-based BDIM method, to find approximate solutions of the wave equation in 1D, 2D and 3D. The iterative methods provide the solutions in the form of a series. The accuracy of the solutions has been investigated by absolute error diagrams and by studying the RMS error propagation in time. The convergence of the methods was investigated, and their efficiency and accuracy were demonstrated. We have shown that the accuracy of the TAM, DJM and BCM increases with the number of iterations used and decreases over time when a constant number of iterations is used. From this we conclude that the number of iterations chosen must correspond to the time at which the solution is needed. We have compared the accuracy of the iterative methods with the BDIM, which is a domain-based method. We could achieve better accuracy with the iterative methods as long as the number of iterations was large enough. On the other hand, we have observed that the accuracy of the BDIM depends strongly on the grid discretization and the time step. The choice of a fine grid and a short time step leads to better accuracy, but results in increased computational effort. Disclosure statement No potential conflict of interest was reported by the authors. Figure 1. Absolute errors obtained by the BDIM using 17 equidistant nodes and three different time steps (0.1, 0.01 and 0.001) for Example 2. In all three cases, we observe that the error decreases with time. Figure 2. The absolute error $\mathrm{Absr}_n$ versus time and $x$ for Example 2 at $n = 1, 3, 4$. The panels present $u_1(x,t)$ (a), $u_3(x,t)$ (b) and $u_4(x,t)$ (c), at the time instant $t = 0.1$. A very good increase in accuracy is observed as the number of iterations $n$ increases. Figure 3. The panels show absolute errors obtained by the BDIM solution of Example 4 using a time step of $\Delta t = 0.01$ for three different mesh discretizations: $5^2$ nodes (a), $9^2$ nodes (b) and $17^2$ nodes (c). We observe a substantial improvement in the accuracy of the results when a finer computational grid is used. Results are shown at $t = 0.1$. Figure 4. (a–c): The absolute error $\mathrm{Absr}_n$ of the solution of Example 4 at $t = 0.1$ for $n = 1, 3, 4$. The panels show $u_1(x,y,t)$ (a), $u_3(x,y,t)$ (b) and $u_4(x,y,t)$ (c). We observe an increase in accuracy as the number of iterations $n$ increases. Figure 5. Absolute errors of the solution of Example 6 obtained by the BDIM using $17^3$ equidistant nodes and three different time steps (0.1, 0.01 and 0.001). Results are shown on the $z = 0.1$ plane at $t = 0.1$. We observe that the error decreases with shortening of the time step. Figure 6. Absolute error $\mathrm{Absr}_n$ of the solution of Example 6 obtained by the iterative methods for different numbers of iterations $n = 1, 3, 4$. The panels show $u_1(x,y,z,t)$ (a), $u_3(x,y,z,t)$ (b) and $u_4(x,y,z,t)$ (c) at time $t = 0.1$ and $z = 0.1$. We observe an increase in accuracy when the number of iterations is increased. Figure 7. Plots of the RMS difference versus time for the solution of Example 2. Top row: dependence on mesh density (BDIM) and on the number of iterations (BCM, DJM and TAM). Bottom row: dependence on time-step size. Figure 8. Plots of the RMS difference versus time for the solution of Example 4.
Top row: dependence on mesh density (BDIM) and on the number of iterations (BCM, DJM and TAM). Bottom row: dependence on time-step size.
5,637.8
2020-01-01T00:00:00.000
[ "Mathematics" ]
STABILITY OF PERIODIC ORBITS IN THE AVERAGING THEORY: APPLICATIONS TO LORENZ AND THOMAS' DIFFERENTIAL SYSTEMS We study the kind of stability of the periodic orbits provided by higher order averaging theory. We apply these results to determine the k-hyperbolicity of some periodic orbits of the Lorenz and Thomas' differential systems. Introduction and statement of our main result The averaging theory is a classical method for studying the solutions of nonlinear dynamical systems, and in particular their periodic solutions. For a general introduction to the averaging theory see the book of Sanders, Verhulst and Murdock [9] and the references quoted there. Recently, many works extending and improving the averaging method for computing periodic solutions have been presented; see for instance [1,5,4,3]. Most of these results enhance the number of periodic solutions that can be detected by the averaging method, although few comments are made about the stability of these periodic solutions. To fill this gap, the present work provides a strategy to determine the stability of the periodic orbits that bifurcate from periodic orbits forming a manifold, or from points inside a continuum set on which some averaging functions vanish; see Theorem 2. The detection of such bifurcations is possible by applying the Lyapunov–Schmidt reduction method to higher order averaging functions; see Theorem 1. This theorem was already used in [2] and [4] without the stability analysis. This hypothesis is always true when the unperturbed system has a manifold of T-periodic solutions. The standard method of averaging for finding periodic solutions consists in writing the displacement map (2) as a power series in ε in the following way, where for i = 0, 1, 2, 3, 4 we have the corresponding coefficients. The functions g_1, g_2, g_3 and g_4 will be called here the averaged functions of order 1, 2, 3 and 4, respectively, of system (1). We say that system (1) has a periodic solution bifurcating from the point z_0 if there exists a branch of solutions z(ε) of the displacement function such that d(z(ε), ε) = 0 and z(0) = z_0. Now we present our result about the existence and stability of the periodic solutions of system (1). The methodology used here was introduced for studying differential systems whose unperturbed part has a sub-manifold of T-periodic solutions; see for instance [1] and [4]. The main difference of this work from the previous ones is that the first nonzero averaged function vanishes over a graph. Let π : ℝ^m × ℝ^{n−m} → ℝ^m and π^⊥ : ℝ^m × ℝ^{n−m} → ℝ^{n−m} denote the projections onto the first m coordinates and onto the last n − m coordinates, respectively. For a point z ∈ U we also write z = (a, b) ∈ ℝ^m × ℝ^{n−m}, and we consider the graph (3). The next theorem provides sufficient conditions for the existence of periodic solutions of the differential system (1). This theorem was proved in [3]; here we also provide a sketch of its proof. We need this theorem for the statement of our main result in Theorem 2. Theorem 1. Let r ∈ {0, 1, 2} be the first subindex such that g_r ≢ 0. In addition to hypothesis (H) assume that (i) the averaged function g_r vanishes on the graph (3), that is, g_r(z_α) = 0 for all α ∈ V, and (ii) the Jacobian matrix Dg_r(z_α) satisfies the nondegeneracy condition of [3], namely its lower-right (n − m) × (n − m) block Δ_α has det(Δ_α) ≠ 0 for all α ∈ V. We define the functions f_1 and f_2. Then the following statement holds. (a) If there exists α* ∈ V such that f_1(α*) = 0 and det(Df_1(α*)) ≠ 0, then there exists a T-periodic solution x(t, z(ε), ε) of system (1) such that z(ε) → z_{α*} as ε → 0. Theorem 1 shows that the functions f_1 and f_2 provide sufficient conditions for the existence of periodic solutions of the differential system (1).
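The displacement map d(z, ε) = x(T, z, ε) − z that drives the whole construction can be evaluated numerically by integrating the system over one period. The sketch below does this with SciPy for a hypothetical planar system (not one of the systems treated in this paper) whose unperturbed part has all solutions 2π-periodic, and then finds the initial condition of a periodic orbit by solving d(z, ε) = 0:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

T = 2.0 * np.pi          # period of the unperturbed flow

def field(t, x, eps):
    # Unperturbed part: rigid rotation (every orbit is T-periodic);
    # perturbation: weakly nonlinear radial term (hypothetical example).
    x1, x2 = x
    r2 = x1**2 + x2**2
    return [x2 + eps * x1 * (1.0 - r2), -x1 + eps * x2 * (1.0 - r2)]

def displacement(z, eps):
    """d(z, eps) = x(T, z, eps) - z, evaluated by numerical integration."""
    sol = solve_ivp(field, (0.0, T), z, args=(eps,), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1] - np.asarray(z)

# For this field the angle returns exactly after time T, so it suffices to
# look for a zero of the radial component of d along the positive x-axis.
eps = 0.05
g = lambda r: displacement([r, 0.0], eps)[0]
r_star = brentq(g, 0.5, 1.5)
print("periodic orbit initial condition:", (r_star, 0.0))   # r -> 1
```

In this toy example the first averaged function vanishes exactly on the unit circle, and the numerically located zero of the displacement map sits on that circle; the stability analysis of Theorem 2 then amounts to examining the eigenvalues of the Jacobian of d at this zero.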
For periodic solutions detected by statement (a) of Theorem 1, the next result reveals how the higher order function f_2 can be used to determine the stability of the periodic solution x(t, z(ε), ε). The same kind of result can be obtained for periodic orbits detected by statement (b) of Theorem 1 using the bifurcation function of order 3. The expressions of such functions are explicitly given in [3]. Applications Lorenz differential system. Consider the differential system ẋ = a(x − y), ẏ = bx − y − xz, ż = xy − cz, with a, b, c being real coefficients. In a recent publication [2] the authors found a periodic orbit bifurcating from the origin of system (9); see Figure 2. The next theorem completes this work, giving the stability characterization of that periodic solution. Theorem 3. Let a = −1 + a_2 ε² and c = c_1 ε. Assume that b > 1, a_2 < 0, c_1 ≠ 0 and |ε| ≠ 0 sufficiently small. Then the Lorenz differential system (9) has a periodic orbit bifurcating from the origin. Furthermore, for c_1 > 0 this periodic orbit is an attractor; otherwise, for c_1 < 0, the periodic orbit has a stable manifold formed by two topological cylinders and an unstable manifold formed by two topological cylinders. Theorem 3 is proved in Section 3 using Theorems 1 and 2. Thomas' systems. A circulant system is a differential system defined by a function f(x, y, z) having the variables cyclically symmetric, according to ẋ = f(x, y, z), ẏ = f(y, z, x), ż = f(z, x, y), where the function f(u, v, w) is fixed and the variables are rotated. In 1999 René Thomas proposed two circulant systems having cyclic symmetry: system (10), with first equation ẋ = sin y − βx, is defined by the function f(u, v, w) = −βu + sin v, and system (11) is defined by f(u, v, w) = −au + bv − v³. The chaotic behaviour generated by these systems was presented in [12]; system (10) was also studied by Sprott and Chlouverakis in [11]. System (10) is sometimes called Thomas' system; see for instance [10, Chapter 3]. The next results give sufficient conditions for the existence of periodic solutions of these differential systems. One can check that the origin is an equilibrium point of system (10); when its linearization has a pair of complex eigenvalues on the imaginary axis, the bifurcation of a periodic orbit occurs. Theorem 4. For ε > 0 sufficiently small and β_1 > 0, the differential system (10) has an isolated periodic solution bifurcating from the origin. Theorem 4 is proved in Section 3 using Theorems 1 and 2, taking r = 0. System (11) has 27 steady states, but we are interested in the pair of symmetric equilibria P_− and P_+. Taking a = 5√3ω/6 and b = √3ω/3 with ω > 0, these equilibrium points have the eigenvalues −√3ω and ±ωi. The next theorems show that periodic orbits are born at P_− and P_+. Theorem 5. Let a = 5√3ω/6 + εa_1, b = √3ω/3 + εb_1 with ω > 0 and (5b_1 − 2a_1) < 0. Then for ε > 0 sufficiently small, the differential system (11) has two periodic solutions, such that φ_+(t, ε) bifurcates from P_+ and φ_−(t, ε) bifurcates from P_−; their expressions are given in (12). The periodic orbit analytically found in Theorem 5 was detected numerically by Thomas in [12]; he also showed, for specific values of a and b, that these periodic solutions give birth to a strange attractor after a period-doubling cascade. The following figures illustrate this phenomenon. Here a_1 = 6, b_1 = 1 and ω = 1, and the time interval is from 0 to 1000. Figure 2 shows the solution starting at (−0.8, −0.8, −0.45) being attracted by the periodic orbit φ_−(t, ε); see Equation (12). As we increase ε, the periodic orbit grows in size and complexity; see Figures 3 and 4.
The approximation to the periodic orbit provided by (12) can be seen as a dashed curve. Figures 5, 6 and 7 show the appearance of the strange attractor as ε increases. A fundamental notion in the qualitative theory of differential equations is hyperbolicity. A constant matrix will be called hyperbolic if its eigenvalues lie off the imaginary axis, in which case its index is the number of eigenvalues in the right half-plane. Consider a matrix function A(ε) depending on a parameter ε. If A_0 is hyperbolic of index i, then one can see that for ε > 0 sufficiently small A(ε) will be hyperbolic with the same index i. If A_0 is not hyperbolic, the placement of the eigenvalues of A(ε) may be hard to determine. To deal with this problem we use a method introduced by Murdock and Robinson; see [7] and [8]. The matrix A(ε) is called k-hyperbolic of index i if for every smooth matrix function B(ε) with B(ε) = A(ε) + O(ε^{k+1}) there exists an ε_0 > 0 such that B(ε) is hyperbolic of index i for all ε in the interval 0 < ε < ε_0. The next result will be needed for proving Theorem 2. Assume that there exists a matrix function S(ε) that block diagonalizes A(ε) into its left, center and right blocks L(ε), C(ε), R(ε), which for ε = 0 have their eigenvalues respectively in the left half-plane, on the imaginary axis, and in the right half-plane. Thus we have Theorem 6 ([6, Theorem 5.7]). Let C(ε) be the center block of A(ε) and let its size be m × m. Then A(ε) is k-hyperbolic provided that: (a) A_0 has no multiple eigenvalues on the imaginary axis. Using (21) we can write the Jacobian matrix of the displacement function at z(ε) as a power series in ε around ε = 0. A classical result about ordinary differential equations says that when (22) is a hyperbolic matrix, the periodic solution x(t, z(ε), ε) will be hyperbolic with the same kind of stability. This is also referred to as linear stability. Thus the proof of the theorem follows from applying Theorem 6 to the 1-jet (5), observing that hypotheses (s_1) and (s_2) are equivalent to hypotheses (a) and (b), respectively. Thus the matrix is 2-hyperbolic and the theorem is proved. Proof of Theorem 3. The existence of such a periodic orbit is proved in Theorem 4 of [2]. Following the ideas of that proof we see that, after some changes of variables, system (9) can be put into the normal form for applying Theorem 1, given by Equation (22) of [2], with z = (ρ, z) and the derivative taken with respect to θ. Calculating the higher order averaging functions of this system for i = 0, 1, 2, 3, we have g_i(z) = (g_{i1}(z), g_{i2}(z)), where g_0(z) ≡ 0. Thus we can calculate the functions f_i(α) for i = 1, 2 with respect to the averaging functions above and the graph. By the hypotheses of Theorem 3, one can check that α* = 2ω√(−2a_2) is a simple zero of the function f_1(α). Then we can apply Theorem 2 with r = 1. By (21) we can write the initial point of the periodic solution as z(ε) = z_{α*} + εz_1, and the matrix (5) becomes A(ε), which has two distinct eigenvalues λ_1 and λ_2. As a_2 is negative by hypothesis, we have that for ε > 0 sufficiently small, if c_1 > 0, then Re(λ_1) < Re(λ_2) < 0 and consequently the periodic orbit is an attractor. Otherwise, if c_1 < 0, then Re(λ_2) < 0 < Re(λ_1), and the periodic orbit has a stable manifold formed by two topological cylinders and an unstable manifold formed by two topological cylinders. System (24) is 2π-periodic and is in the normal form required for applying Theorem 1.
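For readers who want to reproduce the qualitative behaviour of Thomas' system (10) numerically, a minimal SciPy sketch follows. The two missing equations are filled in from the stated cyclic rotation of variables, and the damping value b = 0.32 is an illustrative choice, slightly below the Hopf value b ≈ 0.33 reported by Sprott and Chlouverakis [11] for this system, rather than a parameter taken from the theorems above:

```python
import numpy as np
from scipy.integrate import solve_ivp

def thomas(t, u, b):
    # Cyclically symmetric Thomas system (10):
    # x' = sin y - b x,  y' = sin z - b y,  z' = sin x - b z
    x, y, z = u
    return [np.sin(y) - b * x, np.sin(z) - b * y, np.sin(x) - b * z]

b = 0.32                      # illustrative: just below the Hopf bifurcation
sol = solve_ivp(thomas, (0.0, 500.0), [0.1, 0.0, -0.1], args=(b,),
                rtol=1e-9, atol=1e-12, dense_output=True)

# Sample the tail of the trajectory: after transients it should settle
# onto the attracting set (a small limit cycle at this parameter value).
tail = sol.sol(np.linspace(400.0, 500.0, 1000))
print("x range on the attractor:", tail[0].min(), tail[0].max())
```

Decreasing b further reproduces the period-doubling cascade into the strange attractor described in the text, mirroring the role that ε plays for system (11) in Figures 2-7.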
2,720.2
2018-04-12T00:00:00.000
[ "Mathematics" ]
Unprecedented dipole alignment in α-phase nylon-11 nanowires for high-performance energy-harvesting applications α-Phase nylon-11 nanowires exhibit unprecedented dipole alignment, leading to high-performance triboelectric generators. INTRODUCTION Remanent polarization in a ferroelectric polymer is the polarization that persists when an applied electric field is reduced to zero. Typically, ferroelectric polymer materials, such as poly(vinylidene fluoride) (PVDF) and its copolymers, exhibit nonzero remanent polarization following electrical poling (1, 2). These materials have found use in various applications, including sensors, materials for tissue regeneration, and energy-harvesting devices (3-6). In particular, ferroelectric materials have received substantial interest for mechanical vibration-based energy-harvesting applications, such as in triboelectric generators, as the amount of accumulated charge (i.e., triboelectric charge) on the contact surface can be improved by increasing the intensity of the remanent polarization in the material (7-11). However, the fabrication of polymers with strong and thermally stable remanent polarization, as required for next-generation high-performance energy harvesters, has been a long-standing issue (12-14). In ferroelectric polymers, remanent polarization can be generated via molecular alignment (i.e., preferential crystal orientation) and the resulting dipole alignment by electrical poling (i.e., the application of an electric field higher than the coercive field of the material). The remanent polarization (P_r, μC cm⁻²) obtained in this way is always much lower than the saturation polarization, as the forcibly oriented polymer molecules relax back into an equilibrium conformation once the electric field is removed. Furthermore, above the Curie temperature (T_C), ferroelectric polymers readily lose their P_r since a structural phase transition occurs. These phenomena indicate that the low intensity and limited thermal stability of remanent polarization can be attributed to the molecular structure of the ferroelectric polymer. Polymers with hydrogen bonds can potentially overcome such limitations, and odd-numbered nylons, especially nylon-11, are well-known ferroelectric polymers with hydrogen bonds. As a result, through extensive hydrogen bonding, nylon-11 can exhibit better packing and a more stable molecular configuration. (It must be noted that even with the hydrogen bonds, the P_r of nylon-11 cannot exceed that of fluoropolymers because of its different constituents and molecular structure.) Furthermore, nylon-11 shows good thermal stability and ferroelectric properties, which are comparable to those of PVDF and its copolymers (15-18). However, among the various crystal structures of nylon-11, ferroelectric properties, including remanent polarization, have only been achieved in the metastable δ′-phase, with relatively sparse chain packing and random hydrogen bonds, as the only way to achieve polarization has been through electrical poling (a detailed explanation of the crystal structure of nylon-11 is provided in note S1) (14-16). In the case of the thermodynamically stable α-phase, despite its outstanding thermal stability based on denser molecular packing and well-ordered hydrogen bonds, achieving dipole alignment via electric poling has not been possible (19, 20).
This is because the tightly packed hydrogen bonds in the α-phase restrain the rotation of the dipoles up to the point of electrical breakdown, which is why the α-phase has been known as a "polar" but "nonferroelectric" phase (21, 22). Here, we have found exceptionally ordered and thermally stable dipole alignment in α-phase nylon-11 nanowires. Through a nanoconfinement effect, realized by the "thermally assisted nanotemplate infiltration" (TANI) method, α-phase nylon-11 nanowires with definitive dipolar alignment have been achieved spontaneously, without the need for an external electric poling field. The ideal P_r value of perfectly aligned α-phase nylon-11 was confirmed through molecular simulations. To demonstrate the formation of preferential crystal orientation in α-phase nanowires, we performed detailed x-ray diffraction (XRD) analysis using nanowires with and without the supporting nanotemplate. The remarkably high surface potential of the α-phase nanowires, corresponding to unidirectional dipole alignment, was measured directly by Kelvin probe force microscopy (KPFM), indicating that the α-phase with fully aligned dipoles would have much greater net polarization than the electrically poled ferroelectric δ′-phase. The robust thermal stability of the dipole alignment in α-phase nylon-11 nanowires was also confirmed through studies of surface potential and molecular structure changes before and after thermal annealing. Correspondingly, a triboelectric energy generator based on α-phase nylon-11 nanowires fabricated via the TANI method showed 34 times higher output power density compared to an aluminum-based device when subjected to identical mechanical excitations. Intensity of ideal polarization As a semicrystalline polymer, nylon-11 has at least three crystal structures, referred to as triclinic (α and α′), monoclinic (β), and pseudo-hexagonal (γ, δ, and δ′) (2, 23). Among them, only the metastable pseudo-hexagonal phases, such as the δ′-phase, display ferroelectric properties, due to sparse chain packing and random hydrogen bonding. As Fig. 1 (A and D) illustrates, the unpoled δ′-phase has randomly oriented nylon-11 chains within a pseudo-hexagonal unit cell, resulting in the cancellation of dipole moments (23). Additional mechanical drawing and subsequent electrical poling allow the chains to rotate such that the amide groups point in the same direction, resulting in a net dipole moment (Fig. 1, B and E) (14-16). In contrast, the α-phase can adopt a well-aligned molecular structure in the triclinic unit cell without stretching and/or high-voltage poling (Fig. 1, C and F). This is because the hydrogen bonds are organized into well-defined sheets held together by van der Waals interactions, with the amide groups of adjacent chains located at about the same height along the chain axis (23). As a result, the dipole moment perpendicular to the chain axis points along a single direction. However, it must be noted that such unidirectional dipole moments of the α-phase are limited to a localized crystalline region, and the net-polarized directions of such crystalline regions are randomly determined during the crystallization process. Furthermore, it is also impossible to align every dipole in the α-phase structure because of the constraints on rotation of the hydrogen-bonded molecules up to the point of electrical breakdown. As a result, the net polarization of a pristine α-phase (bulk) sample is much smaller than that of an electrically poled δ′-phase sample.
So how high can the polarization be when the dipoles in the bulk α-phase are fully aligned? To evaluate the possible maximum polarization in nylon-11, we estimate the "ideal" P_r values of perfectly aligned δ′-phase and α-phase by conducting molecular-scale simulations (24) (details of the simulation process are discussed in note S2). To a first-order approximation, P_r scales linearly with the dipole moment and the crystallinity (12, 25). Using molecular simulation, the dipole moment of individual molecules can be measured by assigning partial atomic charges to the atoms. For a minimum repeating unit, dipole moments per unit cell of 88 × 10⁻³⁰ and 101 × 10⁻³⁰ C·m were obtained for the δ′- and α-phase, respectively. In the case of the perfectly aligned δ′-phase, assuming a crystallinity of about 40%, the calculated P_r was 3.2 μC cm⁻²; this is in good agreement with the experimentally determined value of about 5.0 μC cm⁻² (2). In contrast, the α-phase with the same crystallinity showed a P_r of 7.5 μC cm⁻², meaning that the α-phase with fully aligned dipoles would have much greater net polarization than the electrically poled ferroelectric δ′-phase. This is because the fully stretched chain structure and well-aligned hydrogen-bonded sheets in the α-phase maximize the dipole moment per monomer unit (25). In addition, considering the crystal structure, the distance between adjacent molecules in the triclinic α-phase is much smaller than that in the pseudo-hexagonal δ′-phase (26). Nanoconfinement-induced preferential crystal orientation To align the dipoles in α-phase nylon-11 nanowires, we developed the TANI method as an effective nanoconfinement technique (details of the experimental process are given in Materials and Methods and note S3) (27, 28). This was necessary because, to date, nylon-11 nanowires with the thermodynamically stable α-phase have never been realized by conventional template-wetting methods, owing to the difficulties associated with this synthetic route (11, 29). Figure 2A shows scanning electron microscope (SEM) images of an anodized aluminum oxide (AAO) nanoporous template. The top surface and cross-sectional images indicate that the pore size is around 200 nm. The morphology of the nanowires fabricated by the TANI method is displayed in Fig. 2B. Long chain-shaped nanowires with uniform width (200 nm) and length (60 μm) were detected after the AAO was dissolved using mild acid. These nanowire dimensions are similar to those of the template pore channels. The surface morphology of the nanowires was measured by atomic force microscopy (AFM). Compared to the nylon-11 film, the nanowires showed a uniform and smooth surface topography without grain boundaries (figs. S2 and S3). Note that this fabrication method can be scaled up, as TANI is a non-vacuum and relatively low-temperature process, and α-phase nanowires within the AAO template are relatively easy to handle (note S4). Detailed crystal structure characterization was carried out by XRD. It has been reported that an α-phase nylon-11 film shows diffraction peaks at 2θ = 7.8°, 20°, and 24.2° (23). However, nanowires fabricated by a conventional template-wetting method were found to generate diffraction patterns with weak peaks at 2θ = 21.6° and 22.8° (Fig. 2C, left, black) (29). This result indicates that α-phase nylon-11 nanowires with desirable crystallinity could not be obtained through a conventional nanoconfinement method.
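As a quick arithmetic check of the simulation estimates quoted earlier in this section, note that to first order P_r ≈ χ·p/V_cell, where χ is the crystallinity, p the dipole moment per unit cell, and V_cell the unit-cell volume. The sketch below reproduces the quoted P_r values; the cell volumes are inferred from those very numbers rather than taken from the paper, so this is a consistency check, not an independent calculation.

```python
# First-order estimate P_r = chi * p / V (dipole density times crystallinity).
chi = 0.40                 # crystallinity assumed in the text (40%)

phases = {
    # phase: (dipole moment per unit cell [C*m], inferred cell volume [m^3])
    "delta'": (88e-30, 1.10e-27),
    "alpha":  (101e-30, 0.54e-27),
}

for name, (p, V) in phases.items():
    P_r = chi * p / V                    # polarization in C/m^2
    print(f"{name}-phase: P_r ~ {P_r * 100:.1f} uC/cm^2")
# -> delta'-phase: ~3.2 uC/cm^2, alpha-phase: ~7.5 uC/cm^2, matching the text
```

The smaller inferred α-phase cell volume is consistent with the statement above that adjacent molecules sit much closer together in the triclinic α-phase than in the pseudo-hexagonal δ′-phase.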
In contrast, the nanowires fabricated by our TANI method displayed the same peak positions as the reported α-phase film, with much stronger diffraction intensities than those of conventionally generated nanowires (Fig. 2C, left, red). This means that the solvent-vapor-filled closed heating system of the TANI method effectively mimics the slow crystallization process of typical α-phase film fabrication techniques by suppressing the speed of solvent evaporation within the nanopores. Furthermore, the TANI method allowed more precise control of the crystal structure, wherein we were able to manipulate the rate of crystallization by adjusting both the solution concentration and the processing temperature. As shown in Fig. 2C (left and right top), the relative intensity of the peak at 2θ = 24.2° gradually decreased with decreasing solution concentration, while that of the 20° peak was maintained within the error range. This is because the more dilute solution enabled a further decrease of the crystallization speed by increasing the free volume of the polymer chains. Considering that each peak corresponds to a specific lattice plane, the resulting diffraction pattern with only one distinct peak from the 5 weight % (wt %) solution indicates that more aligned molecular structures could be achieved through the TANI method. Additional heating also allowed us to control the polymer crystallization process within the nanopores. The changes in the crystallite size perpendicular to the (200) plane (D_(200)) of the 5 wt % samples as a function of processing temperature showed that the average D_(200) gradually increased with processing temperature (Fig. 2C, right bottom). This indicates that the additional heating increased the chain mobility, which is a driving force for molecular reorientation and alignment. (It must be noted that the error bars in the D_(200) plot are attributed to deviations between different samples. Experimental errors, originating from the XRD measurement settings and profile fitting, are within a 2-nm range.) These XRD results imply that the desired α-phase nanowires could be achieved by the TANI process. To confirm the crystallography of the nanowires, α-phase nylon-11 films were fabricated for comparison. Figure 2D shows the XRD patterns of the α-phase film (black) and of template-freed nanowires fabricated by the TANI method using a 5 wt % solution and 80°C heating (red). The α-phase nylon-11 film displayed two distinct peaks at 2θ = 20° and 24.2° and one small peak at 7.8°, corresponding to the (200), (210/010), and (001) planes, respectively (23). In the nanowires without the AAO template, peak positions identical to those of the α-phase film were also observed, indicating that the TANI method enabled fabrication of α-phase nylon-11 nanowires. Notably, the diffractogram of the template-freed nanowires showed a much sharper peak for the (200) plane, with a smaller full width at half maximum (FWHM) than that of the α-phase film, resulting in a much larger D_(200) in the nanowires (25 nm) than in the film (16 nm). Furthermore, both the α-phase nanowires and film samples showed a similar degree of crystallinity of ~48% (details of the crystallinity calculation are discussed in note S5).
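The crystallite sizes quoted above are extracted from the XRD peak widths; the conventional route for such an estimate is the Scherrer equation, D = Kλ/(β cos θ). The snippet below is a generic illustration of that relation, with a Cu Kα wavelength and shape factor K = 0.9 as assumptions on our part (the paper does not state its fitting details); the FWHM values are chosen only to show that widths of a few tenths of a degree at 2θ = 20° correspond to the 16-25 nm sizes reported.

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D (nm) from the Scherrer equation
    D = K * lambda / (beta * cos(theta)), with beta the FWHM in radians.
    Cu K-alpha wavelength and K = 0.9 are conventional assumptions."""
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative widths at the (200) reflection, 2-theta = 20 degrees:
print(scherrer_size(0.32, 20.0))   # ~25 nm (nanowire-like peak width)
print(scherrer_size(0.50, 20.0))   # ~16 nm (film-like peak width)
```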
Considering that most nanowires fabricated via a conventional template-wetting process show poorer crystallinity than films with the same crystal structures, these results suggest that the TANI method does, in fact, enable the generation of highly crystalline α-phase nanowires with larger crystal sizes compared to the α-phase film (30, 31). The thermal behavior of the developed nylon-11 nanowires also confirms that the TANI method synthesized the nanowires on the basis of the nanoconfinement effect (note S6). The chemical bonds in the α-phase nanowires were studied using Fourier transform infrared (IR) spectroscopy measurements (note S7). In fig. S8, within the same IR spectrum, much higher relative peak intensity in both the N-H stretching and amide I regions was observed from the α-phase nanowires. Considering that the N-H stretching (3300 cm⁻¹) and amide I (1635 cm⁻¹) bands reflect the overall distribution of hydrogen-bond strengths and the local ordering of hydrogen bonds, respectively (16, 32, 33), it can be inferred that the TANI method further enabled well-ordered crystal growth, based on the formation of hydrogen bonds. The direction of molecular orientation was verified by detailed XRD analysis of the nanowires within the nanoporous AAO template (Fig. 2D, orange). In principle, a single crystal sample examined in reflection mode would produce only one family of lattice planes with scattering vectors (q) normal to the sample surface. This means that if the nanowires have preferential crystal orientation, then the discrepancy between the diffractograms of the vertically aligned nanowires and of randomly positioned nanowires indicates the direction of crystal orientation. As discussed, the template-freed α-phase nylon-11 nanowires showed two distinct peaks at 2θ = 20° and 24.2° and one small peak at 7.8°, corresponding to the (200), (210/010), and (001) planes, respectively (Fig. 2D, red). In contrast, α-phase nanowires within the AAO template had only one distinct diffraction peak, at 2θ_(200) = 20° (Fig. 2D, orange). The lack of intensity from the (001) and (210/010) planes implies that the nanowires were, in fact, oriented such that these peaks were not visible in the reflection-mode geometry. A rocking curve on the (200) reflection with a peak width of ~8° also confirmed this observation (note S8). These results indicate that the nanowires fabricated by the TANI method had preferential crystal orientation, with the molecular chain axis perpendicular to the nanowire length direction, consistent with previous reports (34-36). The molecular simulation results validate the determination of the crystallite size (D_p) and preferential crystal orientation in α-phase nanowires (Fig. 2E). With the assumption of an ideal D_p (>100 nm) and random crystal orientation (37), the simulated powder diffraction pattern displayed its highest peak at 2θ_(210/010) = 24.2°, with the second highest peak at 2θ_(200) = 20° (Fig. 2E, top). However, introducing the experimentally measured D_(200) value of 25 nm moved the highest peak from 24.2° to 20° and broadened the diffraction pattern (Fig. 2E, middle). Considering that an amorphous region is likely to give rise to a broad diffraction peak at about 22.2°, the simulated data gave a good match to the experimental data from the template-freed α-phase nanowires (red dots). Note, however, that the other peaks were broader, which implies that the crystallite size in these directions is smaller than that perpendicular to the (200) planes.
Last, applying the preferential crystal orientation calculated by the Rietveld-Toraya equation (38, 39) resulted in a diffractogram with a remarkably high and sharp peak at 2θ_(200) = 20° (Fig. 2E, bottom), showing good agreement with the experimentally measured diffraction patterns of α-phase nanowires within the AAO template (orange dots). The agreement of the relative peak intensities and FWHM between the calculated and experimental results confirms that the relatively large crystals in the α-phase nanowires were indeed preferentially aligned and that the (200) planes were perpendicular to the axis of the nanowires. Surface potential of α-phase nanowires It is believed that the surface potential, and the resulting triboelectric charge on the surface, can be improved by increasing the intensity of the net dipole moment in the material (7-11). In addition, such a net dipole moment in nylon-11 results mainly from the alignment of dipoles, because the P_r value is close to zero when the dipole density approaches zero in oriented and poled nylon-11 (40, 41). This indicates that dipole alignment generated by the preferential crystal orientation arising from the nanoconfinement effect can enhance the surface potential, and that the intensity of the surface potential is closely related to the degree of molecular orientation (42). To investigate the surface potential of α-phase nanowires, detailed analysis was conducted by KPFM (details of the KPFM technique and measurement procedure are provided in note S9). Although the P_r of ferroelectric materials can be observed by polarization-electric field (P-E) hysteresis loops, the net dipole moment of nonferroelectric materials cannot be measured in this way, as they do not show such hysteretic behavior. In contrast, KPFM can measure the surface potential of a material, and it has been shown that the net dipole moment contributes to the magnitude of the surface potential (7, 43, 44). When we compare the surface potential of nylon-11 nanowires and unpoled films, the nanowires showed much higher values than the films with the same crystal structure (Fig. 3 and fig. S11). [The top surface of the nanowires-within-the-AAO-template sample is filled with nanowire tips; thus, the influence of the AAO template on the surface potential can be ignored (fig. S12).] The δ′- and α-phase nanowire samples showed 2- and 30-fold increases in surface potential compared to those of the corresponding film samples, respectively. These results indicate that the nanoconfinement effect during crystal growth effectively aligned the dipoles and generated a strong net dipole moment in the nanowires (45, 46). It must be noted that the surface potential of the α-phase nanowires (576 mV) was much higher than that of the δ′-phase nanowires (395 mV). This is in good agreement with our P_r calculations from molecular simulations and indicates that the TANI method gives rise to a strong surface potential in α-phase nylon-11 by nanoconfinement-induced molecular ordering. (Note that when we compare the surface potential of δ′-phase and α-phase films, molecular ordering cannot be the major factor, because both films are unpoled. Therefore, other minor factors, such as crystal size and surface roughness, should be considered to approximate the surface potential of those unpoled films.) Thermal stability tests confirm the notable contribution of hydrogen bonding to the changes in dipolar orientation.
After a thermal annealing process at 165°C, the surface potential of the δ′-phase nanowires dropped from 395 to 0 mV (Fig. 3B, orange bar). This is because the preferential crystal orientation in the metastable δ′-phase disappeared because of thermal agitation (23). In contrast, the α-phase nanowires sustained their high surface potential (~570 mV) within the error range, even after thermal annealing at high temperature, indicating that dipole alignment in α-phase nanowires can be maintained during the annealing process. The changes in XRD patterns after thermal annealing further confirmed the thermal stability of the molecular configuration in α-phase nanowires (fig. S13). In the case of δ′-phase nanowires, the intensity of the 21.6° peak, corresponding to the (hk0) planes, decreased after thermal annealing, while the intensity of the peak at 2θ = 20.4° increased (fig. S13A). In contrast, the peak positions and intensities in the diffractograms of α-phase nanowires before and after 165°C annealing were found to be almost the same (fig. S13B). This means that the molecular configuration within α-phase nanowires was thermally stable, and the resulting preferential crystal orientation could therefore be maintained up to near the melting temperature, as a result of the strongly hydrogen-bonded, well-ordered, and highly packed molecular structure. To verify the enhanced charge accumulation ability resulting from a higher net dipole moment, we measured the changes in surface potential before and after mechanical rubbing. This is because accumulated charges on the surface can be transferred by contact with other materials having different work functions, and the friction of the AFM tip on the surface of the material induces such an effect (43). In particular, the intensity of remanent polarization in a ferroelectric polymer affects the amount of charge transferred during the rubbing process, through the direction of polarization and the degree of dipole orientation, as well as the change in charge affinity of the surface (8). We measured the surface potential of pristine δ′- and α-phase nanowire samples before and after mechanical rubbing (Fig. 4). After rubbing, the average surface potential of δ′-phase nanowires dropped from 510 to 469 mV, indicating that the accumulated charge on the surface of δ′-phase nanowires (corresponding to a surface potential change of 41 mV) was transferred via the AFM tip. In the case of α-phase nanowires, a much larger change in surface potential, of 224 mV, was obtained after rubbing (646 to 422 mV). These results indicate that α-phase nanowires maximize the charge accumulation ability because of their dipole alignment. Comparison with the α-phase film also confirms that the increased charge transfer can be attributed to the dipole alignment in the nanowires, which is otherwise absent in the film (fig. S14). Nanowire-based energy-harvesting devices As a practical demonstration, we propose triboelectric generators for energy-harvesting applications. To achieve high levels of energy-harvesting performance, materials with electron-donating tendencies must be paired with those with electron-accepting tendencies, and nylon-11 belongs to the less explored family of synthetic, organic electron-donating materials (11). On the basis of α-phase nylon-11 nanowires fabricated by the optimized TANI method, we developed a contact-separation mode triboelectric generator with an area of 3.14 cm².
Al film- and δ′-phase nanowire-based devices were also prepared to compare device performances (note S10). Figure 5A shows the short-circuit current density (J_SC) measured in response to periodic impacting at a frequency of 5 Hz, with a force of 0.7 N and a displacement of 0.5 mm, in an energy-harvesting setup that has been previously described (27). The Al-based device showed a peak J_SC of ~13 mA m⁻². Because of the better charge-donating property of nylon and the dipole alignment effect, higher device performance was observed from the δ′-phase nanowire-based triboelectric generator, with a J_SC of ~38 mA m⁻², than from the Al-based device, consistent with our previous results (11). The α-phase nanowire-based device displayed further enhanced output performance, with a peak J_SC of ~74 mA m⁻², likely due to the much higher net dipole moment. Peak output power densities of 3.38, 1.03, and 0.099 W m⁻² were observed from the α-phase nanowire-, δ′-phase nylon-11 nanowire-, and Al-based devices under impedance-matched conditions, at load resistances of ~5, 20, and 20 megohms, respectively (Fig. 5B and fig. S16A). The observed output power from the α-phase nanowire-based triboelectric generator was ~3 times and ~34 times higher than those of the δ′-phase nanowire- and Al-based devices, respectively. An output power comparison between the α- and δ′-phase nanowire-based devices illustrates that the much higher net dipole moment in the closely packed and aligned molecular structure of the α-phase nanowires contributed to the enhancement of device performance, in good agreement with the results of modeling and surface potential analysis (a detailed explanation of the triboelectric charge transfer process is provided in note S11). It must be noted that these energy-harvesting performances originate not from piezoelectricity but from triboelectricity of the nylon-11 nanowires, considering the properties of the nanowires and the design of the energy-harvesting setup (note S12). In terms of stability, the α-phase nylon-11 nanowire-based triboelectric generator exhibited a negligible change in output current density over the entire period of fatigue testing (≈540,000 cycles) and during a long-term reliability test (~2 weeks), demonstrating the high mechanical stability of the dipole alignment in α-phase nanowires and the robustness of the nanowire-based device, respectively (figs. S16D and S17).
Fig. 3. Surface potential analysis. Plots of the surface potential of various films and nanowires. Thermal stability of nanowire samples was also investigated by surface potential measurement before (white bar) and after (orange bar) thermal annealing at 165°C. Inset schematics indicate the way the surface potential is measured using KPFM.
It must be noted that the mechanical stiffness of α-phase nanowires is much higher than that of the δ′-phase nanowires, because of the large crystallites with well-ordered hydrogen bonding (47). Although no abrasion was observed during the fatigue test in either the α- or δ′-phase nanowires, such a stiffness difference implies that the α-phase nanowire device is more appropriate for use in friction-based devices, including triboelectric energy harvesters. DISCUSSION Nylon-11 nanowires exhibiting dipole alignment with an unprecedented intensity of net dipole moment and thermal stability have been fabricated. Through the nanoconfinement effect of our TANI method, α-phase nylon-11 nanowires with dipole alignment were successfully achieved.
The larger crystallite size and improved alignment of hydrogen bonds were confirmed by XRD and IR measurements, while molecular simulation was used to interpret the diffraction data and to shed light on the mechanism behind the preferential crystal orientation. The intensity of the net dipole moment and the thermal stability of dipole alignment were also investigated by analyzing the changes in surface potential through KPFM measurements. Consequently, we have verified that, because of the ordered crystalline regions and higher molecular packing density, the net dipole moment of α-phase nylon-11 can be much higher than that of the poled ferroelectric δ′-phase. Furthermore, the strong hydrogen bonding, which has previously been considered a serious disadvantage for the polarization of nylon-11, actually serves to enhance the stability of the molecular structure, resulting in a constant net dipole moment up to near the melting temperature. When α-phase nylon-11 nanowires were incorporated in triboelectric generators, the resulting output power was observed to be ~3 times and ~34 times higher than those of δ′-phase nanowire- and Al-based devices, respectively. This work provides new insight for both nanomaterials and nanofabrication methods to develop strong and thermally stable dipole alignment for next-generation high-performance energy-harvesting applications. Fabrication of nylon-11 nanowires To prepare the nanowires, 25-mm-diameter AAO templates (Anopore, Whatman) with a pore diameter of 200 nm and a thickness of 60 μm were placed on an 800-μl nylon-11 solution droplet. In the case of a conventional template-wetting method, the AAO template was placed on the solution and was then left at room temperature for at least 24 hours with no protective covering (29). As the formic acid naturally evaporated through the pores, the solution was drawn up through the pores via capillary forces, and the nylon-11 was able to crystallize into nanowires. In the case of the TANI method for the α-phase nanowires, the AAO template was attached to a square glass slide before being placed on top of the solution, to limit the exposure of the template's top surface to the air and thus limit the rate of formic acid evaporation. In addition, a lid was placed over the sample to further reduce exposure to the surrounding air and to allow the local environment to become saturated with formic acid vapor, and the sample was then placed on a hot plate. The δ′-phase nanowires were produced by maximizing the evaporation speed of the solution (11). The AAO template was placed on top of a drop of 17.5 wt % nylon-11 solution in accordance with the conventional template-wetting method. No additional protective layers were added, and the solution was not heated during crystallization. To control the crystallization rate of the solution, an assisted gas flow with a speed of ~3 m s⁻¹ was directed onto the AAO template using a portable mini fan placed immediately next to the floating template. The assisted gas flow rate was controlled by the fan rotation speed and measured with an anemometer. The whole fabrication process proceeded at room temperature. After treatment For accurate characterization, the thin nylon-11 film that formed underneath the AAO template had to be removed. To do so, excess material was scraped off using a razor blade. Next, formic acid was warmed on a hot plate to 80°C and swabbed over the template bottom surface using a cotton bud.
Once the thin nylon-11 films had been removed, the nanowire-filled template was washed in deionized water and dried at room temperature. To obtain the template-freed nanowires, the nanowire-filled template was immersed in a 40 volume % phosphoric acid solution for 4 hours. To achieve the assembled (and template-freed) nanowire film, the acid-immersed nanowires were lifted off from the surface of the acid solution with a silicon wafer. The assembled film was then washed carefully in deionized water and dried at room temperature. Fabrication of nylon-11 films The α-phase nylon-11 films were produced by casting the nylon-11 solution onto a hot plate (~80°C), and a lid was placed over the sample to reduce the crystallization speed. Fabrication of triboelectric energy generators To fabricate the triboelectric energy generators, a 100-nm-thick Au layer was deposited on the bottom side of the nanowire-filled AAO template using a benchtop sputter coater (K550, Emitech). As a counterpart material, a 100-μm-thick Teflon film was prepared, and Au was sputtered on the bottom side of the Teflon film. A 24-μm-thick aluminum film with the same diameter of 2 cm was also prepared to compare triboelectric generator performance with the nanowire-based devices. The mechanical input was generated using a vibrational impacting system, where a permanent magnetic shaker (LDS Systems V100) was connected to an amplifier (LDS Systems PA25E-CE) driven by a signal generator (Thurlby Thandar TG1304) to generate vibration motion of the impacting arm based on a programmed signal in the signal generator. The impacting arm underwent periodic oscillations at frequency f (27). Energy-harvesting data in the form of output voltages and currents were collected by two different data acquisition modules: a multimeter (Keithley 2002) for voltage and a picoammeter (Keithley 6487) for current measurement. A top-down schematic of this triboelectric generator system, including the actuating and data-collecting configuration, is shown in fig. S15. Characterization Three-dimensional molecular images of nylon-11 were rendered using BIOVIA Materials Studio (Dassault Systèmes BIOVIA). The morphology of the nanowires was investigated using field-emission scanning electron microscopy (FEI Nova NanoSEM) and AFM (Bruker MultiMode) with an antimony n-doped Si tip (tip radius, <35 nm; resonance frequency, 150 kHz). Detailed crystal structure characterization was carried out with an XRD machine (Bruker D8) with Cu Kα radiation (λ = 0.15418 nm). The sample was placed on a highly p-doped silicon substrate during the XRD measurement. The size of the corresponding crystals (D_p) was calculated from the diffraction peaks using the Scherrer equation (48): D_p = (K × λ)/(B cos θ), where K is the Scherrer constant, λ is the x-ray wavelength, B is the FWHM of the diffraction peak (in radians), and θ is the diffraction peak position (angle). To help optimize the process, XRD data from more than 90 samples were measured to assess changes in D_p and relative intensities of the peaks as a function of processing parameters. The degree of crystallinity was calculated by calorimetry and XRD methods (49). The differential scanning calorimetry (DSC) data were measured at a scanning rate of 5°C/min using a TA Instruments Q2000 DSC to determine the thermal and structural properties of nylon-11 nanowires, from which the melting temperature (T_m) and the melt crystallization temperature (T_c) were recorded from the first heating.
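The Scherrer calculation above is straightforward to reproduce numerically. The following minimal Python sketch assumes the commonly used Scherrer constant K ≈ 0.9 and a hypothetical FWHM of 0.33° for the (200) reflection at 2θ = 20°, chosen purely for illustration so that the result lands near the reported D_200 ≈ 25 nm:

import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15418, K=0.9):
    """Crystallite size D_p = (K * lambda) / (B * cos(theta)), with B the
    peak FWHM in radians and theta half of the 2-theta peak position."""
    B = np.deg2rad(fwhm_deg)
    theta = np.deg2rad(two_theta_deg / 2.0)
    return K * wavelength_nm / (B * np.cos(theta))

# Hypothetical FWHM of 0.33 deg for the (200) peak at 2-theta = 20 deg:
print(scherrer_size(0.33, 20.0))  # ~24.5 nm, on the order of the reported 25 nm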
KPFM measurements were carried out using a Bruker MultiMode 8 in the noncontact amplitude modulation (AM-KPFM) mode with a 2-V AC signal, and an antimony (n)-doped Si tip (MESP-RC-V2, Bruker) with a nominal radius of ~35 nm, a resonant frequency of ~150 kHz, and a nominal spring constant of 5 N m⁻¹ was used. The KPFM measurements were performed under the same measuring conditions for all samples (temperature = 21°C, humidity = 17%). The rubbing procedure was performed once in contact mode with a scan rate of 1 Hz, a scan area of 5 μm², and a contact force of 30 nN. Film thickness was measured using a stylus surface profilometer (Veeco Dektak 6M).
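Returning to the device comparison above: the quoted ~3x and ~34x output power enhancement factors follow directly from the reported peak power densities, as this quick Python check shows (the dictionary labels are merely descriptive):

# Peak output power densities (W m^-2) under impedance-matched loads.
power_density = {
    "alpha-phase nanowire device": 3.38,   # matched load ~5 megohms
    "delta-prime nanowire device": 1.03,   # matched load ~20 megohms
    "Al film device": 0.099,               # matched load ~20 megohms
}

ref = power_density["alpha-phase nanowire device"]
for device, p in power_density.items():
    print(f"{device}: {p:.3f} W m^-2, ratio vs alpha-phase = {ref / p:.1f}x")
# delta-prime: 3.38/1.03 ~ 3.3x; Al film: 3.38/0.099 ~ 34.1x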
7,379.4
2020-06-01T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Computation of the Spectral Density of Current Fluctuations in Bulk Silicon Based on the Solution of the Boltzmann Transport Equation Numerical simulation results for the spectral density of noise due to current fluctuations are presented. The mathematical framework is based on the interpretation of the equations describing electron transport in the semiclassical transport model as stochastic differential equations (SDE). Within this framework, it was previously shown that the autocovariance function of current fluctuations can be obtained from the transient solution of the Boltzmann transport equation (BTE) with special initial conditions. The key aspect which differentiates this approach from other noise models is that it directly connects noise characteristics with the physics of scattering in the semiclassical transport model and makes no additional assumptions regarding the nature of noise. The solution of the BTE is based on the Legendre polynomial method. A numerical algorithm is presented for the solution of the transient BTE. Numerical results are in good agreement with Monte Carlo noise simulations for the spectral density of current fluctuations in bulk silicon. I. INTRODUCTION Noise is generally characterized by the spectral density of current or voltage fluctuations. In this paper, we present numerical simulation results for the autocovariance function and spectral density of current fluctuations based on a new noise model [1]. Employing the machinery of SDE, this model shows that the key computations for the noise autocovariance function reduce to the solution of the BTE with special initial conditions. Based on this novel approach, we study the influence of temperature and electric field on the noise spectral density in bulk silicon. We also compare our results with those obtained using the Monte Carlo technique [2]. The paper is organized as follows. Section II summarizes the noise model employed for the calculations. Section III describes the algorithm for the solution of the transient BTE based on the Legendre polynomial method [3] and presents numerical results. The last section is devoted to the conclusions. II. NOISE MODEL BASED ON SDE According to semiclassical transport theory, an electron in a semiconductor drifts under the influence of the electric field and experiences occasional random changes in its momentum due to the scattering mechanisms in the crystal. This process is described in terms of the following SDEs:

dx/dt = v(k) = (1/ħ) ∇_k ε(k),  dp/dt = ħ dk/dt = qE + F_r,  F_r(t) = Σ_i ħ u_i δ(t − t_i),   (1)

where x, v, p and k are the electron position, drift velocity, momentum and wave vector, respectively, E is the electric field, ε(k) is the energy–wave vector relationship in the given energy band, and F_r is the random impulse force on the electron due to scattering. The random force is characterized by the transition rate W(k, k′). Accordingly, the probability of scattering in a small time interval Δt is given by

P = λ(k)Δt,  λ(k) = ∫ W(k, k′) dk′,   (2)

where λ(k) is the scattering rate. Therefore, given the electron wave vector k, λ(k)Δt is the probability that a jump in momentum will occur in a small time interval Δt. Assuming that a scattering event has occurred at some time t_i, the probability density function for the amplitude of the jump is given by

p(u_i) = W(k_i, k_i + u_i)/λ(k_i),   (3)

where k(t_i⁻) = k_i and k(t_i⁺) = k_i + u_i. Note that these are the same equations which are used to simulate the electron motion in Monte Carlo simulations. The stochastic differential equations (1) together with Eqs. (2) and (3) define a compound Poisson process which is discontinuous in k-space and is a Markov process.
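As a toy illustration of Eqs. (1)-(3) (not the algorithm used in this paper), the following one-dimensional Python sketch simulates the drift-plus-scattering compound Poisson process and estimates the wave-vector autocovariance. The parameters are illustrative, not silicon values, and a simplistic isotropizing jump rule stands in for a physical transition rate W(k, k′):

import numpy as np

rng = np.random.default_rng(0)

drift = 1.0      # deterministic drift term qE/hbar (toy units)
lam = 5.0        # constant scattering rate, lambda(k) = lam (toy units)
dt = 1e-3        # time step, chosen so that lam * dt << 1
n_steps = 50_000

k = 0.0
k_trace = np.empty(n_steps)
for i in range(n_steps):
    k += drift * dt                 # free flight between collisions
    if rng.random() < lam * dt:     # a jump occurs with probability lam*dt
        k = rng.normal(0.0, 1.0)    # toy jump: randomize k after scattering
    k_trace[i] = k

# Estimate the autocovariance K_k(tau) of the wave-vector fluctuations.
dk = k_trace - k_trace.mean()
n_lags = 500
K_k = np.array([np.mean(dk[:len(dk) - j] * dk[j:]) for j in range(n_lags)])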
In stochastic differential equation theory, such a process is usually characterized by a transition probability density function, which satisfies the Kolmogorov-Feller forward equation. Employing the machinery of SDE theory, it was previously shown that this function satisfies the BTE (see [4]). Semiconductor noise is usually characterized by the spectral density of current fluctuations. The spectral density of current fluctuations is by definition the time Fourier transform of the autocovariance function. Since the instantaneous electron current is proportional to the electron momentum, the autocovariance function of current fluctuations can be computed directly from the autocovariance function of the electron momentum. In order to compute the autocovariance function of a stochastic process k_t, one needs to know the joint probability density function of the stochastic process at times t + τ and t, for all τ. Since the stochastic process is stationary, the reference time is arbitrary and can be chosen to be zero for convenience. Consequently, it is sufficient to compute the joint probability density function p(k_τ, k_0). Employing conditional probabilities, the joint probability density function can be expressed as follows:

p(k_τ, k_0) = p(k_τ | k_0) P_0(k_0).

This conditional probability density function is nothing more than the transition probability density function which characterizes the Markov process, and it satisfies the Kolmogorov-Feller forward equation with the initial condition k_τ|_{τ=0} = k_0. On the other hand, the probability density function P_0(k) satisfies the stationary Kolmogorov-Feller equation, since the process is stationary and the initial reference time is arbitrary. Since the Kolmogorov-Feller equation is identical to the BTE, we conclude that the key computations for the noise spectral density reduce to special initial value problems for the BTE. The computation of the noise spectral density is based on this formalism and is described in [1]. According to this formalism, the autocovariance matrix K_k(τ) of the electron wave vector can be computed as follows. For bulk noise computations, the space-independent BTE (4) is solved subject to the initial condition g(k, τ)|_{τ=0} = k f(k), where f(k) denotes the stationary solution. Here, equation (4) should be understood in a component-wise sense. The above solution is substituted into the following equation in order to evaluate the autocovariance matrix:

K_k(τ) = ∫ k g(k, τ) dk − ⟨k⟩⟨k⟩.   (5)

Since the autocovariance function is a symmetric function of τ, the solution of Eq. (4) for τ > 0 is sufficient. III. SOLUTION OF THE BTE EMPLOYING LEGENDRE POLYNOMIALS It is assumed that a single-band distribution function accurately represents the state of the momentum space. The Herring-Vogt transformation is employed to map the coordinate system k into k*, which results in spherical equal-energy surfaces: γ(ε) = ε + βε² = ħ²k*²/(2m₀). In k* momentum space, the direction of the average electron wave vector ⟨k*⟩ defines a symmetry axis, and the dependence of the density function on momentum can be expressed in terms of only two independent variables: ε and θ. We expand f(k*, t) in Legendre polynomials according to:

f(k*, t) = f₀(ε, t) + k* g(ε, t) cos θ + k*² h(ε, t)(3cos²θ − 1).   (6)

In terms of this representation for f(k*, t), the BTE can be formulated as follows.
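The final step of the noise computation, transforming the symmetric autocovariance into a spectral density, reduces to a one-sided cosine transform. A minimal numerical Python sketch follows, using an illustrative exponential autocovariance (for which the exact answer is a Lorentzian) rather than an actual BTE solution:

import numpy as np

def spectral_density(K, dt, freqs):
    """S(f) = 2 * integral_0^inf K(tau) * cos(2*pi*f*tau) dtau,
    exploiting the symmetry K(-tau) = K(tau); uniform-grid trapezoidal rule."""
    tau = np.arange(len(K)) * dt
    S = []
    for f in freqs:
        integrand = K * np.cos(2.0 * np.pi * f * tau)
        S.append(2.0 * dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])))
    return np.array(S)

# Check: K(tau) = K0*exp(-tau/tau_c) should give the Lorentzian
# S(f) = 2*K0*tau_c / (1 + (2*pi*f*tau_c)**2).
dt, tau_c, K0 = 1e-3, 0.2, 1.0
K = K0 * np.exp(-np.arange(5000) * dt / tau_c)
freqs = np.linspace(0.0, 10.0, 101)
S = spectral_density(K, dt, freqs)
print(S[0], 2 * K0 * tau_c)  # numerical vs analytical zero-frequency value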
1,475.6
1998-01-01T00:00:00.000
[ "Physics" ]
Games for Empathy for Sensitive Social Groups — Information and Communication Technology is part of almost everyone's everyday life in a variety of ways and in many fields. All people should have access to ICTs, including those with various disabilities and those with health problems. The studies presented in this article represent a body of work outlining positive effects of playing games in the area of special education and health care in order to cultivate empathy. INTRODUCTION The presence of information and communication technologies (ICT) in society is an evident reality and an area of special reflection and continuous evolution that has expanded in recent years due to the speed of technological advances and their impact on the world. Especially in healthcare and special education, technologies nowadays have a very important role and have been proven to be helpful because, thanks to their varied pace and combination of graphics, sounds and animation, they create a dynamic, attractive and motivating environment for pupils with special educational needs and health problems. Technology and games have already proven viable and effective for supporting therapy, promoting intercultural communication, increasing understanding of ethnic, religious and historically founded conflicts, and representing different perspectives on issues such as global politics and foreign policy. In addition, the effects that games aim to achieve are changes in the knowledge, attitudes, cognitive skills, physical ability, health or mental wellbeing of the user. Furthermore, games are playing a significant role in increasing people's social abilities. One important social skill is empathy. McDonagh (2006) defines empathy as 'the intuitive ability to identify with other people's thoughts and feelings - their motivations, emotional and mental models, values, priorities, preferences, and inner conflicts' [1]. The construct of empathy originated in 1873 in art history, when Vischer used the term 'Einfühlung' (German for feeling into) to describe a process in which a woman projects her entire personality upon an object, and in some sense merges with this object. The psychologist Theodor Lipps (1851-1914) applied it to explaining aesthetic experiences: 'Einfühlung [...] is the fact that the contrast between myself and the object disappears' [2], and then applied the term to people's experience and knowledge of other people's mental states [3]. As Gallo (1989) puts it: ...an empathic response is one which contains both a cognitive and an affective dimension....the term empathy [is] used in at least two ways; to mean a predominantly cognitive response, understanding how another feels, or to mean an affective communion with the other. Carl Rogers (1975) wrote: ...the state of empathy or being empathic is to perceive the internal frame of reference of another with accuracy and with the emotional components and means which pertain thereto as if one were the person, but without ever losing the 'as if' condition (quoted in Gallo 1989) [4]. Difficulties with social interaction, reciprocal communication and emotion recognition are widely acknowledged as key characteristics of individuals diagnosed with an Autism Spectrum Disorder (ASD) or other disabilities. Moreover, people with serious health problems often face emotional difficulties, or the people that surround them lack empathy for pupils who face health problems.
Interaction with digital games increases possibilities of interacting with the environment, thus improving quality of life on the emotional level and increasing the possibility of developing empathy and other socioemotional skills. The current article presents an overview of the most representative studies and focuses on games that have as a primary or secondary aim the detection and cultivation of empathy on important issues. The games fall into the sectors mentioned underneath: Games for Special Education and Games for Health. A. Games for Special Education Schuller et al. (2014) introduced the gaming platform ASC Inclusion, targeted at children aged 5 to 10 years with Autistic Spectrum Disorders. The running ASC-Inclusion project aims to help children with ASC by allowing them to learn how emotions can be expressed and recognized via playing games in a virtual world. The platform includes analysis of users' gestures, facial, and vocal expressions using a standard microphone and web-cam or a depth sensor, training through games, text communication with peers, animation, and video and audio clips [5]. Serret (2012) developed a serious game, "Jestimule", to improve social cognition and empathy in ASD. ICT was also used to facilitate the use of the game by young children or by children with developmental delays (e.g., a haptic joystick for feedback). One of the main aims of the game was to teach ASD individuals to recognize facial emotions, emotional gestures and emotional situations. The game was tested on a group of 40 individuals (aged from 6 to 18) at the hospital. Results showed that participants improved their recognition of facial emotions, emotional gestures and emotional situations in different tasks. These results have clear educational and therapeutic implications in ASD and should be taken into account in future training [6]. Alves et al. (2013) presented the "LIFEisGAME" prototype (iPad version), a game that promotes facial recognition and helps individuals with ASD to understand emotions in order to develop empathy, using real-time automatic facial expression analysis and virtual character synthesis. It includes five game modes. The "LIFEisGAME" prototype was tested on 11 children with ASD, with ages varying from 5 to 15 years old, and was played during a 15-minute game session. The results were promising and indicated the usefulness of the game to promote emotional understanding, bringing positive outcomes for quality of life for children with autism [7], [8]. Beaumont and Sofronoff (2008) developed "The Junior Detective Training Program", an intervention program that included a computer game, small group sessions, parent training sessions and teacher handouts to teach social skills and emotional understanding to children with Asperger syndrome. The computer game was developed such that the user was a detective who specialized in decoding others' mental and emotional states. Playing through different levels, the participants practice recognizing facial expressions, body postures and prosody of speech, through which they learn to recognize complex emotions. Both human and computer-animated characters were utilized to teach emotion recognition and social problem solving. Support and mission outcomes were individualized and varied depending on how a user completed a given task. The study was tested on 49 children with Asperger syndrome between 7.5 and 11 years of age.
Overall, findings from this study suggest that the Junior Detective Training Program may be an effective tool for teaching social functioning and emotion recognition to children with Asperger syndrome. However, although components of the intervention were developed specifically to enhance skill generalization, this study did not measure the generalization of targeted skills to real-life social contexts [9]. Finkelstein et al. (2009) presented "cMotion", a game in development which uses virtual characters to reinforce empathy, emotion recognition and logical problem solving for both normally developed children and high-functioning children with autism. "cMotion" consists of a playable introduction which focuses on social skills and emotion recognition, an interactive interface which focuses on computer programming, and a full game which combines the first two stages into one activity [10]. Gibbons (2015) presented "Auti-Sim", a game in prototype stage that simulates what it feels like to have sensory hypersensitivity disorder, the way children with autism do. It was developed during the Hacking Health Vancouver 2013 hackathon in 12 hours, by a team of three people. The game puts the player in the shoes of a child with autism in a busy playground, and leaves them to explore this environment on their own and develop empathy. The player quickly finds that prolonged exposure to sources of sensory stimulation can cause sensory overload, represented in the form of visual noise and blur, as well as audio distortion [11]. The University of Southern California (2014) developed "Social Clues", a game with the purpose of teaching autistic children about appropriate behaviors and how to change their behavior through activities based on real-world situations and environments. Players take on the role of communiKate or particiPete, learning about the meaning of facial expressions, the importance of eye contact, and the value of empathy [12]. Gerling et al. (2014) designed "Birthday Party", a wheelchair-controlled persuasive game in which players have to complete a series of wheelchair-related challenges. The players control an avatar in a wheelchair, and the aim is to navigate them to a friend's birthday party on time. On the way, the player has to stop at different locations to pick up items for the party, but the player is running late, so completing all tasks quickly is important. The game evokes empathy and a positive attitude regarding people with disabilities [13]. Pivik et al. (2002) developed "Barriers: The Awareness Challenge", a program that used desktop Virtual Reality to simulate the experiences of a child in a wheelchair, in an environment familiar to most children: an elementary school. The program provides opportunities where the child without a disability literally experiences different situations, viewpoints, perceptions, and interactions from the perspective of a child with a disability. The specific objectives of this project were to increase children's knowledge of the accessibility and attitudinal barriers that impact individuals with disabilities and to promote more positive attitudes and feelings towards children with disabilities [14]. Ballesta et al. (2011) designed the educational software "Aprende con Zapo. Propuestas didácticas para el aprendizaje de competencias emocionales y sociales".
The program aims to teach students with autism spectrum disorders facial expression recognition of basic and complex emotions (5 levels) and action prediction according to beliefs (true or false) (5 levels). Pupils interact with the main character (the clown Zapo) while performing the various tasks in the program, so as to improve their understanding of social and emotional skills, including the important skill of empathy [15]. Tanaka et al. (2008) designed the "Let's Face It! Skills Battery (LFI! Battery)", a computer-based assessment and a series of interactive computer games, organized into a theoretical hierarchy of face-processing domains, that reinforce the child's ability to attend to faces, recognize facial identity and emotional expressions, and interpret facial cues within a social context. The LFI! Battery was tested on participants with ASD and typically developing control (TDC) participants that were matched for age and IQ. Findings show that participants with ASD were able to label the basic facial emotions (with the exception of the angry expression) on par with age- and IQ-matched typically developing participants. This set of games aims to reinforce the child's ability to attend to faces, recognize facial expressions and interpret facial cues in a social context. However, participants with ASD were impaired in their ability to generalize facial emotions across different identities, and showed a tendency to recognize the mouth feature holistically and the eyes as isolated parts. The results also indicate that a relatively short-term intervention program can produce measurable improvements in the face recognition skills of children with autism [16], [17]. Golan et al. (2006) presented "Mind Reading", an interactive multimedia program developed to teach adults with Asperger syndrome and high-functioning autism about emotions and mental states. It is based on a taxonomic system of 412 emotions and mental states, clustered into 24 emotion groups and six developmental levels, from four years old to adulthood. Mind Reading uses video, audio and written text to systematically introduce and teach basic and complex emotions. Users were able to explore emotions in the emotion library, partake in lessons and quizzes in the learning center and play games about emotions in the game zone. Results showed that following 10-20 hours of using the software over a period of 10-15 weeks, users significantly improved their ability to recognize complex emotions and mental states from both faces and voices, when compared to their performance before the intervention and compared with a control group [18]. Silver and Oakes (2001) investigated the use of a multimedia software program, the Emotion Trainer, to teach individuals with ASD to recognize and anticipate emotions in others. The Emotion Trainer had five sections and utilized photographs of real people, as well as animated emotional expressions, to teach about emotions. Consistent feedback, prompting and reinforcement were provided and were contingent upon the level of success or difficulty an individual experienced while progressing through the program. It involves tasks focused on facial expression recognition, emotion prediction and interpretation based on context. Twenty-two individuals with ASD, ranging from ages 10 to 18, were matched based on age, gender, and school class.
One member of the pair was randomly assigned to the intervention condition of 10 computerized sessions over 2-3 weeks, while the second member was placed in the no-intervention control condition. Both groups showed significant improvements in the ability to identify emotion or mental state from photographs of facial expressions from pre- to post-intervention [19]. Hughes (2014) designed a game called WUBeeS to aid young children with ASD (Autism Spectrum Disorder) in perspective taking and empathy by placing the player in the role of a caregiver to a virtual avatar. It is hypothesized that, through playing this game over a series of trials, children with ASD will show an increase in the ability to discriminate emotions, provide appropriate responses to basic needs (e.g. feeding the avatar when it is hungry), and be able to communicate more clearly about emotions. Game data included response time to avatar needs, time spent playing incentive games, response to changes in emotional or physical expression of the avatar, and other yet-to-be-determined game-play behavior [20]. Gotsis et al. (2010) described a novel game-based SST (Social Skills Training) intervention for ASD termed the Social Motivation Adaptive Reality Treatment Games (SMART-Games). The player manipulates an avatar in order to affect its moods, needs and behavior. The game emphasizes empathy and related social skills [21]. Cosgray et al. (1990) presented a simulation game called "A Day in the Life of an Inpatient" to influence the attitude of staff toward those with mental illness. The game was designed so that staff could personally experience the situations sometimes experienced by psychiatric patients in a hospital. It was hoped that experience in the patient role would increase staff empathy for patients, and that the experience would lead to positive changes. The game was designed to expose participants to specific staff approaches and rules/policies in the institution that could negatively affect patients. Results from 900 hospital staff indicated that the game raised staff sensitivity, and staff with less patient contact felt more benefit [22]. McCallum et al. (2013) presented the educational game "Into D'mentia" by Ijsfontein. The game consists of a physical, interactive space where the world of a person with dementia is visualized using Virtual Reality, and players are able to experience the limitations and obstacles that a dementia patient faces in his/her daily life. The game uses a simulation platform and takes place inside a specifically customized truck. The goal of the game is to stimulate empathy for people with dementia and to raise awareness of the difficulties faced by these people [23], [24]. Brown et al. (1997) designed an interactive video game for health called "Packy and Marlon". The game is aimed at children with diabetes. The characters in the game are two elephants at a diabetes summer camp. They have to get rid of a gang of marauding rats that are keeping the campers from healthy food and diabetic supplies. To win, players have to successfully manage their insulin levels and food intake while keeping their characters' glucose levels within an acceptable range. This game was evaluated in a randomized trial in which participants in the treatment group played the game for 6 months (Brown et al., 1997).
By the end of the study, patients who had access to the game showed greater perceived self-efficacy for diabetes self-management, increased communication with parents about diabetes, and improved daily diabetes self-management behaviors. Moreover, the game can cultivate empathy for patients with this chronic illness, and for the disease itself, if "Packy and Marlon" is played by other people too [25]. Lieberman (2001) introduced "Bronkie the Bronchiasaurus", a video game made for young children with asthma. The game is set in prehistoric times, and the world is covered in dust. A fan that usually keeps the dust at bay has broken. Players help the two in-game characters, Bronkie and Trakie, keep their asthma at bay by avoiding triggers such as dust and smoke while they go on their quest. There are some textual question-and-answer inserts in the game along the way that need to be answered correctly in order to proceed. A series of studies on the game found that patients' asthma-related self-concepts, social support, knowledge, self-care behaviors, and self-efficacy improved after playing the game compared with a control. Furthermore, the game affects empathy, an important social skill [26]. Gerling et al. (2011) presented a game for health named "Cytarious", which aims to illustrate cancer treatment and to convey information about the disease through its background story and game mechanics. The background story is set in space and evolves around the four planets Haima, Enképhalon, Blaston and Cytarius. The inhabitants of Enképhalon and Haima live in peace, but the inhabitants of Blaston have been excluded from the intergalactic community due to selfish behavior. To take revenge, they try to infiltrate the community by rapidly reproducing themselves and gaining control over the other planets. To defeat the intruders, the inhabitants of Cytarius - genetically engineered Cytowarriors led by the player - try to defend the two peaceful planets. The game engages patients and healthy children in play, and beyond information about the disease it can develop empathic understanding in parents, medical staff and children [27]. Tate et al. (2009) created a video game called "Re-Mission", where the player enters the game world as a nanobot which fights the disease from within young patients' bodies. The game aims to convey basic information about common cancer symptoms and treatment strategies through game mechanics, e.g. enemy and weapon design. "Re-Mission" tries to increase feelings of self-efficacy and self-esteem in the patients and to evoke emotions and empathy for better communication and interaction with the young patients [28], [29]. Rusch et al. (2011) presented "Elude", a single-player game intended to inform friends and relatives of people with depression about what their loved ones are going through. The Singapore-MIT GAMBIT Game Lab created Elude in order to help patients' relatives understand what it means to be depressive. "Elude's" metaphorical model of depression serves to bring awareness to the realities of depression by creating empathy with those who live with depression every day. The game takes place in a forest meant to represent a neutral mood. The goal is to climb trees until the tree tops, where you reach "happiness". On the way, the player will come upon different "passion" objects and must overcome obstacles so as to make it to the tree tops and fly through the sky [30], [31]. B.
Games for Health Sherida Halatoe (Tiger & Squid) developed "Beyond Eyes", a beautiful game about Rae, a blind girl who uses her remaining senses to visualize the world around her. Rae lost her eyesight in an accident, and the experience left her traumatized. Fearing loud noises and public places, she hardly ever leaves her house. However, all that changes when her cat Nani unfortunately goes missing. The player must now guide Rae on her moving journey to be reunited with Nani, guard her from the dangers she may encounter along the way, and help her to overcome her fears and find beauty and possible new friends outside of her golden cage. Through this game experience, the player feels empathy for people with visual impairments and understands their behavior and actions better [32]. III. CONCLUSIONS What is clear is that we live in an increasingly technological age, and the influence of that technology is not just superficial: it pervades every aspect of our lives, at a practical level but also at the more fundamental level of our very being. Exposure to ICT in some way contributed to raising the quality of life of the participants. Empathy is an important social skill which should be developed, to a smaller or larger extent, in everyone. It allows us to interact in the social world and helps us become aware of many significant issues. The articles reviewed above discussed the application of innovative computer games to the assessment, intervention and cultivation of empathy in people with special educational needs and in people who have other health problems. In the first case, pupils with disabilities lack empathy and emotion recognition, but playing digital games can evoke these socioemotional skills in a playful manner. In addition, people with health problems also find it difficult to have empathy. These games also help people who have no health problems, because they foster empathy in them, helping them understand and feel for other people who are facing serious health problems. Thus, while we can point to some encouraging research, more studies are needed to determine whether and how games can help develop empathy, the role of identification in building empathy, and whether empathy and identification are associated with increased interest in global learning. Games may be just one of many such avenues for this purpose, and towards this goal more games should be designed carefully to succeed in the growth of empathy. A consistent finding in the research literature is that empathy improves people's attitudes and behaviors towards other individuals or groups, while a lack of empathy is associated with more negative attitudes and behaviors. To sum up, considering the enormous development of digital tools, the review underlines that ICT tools do play a significant role in ensuring and enhancing empathy, to achieve more in special education, in health, in human-computer interaction, etc.
5,087.6
2016-10-26T00:00:00.000
[ "Computer Science", "Education", "Psychology" ]
Evolution of Disease Defense Genes and Their Regulators in Plants Biotic stresses damage the growth and development of plants and cause yield losses for some crops. Confronted with microbial infections, plants have evolved multiple defense mechanisms, which play important roles in the never-ending molecular arms race of plant–pathogen interactions. The complicated defense systems include pathogen-associated molecular pattern (PAMP)-triggered immunity (PTI), effector-triggered immunity (ETI), and the exosome-mediated cross-kingdom RNA interference (CKRI) system. Furthermore, plants have evolved a classical regulation system mediated by miRNAs to regulate these defense genes. Most of the genes/small RNAs or their regulators that are involved in the defense pathways can have very rapid evolutionary rates in the longitudinal and horizontal co-evolution with pathogens. Based on these internal defense mechanisms, strategies such as molecular switches for the disease resistance genes, host-induced gene silencing (HIGS), and a new generation of RNA-based fungicides have been developed to control multiple plant diseases. These broadly applicable new strategies, based on transgenes or on spraying ds/sRNA, may lead to reduced application of pesticides and improved crop yield. The accuracy of each strategy rests on the internal defense biology reviewed below. Introduction The arms race between plants and their pathogens never seems to stop, and sometimes the race is very intense. During the evolutionary process, plants have had to evolve multiple immunity mechanisms to survive danger signals in extracellular and intracellular milieus. Plants are thereby able to enhance disease resistance and increase food security, as well as to balance resource allocation between growth and development. The prevalent defense mechanisms are categorized into three defense layers: the preliminary defense, pathogen-associated molecular pattern (PAMP)-triggered immunity (PTI) [1]; the secondary defense, effector-triggered immunity (ETI) [2]; and the additional defense, the exosome-mediated cross-kingdom RNA interference (CKRI) system [3]. It is well known that PTI functions in basal defense. Using cell surface-localized pattern recognition receptors (PRR), plants can detect the infection of invaders by recognizing conserved microbe-associated or pathogen-associated molecular patterns (MAMPs or PAMPs) [1]. Plant PRRs are cell surface-localized and are typically receptor-like kinases (RLKs) and receptor-like proteins (RLPs). RLKs are comprised of extracellular domains, transmembrane domains, and intracellular kinase domains, which are required for transmitting signals to the downstream defense responses, whereas RLPs comprise only the basic conformation, without an intracellular kinase domain. PTI, with its broad-spectrum defense, is not sufficient to prevent most pathogens, and if plants have defects in PRRs, they often become more susceptible to microbes [4][5][6][7]. In turn, pathogens employ various virulence effectors to overcome PTI and establish successful infection; the plant immunity triggered by recognition of these effectors is termed effector-triggered immunity. Thus, ETI functions as the second layer of elicitor-mediated defenses. Most of the genes involved in the ETI pathway contain intracellular nucleotide-binding site and leucine-rich repeat domains (NBS-LRRs or NLRs), which are typically cytoplasmic receptor proteins. NBS-LRR genes can detect or recognize the polymorphic, strain-specific pathogen-secreted virulence effectors, and then transmit the signals downstream to defense genes.
Thus, ETI pathways confer species-specific disease resistance and rapidly co-evolve with their pathogens. Plant species in eudicots and monocots have large numbers of NB-LRR genes. According to their N-terminal features and functions, the NB-LRR proteins in plants can be grouped into two classes, with terminal Toll/interleukin-1 receptor (TIR) or coiled-coil (CC)/resistance to powdery mildew 8 (RPW8) domains [8][9][10]. The TIR, CC or RPW8 domains are crucial for signal transmission to cellular targets for effector action or to downstream signaling components [11]. Although the NB-LRR genes have been demonstrated to be ancient and conserved genes in plants, their comparative genomic analyses have shown great structural diversity. For example, the CC domains are prevalent in eudicots and monocots, while the TIR domains are nearly absent in monocots [12]. Cross-kingdom RNA interference (CKRI) functions in the third layer, in which plants are protected by extracellular vesicles that transport small RNAs or microRNAs (miRNAs) to microbial pathogens and then silence the virulence genes [3]. As one kind of typical small non-coding RNA, miRNAs function in post-transcriptional gene regulation. Small miRNAs play big roles in a variety of biological processes, such as development, hormone responses and stress adaptations [13][14][15][16]. In the PTI and ETI pathways, microRNAs, as the classical regulators at the post-transcriptional or translational level, regulate defense/defense-associated genes [17,18], which can balance the benefits and costs of their targets. Plants employ miRNAs as shields against pathogen attacks. MiRNAs respond to viruses, bacteria and fungi by negatively regulating mRNAs, and mainly function in both PTI and ETI. To date, 153 disease resistance genes from the PRGdb database [19], involved in plant immunity to biotic stresses, have been validated by wet-lab experiments. Of them, 62.09% (95 of 153), 17.65% (27 of 153), and 20.26% (31 of 153) were classified as NBS-LRR families, RLP/RLK, and other kinds of genes, respectively (Figure 1).
Figure 1. Categories of the genes/regulators in the three defense layers in plants. The data were downloaded from the PRGdb database and recent publications. PTI: pathogen-associated molecular pattern (PAMP)-triggered immunity; ETI: effector-triggered immunity; CKRI: cross-kingdom RNA interference.
In regard to defense genes, studies have shown a number of genes/small RNAs linked to anti-pathogen immunity. Here, we mainly summarize the current knowledge of the defense genes and their evolution paths regulated by miRNAs in plants, and then discuss their potential applications in crop improvement in the last section. Three Layers of Defense Mechanisms to Biotic Stresses in Plants 2.1. The First Layer of Defense: Defense Genes in PTI As one of the most important sensory protein groups, RLKs and RLPs in plants play crucial roles in both cell-cell and plant-environment communications, such as the plant-pathogen interaction. In addition, RLKs and RLPs play fundamental roles in plant growth and development. Plants deploy a wide array of RLKs and RLPs as the first layer of inducible defense to detect microbe- and host-derived molecular patterns (Figure 2A, the first layer) [63]. Numbers of RLKs/RLPs have been cloned in plants [64]. The classical example is FLAGELLIN-SENSITIVE2 (FLS2), belonging to the RLK family, which has been verified to respond to the flagellin fragment flg22 of bacteria in Arabidopsis [65], grapevine [66], tobacco [67], rice [68] and tomato [69].
As a "molecular glue", flg22 induces heterodimerization of the FLS2-BAK1 (BRI1-ASSOCIATED RECEPTOR KINASE) complex. In different plant species, FLS2 receptors display different affinities for the conserved part of flagellin from different bacteria, possibly reflecting coevolution with specific pathogens [66]. Besides FLS2, EF-TU RECEPTOR (EFR), PEP1 RECEPTOR (PEPR1) and PEPR2, RLP23, and RLP30 [70] can recognize bacterial EF-Tu, the endogenous AtPep1 [71], NLPs [72], and SCFE1 [73], respectively. All of them are associated with the regulatory BAK1, which acts as a co-receptor for the flg22/EF-Tu/AtPep1/nlp30/SCFE1 of pathogens and is crucial for signaling activation [74]. Long chitin oligomers, as bivalent ligands, lead to the homodimerization of CHITIN ELICITOR RECEPTOR KINASE 1 (AtCERK1) and generate an active receptor complex in Arabidopsis, which directly triggers chitin-induced immune signaling [75]. The chitin perception system in rice is significantly different from the one in Arabidopsis. The OsCERK1 dimer does not bind chitin, because of its single LysM domain, while the dimeric elicitor-binding LysM-RLP (OsCEBiP) can bind chitin as a ligand. OsCERK1-chitin-OsCEBiP then forms a sandwich-type receptor dimerization for chitin oligomers [76]. There are a number of RLKs/RLPs involved in plant immunity, which have been well summarized by Tang et al. [63]. After plants sense pathogen/microbe-associated molecular patterns, these pattern recognition receptors instantly trigger a number of downstream responses, such as the activation of mitogen-activated protein kinases (MAPKs) (Figure 2A, the first layer), which is one of the earliest signaling events [77]. Transmitting response signals by phosphorylation, a MAPKKK activates an MKK, and the MKK then activates an MPK [78]. MAPK cascades are involved in multiple defense signaling responses, including the biosynthesis/signaling of plant stress/defense hormones, reactive oxygen species generation, stomatal closure, defense gene activation, phytoalexin biosynthesis, cell wall strengthening, and hypersensitive response (HR) cell death (Figure 2A, the first layer) [77]. The activation of MAPK cascades is essential for plant immunity. In addition, some transcription factors have been found to regulate the defense-related genes involved in signal transduction in rice. For example, the bZIP gene OsBBI1 in rice encodes a major transcription factor that regulates the resistance spectrum for diverse groups of M. oryzae by altering the first level of innate immunity in host plants [79]. WRKY13, another major regulatory factor, was identified to transfer signals from WRKY45 to the downstream WRKY42, all functioning as WRKY-type transcription factors (TFs) [80]. Following the SA-pathway-dependent disease response mechanism, WRKY13 is correlated with defense against M. oryzae and Xoo [81]. Through activation of the NPR1 protein, the SA pathway plays a crucial role in the systemic acquired resistance response mechanism (Figure 2A, the first layer) [82]. As a result, various genes, comprising cell surface disease resistance genes and intracellular transcription factors, function in the complex PTI. The Second Layer of Defense: The Defense Genes in ETI In the ETI pathway, plants have developed NBS-LRR proteins to recognize effectors and trigger the ETI response [2], which, together with downstream WRKYs, can cause programmed cell death and lead to the hypersensitive response (HR) (Figure 2A, the second layer) [97].
NBS-LRRs, an interesting class of disease resistance genes, constitute a large family in plants: as shown in Table 1, about 1.19-3.48% of all coding genes are NBS-LRR genes. Although NB-LRR genes are abundant in plants, only 93 have so far been validated to play important roles in plant innate immunity. Of the validated NBS-LRR genes, 65.59% (61 of 93) contain CC domains, while only 19.35% (18 of 93) contain TIR domains, and the others contain only a single domain of either NBS, LRR, TIR, CC or RPW8 (Figure 1). The verified disease resistance genes with CNL or TNL architectures are listed in Table 2. For example, seven CNLs and seven TNLs in Arabidopsis thaliana, eleven CNLs in Oryza sativa, five CNLs and one TNL in Solanum lycopersicum, seven CNLs in Triticum aestivum, and three CNLs in Hordeum vulgare have been characterized experimentally. These defense genes confer resistance to fungi, oomycetes, bacteria, viruses, nematodes, and insects. (Table 1 notes: 1, percentage of R-genes among total coding genes; 2, percentage of miRNA target genes among the R-genes.)

Disease resistance genes are abundant in wild germplasm. In the Triticeae, for example, the defense genes Sr31 and Sr50 [133] from cereal rye (Secale cereale) confer resistance to stem rust caused by Puccinia graminis f. sp. tritici (Pgt). The Sr35 gene from Triticum monococcum confers resistance to the Ug99 stem rust race group [134]. In addition, some non-NBS-LRR genes can also provide defense against pathogens. For example, Stb6 in wheat directly interacts with the effector AvrStb6 produced by the wheat pathogen Zymoseptoria tritici [135]. The Xa10 gene in rice, whose product has four potential transmembrane helices, is induced by the transcription activator-like (TAL) effector AvrXa10 and confers resistance to rice bacterial blight by inducing programmed cell death [136,137]. Through introgression or transgenic strategies, these defense genes confer disease resistance in crops. For example, when Pm3a/c/d/f/g were overexpressed in wheat, all tested transgenic lines showed significantly greater resistance than their respective non-transformed sister lines in field experiments [138]. The T0 and T1 transgenic lines carrying the Sr50 gene were resistant to Puccinia graminis f. sp. tritici (Pgt), while lines without the transgene were susceptible [133].

The Third Layer of Defense: Cross-Kingdom/Organism RNA Interference

It has been demonstrated that sRNAs can move from cell to cell, presumably through plasmodesmata, and travel systemically through the vasculature [139]. Remarkably, sRNAs also move between hosts and interacting organisms and induce silencing of their target genes there, a phenomenon defined as cross-kingdom/organism RNA interference (CKRI) [20,93,140-142]. Pathogens can deliver sRNAs into plants; such sRNAs were recently recognized as a novel class of pathogen effectors (Figure 2A, third layer). Botrytis cinerea delivers small RNAs (Bc-sRNAs) into plant cells to silence host immunity genes [140]. Such small RNA effectors in B.
cinerea are mostly produced by Dicer-like proteins 1 and 2 (Bc-DCL1/2). Conversely, over-expressing sRNAs that target Bc-DCL1 and Bc-DCL2 in tomato and Arabidopsis silences the Bc-DCL genes and inhibits fungal growth and pathogenicity, exemplifying bidirectional CKRI and sRNA trafficking between plants and fungi [93]. This facile trafficking suggests that naturally occurring small RNAs might be exchanged across kingdoms and organisms. Conversely, hosts can also transfer naturally occurring small RNAs into pests or pathogens to attenuate their virulence (Figure 2A, third layer). Recently, two reports have demonstrated that naturally occurring plant small RNAs can be delivered into pathogens to silence their target genes. In response to infection by Verticillium dahliae, cotton plants increase the expression of miR159 and miR166 and export both to the fungal hyphae for specific silencing. Two genes, encoding an isotrichodermin C-15 hydroxylase and a Ca2+-dependent cysteine protease, are targeted by miR159 and miR166, respectively; both target genes are essential for fungal virulence [20]. In another example, host Arabidopsis cells transfer small RNAs into the fungal pathogen Botrytis cinerea by secreting exosome-like extracellular vesicles. These sRNA-containing vesicles accumulate at the infection sites and are taken up by the fungal cells, and the delivered host small RNAs induce silencing of fungal genes critical for pathogenicity. TAS1c-siR483 targets two B. cinerea genes, BC1G_10728 and BC1G_10508, and TAS2-siR453 targets BC1T_08464; all three genes are involved in vesicle trafficking pathways and are critical for pathogenicity [3]. Of these, BC1G_10728 encodes a vacuolar protein sorting 51 protein, whose homolog plays a crucial role in Candida albicans virulence [21]. Thus, Arabidopsis has adopted an exosome-mediated CKRI mechanism as part of its immune response during the evolutionary arms race with pathogens [3]. Based on the above, since only two miRNAs and two small RNAs in plants have been identified to function in CKRI, the data are insufficient to deduce their evolution among species. Thus, in the next sections we discuss only the evolution of disease resistance genes and their regulator miRNAs in PTI and ETI.

The First Layer of Defense Regulation: miRNAs Involved in the PTI Pathway

During pathogen infection, plant small RNAs play key roles at the level of gene regulation. According to how their targets respond to pathogen infection, miRNAs can be divided into positive and negative regulators of basal resistance (Figure 1A, Table 3). In positive regulation, overexpression of the miRNA confers disease resistance in plants. For example, miR393 in Arabidopsis was discovered to contribute to antibacterial resistance by negatively regulating transcripts of the F-box auxin receptor TIR1 [22]: repressing auxin signaling through miR393 overexpression increases bacterial resistance, whereas augmenting auxin signaling by over-expressing TIR1 enhances susceptibility to virulent Pto DC3000. The miR444/OsMADS module directly monitors OsRDR1 transcription and is involved in the rice antiviral response [23]. Overexpression of miR444 enhanced rice resistance against rice stripe virus (RSV) infection by diminishing the repressive roles of OsMADS23, OsMADS27a and OsMADS57, with concomitant up-regulation of OsRDR1 expression.
Thus, miR444 can indirectly activate the OsRDR1-dependent antiviral RNA-silencing pathway. Over-expression of osa-miR171b likewise conferred reduced susceptibility to rice stripe virus infection by regulating its target OsSCL6: OsSCL6-IIa/b/c was down- or up-regulated in plants in which osa-miR171b was over-expressed or interfered with, respectively [24]. In negative regulation, overexpression of the target genes confers resistance to pathogens. miR169 suppresses the expression of NFYA in immunity against infection by the bacterial wilt pathogen Ralstonia solanacearum [25] and the blast fungus Magnaporthe oryzae [26] in Arabidopsis and rice, respectively; transgenic lines over-expressing miR169a became hyper-susceptible to these pathogens. MiR156 and miR395 regulate apple resistance to leaf spot disease [27]: in apple, Md-miR156ab and Md-miR395 suppress MdWRKYN1 and MdWRKY26 expression, which decreases the expression of several pathogenesis-related genes and results in susceptibility to Alternaria alternata f. sp. mali. In Arabidopsis, the miR396/GRF module mediates innate immunity against P. cucumerina infection without growth costs; reduced miR396 activity (MIM396 plants) was found to improve broad resistance to necrotrophic and hemibiotrophic fungal pathogens [28]. The miR319/TCP module is involved in rice blast disease. Increasing the expression of rice miR319, or decreasing the expression of its targets TCP21, LIPOXYGENASE2 (LOX2) and LOX5, facilitates rice ragged stunt virus (RRSV) infection [29] through decreased endogenous jasmonic acid (JA) [30]. Inhibiting ath-miR773 activity, accompanied by up-regulation of its target gene METHYLTRANSFERASE 2, increased resistance to hemibiotrophic (Fusarium oxysporum, Colletotrichum higginsianum) and necrotrophic (Plectosphaerella cucumerina) fungal pathogens in Arabidopsis [31]. By regulating transcription of the GhMKK6 gene in cotton, ghr-miR5272a is involved in the immune response: over-expressing ghr-miR5272a increased sensitivity to Fusarium oxysporum by decreasing the expression of GhMKK6 and downstream disease-resistance genes, producing a phenotype similar to that of GhMKK6-silenced cotton [32]. In addition, miRNAs can also be involved in resistance to nematode invasion; for example, miR827 in Arabidopsis down-regulates expression of the NITROGEN LIMITATION ADAPTATION (NLA) gene, suppressing the basal defense pathway and enhancing susceptibility to the cyst nematode Heterodera schachtii [33]. Besides these miRNAs that regulate the PTI pathway indirectly, a few miRNAs have been predicted to regulate receptor-like genes directly. For example, when osa-miR159a.1 was repressed, the expression of OsLRR-RLK2, which responds to Xanthomonas oryzae pv. oryzae, was induced in rice [31]. In the future, miRNA regulation of pattern recognition receptor (PRR) genes may be validated experimentally.

The Second Layer of Defense Regulation: The Defense Signal Small RNAs in ETI

In addition to basal defense, miRNAs are also involved in the ETI pathway, directly and indirectly regulating the disease resistance genes (Figure 2A and Table 3). MiR393*, the complementary strand of miR393 within the sRNA duplex, promotes the secretion of antimicrobial PR proteins by targeting Membrin 12, a protein trafficking gene, and thereby functions in ETI during infection by Pseudomonas syringae pv. tomato in Arabidopsis [34]. The miR863-3p is induced by the bacterial pathogen Pseudomonas syringae.
During early infection, miR863-3p silences two negative regulators of plant defense, atypical receptor-like pseudokinase 1 (ARLPK1) and ARLPK2, through mRNA degradation, thereby triggering immunity. Later during infection, miR863-3p silences SERRATE and positively regulates defense; since SERRATE is essential for miR863-3p accumulation, this constitutes a negative feedback loop. Thus, miR863-3p targets both negative and positive regulators of immunity through two modes of action, fine-tuning the timing and amplitude of defense responses [35].

High expression of plant NBS-LRR defense genes is often lethal to plant cells and is associated with fitness costs. Plants have therefore developed several mechanisms to regulate the transcript levels of NBS-LRR genes. One key mechanism is the suppressive regulatory network between miRNAs and NBS-LRRs, which may play a crucial role in plant-microbe interactions through sRNA silencing [18]. NBS-LRR genes confer defense against pathogen infection through dosage-dependent, dynamic expression achieved by multiple duplications and diversification, while miRNAs minimize the cost of these gene copies by inhibiting their expression [36]. A single miRNA can regulate dozens to hundreds of NBS-LRRs by targeting similar motif sites [37], which makes it more economical to balance the benefits and costs of these copies in the genome. To date, a few miRNAs have been validated to be involved in the regulation of NBS-LRR genes. The regulation of the CC-NB-LRR and TIR-NB-LRR gene classes by miRNAs has mostly been characterized in eudicots. In most of these post-transcriptional regulatory networks, the miRNA triggers the generation of 21-nt phased siRNAs from NB-LRR transcripts, processed by RNA-dependent RNA polymerase 6 (RDR6) and DICER-LIKE 4 (DCL4) [38]. For example, in Brassica, miR1885 was validated to be induced by Turnip mosaic virus (TuMV) infection and to cleave TIR-NB-LRR class genes [39]. In tobacco, both nta-miR6020 and nta-miR6019 act in resistance to Tobacco mosaic virus (TMV) by cleaving TIR-NB-LRR immune receptors [40,41]. In tomato, sl-miR5300 and sl-miR482f control NB domain-containing proteins at the levels of mRNA stability and translation and are involved in plant immunity [42]. In Arabidopsis, miR472 modulates disease resistance genes through the RDR6-mediated silencing pathway [43]. In Medicago, miR2109, miR482/miR2118 and miR1507 were found to influence the NB-LRR gene family [37]. In legumes, miR482, miR1507, miR1510 and miR2109 suppress NB-LRR genes with CC or TIR domains and have been proposed to function in the regulation of defense responses or host specificity during rhizobium colonization [38,44]. In addition, miR482/miR2118, miR946, miR950, miR951, miR1311, miR1312, miR3697, miR3701 and miR3709 also mediate the generation of phased siRNAs by targeting the NBS-LRR gene class in Norway spruce [45]. In monocots, miR2009 (also named miR9863 in miRBase) was first predicted in wheat to target the Mla alleles [46]; in barley, the miR9863 family was then confirmed to act on the Mla alleles [47].

Table 3. List of regulators involved in the immunity response to pathogens in plants.

The Evolution of Defense Genes in PTI

In land plants, RLKs have expanded extensively and fulfill diverse roles, including the perception of growth hormones and of environmental/danger signals derived from pathogens [143].
In Arabidopsis, 44 RLK subgroups have been defined; among them, the leucine-rich repeat receptor-like kinases (LRR-RLKs) form the largest receptor-like kinase family and have attracted particular research attention [144]. Based on their distinctive gene structures and protein motif compositions, plant LRR-RLKs fall into 19 subfamilies, most of which derive from common ancestors in land plants. LRR-RLK genes make up 0.30% and 0.36% of the lycophyte and moss genomes, respectively, versus 0.67-1.39% in angiosperms [145], indicating a particular expansion of defense genes in angiosperm genomes. LRR-RLKs involved in defense/resistance are less conserved than those involved in development. Defense-associated LRR-RLKs have undergone many duplication events, and most of them expanded massively in a lineage-specific fashion, mainly by tandem duplication [143,144]. These discoveries provide important resources for future functional research on these critical PTI signaling genes.

The Evolution of Defense Genes in ETI

NBS-LRR genes, an ancient and conserved gene class, have been detected in gymnosperms, angiosperms and animals, where they ensure immunity [12,146,147]. However, comparative genomic analyses have demonstrated great structural diversity of NBS-LRR genes in plants and animals. For example, TIR domains were established in ancestral plants such as conifers and mosses, and in animals they share functionality in innate immunity [148-150]. TIR genes expanded specifically in dicot genomes but are absent, or at least rare, in monocot genomes [8,147,151-153]. For NBS-LRR genes, tandem duplication is the major expansion mechanism in plant genomes; more than 60% of NBS-LRR genes are organized in clusters (Figure 2B) [98]. During whole-genome duplication, biased deletions occurred in the duplicated paralogous blocks carrying NB-LRR genes, which could possibly be compensated by local tandem duplication (Figure 2B). miRNAs typically target highly duplicated NBS-LRRs, whereas families of heterogeneous NBS-LRRs are rarely targeted by miRNAs in Brassicaceae and Poaceae genomes [18]. miRNA/NBS-LRR interactions drove functional diploidization of structurally retained NBS-LRR duplicates through suppressive regulation (Figure 2B) [98]. Evolutionary shuffling events such as diploidization and tandem duplication led to copy number variations and presence/absence variations, collapsing the synteny of NBS-LRR genes [154-157]. In addition, such polymorphisms often exist within a population [158]. Conservation of NBS-LRR genes is sharply contrasted, at only 23.8% for monocots and 6.6% for dicots. Thus, NBS-LRR genes, one of the most plastic gene families in plants, show less conservation, with synteny erosion or outright loss, than other protein-coding genes [98].

The Evolution of miRNAs in PTI

In the PTI pathway, most miRNAs are deeply conserved and are directly or indirectly involved in multiple biological processes in development and in abiotic/biotic stress responses. MiR169, miR171, miR393, miR395 and miR396 are all ancient miRNAs present in both dicots and monocots [48]. miR444 is specific to monocots [49], whereas miR773 and miR5272 are lineage-specific in Arabidopsis and Medicago. The miRNAs conserved across plants mostly regulate important transcription factors, and these transcription factors tend to participate in multiple biological processes.
Taking miR169 and miR396 as examples: the miR169/NFYA module in Arabidopsis indirectly affects lateral root initiation [50], nitrogen starvation responses [51], drought stress [52] and biotic stress [25,26]. In Arabidopsis roots, miR396/GRF regulates the switch between stem cells and transit-amplifying cells [53]; the module also affects rice yield by shaping inflorescence architecture [54] and contributes to biotic stress responses [28]. Both the miRNA/target regulation and its function are highly conserved in plants: the miR169/NFYA module influences Ralstonia solanacearum pathogenicity in Arabidopsis [25] and resistance to M. oryzae strains in rice [26]. In addition, the targets of these conserved miRNAs have expanded beyond their classical miRNA/target modules. For example, miR156 regulation of the SQUAMOSA-PROMOTER BINDING PROTEIN-LIKE (SPL) family, involved in the timing of vegetative and reproductive phase change, is highly conserved among phylogenetically distinct plant species [55]. miR395, which targets a high-affinity sulphate transporter and three ATP sulfurylases involved in sulfate homeostasis, is likewise conserved in plants [56,57]. By contrast, in apple both miR156 and miR395 regulate resistance to leaf spot disease by targeting WRKYs. Thus, miRNAs involved in PTI are conserved both in the PTI defense pathway and in plant development, as for miR393 vs. TIR1 in the auxin signaling pathway [22] and miR319 vs. TCP in the JA pathway [29]. Only a few miRNAs, such as osa-miR159a.1 [58] and miR5638 and miR1315 [59], have been reported to potentially regulate RLKs/RLPs. Genes involved in the PTI pathway are relatively conserved compared with those involved in the ETI pathway; accordingly, most of their regulator miRNAs are also conserved miRNAs, or neofunctionalized miRNAs, in plants.

The Evolution of miRNAs in ETI

Although many miRNAs regulate NB-LRR genes, their level of conservation is lower than that of development-associated or PTI-associated miRNAs. Between eudicots and monocots, there are no conserved miRNAs targeting the NB-LRR genes. Lineage- or species-specific disease-resistance-associated miRNAs arise continually, accompanying the continually varying pathogens, and some miRNAs with similar sequences show obvious functional diversity. miR482/miR2118 in eudicots mostly targets NB-LRR genes; in rice, however, it only initiates the generation of 21-nt phased siRNAs, and most of its target transcripts are noncoding sequences specifically expressed in the rice stamen and in the maize premeiotic and meiotic anther [60-62]. These studies clearly establish that miR2118 initiates phased siRNA production in male reproductive organs; a functional switch therefore occurred in miR482/miR2118 between eudicots and monocots. Expression levels also vary among related lineages: tae-miR3117 was predicted to target numerous NBS-LRRs and shows higher expression in tetraploid and hexaploid Triticum seedlings but lower expression in Aegilops tauschii (unpublished data); in rice, maize and sorghum, miR3117 also displays lower expression levels. Diverse miRNAs, as negative regulators, inhibit NBS-LRRs in plants. The highly duplicated NBS-LRRs are typically targeted by miRNAs (Figure 2B), while families of heterogeneous NBS-LRR genes are rarely regulated by miRNAs, as in Poaceae and Brassicaceae genomes. For example, some miRNAs themselves have a high duplication rate, such as the tandemly duplicated miR482/miR2118 [60-62], which may enhance the expression dosage.
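To make the recurring idea of one miRNA recognizing a shared motif across many duplicated NBS-LRR copies concrete, the sketch below reverse-complements a miRNA "seed" (positions 2-8) and scans a set of transcripts for the resulting site. All sequences and the seed-only matching rule are invented for illustration; real target prediction additionally weighs pairing outside the seed, mismatches and hybridization energy.

```python
# Toy miRNA target scan: find transcripts carrying a site complementary
# to the miRNA seed (positions 2-8). All sequences below are made up.
COMPLEMENT = str.maketrans("AUGC", "UACG")

def seed_site(mirna: str) -> str:
    """Return the target-site sequence complementary to the miRNA seed."""
    seed = mirna[1:8]                         # positions 2-8 of the miRNA
    return seed.translate(COMPLEMENT)[::-1]   # reverse complement

mirna = "UGGAGUGUGACAAUGGUGUUUG"              # hypothetical 22-nt miRNA
transcripts = {
    "NBSLRR_1": "AAGCACACUCCAGCUAUGGUCC",     # carries the seed site
    "NBSLRR_2": "GGUACACUCCAACGAUCGAUAG",     # carries the seed site
    "OTHER_1": "GGGAUUUCCAGGAAUCCCGGAA",      # no site
}

site = seed_site(mirna)
hits = [name for name, seq in transcripts.items() if site in seq]
print(f"seed site {site} found in: {hits}")   # ['NBSLRR_1', 'NBSLRR_2']
```

In this toy setting, one seed site suffices to flag every family member that retains the conserved motif, which is exactly why a single miRNA can keep dozens of duplicated NBS-LRR copies in check.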
Newly emerged miRNAs are periodically derived from duplicated/redundant NBS-LRRs of different gene families, and most of these newborn miRNAs target NBS-LRR regions encoding conserved protein motifs, following a convergent evolution model (Figure 2B). These miRNAs may drive the rapid diploidization of NBS-LRR genes in polyploid plants. NBS-LRR-associated miRNAs also diversify rapidly: nucleotide diversity at the wobble positions of codons in the target site region drives the diversification of miRNAs. These characteristics of high duplication rate and rapid diversification mirror those of the target genes. The co-evolutionary model between NBS-LRRs and miRNAs thus allows plants to balance the costs and benefits of disease resistance [18].

The First Strategy: Utilizing Disease Resistance Genes via a Molecular Switch

Up to now, a number of genes have been shown to be involved in plant immune defense. Over-expressing such defense genes can dramatically enhance disease resistance in plants, but this is often associated with significant fitness penalties that make the resulting products undesirable, which hampers agricultural application. Recently, a strategy has been developed to utilize these defense genes from the angle of the plant genes or their regulators [83]. The strategy is to introduce an immunity-inducible promoter together with the two pathogen-responsive upstream open reading frames of the TBF1 gene (uORFsTBF1); TBF1 is a key immune regulator whose translation is transiently and rapidly induced upon pathogen challenge (Figure 2C, uORF). It has been demonstrated that placing production of AtNPR1 in rice, and of the auto-activated immune receptor snc1-1 in Arabidopsis, under uORFsTBF1-mediated translational control did not reduce plant fitness in the laboratory or in the field [83]. This molecular-switch strategy enables the engineering of more broad-spectrum disease resistance genes with minimal adverse effects on plant growth and development in agricultural applications.

The Second Strategy: Host-Induced Gene Silencing (HIGS)

Transgene-derived artificial sRNAs in plants can induce target gene silencing in certain interacting insects [84,85], nematodes [86], fungi [87-90], oomycetes [91,92], and even other plants [141], a phenomenon called host-induced gene silencing (HIGS). The artificial sRNAs travel from host plants to pathogens or pests and then function in trans (Figure 2C, HIGS). The approach has been used successfully in many plants over the past decades. Plant-mediated RNAi suppression of a bollworm P450 monooxygenase gene in cotton impaired larval tolerance of gossypol [85]. In transgenic plants, RNAi silencing of a conserved and essential root-knot nematode parasitism gene engineered broad root-knot resistance [86]. HIGS of nematode fitness and reproductive genes decreases the fecundity of Heterodera glycines Ichinohe. Double-stranded RNA expressed in Arabidopsis and barley, complementary to the cytochrome P450 lanosterol C14 alpha-demethylase-encoding genes of Fusarium, contributes strong resistance to Fusarium species [90]. HIGS targeting the MAPKK gene PsFUZ7 confers stable resistance of wheat to stripe rust [159], and HIGS of PsCPK1, an important pathogenicity factor of Puccinia striiformis f. sp. tritici, likewise confers wheat resistance to stripe rust [160].
Through this transgene-mediated cross-kingdom RNAi mechanism, HIGS is a sound and effective strategy for broad-spectrum improvement of crop disease resistance.

The Third Strategy: Spray-Induced Gene Silencing (SIGS)

Pathogens and pests are capable of taking up double-stranded RNAs or small RNAs from plants or the environment [93]. Building on this uptake, and on the mechanism of cross-kingdom/organism RNA interference, researchers have developed a strategy to control crop disease: spray-induced gene silencing (SIGS), in which dsRNAs and sRNAs sprayed onto plant surfaces target pathogen genes and repress pathogen virulence (Figure 2C, SIGS). It is a natural blueprint for modern crop protection strategies. Evidence suggests that nematodes [94], insects [84] and fungi [95] can take up environmental dsRNAs or sRNAs. Directly spraying dsRNAs that target the fungal cytochrome P450 lanosterol C-14 alpha-demethylase genes suppresses fungal growth [95]. On barley leaves, spraying CYP51-targeting dsRNA at concentrations of 1-20 ng/mL inhibited the growth of Fusarium species [3]. Spraying naked sRNAs and dsRNAs onto plants has successfully protected fruits and vegetables against pathogens; however, the pesticidal effect of naked sRNAs and dsRNAs lasts only 5-8 days. Mitter et al. developed a method to load dsRNAs onto designer, non-toxic, degradable, layered double hydroxide (LDH) clay nanosheets; the LDH prevents the dsRNA from being washed off and allows sustained release for 30 days [96]. Broad application of this new SIGS strategy may contribute to reduced use of chemical pesticides and lighter selective pressure for resistant pathogens. The new generation of RNA-based fungicides and pesticides is powerful and eco-friendly, and can easily be adapted to control multiple plant diseases simultaneously.

Conclusions

Plants deploy the PTI, ETI and CKRI innate immune systems in an arms race with different pathogen stresses, while pathogens develop ever more advanced effectors to defeat plant immunity. A number of genes have been shown to play important roles in host-pathogen interactions in plants, and these signaling genes will be helpful for improving plant disease resistance against various pathogens. Durable, broad-spectrum resistance genes and their regulators, such as miRNAs, can be applied in developing crop varieties by introducing the molecular switch. From the cross-kingdom angle, HIGS can be used in crop breeding through transgenic approaches, which can likewise confer broad-spectrum resistance on hosts, and SIGS can give plants broad-spectrum resistance by spraying the designed dsRNAs/sRNAs. Further functional studies in plants will dissect more and more defense genes and hopefully unravel the intricate defense regulatory network, and more molecular technologies will be invented and adapted to help develop eco-friendly disease-resistant cultivars.

Author Contributions: S.Z. and R.Z. conceived and designed the project. F.Z., R.Z., S.Z., S.W., and P.C. downloaded and analyzed the data. R.Z., F.Z., and G.L. prepared and drafted the manuscript. S.Z. and P.C. revised the manuscript. All the authors read and approved the final manuscript.
8,416.8
2019-01-01T00:00:00.000
[ "Biology" ]
Electromagnetic structure of light nuclei

The present understanding of nuclear electromagnetic properties, including electromagnetic moments, form factors and transitions in nuclei with A ≤ 10, is reviewed. Emphasis is on calculations based on nuclear Hamiltonians that include two- and three-nucleon realistic potentials, along with one- and two-body electromagnetic currents derived from a chiral effective field theory with pions and nucleons.

Introduction

A major goal in nuclear physics is to understand nuclear structure and dynamics in terms of the underlying interactions occurring between individual nucleons. Studies grounded on this basic picture of the nucleus are referred to as ab initio. An exceptionally powerful tool to assess the validity of our theoretical models is to investigate nuclear electromagnetic (e.m.) observables, such as ground state properties, e.g., e.m. moments and form factors, as well as e.m. reactions, e.g., photo- and electroinduced reactions. In these processes, external e.m. probes interact with the nuclear charge and current distributions with a strength characterized by the fine-structure constant α ∼ 1/137. The small value of the fine-structure constant allows for a perturbative treatment of the e.m. interaction, while non-perturbative physics pertains only to the nuclear target. For light nuclei, terms that go beyond the leading order contribution in the Zα-expansion (where Z is the number of protons) can be safely disregarded, leaving us with relatively simple reaction mechanisms and manageable formal expressions. For example, at leading order, the cross section associated with inclusive electron-nucleus scattering processes factorizes into the leptonic tensor, which is completely specified by the measured electron's kinematic variables, and the hadronic tensor associated with the nuclear target, proportional to the squared matrix elements of the nuclear e.m. charge and current operators. A clear connection between measured quantities, i.e., cross sections, and calculated matrix elements is then realized. Experimental data on e.m. observables are, in most cases, known with great accuracy, providing us with viable and strong constraints on our models. Likewise, for light nuclei, theoretical calculations are affected by relatively small statistical errors, because for these systems the many-body problem can be solved exactly or within controlled approximations. This allows for solid comparisons between experimental data and theoretical predictions.

In Fig. 1, a cartoon picture of the double differential cross section for electron scattering off nuclei is represented. Different values of the energy ω transferred to the system correspond to different excitation energies of the nucleus. By varying ω, we can access the ground state (elastic peak), low-lying (discrete) nuclear excited states, giant resonance modes, and the quasi-elastic energy region up to the pion-production threshold. For each value of excitation energy ω, one can study the matrix elements' behavior as a function of the momentum |q| transferred to the nucleus. In particular, by varying |q| one can explore the e.m. charge and current distributions with a spatial resolution ∝ 1/|q|. In this talk, I will focus on ab initio calculations of ground state nuclear e.m. properties, that is, e.m. moments and elastic form factors, as well as widths of e.m. transitions occurring between low-lying nuclear states. These studies have been recently reported in a topical review on e.m.
reactions on light nuclei [1], where more details and references to original articles can be found. Recent developments in theoretical ab initio investigations of other very interesting e.m. processes in light nuclei, such as photo-absorption and radiative capture reactions, Compton scattering, and sum rules, are well represented in this conference; see, e.g., the contributions by X. Zhang, J. Dohet-Eraly, H. Griesshammer, M. Miorelli, S. Bacca, N. Barnea, D. Rozpedzik, and A. Lovato in these proceedings. A theoretical understanding and control of nuclear e.m. structure and dynamics is a necessary prerequisite for studies of weak-induced reactions, such as neutrino-nucleus interactions. Experimental data acquisition for this kind of process is comparatively more involved owing to the much smaller cross sections and to the fact that neutrinos are chargeless particles and, thus, hard to collimate and detect. An important advance in this direction has recently been achieved by Lovato and collaborators [2]; for a status report on ab initio calculations of weak response functions in 4He and 12C, I refer to the plenary talk of A. Lovato (the associated contribution can be found in these proceedings). Moreover, a theoretical understanding of the structure and dynamics of light nuclei is a necessary prerequisite for research projects aimed at studying larger nuclear systems. For these reasons, it is imperative to first validate our theoretical understanding of e.m. reactions on light nuclei.

Nuclear Hamiltonians and electromagnetic currents

In the ab initio framework, the nucleus is described as a system made of A non-relativistic point-like nucleons interacting among each other via many-body forces, and its energy is approximated by the Hamiltonian

$$H = \sum_i K_i + \sum_{i<j} v_{ij} + \sum_{i<j<k} V_{ijk}\,,$$

where $K_i$ is the non-relativistic single-nucleon kinetic energy, while $v_{ij}$ and $V_{ijk}$ are two-nucleon (NN) and three-nucleon (3N) potentials, respectively. Implicit in the equation above is the assumption that four-nucleon forces and higher-order terms in the many-body expansion are suppressed. The NN and 3N potentials are phenomenological in nature, in that they involve a number of parameters, subsuming underlying Quantum Chromodynamics (QCD) effects, that are fixed by fitting experimental data. For example, NN potentials are constrained to reproduce a large number of NN scattering data, along with the deuteron binding energy. Nuclear forces belonging to this class of highly accurate nuclear potentials are referred to as 'realistic'. Most realistic potentials describe the long-range (∝ 1/mπ, where mπ is the pion mass) part of the nuclear interaction in terms of one-pion-exchange mechanisms. Different dynamical schemes are implemented to account for intermediate- and short-range effects, among which multiple-pion exchange, contact interactions, heavy-meson exchange, or excitations of nucleons into virtual ∆-isobars. Here, the realistic potentials utilized to solve the Schrödinger equation H|Ψ⟩ = E|Ψ⟩ (where |Ψ⟩ is the nuclear wave function) are the Argonne v18 (AV18) NN potential [3] in combination with either the Urbana IX [4] or Illinois-7 [5] 3N potentials, as well as combinations of NN and 3N potentials derived from chiral effective field theory (χEFT) [6-9]. Nuclear charge (ρ) and current (j) operators describe the interactions of nuclei with external e.m. probes.
They are also expanded in a series of many-body operators as

$$\rho(\mathbf{q}) = \sum_i \rho_i(\mathbf{q}) + \sum_{i<j} \rho_{ij}(\mathbf{q}) + \dots\,, \qquad \mathbf{j}(\mathbf{q}) = \sum_i \mathbf{j}_i(\mathbf{q}) + \sum_{i<j} \mathbf{j}_{ij}(\mathbf{q}) + \dots\,,$$

where q is the momentum transferred to the nucleus. In the Impulse Approximation (IA), that is, retaining only the one-body operators in the equations above, the nuclear e.m. charge and current distributions are simply the sums of those associated with individual protons and neutrons. The non-relativistic charge operator for point-like nucleons is simply the proton charge, while the nucleon current consists of a convection term associated with the current generated by moving protons and a spin-magnetization term associated with the spins of both protons and neutrons. The IA picture of the nucleus is, however, incomplete, as it fails to explain, e.g., the measured magnetic moments of light nuclei. Corrections that account for processes in which external e.m. probes couple to pairs of interacting nucleons, described by two-body current operators, need to be incorporated into the theoretical ab initio description. Meson-exchange currents (MEC), postulated in the 1940s by Villars [10] and Miyazawa [11], follow naturally once meson-exchange mechanisms are invoked to describe the interactions between individual nucleons. They account for processes in which the external e.m. probe couples to mesons being exchanged between nucleons. The first evidence of meson-exchange effects in light nuclei can be traced back to the 1972 work by Riska and Brown [12], in which MEC were found to provide the missing 10% correction to the IA value necessary to reach agreement between the calculated and measured cross sections for radiative proton-neutron capture at thermal neutron energies. Since then, MEC have evolved into highly sophisticated and accurate currents. In their most recent formulation [13,14], in order to ensure consistency between nuclear forces and e.m. currents, MEC are constructed from realistic NN and 3N potentials so as to satisfy the continuity equation. The addition of these MEC corrections to the IA picture successfully explains a wide number of e.m. nuclear observables in light nuclei [15,16].

Recent years have witnessed the tremendous development and success of χEFT [17-19], which reinforces and grounds the achievements of conventional theoretical approaches. The relevant degrees of freedom of nuclear physics are bound states of QCD, i.e., pions, nucleons, ∆'s, and so on. On this basis, their dynamics is completely determined by that associated with the underlying degrees of freedom of quarks and gluons, that is, QCD. However, at low energies QCD does not have a simple solution, because the strong coupling constant becomes too large and perturbative techniques cannot be applied. χEFT is a low-energy approximation of QCD, valid in the energy regime where the typical momenta involved, generically indicated by Q, are such that Q ≪ Λχ ∼ 1 GeV, where Λχ is the chiral-symmetry-breaking scale. χEFT provides us with effective Lagrangians describing the interactions between pions, nucleons, and ∆'s that preserve all the symmetries, in particular chiral symmetry, exhibited by the underlying theory of QCD at low energy. These effective interactions, and the transition amplitudes derived from them, can be expanded in powers of the small expansion parameter Q/Λχ, restoring, in practice, the possibility of applying perturbative techniques also in the low-energy regime. The unknown coefficients of this expansion in small momenta, referred to as low-energy constants (LECs), while tied to QCD effects and therefore in principle attainable from QCD calculations, are in practice fixed by comparison with experimental data. Thanks to the chiral expansion, it is then possible to evaluate nuclear observables to any desired degree ν of accuracy, with an associated theoretical error roughly given by (Q/Λχ)^(ν+1).
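To give a feel for what this power counting buys, the naive size of the first neglected term at order ν, (Q/Λχ)^(ν+1) as just stated, can be evaluated for a typical soft scale. Taking Q of order the pion mass and Λχ ≃ 1 GeV is a common, though here purely illustrative, choice:

```python
# Naive chiral-EFT truncation estimate: the first neglected term at
# order nu scales as (Q / Lambda_chi)**(nu + 1). The scales below are
# typical illustrative choices, not values fixed by the text.
Q = 0.14          # GeV, soft scale of order the pion mass
LAMBDA_CHI = 1.0  # GeV, chiral-symmetry-breaking scale

ratio = Q / LAMBDA_CHI
for nu in range(4):
    print(f"nu = {nu}: (Q/Lambda_chi)**{nu + 1} = {ratio ** (nu + 1):.1e}")
# nu = 0: 1.4e-01,  nu = 1: 2.0e-02,  nu = 2: 2.7e-03,  nu = 3: 3.8e-04
```

Each additional order thus nominally buys roughly a factor of seven in accuracy for these scales.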
This calculational scheme has been widely utilized to study both nuclear forces and nuclear electroweak currents. The many-body operators emerging from direct evaluations of the transition amplitudes, with interactions provided by χEFT Lagrangians, involve multiple-pion-exchange operators as well as contact-like interaction terms. Nuclear two- and three-body interactions were first investigated in the late '90s by Ordóñez, Ray, and van Kolck using a χEFT with pions and nucleons [20-22]. Currently, chiral NN (3N) potentials commonly used in ab initio calculations include up to next-to-next-to-next-to-leading order, or N3LO (next-to-next-to-leading order, or N2LO), corrections in the chiral expansion [6,7,23]. Vector e.m. currents were first derived from a χEFT with pions and nucleons by Park, Min, and Rho in Ref. [24]. The resulting operators account for two-pion-exchange terms entering at N3LO in the chiral expansion. These currents have been utilized in a number of so-called 'hybrid' calculations of nuclear e.m. observables, including magnetic moments and M1 properties of A = 2-3 nuclei and radiative capture cross sections in A = 2-4 systems [25-27]. More recently, χEFT e.m. current and charge operators have been derived with up to one-loop contributions included, within two different implementations of time-ordered perturbation theory: one by the JLab-Pisa group (see Refs. [28-31]) and the other by the Bochum-Bonn group (see Refs. [32,33]). In this talk, I focus on results obtained using chiral e.m. currents, and compare, where possible, different theoretical evaluations against the experimental data. For results based on conventional e.m. MEC currents I refer the reader to the review articles of Refs. [1,15] and references therein.

Before presenting applications to e.m. observables, I will briefly describe the e.m. operators as they emerge from a χEFT with pions and nucleons. I will start with the vector e.m. current, which is diagrammatically represented in Fig. 2. The leading order (LO) contribution to the e.m. current, illustrated in panel (a), is simply given by the non-relativistic one-body current used in IA calculations, while the N2LO one-body operator of panel (d) is a relativistic correction to the LO IA current. Currents of one- and two-pion range, describing long- and intermediate-range dynamics, enter at NLO and N3LO (panels (b), (c), and (e)-(j)). Short-range dynamics is encoded by the contact currents of panel (k). Unknown LECs enter the tree-level diagram of panel (e) and the contact currents of panel (k). The LECs entering the contact terms are of two kinds, namely minimal and non-minimal. The former also enter the NN chiral potential at NLO, and can therefore be constrained by NN scattering data; the latter need to be fixed from e.m. experimental data. A common procedure implemented to reduce the number of unknown non-minimal LECs (there are 5 of them) is to impose that the two LECs entering the isovector part of the tree-level current illustrated in panel (e) are in fact saturated by the ∆-couplings entering the ∆ transition e.m.
current [29,31]. The remaining three LECs are commonly fixed so as to reproduce the magnetic moments of the deuteron, triton, and 3He [31]. Early investigations of the e.m. charge operator in χEFT were carried out in Refs. [34-36], and more recently loop corrections have been derived in Ref. [32] and subsequently in Ref. [30]. In closing, we note that the structure of the charge operator is quite different from that of the vector e.m. current. Two-body corrections, in this case, are expected to be relatively small. In fact, the leading two-body operators of one-pion range are suppressed, as they enter at N3LO (as opposed to NLO in the case of the vector currents), while no free LECs enter the charge operator [30].

Deuteron, 3He and 3H electromagnetic form factors

For A = 2-3 nuclei, theoretical calculations performed by different groups are available, which makes it possible to compare them not only with the experimental data but also among themselves, to test the solidity of the ab initio prescription. The left and middle panels of Fig. 3 show the deuteron charge and quadrupole form factors, respectively, calculated by Piarulli and collaborators [31] (magenta hatched bands) and by Phillips [35,36] (purple bands). Both calculations are based on chiral NN potentials. In particular, Piarulli et al. use wave functions from the chiral NN potential at N3LO [38], while Phillips uses those from the NN interaction at N2LO [39]. The thickness of the bands represents the sensitivity of the results to the different cutoffs utilized to regularize the divergent behavior at high momenta of the chiral operators' matrix elements [31,35,36]. The two calculations agree very nicely with the experimental data for low values of momentum transfer (q ≃ 3 fm⁻¹) and exhibit a similar (small) cutoff dependence. In the case of the charge form factor, as q increases both theoretical calculations exhibit a more pronounced cutoff dependence and differ from each other, an indication that this observable is sensitive to the nuclear wave functions utilized in the calculations. In the case of the quadrupole form factor, agreement with the experimental data is seen up to q ≃ 6 fm⁻¹, well beyond the expected regime of validity of the χEFT framework. In the right panel of Fig. 3, we compare the results for the deuteron magnetic form factor obtained by Piarulli et al. [31] (hatched magenta band), based on the chiral N3LO potential [38] and chiral e.m. currents at N3LO [29], with the fully consistent χEFT calculations by Kölling et al. [37] (solid purple band), based on the chiral NN potential at N2LO [39] and chiral e.m. currents at N3LO [32,33]. The theoretical results are in very good agreement with each other and with the experimental data for values of momentum transfer q ≃ 3 fm⁻¹, and present a comparable cutoff dependence.

Form factor calculations for A = 3 nuclei have been reported in Ref. [31]. Here, we show results for the trinucleon magnetic form factors obtained utilizing the chiral e.m. currents at N3LO of Refs. [28,29] and two sets of nuclear Hamiltonians, namely the AV18 [3] NN plus UIX [4] 3N potentials, and the N3LO [38] NN plus N2LO [9] 3N potentials. Calculations in IA are given in light blue (based on chiral interactions) and blue (based on conventional interactions), while full calculations that include the complete e.m. current up to N3LO are given in magenta (based on chiral interactions) and red (conventional interactions).
In the figure, the top panels show the 3He and 3H magnetic form factors, while the bottom ones show their isoscalar (F_T^S) and isovector (F_T^V) combinations [31]. As is well known from studies based on the conventional approach (see Ref. [15]), two-body e.m. currents are crucial to improve the agreement between the observed positions of the zeros and those predicted at LO (or in IA). Despite the excellent agreement between theory and experiment for q ≤ 2 fm⁻¹, the theory underpredicts the data at higher momentum transfers, while the zeros are found at lower values of q than observed. The theoretical description of the first diffraction region is still incomplete.

Magnetic moments and electromagnetic transitions in A ≤ 10 nuclei

Moving on to larger nuclear systems, we find a number of Green's function Monte Carlo (GFMC) calculations [41] based on the AV18 [3] plus IL7 [5] nuclear Hamiltonian that use the chiral e.m. currents up to N3LO from Refs. [28,29,31]. Magnetic moments of light nuclei [40] are summarized in the left panel of Fig. 5, where IA results are given by blue dots, while calculations that include the full chiral e.m. current operator are indicated by red diamonds, to be compared with the experimental data represented by black stars. First, we note that corrections from two-body currents are found to be small where the IA picture is satisfactory (see, e.g., 6Li, 9Be, 10B), and large where the IA picture is incomplete (see, e.g., 7Li, 7Be, 9C, 9Li). Corrections from two-body components can be as large as 40%, as seen in the case of 9C, and are crucial to reach (or improve) the agreement with the experimental data. It is also interesting to note that two-body effects, while significant for the 9C and 9Li magnetic moments, are found to be negligible for those of 9Be and 9B. This behavior can be explained by considering the dominant spatial symmetries of the nuclear wave functions of these A = 9 systems. For example, the dominant spatial symmetry of 9Be (9B) corresponds to an [α, α, n(p)] structure [42]; the unpaired nucleon outside the α clusters therefore does not interact with other nucleons, and as a consequence two-body currents, which describe the coupling of external e.m. probes to pairs of interacting nucleons, produce a negligible correction. On the other hand, the dominant spatial symmetry of 9C (9Li) corresponds to an [α, 3He (3H), pp (nn)] structure, and two-body correlations contribute both within the trinucleon clusters and between the trinucleon clusters and the valence pp (nn) pair, resulting in a large two-body current contribution.

GFMC calculations of selected E2 and M1 transitions between low-lying nuclear states [40] are summarized in the right panel of Fig. 5. Predictions in IA are represented by blue dots, while those obtained with the full chiral e.m. current operator are represented by red diamonds. Calculations for E2 transitions implicitly include the effect of two-body currents via the Siegert theorem, where the charge density is used in IA. Also for these observables the effect of two-body e.m. currents can be large, and in cases where the experimental errors are relatively small, e.g., 7Li(1/2⁻ → 3/2⁻) and 8B(1⁺ → 2⁺), their inclusion is found to lead to agreement with the experimental data. This scheme has most recently been utilized to study e.m. (both E2 and M1) transitions occurring in 8Be [43,44].
It is found that the agreement between the calculated and experimental M1 widths is not satisfactory. Nevertheless, chiral two-body e.m. currents provide corrections at the 20%-30% level, which, in all but one case, improve on the IA values. It is possible that the systematic underprediction of these observables is due to poor knowledge of the small components entering the calculated GFMC nuclear wave functions [44].

In this talk, I presented an overview of the present status of ab initio calculations of e.m. observables, including e.m. moments and form factors, as well as e.m. transitions in light nuclei. The emphasis was on calculations that account for many-body effects both in the nuclear Hamiltonians utilized to generate the wave functions and in the e.m. current operators. I focused on results that account for two-body operators derived from a χEFT formulation with pions and nucleons, including corrections up to two-pion range. The ab initio prescription is extremely successful in explaining the experimental data, provided that many-body effects in both the e.m. currents and the nuclear Hamiltonians are accounted for. χEFT-based calculations of the e.m. form factors of A = 2 and 3 nuclei [31] agree nicely with the experimental data in the low-energy regime of applicability of χEFTs, with two-body corrections playing an important role in improving the agreement between the calculated and experimental values of the trinucleon magnetic form factors. E.m. two-body current operators provide a 40% correction to the calculated magnetic moment of 9C [40], and corrections at the 20%-30% level to M1 transitions occurring in 8Be [44]. There are many interesting e.m. observables that can be accessed within this formalism. For example, few (or no) ab initio calculations of e.m. (charge and magnetic) form factors in A > 4 nuclei currently exist [1], and it would be interesting to perform them to gain deeper insight into nuclear e.m. structure. A complete microscopic profile of nuclei also includes studies of e.m. reactions such as radiative captures and photonuclear reactions. From the theoretical point of view, i) the construction of chiral potentials compatible with Quantum Monte Carlo computational methods [45] opens up the possibility of performing consistent Quantum Monte Carlo calculations that use chiral potentials and chiral currents; ii) the construction of the chiral NN potential with explicit inclusion of ∆ excitations [46] allows study of the effects of ∆-isobars in chiral two-body e.m. current operators [28]. Weak processes are also being vigorously studied within the χEFT formulation. Among these studies are the derivation of the axial two-body current operator up to one loop [47], as well as the construction of the two-body operators entering pion-production reactions induced by neutrino scattering off nuclei (see the contributions by F. Myhrer in these proceedings).
5,236
2015-08-28T00:00:00.000
[ "Physics" ]
How Close is Your Government to its People? Worldwide Indicators on Localization and Decentralization

This paper is intended to provide an assessment of the impact of the silent revolution of the last three decades on moving governments closer to the people to establish fair, accountable, incorruptible and responsive governance. To accomplish this, a unique data set is constructed for 182 countries by compiling data from a wide variety of sources to examine success toward decentralized decision making across the globe. An important feature of this data set is that, for comparative purposes, it measures government decision making at the local level rather than at the sub-national levels used in the existing literature. The data are used to rank countries on political, fiscal and administrative dimensions of decentralization and localization. These sub-indexes are aggregated and adjusted for heterogeneity to develop an overall ranking of countries on the closeness of their government to the people. The resulting rankings provide a useful explanation of the Arab Spring and other recent political movements and waves of dissatisfaction with governance around the world. (Policy Research Working Paper 6138)

Introduction

A silent revolution has been sweeping the globe since the 1980s. Hugely complex factors, such as the political transition in Eastern Europe, the end of colonialism, the globalization and information revolution, the assertion of citizens' basic rights by courts, divisive politics, and citizens' dissatisfaction with governance and their quest for responsive and accountable governance, have contributed to gathering this storm. The main thrust of this revolution has been to move decision making closer to the people to establish fair, accountable, incorruptible and responsive (F.A.I.R.) governance. The revolution has achieved varying degrees of success in government transformation across the globe due to inhibiting factors such as path dependency accentuated by powerful political, military and bureaucratic elites.
While a monumental literature deals with various aspects of this revolution, there has been no systematic study providing a time capsule of the world as changed by it. Such an assessment is critical to providing a comparative world perspective on government responsiveness and accountability. This paper takes an important first step in this direction by providing a framework for measuring the closeness of a government to its people and a worldwide ranking of countries using this framework.

The paper is organized in four parts, as follows. Part I highlights the conceptual underpinnings of moving governments closer to the people and develops a framework to measure the closeness of government to the people. It presents a brief overview of those conceptual underpinnings, followed by a discussion of basic concepts in measuring a government's closeness to its people. It calls into question the methodologies followed by the existing literature and argues for a focus on the roles and responsibilities of local governments, as opposed to sub-national governments, where intermediate-order governments typically dominate. This is the first paper that advocates and treats the various tiers of local government (below the intermediate order of government) as the unit of comparative analysis for multi-order governance reforms. Part II presents highlights of the unique dataset compiled for this study: summary statistics on the structure, size and tiers of local governments and the security of their existence, as well as on the various subcomponents of political, fiscal and administrative decentralization. Part III is concerned with the empirical implementation of the framework presented in Part I. It begins by highlighting the relative importance and significance of local governments, followed by country rankings on various aspects of political, fiscal and administrative decentralization. By combining these measurements, an aggregate indicator of localization is developed for each country. This index is then adjusted for population size, area and heterogeneity. We also provide correlations of these indexes with the corruption perceptions index, citizen-centered governance indicators, per capita GDP, the size of government, and the ease or difficulty of doing business in the country. Part IV provides concluding observations highlighting the strengths and limitations of the constructed indexes.

Moving Governments Closer to People: Conceptual Underpinnings of the Rationale and an Empirical Framework for Comparative Analysis

Why Closeness of Government to Its People Matters: Conceptual Underpinnings

Several accepted theories provide a strong rationale for moving decision making closer to the people on the grounds of efficiency, accountability, manageability and autonomy. Stigler (1957) argued that the closer a representative government is to its people, the better it works. According to the decentralization theorem advanced by Wallace Oates (1972,
(1972, p. 55), "each public service should be provided by the jurisdiction having control over the minimum geographic area that would internalize benefits and costs of such provision", because:  local governments understand the concerns of local residents;  local decision making is responsive to the people for whom the services are intended, thus encouraging fiscal responsibility and efficiency, especially if financing of services is also decentralized;  unnecessary layers of jurisdictions are eliminated;  inter-jurisdictional competition and innovation are enhanced. An ideal decentralized system ensures a level and combination of public services consistent with voters' preferences while providing incentives for the efficient provision of such services. The subsidiarity principle, originating from the social teaching of the Roman Catholic Church and later adopted by the European Union, argues for assignment of taxing, spending and regulatory functions to the government closest to the people unless a convincing case can be made for higher level assignment. Recent literature has further argued that local jurisdictions exercising such responsibilities should be organized along functional lines while overlapping geographically, so that individuals are free to choose among competing service providers (see the concept of functional, overlapping and competing jurisdictions (FOCJ) by Frey and Eichenberger, 1999). Moving government closer to people has also been advanced on the grounds of creating public value. This is because local governments have a stronger potential to tap some of the resources that come as free goods -namely, resources of consent, goodwill, good Samaritan values, and community spirit (see Moore, 1996). Moving government closer to people also matters in reducing the transactions costs individuals face in holding the government to account for incompetence or malfeasance -a neo-institutional economics perspective advanced by Shah and Shah (2006). Finally, a network form of governance is needed to forge partnerships among stakeholders such as interest-based networks, hope-based networks, private for-profit or non-profit providers, and government providers to improve economic and social outcomes. Such a network form of governance is facilitated by having an empowered government close to people that plays a catalytic role in facilitating such partnerships (see Dollery and Wallis, 2001). In sum, a strong, non-controversial case has been made by the conceptual literature for moving government decision making closer to people on efficiency, accountability and responsiveness grounds. The relevant question is how to develop a methodology for a comparative global assessment of a government's closeness to its people. This is the focus of the next section. Measuring a Government's Closeness to Its People: An Empirical Framework A government is closer to its people if it encompasses a small geographical area and population, enjoys home rule, and cannot be arbitrarily dismissed by higher level governments. This requires an understanding of the structure, size and significance of local governments, including the legal and constitutional foundations of their existence. An empirical framework for a comparative assessment must incorporate all of these factors. The following paragraphs elaborate on the methodology adopted in this paper to capture these elements. Unit of analysis.
The literature to date, without exception, takes sub-national governments as the unit of analysis for measuring closeness to people. This viewpoint is simply indefensible. This is because states or provinces in large countries such as the USA, Canada, India, Pakistan, Brazil, and Russia are larger in population size and area than a large number of small or medium size countries. Having empowered provinces and states in these countries means that decision making is still far removed from the people. Also, intermediate orders of government in large federal countries may be farther removed from people than the central government in smaller unitary states. Therefore it would be inappropriate to compare provinces in Canada or states in Brazil, India, or the USA with municipalities, say, in Greece. This approach also works against small countries such as Liechtenstein and Singapore, as these countries would be mistakenly rated as having decision making far removed from people. In view of these considerations, local governments are the appropriate unit for measuring closeness to people, as implemented here. Local government tiers. Local government administrative structure varies across countries and the number of administrative tiers varies from 1 to 5. This also has a bearing on the closeness of the government and must be taken into consideration. Local government size. The average size of local government in terms of population and area also varies across countries, and it has a bearing on the potential participation of citizens in decision making. An example of a potentially misleading choice of units for comparative analysis is Fan et al. (2009), where the authors create a dummy variable equal to 1 when the executive bodies at the lowest tier of government are elected. As a result, Bangladesh, say, gets 0 and Indonesia gets 1, which suggests that at the lowest tier Indonesia is more politically decentralized than Bangladesh. However, the average population of the local government unit in Indonesia is about 0.5 million, while in Bangladesh (according to the definitions in the paper) it is about 100 people. Bangladesh has elected executive bodies at administrative units with populations well below 0.5 million, which implies that Bangladesh is in fact more politically decentralized than Indonesia. Significance of local government. Whether or not local governments command a significant share of national expenditures indicates their role in multi-order public governance. This is important in terms of their roles and responsibilities. For example, a local government may have autonomy but only a limited and highly constrained role, as in India. This needs to be taken into consideration when making judgments on the closeness of government decision making to people. Security of existence of local governments. If local governments do not have any security of existence, then their autonomy can be a hollow promise. Thus safeguards against arbitrary dismissal of local governments must be examined. This is to be assessed both de jure, by the legal and/or constitutional foundations of local government creation, and de facto, by the working of such provisions in practice. For example, local governments in India have constitutional backing, those in Pakistan are creatures of the provinces, and in China they are simply created by executive order.
While the legal and constitutional foundations of local government in India and Pakistan are much stronger, in practice and by tradition local governments enjoy greater security of tenure in China. Empowerment of local government. This is to be assessed on three dimensions -political, fiscal and administrative (see Boadway and Shah, 2009, and Thompson, 2004). Political or democratic decentralization implies directly elected local governments, thereby making elected officials accountable to local residents. Political decentralization is to be assessed using the following criteria: direct popular elections of council members and the executive head; recall provisions for elected officials; popular participation in local elections; and the contestability and competition in local elections. Fiscal decentralization ensures that all elected officials weigh carefully the joys of spending someone else's money as well as the pain associated with raising revenues from the electorate and facing the possibility of being voted out. Fiscal decentralization is to be evaluated using the following criteria: range of local functions; local government autonomy in rate and base setting for local revenues; transparency, predictability and unconditionality of higher level transfers; finance follows function, i.e. revenue means more or less match local responsibilities; degree of self-financing of local expenditures; responsibility and control over municipal and social services; autonomy in local planning; autonomy in local procurement; ability to borrow domestically and from foreign sources; ability to issue domestic and foreign bonds; and higher level government assistance for capital finance. Administrative decentralization empowers local governments to hire, fire and set terms of reference for local employment without making any reference to higher level governments, thereby making local officials accountable to elected officials. This is to be assessed using indicators for: freedom to hire, fire and set terms of reference for local government employment; freedom to contract out own responsibilities and forge public-private partnerships; and regulation of local activities by passing by-laws. Description of the Data To implement the above framework, we have developed a unique and comprehensive dataset for 182 countries using data for the most recent year of availability (mostly 2005) on the relative importance of local governments, their security of existence and various dimensions of their empowerment. The following sections introduce and analyze various dimensions of these data. Local Government -Basic Definitions General government (GG) consists of three parts: Central Government (CG), State or Provincial Government (SG), and Local Government (LG). Each part consists of governmental units (in the case of CG, only one unit), which are grouped into one or more tiers (in the case of CG, one tier). As far as the data permit, Social Security Funds are consolidated with the appropriate part of GG. We use the commonly accepted definitions of LG and SG provided by the IMF Government Finance Statistics (GFS). These definitions are quite vague, which results in countries deciding for themselves how to report the corresponding data. This sometimes leads to inconsistencies. For example, France with three sub-national tiers of government reports all of them as LG, whereas Spain -which in many ways has the same administrative structure as France -reports one tier of SG and two tiers of LG.
Giving more precise definitions for LG and SG that could be applied to all countries is a difficult task. In constructing a comparative data set, we have attempted to correct for these self-reporting biases by using country-specific research studies, where available, to make the distinction between SG and LG tiers. Tiers of Local Government Our dataset contains detailed information about the administrative structure of every country. In particular, we report which tiers of GG are ascribed to local government, and the number of governmental units at each tier. Tiers are needed to calculate the average population of an LG administrative unit as follows: LG-pop = (1/T) × Σ (P / X_i), summing over tiers i = 1, ..., T, where LG-pop is the average population of an LG unit, T is the number of tiers in the country, P is its population, and X_i is the number of LG units at the i-th tier. Of the sample of 182 countries only 20 have state governments (SG), while the rest of the countries have only local and central governments. 26% of the countries have one tier of local government, 46% have two tiers, while 23% and 6% have three and four tiers respectively. Figure 1: Number of Tiers of Local Government -World Map Source: Authors' calculations based upon data sources reported in Annex Table A1. Note: Color of a country corresponds to its percentile in the world's distribution: red, 0-25th; yellow, 25-50th; blue, 50-75th; green, 75-100th. Figure 1 shows the world map, where darker shades represent countries having more tiers of local government. Table 1 reports analysis of these tiers by geographic region and by country per capita income. World regions have two LG tiers on average, with the South Asia and East Asia regions having an above-average number of tiers. High income countries, however, tend to have fewer LG tiers than lower income countries. Average Population Size of Local Government Units The average tiers-adjusted population of a local government unit ranges from a few thousand people (Equatorial Guinea, Switzerland, Czech Republic, Austria) to several hundred thousand people (Somalia, DR Congo, Indonesia, Korea), with a country-average population of 101,000 people. As shown in Figure 2 (see also Table 1), local governments in European and North American countries are significantly smaller in population size than those in the rest of the world, while the LG in Sub-Saharan Africa and East Asia are on average more than five times larger. Lower income countries have local governments of significantly larger population size. Figure 2: Population of Local Governments -World Map Source: Authors' calculations based upon data sources reported in Annex Table A1. Average Area of Local Government Units The average area of a local government unit ranges from 0.01 thousand square kilometers (TSK) in the Czech Republic to 70 TSK in Libya, with a cross-country average of 2.1 TSK. European and South Asian countries have local government units of relatively much smaller area, while Africa and the Middle East have average LG areas up to 14 times larger. LG in higher income countries are generally smaller in average area than those in lower income countries (see Table 1 and Figure 3). The overall pattern observed here is that higher income countries on average tend to have smaller local governments (both in terms of population and area) with fewer tiers than lower income countries. Figure 3: Area of Local Government -World Map Source: Authors' calculations based upon data sources reported in Annex Table A1.
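As a quick recap of the tiers adjustment above, here is a minimal sketch in Python; averaging the per-tier mean unit populations is the reading implied by the stated variables, and the example figures are invented for illustration:

```python
def lg_pop(population: float, units_per_tier: list) -> float:
    """Tiers-adjusted average population of a local government unit:
    the per-tier mean unit population P / X_i, averaged over the T tiers."""
    T = len(units_per_tier)
    return sum(population / x for x in units_per_tier) / T

# Hypothetical country: 10 million people, two LG tiers with 100 and
# 2,000 units respectively.
print(lg_pop(10_000_000, [100, 2_000]))  # 52500.0
```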
The Significance of Local Government: Relative Importance and Security of Their Existence Measurement of the relative importance of local government and of constitutional safeguards against arbitrary disbandment is critical to reaching a judgment about the closeness of the government to its people. The following paragraphs highlight the variables used in this measurement. (a) Relative Importance of Local Governments The relative importance of local governments is measured by the share of LG expenditures (lgexpdec) in consolidated general government expenditures for all orders of government (GG). This is obviously an imperfect measure of the relative importance of local governments, as a significant part of local government expenditures may simply be in response to higher level government mandates with little local discretion. However, data on autonomous local government expenditures are simply not available. Figure 4: Relative Importance of Local Governments and Their Independence -World Maps Source: Authors' calculations based upon data sources reported in Annex Table A1. The LG share of GG expenditures varies greatly over our sample -from virtually zero percent in a number of countries (Guyana, Mozambique, Haiti, etc.) to 59 percent in Denmark -and follows an approximately chi-square distribution with one degree of freedom. A large majority of countries (63 percent) have local government expenditure shares below the sample average of 13 percent, and only 11 percent of the countries have LG expenditure shares higher than 30 percent. Only in Europe, East Asia and North America are local governments important players in the public sector. An alternative variable that could serve as a proxy for the relative importance of LG is LG employment (lgempl): the share of LG employment in GG employment. The available data on this variable are, however, much less reliable and show a great deal of year-to-year volatility for most developing nations. In view of this, we are left with no alternative but to use expenditure shares as the only variable measuring the relative importance of local governments. LG employment is used in the calculation of the administrative decentralization index. (b) Security of Existence of Local Governments Local government security of existence is measured by LG independence (lgindep). This measure attempts to capture the constitutional and legal restraints on arbitrary dismissal of local governments. Only in 6 out of 182 countries do local governments have significant safeguards against arbitrary dismissal. LG in 48 percent of the countries have limited independence, and for the remaining 49 percent of countries in our sample, local governments can be arbitrarily dismissed by higher order governments. Europe, North America and Brazil receive relatively higher scores on this indicator, whereas local governments in Africa and the Middle East have almost no security of existence. Local Government Empowerment Local government empowerment is measured on fiscal, political, and administrative dimensions as discussed below. (a) Fiscal Decentralization The following variables are used to assess local government fiscal autonomy. • LG vertical fiscal gap (lgvergap). The vertical fiscal gap refers to the fiscal deficiency arising from differences in the expenditure needs and revenue means of local government. These deficiencies are partially or fully overcome by higher level financing. Therefore, the vertical fiscal gap is a measure of the fiscal dependence of local government on higher level financing.
The design and nature of higher level financing has implications for the fiscal autonomy of local governments. It must therefore be recognized that the vertical fiscal gap, while a useful concept, cannot be looked at in isolation from a number of related indicators when forming a judgment on local fiscal autonomy, as done here. The average vertical gap in the world is 52 percent. It is somewhat higher in African and Latin American countries. However, in all regions there are local governments with a high share of expenditures and high reliance on financing from above (e.g. Brazil), as well as almost non-existent LGs that rely solely on their own financing (Togo, Niger). • LG taxation autonomy (lgtaxaut). This measure reflects a local government's empowerment and access to tools to finance its own expenditures without recourse to higher level governments. It measures its ability to determine policy on local taxation (determining bases and setting rates) as well as autonomy in tax collection and administration. Only 16 percent of the countries in our sample grant significant taxation autonomy to their LGs, while the rest grant limited or no tax autonomy to their local governments. • LG unconditional transfers (lgtransf). Unconditional, formula-based grants preserve local autonomy. Such grants are now commonplace, yet conditional grants still dominate. The Europe and North America, Latin America and Southern Asia regions have a high percentage of countries with high scores on this indicator. • LG expenditure autonomy. Measured by the share of LG expenditures in total GG expenditures, this variable does not fully reflect the actual expenditure discretion that local governments have. First, LGs may be simple distributors of the funding transferred to them from an upper-tier government, and have little choice over how the money in their budget should be spent. If the LG vertical gap (the difference between LG expenditures and LG non-transfer revenues) is wide, and if the transfers from upper-tier governments are earmarked and discretionary, the actual spending power of LG may be much lower than indicated by lgexpdec. Second, even the own revenues of LG (tax revenues or borrowed funds) may strongly depend on CG policy. If LGs are not allowed to regulate taxes without CG interference (usually in such cases they receive a revenue share of a tax that is regulated by the CG), then they cannot fully rely on the revenues from these taxes, and their policy would still be partly dependent on the CG. We adjust for the first argument -that real LG expenditure autonomy depends on the vertical gap and the structure of intergovernmental grants -by defining the LG expenditure autonomy variable (lgexpdiscr) as a function of lgexpdec, the vertical gap and the share of unconditional formula-based transfers (one parameterization consistent with the properties below is sketched after the index definitions later in this section). Note that even if a country has the widest possible vertical gap (1) and the smallest possible share of unconditional formula-based transfers (0), it still keeps a 0.25 share of its original expenditure decentralization. This is to reflect the fact that a discretionary conditional grant from the CG still gives more autonomy to the LG than direct spending by the CG. At the same time, a country with a positive vertical gap and the best possible set of transfers gets less than its lgexpdec share. This is to reflect the fact that even the best set of transfers does not give LG as much fiscal independence as its own revenues. • LG borrowing freedom (lgborrow). Can LG borrow money to satisfy their capital finance needs? Can the borrowing be done without the consent or regulation of the CG?
89 of 160 countries in our sample forbid any kind of borrowing by LGs, while in only 22 countries are LGs allowed to borrow without any restrictions. Local borrowing rules are more accommodating in Europe and Latin America. The descriptions, definitions and sample distributions of the fiscal decentralization variables we use are reported in Tables 7 and 8, and Figure 6 displays the corresponding world maps. Figure 6: Fiscal Decentralization Variables -World Maps Source: Authors' calculations based upon data sources reported in Annex Table A1. Note: Color of a country corresponds to its percentile in the world's distribution: red, 0-25th; yellow, 25-50th; blue, 50-75th; green, 75-100th. (b) Political Decentralization Political decentralization refers to home rule for local self-governance. This is examined using the following criteria. • LG legislative election (lglegel). Are legislative bodies at the local level elected or appointed? Or is the truth somewhere in between? (For example, part of the council members are appointed and part are elected, or members of councils are elected from a list pre-approved by the CG.) Elected local councils are now commonplace around the world, with only 34 percent of the countries in the sample having any restraints on popular elections of legislative councils at the local level, and only 14 countries having appointed local councils. The Middle East and Sub-Saharan Africa lag behind the rest of the world in permitting directly elected local councils. • LG executive election (lgexel). Are executive heads (mayors) at the local level elected -directly or indirectly -or appointed? Direct elections of mayors are not yet commonplace, with some restrictions on direct elections in 79 percent of the countries. Thirty-six countries have no restrictions, while in 36 countries mayors are appointed at all LG tiers. While Africa and the Middle East traditionally lag behind, European countries also receive relatively low scores on this indicator, as most of these countries have some tiers of local government with appointed or indirectly elected mayors. • Direct democracy provisions (lg_dirdem). Are there legislative provisions for obligatory local referenda on major spending, taxing and regulatory decisions, recall of public officials, and requirements for direct citizen participation in local decision making processes? Only three countries in our sample (Switzerland, Japan and USA) have direct democracy provisions (as defined in Table 5). Figure: Political Decentralization Variables -World Maps Source: Authors' calculations based upon data sources reported in Annex Table A1. Note: Color of a country corresponds to its percentile in the world's distribution: red, 0-25th; yellow, 25-50th; blue, 50-75th; green, 75-100th. (c) Administrative Decentralization Our concern here is to measure the ability of local governments to hire and fire and set the terms of employment of local employees, as well as regulatory control over their own functions. As the latter data are not available, we are constrained to measure administrative decentralization simply by the first set of variables, as follows. • LG HR policies (lghrpol). Are LG able to conduct their own policies regarding hiring, firing and setting the terms of local employment? Only 43 of 158 countries allow their LGs full discretion regarding whom, and on what terms, to hire or fire. Europe, North America, Australia, and Latin America are the leaders on this indicator. Many more countries (77) make these kinds of decisions only at the central level, even for local employees.
LG employment (lgempl): the share of LG employment in GG employment. The country average for LG employment is estimated to be 26 percent. However, about 34 percent of the countries in our sample report more than 30 percent of the public workforce to be employed at the local level. The descriptions, definitions and sample distributions of the administrative decentralization variables are reported in Tables 9 and 10, and Figure 7 displays the corresponding world maps. Figure 7: Administrative Decentralization Variables -World Maps Source: Authors' calculations based upon data sources reported in Annex Table A1. Note: Color of a country corresponds to its percentile in the world's distribution: red, 0-25th; yellow, 25-50th; blue, 50-75th; green, 75-100th. Worldwide Ranking of Countries on Various Dimensions of Closeness of Their Governments to the People Our main assumption is that decentralization to local governments matters only when local governments are important players in the public sector, as measured by their share of general government expenditures, and have security of existence. Indeed, local governments -however politically or administratively independent of the center they may be -have little ability to serve their residents if they do not command significant budgetary resources and if they can be dissolved at will by a higher order government. These two variables, adjusted by the degree of political, fiscal and administrative decentralization, form the basis of our aggregate country rankings on the "closeness" or "decentralization" nexus. In the following, political, fiscal and administrative decentralization sub-indexes are first constructed for the sample countries. These indexes are then aggregated to develop a composite index of a government's closeness to its people -the so-called "decentralization index". Finally, this index is adjusted for heterogeneity and the size of LGs. Fiscal Decentralization Index Our fiscal decentralization index (fdi) combines lg_expaut (local expenditure autonomy), lg_taxaut (tax autonomy) and lg_borrow (legal empowerment for local borrowing); one parameterization consistent with the properties below is sketched at the end of this subsection. The index penalizes those countries where LG have neither taxation autonomy nor borrowing freedom; however, it may still be positive for these countries (equal to a 0.25 share of lg_expaut), reflecting the fact that own revenues do grant some degree of discretion to LG. At the same time, countries with full taxation autonomy and borrowing freedom get an index equal to lg_expaut. If there are no data on lg_taxaut or lg_borrow, then the worst possible values are assumed: lg_taxaut = lg_borrow = 0. Figure 8: Fiscal, Political, Administrative Decentralization Indexes -World Maps Source: Authors' calculations based upon data sources reported in Annex Table A1. Political Decentralization Index This index (pdi) is constructed by simply taking the average of the political variables described in the earlier section. Every variable discussed above is an essential and independent part of political decentralization; therefore, taking the average of all variables seems a reasonable measure. The index is calculated for 182 countries. Administrative Decentralization Index The administrative decentralization index (adi) is constructed from the administrative variables described earlier; the sketch below gives one reading. The index is built for 182 countries.
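The defining equations for lgexpdiscr, fdi and adi are not reproduced in this extract; only their boundary behaviour is stated. The sketch below is therefore one set of parameterizations consistent with those stated properties, not the paper's exact formulas (pdi, by contrast, is the plain average the text describes); all inputs are assumed to be scaled to [0, 1]:

```python
def lg_expdiscr(lgexpdec, lgvergap, lgtransf):
    """LG expenditure autonomy. Assumed linear form chosen to satisfy the
    stated boundary conditions: with the widest gap (1) and worst transfers
    (0) it keeps a 0.25 share of lgexpdec; with a zero gap it equals
    lgexpdec; with a positive gap it stays below lgexpdec even at the best
    transfers (1)."""
    return lgexpdec * (1.0 - lgvergap * (0.75 - 0.5 * lgtransf))

def fdi(lg_expaut, lg_taxaut=0.0, lg_borrow=0.0):
    """Fiscal decentralization index. Floors at 0.25 * lg_expaut with no tax
    or borrowing autonomy and reaches lg_expaut with full autonomy on both,
    as stated; the linear interior and equal weights are assumptions.
    Missing lg_taxaut / lg_borrow data default to the stated worst case, 0."""
    return lg_expaut * (0.25 + 0.75 * (lg_taxaut + lg_borrow) / 2.0)

def pdi(lg_legel, lg_exel, lg_dirdem):
    """Political decentralization index: the simple average of the three
    political variables, as the text states."""
    return (lg_legel + lg_exel + lg_dirdem) / 3.0

def adi(lg_hrpol, lg_empl):
    """Administrative decentralization index. The formula is not shown in
    this extract; averaging the two administrative variables is an
    assumption."""
    return (lg_hrpol + lg_empl) / 2.0

# Boundary checks against the properties stated in the text.
assert abs(lg_expdiscr(0.2, 1.0, 0.0) - 0.05) < 1e-12  # keeps the 0.25 share
assert lg_expdiscr(0.2, 0.5, 1.0) < 0.2                 # stays below lgexpdec
assert abs(fdi(0.4) - 0.1) < 1e-12                      # 0.25 * lg_expaut floor
assert abs(fdi(0.4, 1.0, 1.0) - 0.4) < 1e-12            # equals lg_expaut
```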
The Aggregate Decentralization Indexes The aggregate index (di) incorporates the relative importance of LG (measured by lg_expdec), the security of existence of LG (measured by lg_indep), and the fiscal, political and administrative indexes. It is constructed so that it penalizes countries with low political and administrative decentralization, but even if pdi = adi = 0 the index is still positive if LG have some fiscal autonomy and security of existence (one aggregation consistent with these properties is sketched after the adjustment discussion below). This reflects the fact that even fully subordinated LG without any considerable administrative responsibilities still make fiscal decisions in a more decentralized way than the CG. It also smooths measurement errors that may be contained in our measures of political and administrative decentralization and security of existence. This index is constructed for 158 countries worldwide. Together, these countries account for 98% of the world's GDP and 99% of the world's population. Figure 9 depicts the distribution of the decentralization index on the world map. The darker the color of a country, the more decentralized it is. European countries, North America, Brazil, and China receive high scores on this index. Countries from Latin America, the former Soviet Union, and East Asia receive an average decentralization index, while Middle Eastern and African countries are the least decentralized. Figure 9: Decentralization Index -World Map Source: Authors' calculations based upon data sources reported in Annex Table A1. Developing the Government "Closeness" Index by Adjusting the Decentralization Index for Heterogeneity of Size and Preferences Our main premise is that decentralization brings government decision making closer to the people. The decentralization indexes reported earlier indicate how significant local governments are in policymaking and public service delivery responsibilities in any country. These indexes do not fully capture the actual closeness of local governments to people. This is because local governments vary widely in population, area and the diversity of preferences of residents. For example, Indonesia has an average LG unit population size of 0.5 mln people, while in Switzerland, for instance, the average local government population size is only 3 thousand. The population of countries such as Malta, Iceland, Belize, the Maldives, etc. is lower than 0.5 mln people. It is obvious that in most respects, e.g. accounting for heterogeneous preferences, being accountable and known to people, etc., even the central governments in these countries are closer to people than the LG in Indonesia. Therefore, the decentralization indexes need to be adjusted for LG population and area and other measures of a country's heterogeneity. Our adjustment procedure is the following. Suppose we have a country with decentralization index β, average population of an LG unit N, and heterogeneity index α. The heterogeneity index is based on the average area of an LG unit; the ethno-linguistic, age, income and urbanization composition of the country's population; and its geographical features (relief, variety of climatic zones, etc.). Each resident of the country has different preferences regarding the level of governmental services provided. If an average LG provides x units of the service, then the disutility of a resident i is f(|i − x|, α), where f is some function of two arguments. Disutility increases with the distance between the decision of the government and the preference of the resident, and, all things equal, disutility increases with the heterogeneity of the country, i.e. residents are more distant in their preferences in more heterogeneous countries.
Governments are assumed to be benevolent, and to minimize the aggregate disutility of all residents in the region they are in charge of. Since we assume a symmetric distribution of preferences in the region, a benevolent government would provide N/2 units of the service -the level preferred by the median resident. Given the assumptions above, the question we ask is: what decentralization index should a (β, N, α)-country have in order to produce a disutility of an average resident equal to the one in a (β, N₀, α₀)-country, a country with the same decentralization index β but benchmark levels N₀ and α₀ of average LG unit population and the heterogeneity index? The answer follows from an identity equating the average disutilities in the two countries, where AD(N, α) is the disutility of an average resident in an LG with population N and heterogeneity index α, given that the government sets its service to satisfy the median resident. AD is found by averaging f over residents' preferences, where we approximate the sum with an integral (to simplify calculations) and use our assumption of preferences symmetric around the median. For our calculation of the decentralization index adjustment we take an f with a parameter A that allows us to control the sensitivity of our results to large differences in average LG unit population; given this f, AD takes a closed form (an illustrative implementation is sketched at the end of this subsection). First, we assume there is no heterogeneity, i.e. α = 0. By choosing different A's we consider three scenarios: sensitive (A = 0.01), moderate (A = 0.1), and conservative (A = 1). Then we introduce heterogeneity in the moderate scenario. Initially, our α is based only on the average LG unit area. Figure 10: Government Closeness Index -World Map Source: Authors' calculations based upon data sources reported in Annex Table A1. Note: Color of a country corresponds to its percentile in the world's distribution: red, 0-25th; yellow, 25-50th; blue, 50-75th; green, 75-100th. The heterogeneity index is then extended to account for additional variables. These are the age, residency, income, ethnic, religious and linguistic structure of the population, the country's area, relief heterogeneity (difference between the highest and lowest points), and climate heterogeneity (difference between the highest and lowest latitude). Table 13 presents the top ten leaders in each of the five new indexes (columns 2-6), each corresponding to the adjustments presented above. The decentralization index without adjustments is presented in column 1. As suggested by the name, the conservative scenario adjustment (A = 1) results in the smallest changes. Yet Finland, Switzerland, the USA and Iceland move up the ladder as countries with traditionally small local government units. On the other hand, countries with large average LG populations, e.g. China, Japan, and the Republic of Korea, have their rankings lowered. Moving from the conservative to the sensitive scenario, countries with small LG units continue to get relatively higher indexes. Switzerland is the most decentralized country with this kind of adjustment, and Iceland the second. More European countries (Hungary, Georgia, Czech Republic) enter the list of leaders in place of Asian countries. Adjustment for area and heterogeneity does not change the ranking much, which may suggest that the adjustment procedure is too conservative. The only notable difference is that Switzerland gets a lower index (moving down from 1st to 2nd place) because of its linguistic and ethnic heterogeneity. Figure 10 shows the distribution of our final Government Closeness Index in the world.
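The identity, the expression for f and the aggregation formula for di are all missing from this extract, so the sketch below is illustrative only. It assumes f(d, α) = ln(1 + A·d)·(1 + α), chosen because a small A then makes the adjustment highly sensitive to LG population differences while A = 1 is mild, matching the scenario labels in the text, and it operationalizes the disutility-matching identity as a rescaling by the ratio of benchmark to actual average disutility. The aggregation weights in di and the benchmark values N₀, α₀ are likewise assumptions:

```python
import math

def di(fdi, pdi, adi, lg_indep, floor=0.25):
    """Aggregate decentralization index: an illustrative aggregation that
    stays positive when pdi = adi = 0 as long as there is some fiscal
    autonomy (fdi > 0) and security of existence (lg_indep > 0), as the
    text requires. The multiplicative form and 0.25 floor are assumed."""
    return fdi * lg_indep * (floor + (1.0 - floor) * (pdi + adi) / 2.0)

def AD(N, alpha, A=0.1):
    """Average disutility of a resident in an LG unit of population N and
    heterogeneity alpha when the government serves the median resident.
    Assumed f(d, alpha) = ln(1 + A*d) * (1 + alpha); by symmetry, the mean
    over preferences i in [0, N] reduces to (2/N) * integral_0^{N/2} f(t),
    which has the closed form used below."""
    half = N / 2.0
    integral = ((1.0 + A * half) * math.log(1.0 + A * half) - A * half) / A
    return (1.0 + alpha) * 2.0 * integral / N

def closeness(beta, N, alpha, N0=3_000.0, alpha0=0.0, A=0.1):
    """Adjusted ("closeness") index: rescale a (beta, N, alpha)-country to
    the benchmark (N0, alpha0) by the ratio of average disutilities. The
    benchmark of 3,000 people (roughly the Swiss average cited in the text)
    is illustrative."""
    return beta * AD(N0, alpha0, A) / AD(N, alpha, A)

# A country with Indonesia-like LG units (0.5 mln people) sees its index
# shrink far more than one with Swiss-size units.
print(closeness(0.5, 500_000, 0.0))  # well below 0.5
print(closeness(0.5, 3_000, 0.0))    # 0.5 (it sits at the benchmark)
```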
Relationship of the Decentralization Indexes with Government Size, Incidence of Corruption, Ease of Doing Business, Incomes and Good Governance In Table 14 we present simple OLS regressions relating our decentralization indexes (and lg_expdec, a standard measure of decentralization in the literature) to disaggregate decentralization indicators, corruption measures (the TI corruption perception index), development measures (GDP per capita), the size of the government (GG consumption, % of GDP), the number of procedures needed to start a new business (Start of business, # proc.), the number of civil conflicts in a country (# civil conflicts), the strength of a country's democratic institutions (Democracy score), the durability of the political regime (Durability of regime), and citizen-centric governance indicators (CGI). We report both regressions with no controls apart from the corresponding economic indicator and regressions where we also control for the country's level of development (measured by GDP per capita). These regressions indicate that decentralized governance is associated with higher per capita GDP, a lower incidence of corruption (a higher corruption perception index), a better environment for doing business, and higher durability of the political regime -even controlling for the level of development. We also find that decentralization is associated with lower government consumption, higher citizen-centric governance performance, and stronger democratic institutions, although the relationship with these variables loses significance (but keeps its sign) when controlling for the level of development. When decentralization is measured only by lg_expdec, the statistical associations between decentralization and our selected economic indicators generally have lower significance (i.e. lower t-statistics). At the same time, the decentralization index adjusted for heterogeneity and LG population generally produces higher regression coefficients than the unadjusted decentralization index. Concluding Remarks The silent revolution of the past three decades has attracted strong policy and research attention worldwide. The assessment of the impact of this revolution in moving decision making closer to the people, however, remains an unanswered question. This paper takes an important first step in this direction by providing a framework of comparative measurement and developing a worldwide ranking of countries on people's empowerment in various aspects of government decision making. While there is a crying need for systematic collection of the quality data required to apply the comparative framework presented here, the integration of diverse available datasets, as done here, has yielded promising results. For example, the closeness indexes presented here show that one could have predicted, well in advance and with a fair degree of accuracy, the countries that were ripe for popular revolt such as that experienced through the Arab Spring or similar movements across the globe. The indexes also provide useful barometers of the enabling environment for doing business and for promoting growth, economic development and good governance. Overall they provide useful aggregate measures of governments' closeness to their people. We hope this paper will stimulate further research to improve upon the data and the methodology presented here, as well as help build consensus for fundamental governance reforms in the countries ranked poorly here.
9,465.4
2012-07-01T00:00:00.000
[ "Economics", "Sociology" ]
covR Mediated Antibiofilm Activity of 3-Furancarboxaldehyde Increases the Virulence of Group A Streptococcus Background Group A streptococcus (GAS, Streptococcus pyogenes), a multi-virulent, exclusively human pathogen responsible for various invasive and non-invasive diseases, possesses biofilm formation as one of its pathogenic armaments. Recently, antibiofilm agents have gained prime importance, since inhibiting biofilm formation is expected to reduce the development of antibiotic resistance and increase the pathogens' susceptibility to host immune cells. Principal Findings The current study demonstrates the antibiofilm activity of 3-Furancarboxaldehyde (3FCA), a floral honey derived compound, against GAS biofilm, which was divulged using crystal violet assay, light microscopy, and confocal laser scanning microscopy. The report is extended to study its effect on various aspects of GAS (morphology, virulence, aggregation) at its minimal biofilm inhibitory concentration (132 μg/ml). 3FCA was found to alter the growth pattern of GAS in solid and liquid medium and to increase the rate of auto-aggregation. Electron microscopy unveiled an increase in extracellular polymeric substances around the cell. Gene expression studies showed down-regulation of the covR gene, which is speculated to be the prime target of the antibiofilm activity. Increased hyaluronic acid production and down-regulation of the srtB gene are attributed to the enhanced rate of auto-aggregation. The virulence genes (srv, mga, luxS and hasA) were also found to be over-expressed, which was manifested in the increased susceptibility of the model organism Caenorhabditis elegans to 3FCA-treated GAS. The toxicity of 3FCA was ruled out, with no adverse effect on C. elegans. Significance Though 3FCA possesses antibiofilm activity against GAS, it was also found to increase the virulence of GAS. This study demonstrates that covR mediated antibiofilm activity may increase the virulence of GAS. This also emphasizes the importance of analysing the acclimatization response and virulence of the pathogen in the presence of antibiofilm compounds prior to their clinical trials.
Introduction Streptococcus pyogenes, also called group A streptococcus (GAS), is a β-haemolytic pathogen which exclusively affects humans and naturally inhabits human skin and throat [1,2]. It ranks among the top ten infectious pathogens, affecting 700 million individuals and causing over 500,000 deaths per year globally [3]. It is the originator of a wide number of invasive and non-invasive diseases such as pharyngitis (strep throat), necrotizing fasciitis and streptococcal toxic shock syndrome [4]. Early treatment of streptococcal infection is imperative, since untreated mild infections can lead to complex diseases like rheumatic heart disease and glomerulonephritis [5]. Virulence factors of GAS include the hyaluronic acid capsule, pyrogenic exotoxins (A, B and C), surface associated M protein, streptokinase, streptodornase, streptolysin S, streptolysin O, and biofilm formation [6]. Biofilm formation is a protective strategy employed by microbes to escape antibiotics. In some cases, the biofilm-associated resistance to antibiotics is due to the inability of the drug to penetrate the biofilm [7], and in other cases antibiotics penetrate biofilms but are less effective than against planktonic cells due to metabolic/physiological differences [8]. The extracellular matrix, the three-dimensional structure and differences in gene expression are the three barriers that prevent drug penetration and increase antibiotic resistance in microbes during the biofilm mode of growth [9]. The prevalence of erythromycin- and clindamycin-resistant GAS among healthy school children in Korea was found to increase, reaching 21.6% and 23.6% respectively, between 1995 and 2002 [2]. Erythromycin resistance rates of 44% in Finland during 1992 [10], 32% in the USA during 1994-95 [11] and 17.1% in Spain during 1996 [12] have also been observed. Tetracycline-resistant strains (34%) were also observed among patients with invasive diseases in the USA. A number of studies have reported the biofilm formation of Group A Streptococcus both in vitro and in vivo [13,14,9]. Biofilm formation of GAS has been linked to therapeutic treatment failure. For instance, all of the 99 GAS isolates obtained from children between 2-18 years of age in Calgary, Canada, were found to possess biofilm forming ability, and 32 of them did not respond to penicillin treatment [9]. About 90% of the 289 clinical isolates of GAS causing both invasive and non-invasive infections were found to possess biofilm forming ability [15]. Among 60 children who were about to undergo tonsillectomy, 21 individuals (37%) were screened positive for GAS, and SEM analysis of their tonsils revealed the GAS to be present as biofilm [16]. Thirty-seven percent of non-severe recurrent acute otitis media cases in children were identified as caused by nasopharyngeal biofilm-producing GAS [17]. The emergence of the antibiotic resistance phenomenon among GAS and the clinical importance of streptococcal biofilm are an alarming threat to mankind, which makes it mandatory to find novel antagonistic agents. Unlike antibiotics, which build selective pressure on microbes and induce antibiotic resistance, these antagonistic agents inhibit the pathogenicity of the organism rather than killing it. Antibiofilm compounds are one such class of compounds, which inhibit microbial biofilm formation, thereby aiding antibiotic penetration.
Two component systems (TCS) play a key role in the adaptation of microbes to varying environmental conditions [18]. GAS possesses about 13 such two-component systems, among which the covRS system is well studied and found to control about 15% of the organism's total gene expression [19,20]. GAS also possesses various stand-alone transcription factors which coordinate virulence factor expression in the organism. mga and srv are two such important stand-alone factors in GAS, which play key roles in its virulence factor expression [21]. Hence studying the influence of antibiofilm agents on these TCS and stand-alone regulators is expected to provide some insight into the targets of the antibiofilm agents. 3-Furancarboxaldehyde (3FCA) is a volatile compound present in floral honey [22][23][24], fruits [25,26] and the oils of certain plant roots [27]. 3FCA has also been found to possess antioxidant properties [28]. The current study aims to unveil the antibiofilm activity of 3-furancarboxaldehyde (3FCA) against GAS and its influence on virulence factor production, and to investigate the mechanism governing its antibiofilm activity by studying the differential gene expression between control and 3FCA treated GAS. The in vitro findings were further validated in Caenorhabditis elegans, a simple and widely used model organism for host-pathogen interaction and toxicological studies. Ethical statement In the current study, healthy human blood was used in the blood survival assay and sheep blood was used in the haemolysin quantification assay. Blood samples were used only for research purposes. The blood sample from a healthy human (one of the authors of the manuscript) was drawn by technically skilled persons and written consent was obtained. The experimental methodology and use of healthy human blood were assessed and approved by the Institutional Ethical Committee (Human Research), Alagappa University, Karaikudi under No. IEC/AU/2014/3. Fresh sheep blood was collected from the Karaikudi municipality modern slaughterhouse, Karaikudi. Since the blood is normally discarded at the slaughterhouse, no specific permission from the ethical board was required. Bacterial strain and culture condition S. pyogenes SF370 was cultured on tryptose agar (Hi-media, Mumbai, India) and in Todd Hewitt broth (Hi-media, Mumbai, India) supplemented with 0.5% yeast extract and 0.5% glucose (THYG), and incubated at 37°C. An overnight culture of GAS with an OD of 0.4 at 600 nm was considered the standard cell suspension. Effect of 3FCA on GAS biofilm formation 3FCA was added at increasing concentrations individually to wells containing one ml of THYG in a sterile 24 well microtitre plate. One per cent inoculum from the standard cell suspension of SF370 was added to each well and incubated at 37°C for 24 h. After incubation, the optical density (OD) of each well was measured at 600 nm to check the effect of the compound on the growth of GAS SF370. Planktonic cells were then discarded and the wells were washed with sterile distilled water and allowed to dry. The dried wells were stained with 1 ml of 0.4% crystal violet (w/v) for 10 minutes, washed twice with distilled water and allowed to dry. Biofilm-bound crystal violet was then extracted using 20% glacial acetic acid for 10 min and the contents of the wells were read at 570 nm. The amount of dye present after washing directly reflects the amount of cells in the biofilm.
Inhibition percentage was calculated as: Inhibition (%) = [(OD570 of control − OD570 of treated) / OD570 of control] × 100. Growth curve and viability of GAS In order to assess the influence of 3FCA on the growth of GAS, the OD600 of GAS cultures was observed over a period of 24 h at regular intervals of 1 h. The viability of GAS was quantified using XTT-menadione. The XTT-menadione solution was prepared freshly prior to each experiment by mixing 0.2 mg/ml XTT (Sigma Aldrich, USA) and 0.172 mg/ml menadione (Hi-media, Mumbai, India) at a 12.5:1 ratio. Briefly, equal numbers of cells were inoculated into 1 ml of medium in a 24 well microtiter plate with and without 3FCA and incubated at 37°C for 24 h. After incubation, the planktonic cells were aspirated, harvested, washed with sterile PBS and resuspended in 200 μl of PBS. In order to collect the cells involved in biofilm, each well was filled with 200 μl of sterile PBS and scraped thoroughly. To these planktonic and biofilm cell suspensions, 25 μl of XTT-menadione suspension was added, and the suspensions were incubated in the dark at 37°C for 4 h. Cell viability correlates with the reduction of XTT into orange colored formazan, which was quantified spectrophotometrically at 490 nm. A relative plot of planktonic and biofilm cell viability was drawn. Effect of 3FCA on GAS growth on solid and liquid media The standard cell suspension of GAS was streaked over tryptose agar containing acetone (vehicle control) or 3FCA at the MBIC and incubated for 48 h at 37°C, in order to study its nature of growth on solid media. To investigate its growth in liquid media, 1% standard cell suspension was added to 1 ml of THYG with and without 3FCA and incubated for 18 h at 37°C. Microscopic techniques In order to study the architecture of the biofilm upon treatment, biofilms were developed on 1 × 1 cm glass slides, which were placed in a 24 well plate containing 1 ml of medium, 1% inoculum and the compound at increasing concentrations. The plate was incubated for 24 h at 37°C. The slides were stained and fixed accordingly. For light microscopic analysis, glass slides were washed with sterile PBS and stained with 0.4% crystal violet, which was then washed off, and the slides air dried. The air dried slides were viewed under a microscope (Nikon Eclipse 80i, USA) at 400X magnification and imaged. For confocal laser scanning microscopy, the slides were washed with sterile PBS, stained with 0.1% acridine orange for 5 min in the dark, and de-stained with sterile distilled water. The acridine orange stained cells were observed and imaged under CLSM (LSM 710, Carl Zeiss, Germany). For scanning electron microscopy, the slides were washed with sterile PBS, fixed with 2% glutaraldehyde for 8 h at 4°C and washed again with sterile PBS. The slides were then dehydrated with ethanol at increasing concentrations (20, 40, 60, 80 and 100%) and gold coated prior to observation under SEM (VEGA 3 TESCAN, Czech Republic). For transmission electron microscopy, 24 h liquid cultures of GAS grown in the presence and absence of 3FCA were centrifuged, and the cells were washed with sterile PBS and fixed with 2% glutaraldehyde for 6 h at 4°C. The glutaraldehyde-fixed cells were washed thrice with distilled water, and a drop of the sample was placed, on a piece of parafilm, onto a carbon coated copper grid of 3 mm diameter and allowed to dry. The grid was washed with distilled water and the excess water was removed. The dried grid was stained with 2% uranyl acetate, air dried and observed under TEM (Hitachi, H-7500, Japan).
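Returning to the crystal violet readout described above, here is a small worked computation of the inhibition percentage; the triplicate OD570 values are invented for illustration:

```python
from statistics import mean

def inhibition_pct(control_od570, treated_od570):
    """Percent biofilm inhibition from crystal violet absorbance:
    [(control - treated) / control] * 100, using mean OD570 readings."""
    c, t = mean(control_od570), mean(treated_od570)
    return (c - t) / c * 100.0

# Hypothetical triplicates for untreated wells and wells treated at the MBIC.
print(round(inhibition_pct([0.41, 0.39, 0.40], [0.04, 0.05, 0.03]), 1))  # 90.0
```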
Auto-aggregation assay Bacterial aggregation was assessed by visually observing the time the cells take to settle at the bottom of the tube. Before the assay, GAS cultures were grown to an OD of 0.4 at 600 nm in the presence and absence of 3FCA. The cells were then harvested and resuspended in sterile PBS. The tubes were kept undisturbed and were visually inspected every 5 minutes to check the rate at which the cells settled: the faster the cells settle, the higher the aggregation rate. Protease quantification An azocasein assay was performed with control and compound treated GAS supernatants in order to quantify total cysteine protease production. GAS was grown in the presence and absence of 3FCA for 24 h at 37°C. The culture was centrifuged at 12000 rpm and the supernatants were filter sterilised through a 0.2 micron nylon membrane filter. An equal volume of activation buffer (1 mM EDTA, 20 mM DTT in 0.1 M sodium acetate buffer, pH 5.0) was added to the cell free culture supernatant and the mixture was kept at 40°C for 30 min. To the mixture, an equal volume of 1% (w/v) azocasein was added and incubated for a further 1 h at 40°C. Trichloroacetic acid (10%) was then added and mixed thoroughly in order to precipitate the protein and stop the reaction. The mixture was then centrifuged at 12000 rpm for 5 min and the supernatant was read at 366 nm. Haemolysin quantification Freshly collected sheep blood was washed twice with sterile PBS and resuspended to a final concentration of 2% (v/v) in PBS. Equal volumes of bacterial cell free culture supernatant and 2% blood were mixed and incubated at 37°C for 1 h, followed by incubation at 4°C for 1 h. The tubes were then centrifuged at 3000 rpm for 5 min and the supernatant was read at 405 nm in order to quantify the haemolysis that had occurred. Hyaluronic acid quantification In order to isolate cell associated hyaluronic acid, 1% of the standard cell suspension was inoculated into 1 ml of THYG containing acetone or 3FCA and incubated at 37°C for 24 h. The cells were then harvested, washed with sterile PBS and resuspended in 1 ml of PBS. To extract hyaluronic acid from the cells, 1 ml of the suspension was mixed with 1 ml of chloroform, vortexed thoroughly and kept undisturbed at room temperature for 1 h. The suspension was then centrifuged and the supernatant was quantified for hyaluronic acid. Cell associated hyaluronic acid was quantified using Stains-All reagent (Sigma Aldrich, USA), which was prepared as described earlier [29]. Briefly, 20 mg of Stains-All reagent was dissolved in a solution containing 50 ml formamide, 50 ml distilled water and 16 ml of acetic acid. To 100 μl of supernatant, 1 ml of freshly prepared Stains-All reagent was added and vortexed, and the absorbance was read at 640 nm. Blood survival assay An equal volume of overnight culture of GAS grown in the presence and absence of 3FCA was added to healthy human blood at a 1:4 ratio, mixed gently by inverting the tube, and incubated at 37°C for 3 h, after which the total number of viable GAS was quantified by the spread plate method. Total RNA isolation and cDNA synthesis Total RNA of GAS was isolated from the mixture of planktonic and biofilm cells collected at mid log phase in the presence and absence of the compound at the MBIC, using Trizol reagent. First-strand cDNA was synthesized using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, USA). Real Time PCR (qPCR) analysis Reverse transcription was followed by real-time PCR (7500 Sequence Detection System, Applied Biosystems Inc.
Foster, CA, USA) in a single-well format, in which S. pyogenes gene-specific primers for the covS, covR, srv, mga, speB, luxS, hasA, ciaH, sclB, srtB, spy_125 and gyrA genes were combined separately with the PCR mix (SYBR Green kit, Applied Biosystems, USA) at a predefined ratio. The housekeeping gene gyrase (gyrA) was taken as the internal control. The roles of these genes and their primer sequences are given in S1 Table. The PCR cycle number was titrated according to the manufacturer's protocol to ensure that the reaction was well within the linear range of amplification. The steady-state levels of the candidate genes were assessed from their Ct (cycle threshold) values relative to the Ct values of gyrA (internal control). C. elegans survival assay The C. elegans survival assay was done to unveil any toxicity of the compound, as well as to analyse the impact of 3FCA on the virulence of GAS against C. elegans. Countable numbers (~10) of hermaphrodites at the L4 stage were taken for the study. In order to assess the toxicity of the compound, C. elegans were grown in 1 ml of liquid medium (M9 buffer) containing E. coli OP50 (~1000 CFU/ml) (laboratory food source) in the presence and absence of 3FCA at the MBIC. C. elegans grown with E. coli OP50 along with acetone (vehicle) served as the control. In order to assess the influence of 3FCA on the virulence of GAS, the survival rate of C. elegans grown in the presence of GAS + 3FCA was compared with that grown in the presence of GAS + acetone (vehicle control). The inoculum load of GAS was ~1000 CFU/ml and 3FCA was used at the MBIC (132 μg/ml). C. elegans were checked every 4 h to assess their viability. The animals were considered dead when they showed no response or movement to external stimuli such as a gentle tap or touch with a platinum loop. CFU assay In order to assess the internalization of GAS in C. elegans, a total of 10 worms at the L4 stage were grown for 24 h in tubes containing 1 ml of M9 buffer and 132 μg of 3FCA, inoculated with 1% standard cell suspension of GAS. Worms grown in the presence of acetone (vehicle) and GAS were considered the control. The exposed C. elegans were washed first with M9 buffer containing 0.01% sodium azide. This was followed by a wash with tetracycline (0.5 μg/ml) in order to remove skin-adhered GAS. The washed live worms were crushed with silicon carbide (220 mesh) for the extraction of internalized GAS. The extracted cells were serially diluted and spread plated on tryptose agar plates. The plates were incubated overnight at 37°C and the total CFU were counted. Statistics All the experiments were performed at least twice as independent experiments in triplicate. All data are expressed as arithmetic mean ± standard deviation. An unpaired Student's t-test was used to compare the groups. Statistical significance was set at P < 0.01. Results Antibiofilm and non-fatal effect of 3FCA on GAS The antibiofilm activity of 3FCA was measured in a microtiter plate using crystal violet. A concentration dependent increase in antibiofilm activity was observed. 3FCA at 132 μg/ml showed 90% activity. Since no significant increase in activity was observed at higher concentrations, 132 μg/ml was considered the Minimum Biofilm Inhibitory Concentration (MBIC) (Fig 1).
The growth curve of GAS in the presence and absence of 3FCA showed a mild increase in the OD600 of 3FCA-treated samples compared to the control; the doubling time of GAS in both the control and 3FCA-treated cultures was found to be around 2 h, and the cells reached stationary phase by the eleventh hour. The absence of antibacterial activity of 3FCA was also assessed by quantifying and comparing the total number of viable cells in the control and treated wells using XTT. The results showed no significant difference in total viable cells between the control and treated samples, confirming that 3FCA has no antibacterial activity against GAS (Fig 2). The total numbers of viable cells in biofilm and planktonic modes of growth were quantified individually. The total number of cells involved in biofilm formation was reduced in 3FCA-treated samples, where the absorbance was found to be 0.12 against 0.4 in the control. Complementing this, the viable planktonic cells increased from an OD of 0.7 in the control to 1.25 upon treatment. On comparing the absorbance at 490 nm of biofilm cells in the control and treated samples, it was found that only 26% of the cells formed biofilm in treated samples.

Microscopic techniques
The light microscopic as well as CLSM images clearly showed a reduction in the biofilm-covered surface area on treatment with 3FCA. The biofilms were also studied under SEM, which showed fewer chains and less biofilm on treatment. The morphology of the organism was also observed to be modulated: the treated cells appeared coated with increased extracellular matrix compared to the control, and the framework of the cell wall of control and treated cells was found to be dissimilar. TEM analysis was performed to avoid dehydration of the cells and to visualize them in their native form. The images clearly displayed increased hyaluronic acid secretion around 3FCA-treated cells (Fig 3).

Effect of 3FCA on morphology of GAS
Colonies of 3FCA-treated GAS on agar plates were found to be more mucoid and larger in size than their controls, and those grown in liquid media were seen in unusual clumps, non-adherent to the surface, in total contrast to the control cells, which settled at the bottom with a uniform growth pattern throughout the well (Fig 4). The change in morphology was also confirmed by TEM analysis, which showed that treated cells were covered with hyaluronic acid.

Effect of 3FCA on auto-aggregation pattern
The auto-aggregation assay showed drastic aggregation of 3FCA-treated GAS. The level of bacterial aggregation was examined by measuring the time taken for the organism to settle. During the time course, the turbidity of the untreated control showed no change for 30 min, whereas 3FCA-treated cells showed an increased rate of aggregation. Even after one hour of static incubation, control cells did not settle, whereas treated cells settled within 30 min. A similar result was observed in a polystyrene plate: aggregation of 3FCA-treated cells was observed at the centre of the plate, whereas no difference was seen in untreated wells. Images were obtained after incubating the plate and tubes for 30 min under static conditions (Fig 5).

Effect of 3FCA on protease and haemolysin production
Extracellular cysteine protease is an important and well-studied virulence factor of GAS.
On the other hand, streptolysin O and streptolysin S are the oxygen-labile and oxygen-stable exotoxins, respectively, responsible for the hemolytic activity of GAS. The results indicated no significant difference in protease and haemolysin production upon treatment, indicating that 3FCA does not affect extracellular protease and haemolysin production by GAS (Table 1).

Hyaluronic acid quantification
The hyaluronic acid capsule, which acts as a shield enabling GAS to escape the human immune system, was quantified using Stains-All reagent. The results showed a concentration-dependent increase in hyaluronic acid secretion of up to 91% on treatment with 132 μg/ml of 3FCA (Fig 6).

Fig 4. (a, b) Growth of GAS on tryptose agar plates. GAS growth was found to be mucoid in the presence of 3FCA (b) compared to its control (a). (c, d) Growth of GAS in liquid medium (THYG). GAS grown in the presence of 3FCA (d) was found to be clumped at the centre, whereas the corresponding control (c) was uniformly distributed throughout the well and adhered to the surface.

Effect of 3FCA on ex vivo blood survival
The hyaluronic acid capsule and surface-associated M protein, which are responsible for the organism's escape from phagocytosis, were preliminarily assessed through survival in healthy human blood. The results showed an insignificant difference between the survival of control and treated GAS in healthy human blood (Table 1).

Real time PCR (qPCR) analysis
To determine the effect of 3FCA on gene expression patterns, qPCR was performed. The expression of genes involved in TCS (covRS, ciaH), stand-alone regulators of virulence (mga, srv), quorum sensing (luxS), streptococcal exotoxin B production (speB), hyaluronic acid synthesis (hasA), cell wall-associated protein synthesis (srtB/spy_135), streptococcal collagen-like protein synthesis (sclB) and minor pilin subunit synthesis (spy_125) was studied. All these candidate genes are known to be directly or indirectly involved in streptococcal virulence, biofilm formation and aggregation. Detailed descriptions of the genes, their roles and the nucleotide sequences of the primers used in the study are given in the supplementary S1 Table. The gene expression levels of the treated cells were compared with the control and normalised to one. The results displayed significant downregulation of covR, srtB and ciaH expression (66%, 95% and 41%, respectively) and upregulation of covS, luxS, mga, srv and hasA expression (189%, 61%, 51%, 88% and 48%, respectively). An insignificant difference in expression levels was observed for the speB and sclB genes (Fig 7).

C. elegans survival assay
To be used at the clinical level, a compound should be nontoxic. To confirm this, the cytotoxicity of 3FCA was assessed on a simple eukaryotic model organism, C. elegans, which is widely used in compound screening and toxicological studies. The avirulent uracil auxotroph E. coli OP50 was used as the food source for the model. Acetone + OP50 and 3FCA + OP50 were used as control and test samples, respectively. The results revealed that 3FCA had no cytotoxic effect on C. elegans. To assess the streptococcal virulence level, Acetone + SF370 and 3FCA + SF370 were used as control and treated samples, respectively. In the presence of GAS alone, complete killing took place in 144 h, whereas the nontoxic 3FCA in combination with GAS showed a much reduced survival time, with complete killing of C. elegans in only 80 h (Fig 8).
CFU assay
To confirm the enhanced virulence of 3FCA-treated GAS, the GAS internalised in the intestine of C. elegans was enumerated. As expected, the total GAS CFU obtained from the intestines of C. elegans incubated with 3FCA-treated GAS was 2.47 × 10^5 CFU/ml, compared to 4.23 × 10^4 CFU/ml for untreated GAS (2.47 × 10^5 / 4.23 × 10^4 ≈ 5.8), approximately a six-fold increase in internalisation.

Discussion
In the current era, the emergence of multidrug resistance among microbes has created a dire need for novel antagonistic agents that are not lethal to microbes but reduce the human risk factors associated with pathogenicity. One such class of modern microbial antagonistic agents, called antibiofilm compounds, inhibits microbial biofilm formation. Biofilm formation is considered one of the most important virulence factors of microorganisms, helping them survive in hostile conditions and preventing the penetration of antibiotics to the cells encased within [30]. Unlike antibiotics, these antibiofilm compounds do not exert selection pressure on microorganisms, so the antibiotic resistance phenomenon can be avoided. Nevertheless, as the organism survives in the environment where its biofilm is depleted, it becomes important to investigate its behaviour and pathogenicity in the presence of antibiofilm agents and to unveil their mode of action. The present study investigates the antibiofilm activity of 3-furancarboxaldehyde, a volatile compound present in floral honey, against Group A Streptococcus, and particular attention has been paid to the effect of 3FCA on biofilm formation and virulence factor production. The current study demonstrates that 3FCA possesses strong antibiofilm activity against GAS. We hypothesise from our results that 3FCA targets the covRS TCS, leading to downregulation of covR, which in turn increases the virulence of the organism. 3FCA also promotes aggregation of the organism, which is speculated to be due to the combined effect of increased hyaluronic acid production and downregulation of the srtB gene. This hypothesis is supported by both physiological assays and gene expression studies.

The antibiofilm efficacy of 3FCA was evaluated against GAS, which showed a concentration-dependent increase in activity. The effect of 3FCA was also studied against S. mutans, S. mitis, S. salivarius and S. sanguinis, which revealed that 3FCA possesses neither antibacterial nor antibiofilm activity against any of these pathogens (data not shown). These results clearly suggest that 3FCA possesses antibiofilm activity specific to GAS. Monitoring the growth curve of the organism for 24 h showed a dose-dependent increase in absorbance which, when verified with the XTT assay, showed an insignificant difference. The results revealed 3FCA to be an ideal antibiofilm agent against GAS, with no antibacterial effect. Light microscopic analysis showed a reduction in biofilm-covered surface area in 3FCA-treated wells, and CLSM analysis revealed a decrease in biofilm thickness. The SEM micrographs of treated cells showed an abnormal morphology, which prompted us to study the morphology of GAS grown on solid media in the presence of 3FCA and its behaviour in liquid media. The result of the SEM analysis was corroborated by the mucoid colonies seen on tryptose agar plates supplemented with 3FCA. Even in liquid media, treated cells were found to grow in clumps and to float, in contrast to the control samples.
The difference in the cell surface was also analysed with TEM, which clearly showed a mucus layer surrounding the cells. The layers surrounding the cells in the SEM analysis are attributed to dehydrated mucus secreted by the cells. Earlier studies reveal that the mucoid nature of the organism is associated with the production of M protein and the hyaluronic acid capsule [31,32]. This led us to quantify the cell-associated hyaluronic acid and M protein. Cell wall-associated hyaluronic acid was quantified with Stains-All reagent, which showed increased hyaluronic acid production as the concentration of 3FCA increased. To further corroborate these results, GAS was treated with healthy human blood, since the hyaluronic acid capsule and surface-associated M protein play a crucial role in evading opsonophagocytosis [33].

Fig 8. Effect of 3FCA and 3FCA-treated GAS on the survival of C. elegans. C. elegans were exposed to Acetone (vehicle control) + OP50 (♦), 3FCA + OP50 (■), Acetone (vehicle control) + GAS (▲), and 3FCA + GAS (×). The survival rate of C. elegans in the presence of OP50 was insignificantly affected by 3FCA, whereas significant differences were observed in the lifespan of nematodes grown in the presence of GAS alone compared to those grown with 3FCA-treated GAS. * denotes p<0.001 compared to control. doi:10.1371/journal.pone.0127210.g008

The results showed that the treated cells were equally able to grow in healthy human blood, suggesting that the mucoid nature of 3FCA-treated GAS is due to increased production of cell wall-associated M protein and hyaluronic acid capsule. This result is consistent with the real-time PCR data, in which mga and hasA were found to be overexpressed; mga regulates emm gene expression and hasA is involved in hyaluronic acid capsule synthesis [34]. On the other hand, the aggregated growth of 3FCA-treated GAS in liquid culture prompted us to study the auto-aggregation pattern of GAS in the presence and absence of 3FCA. 3FCA-treated GAS showed a higher rate of auto-aggregation than untreated cells. GAS cells encapsulated with higher amounts of hyaluronic acid have been found to grow in clumps and to settle easily, since they are oxygen sensitive [35]. A similar phenomenon was seen here, where 3FCA-treated cells grew in clumps and aggregated rapidly. Hence, the increase in auto-aggregation is ascribed to the increased hyaluronic acid production. This increase in aggregation is also expected to underlie the antibiofilm activity, since the organisms bound more readily to each other than to the substratum. The effect of 3FCA on the protease and hemolysin production of GAS was also explored and found to be insignificant. A previous study reported about 50% antibiofilm activity and an increased aggregation pattern of S. pyogenes MGAS6180 on treatment with morin hydrate at a 225 μM concentration [36]. To unveil the mode of antibiofilm activity of 3FCA against GAS and to test whether the antibiofilm activity of 3FCA is also manifested at the transcriptional level, the expression levels of candidate genes involved in biofilm formation, aggregation, virulence factors and surface-associated proteins were quantified. Two-component regulatory systems (TCS) and stand-alone factors are considered the prime regulators of the expression of the streptococcal armament of virulence genes.
Among the genes used in the study, covR and covS are the repressor and sensor kinase genes, respectively [37], of the covRS TCS, which is well studied and characterised in GAS [1]. Cov is the acronym of Cluster Of Virulence in streptococci [38], which actively or passively influences about 15% of the genes encoded by GAS [39]. covR was initially considered the major negative regulator of the genes involved in hyaluronic acid capsule production (hasA, hasB and hasC) [40], and it responds rapidly to changes in the environment [41]. Our results showed downregulation of the covR gene and upregulation of the covS and hasA genes, which is consistent with an earlier report in which a covR mutant of SF370 was observed to have eightfold higher expression of covS than its corresponding wild type [42]. That study also demonstrated overexpression of hasA by the covR mutant strain. The upregulation of hasA in the expression studies was also reflected in the physiological assay, together confirming the increased hyaluronic acid production. The inability of 3FCA-treated GAS to form biofilm, together with the downregulation of the covR gene, is consistent with a previous report in which covR mutants were found to lack biofilm-forming ability [43]. covR is a repressor gene that represses various virulence genes such as mga (Multiple Gene regulator of GAS), speB, hasA and luxS under normal conditions [44][45][46]. Hence, it becomes clear that downregulation of covR has led to the upregulation of mga and luxS. LuxS is an enzyme of the activated methyl cycle which, in GAS, influences virulence gene production in a growth-dependent manner [47]. The upregulation of mga, a positive regulator of M protein expression [48] and a global transcriptional activator in the exponential growth phase, was confirmed by the growth of treated GAS in healthy human blood. The survival results in healthy human blood are also supported by a previous report in which no significant difference in in vivo growth rate between wild type and covR mutant strains was observed in rats [49]. srv, the streptococcal regulator of virulence, is another important gene that plays an active role in streptococcal virulence, and its knockout inhibits the biofilm-forming efficacy of GAS [50]. To our surprise, srv expression was found to increase on treatment, suggesting that the compound may increase the virulence of GAS. speB encodes the extracellular protease, which on overexpression may act on the biofilm by lysing the proteins involved in it [50]. There was no significant difference in the expression pattern of speB. Nevertheless, an earlier report states that covR mutation increases speB production [42], but no such increase was reflected in the protease quantification or in speB expression. srv and covR exert opposite influences on speB, the former inhibiting and the latter inducing speB production [50,42]. From the findings of the current study, it is hypothesised that upregulation of srv would nullify the impact of covR downregulation on speB gene expression, thereby causing no significant change in extracellular protease production. ciaH, part of another important TCS that controls about 120 genes [21] and is involved in oxidative stress and acid tolerance [51], was found to be downregulated.
An earlier report demonstrating an increased aggregation pattern in an srtB mutant of an M6 strain [52] prompted us to study the expression pattern of the spy_125/srtB gene in SF370, which was found to be downregulated. The downregulation of the srtB gene is also attributed to the increased rate of cell aggregation. Since 3FCA has not been explored for medicinal applications, its toxicity was first assessed with the help of the eukaryotic model organism C. elegans, which is widely used in ecotoxicology studies as a living biomonitor [53]. C. elegans also conserves many of the basic physiological processes and stress responses observed in humans [54]. The unaffected growth of C. elegans in the presence and absence of the compound confirms that 3FCA has no lethal effect on the growth of C. elegans and can hence be considered a nontoxic compound. The gene expression studies, which showed 3FCA-treated GAS to be more virulent than untreated GAS, were also confirmed with the help of the same model organism. The rate at which C. elegans were killed by treated GAS was much higher than that by the untreated control, which strengthens the gene expression data indicating increased virulence of GAS in the presence of 3FCA. The increased hyaluronic acid production and increased adherence of treated GAS were expected to aid their internalisation in C. elegans, which was confirmed by the CFU assay. In summary, we demonstrated the antibiofilm activity of 3FCA, a compound from floral honey, and its major influence on the morphology and virulence of GAS. Our study suggests that 3FCA may target the covR gene of the well-studied covRS pathway, a negative regulatory pathway controlling virulence genes of GAS. The downregulation of covR is hypothesised to be the prime reason for the observed biofilm inhibition, since most of the studied genes (covS, hasA, and mga) were expressed in a pattern similar to that of the covR mutant SF370, which lacks biofilm-forming ability [43]. The increase in virulence was confirmed using physiological assays, gene expression analysis and in vivo studies in C. elegans. The present study disagrees with the view that antibiofilm agents which inhibit biofilm would also reduce the virulence of the organism. Nevertheless, the in vivo increase in the virulence of the organism upon 3FCA treatment needs to be investigated further in higher eukaryotic model organisms. In addition, the present study emphasises the importance of analysing the behaviour and virulence of pathogens in the presence of antibiofilm compounds ahead of clinical studies.

Supporting Information
S1 Table. List of genes, their roles and the nucleotide sequences of the primers used in the study. This file contains the list of genes, their roles and the nucleotide sequences of the primers used in the gene expression analysis. (DOCX)
9,140.6
2015-05-15T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Photothermal Phase Change Energy Storage Materials: A Groundbreaking New Energy Solution

To meet the demands of the global energy transition, photothermal phase change energy storage materials have emerged as an innovative solution. These materials, utilizing various photothermal conversion carriers, can passively store energy and respond to changes in light exposure, thereby enhancing the efficiency of energy systems. Photothermal phase change energy storage materials show immense potential in the fields of solar energy and thermal management, particularly in addressing the intermittency issues of solar power. Their multifunctionality and efficiency offer broad application prospects in new energy technologies, construction, aviation, personal thermal management, and electronics.

Introduction
The global energy transition requires new technologies for efficiently managing and storing renewable energy. In the early 20th century, Stanford Olshansky discovered the phase change storage properties of paraffin, advancing phase change materials (PCMs) technology [1]. Photothermal phase change energy storage materials (PTCPCESMs), as a special type of PCM, can store energy and respond to changes in illumination, enhancing the efficiency of energy systems and demonstrating marked potential in solar energy and thermal management systems. In 2016, 178 parties signed the Paris Agreement, committing to limit the global temperature rise to below 2 °C. This agreement greatly accelerated the development of renewable green energy technologies. Since 2017, research on PTCPCESMs has increased significantly. In 2023, China included PTCPCESMs in policy support, recognizing their key role in improving the efficiency, durability, and sustainability of new energy technologies [2].

Solar Energy Challenges and PCM Solutions
Solar energy is abundant, but because of the intermittent nature of sunlight, solar thermal technology faces substantial issues with narrow application time frames and unstable energy utilization. Traditional solar systems cannot operate outside sunlight hours, often resulting in low utilization rates, as seen with solar collectors and solar dryers [3]. PTCPCESMs can alter their physical state or properties by utilizing solar radiation, absorbing excess heat during peak sunlight periods and releasing it when solar intensity is lower or at night, thereby achieving energy storage and controlled release. Consequently, PTCPCESM technology is considered one of the most effective solutions to the intermittency problem of solar energy.

Main Characteristics of PTCPCESMs
PCMs can absorb or release a substantial amount of heat near their melting points through phase changes, storing or releasing energy. These characteristics make them suitable for use as thermal storage media in solar collection systems or as working substances in heat pump systems, providing various functionalities in multiple ways [4]. In thermodynamics, energy conversion during phase changes involves changes in system entropy and thermal radiation losses. The latent heat absorbed or released by PCMs during melting or solidification is directly related to the change in the system's disorder. However, during this process, some energy is lost as thermal radiation, depending on the material's surface characteristics and environmental conditions. In addition, PCMs have drawbacks such as low thermal conductivity, low photothermal conversion efficiency, and leakage during the phase change process.
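To put rough numbers on the storage and conversion quantities just discussed, the sketch below estimates the heat a PCM stores across its melting transition (sensible plus latent heat) and the resulting photothermal conversion efficiency under illumination. This is our back-of-the-envelope illustration, not a calculation from the cited works: the sample mass, specific heat, temperature swing, irradiance, area and exposure time are all assumed, and the latent heat is chosen within the 150-250 J/g range typical of these materials.

```python
# Back-of-the-envelope photothermal storage estimate (illustrative values only).
mass = 0.010            # kg of PCM sample (assumed)
cp = 2000.0             # J/(kg*K), typical specific heat of an organic PCM (assumed)
delta_T = 20.0          # K of sensible heating around the melting point (assumed)
latent_heat = 200e3     # J/kg, within the typical 150-250 J/g enthalpy range

# Total heat absorbed: sensible heating plus latent heat of the phase change.
q_stored = mass * (cp * delta_T + latent_heat)   # J

# Incident solar energy during illumination (assumed 1-sun irradiance).
irradiance = 1000.0     # W/m^2 (assumed)
area = 0.01             # m^2 of illuminated surface (assumed)
time = 400.0            # s of exposure (assumed)
q_incident = irradiance * area * time            # J

# Photothermal conversion efficiency: stored heat over incident solar energy.
eta = q_stored / q_incident
print(f"stored {q_stored/1e3:.1f} kJ, efficiency = {eta:.1%}")
```

With these assumed values the sample stores 2.4 kJ and the efficiency comes out at 60%, which is how reported efficiencies in the ranges discussed below are typically derived from temperature-rise measurements under known irradiance.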
PTCPCESMs consist of PCMs and various carriers (organic, inorganic, carbon-based composite, and metal-based) that often encapsulate the PCMs in microcapsules or porous materials. These carriers primarily enhance photothermal conversion rates, while also improving the thermal conductivity, sealability, and control of thermal radiation intensity of PCMs. For commonly used PTCPCESMs, the photothermal conversion efficiency is required to be above 50% to 70%. The thermal conductivity typically ranges from 0.2 to 0.5 W/m·K, and for composite materials with enhanced thermal conductivity it can reach 1 to 2 W/m·K. The phase change enthalpy of PTCPCESMs usually ranges from 150 to 250 J/g. In this context, porous materials are often used as carriers infused with PCM. Photothermal conversion is generally achieved through three mechanisms: molecular vibrational heating, localized plasmonic heating, and nonradiative relaxation heat release [5].

In carbon-based materials and some organic polymers, molecular vibrational heating is facilitated by the ease of electron excitation from π to π* orbitals, followed by heat release through vibronic electron coupling when these electrons return to their ground state. For instance, Atinafu et al. [6] developed a graphene derived from solid sodium acetate to enhance the photothermal conversion efficiency, thermal conductivity, and energy storage capacity of PCMs. The reduction in supercooling increased the composite material's energy storage capacity by 157.6 kJ/kg, which is 101.4% higher than expected. Graphene, with its high thermal conductivity and photothermal responsiveness, effectively controls thermal radiation and absorbs solar light from the visible to the near-infrared. Its two-dimensional structure enhances thermal transfer and surface area, promoting rapid heat distribution between PCMs and carriers.

Metal-based materials, such as gold nanoparticles and MXene, enhance light-matter interactions and break the traditional diffraction limit through localized surface plasmon resonance by confining incident light to nanoscale dimensions. The excitation of plasmons strongly enhances the electromagnetic fields near the structure, greatly increasing absorption and scattering at the resonance frequency. These properties give metal nanostructures enhanced light collection and focusing capabilities. Fan et al. [7] reported a novel polyethylene glycol/Ti3C2Tx layered phase change composite material, which exhibits strong absorption in the ultraviolet-visible-near-infrared region due to the localized surface plasmon resonance effect of Ti3C2Tx nanosheets, achieving up to 94.5% photothermal conversion efficiency under solar irradiation.

When semiconductor materials or certain special organic molecules are excited by photons, they generate electron-hole pairs, which can release energy either radiatively or nonradiatively. In nonradiative relaxation, the excited electrons transfer their energy to the lattice, generating phonons and raising the local semiconductor temperature. Ge et al. [8] studied a light-driven microfluidic control device that utilizes a light-responsive alkoxylated grafted azobenzene PCM to collect, transmit, and utilize energy in low-temperature environments.
This device effectively controls temperature through photothermally driven heat release at temperatures as low as −40 °C and achieves a high energy density of 380.76 J/g even at −63.92 °C. The thermal effect is primarily due to light-induced molecular isomerization, a nonradiative relaxation process. When light excites azobenzene, the molecules shift from one conformation to another, allowing the light-responsive switch material to control its structure and thus regulate thermal radiation intensity.

Appropriate carrier selection significantly enhances the thermal conductivity of PCMs. Wei et al. [9] demonstrated this using cellulose aerogel and molybdenum disulfide as carriers, which increased the thermal conductivity of PCMs by 138%. Through the improvements in PCM performance achieved by different carrier materials, PTCPCESMs demonstrate substantial potential for enhancing energy efficiency and meeting diverse application needs.

Potential Applications of PTCPCESMs in New Energy Technologies
Besides solar systems, PTCPCESMs find extensive applications in the construction industry. Typically, PTCPCESMs are integrated into walls, roofs, and floors to maintain stable indoor temperatures without external energy input, thereby reducing energy consumption [10]. With continued technological advances, we believe that PTCPCESMs will be widely applied in emerging fields such as new energy vehicles, personal thermal management, aerospace, and electronic information.

As illustrated in Fig. 1, when PCMs are combined with carriers, they utilize the photothermal conversion properties of the carriers to achieve energy storage. During periods of abundant sunlight, the carriers convert solar energy into heat, inducing a phase change in the PCMs and storing energy. In the absence of sunlight, the PCMs release the stored heat, providing a thermal buffering effect. In electric vehicles, PTCPCESMs can balance and manage cabin heat between day and night, improving comfort without requiring additional energy. They can also be incorporated into adjustable clothing to automatically regulate the wearer's body temperature by absorbing and releasing heat, enhancing comfort and energy efficiency. In deep space exploration, PTCPCESMs can maintain spacecraft components and instruments within operational temperature ranges, protecting sensitive instruments and reducing the energy needed for heating and cooling systems. Furthermore, PTCPCESMs can absorb and store heat generated by high-power electronic devices during periods of high activity and release it at low temperatures, ensuring a stable internal environment. The multifunctionality and efficiency of PTCPCESMs suggest an increasingly important role in modern energy and material technologies, providing sustainable and efficient energy solutions across various industries.
Conclusion
PTCPCESMs have considerably improved thermal conductivity and photothermal conversion efficiency and have partially addressed leakage issues. However, they still face challenges such as low mechanical strength, poor interfacial compatibility, and slow environmental response. Future innovations should focus on (a) developing PTCPCESMs with higher photothermal conversion efficiency; (b) exploring PTCPCESMs with precisely adjustable thermal radiation intensity; (c) improving the encapsulation of PTCPCESMs to enhance their durability and leakage prevention; (d) introducing support frameworks or rigid sealing materials to enhance the mechanical strength of the carriers and composite materials; (e) improving the interfacial compatibility and thermal conductivity between PCMs and carriers through chemical modification; and (f) continuously researching high-thermal-conductivity PTCPCESM systems to optimize performance across diverse applications.

PTCPCESMs are transforming energy management across multiple fields. Their ability to effectively store and manage thermal energy makes them indispensable in the ongoing transition to sustainable energy practices. As the world continues to make technological breakthroughs in solar energy, electric vehicles, green buildings, and space exploration, PCMs will play a crucial role in achieving a sustainable and efficient future.
2,024.4
2024-08-06T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Battery Capacity Estimation for a Low-Earth-Orbit Satellite Application

Simultaneous estimation of battery capacity and state-of-charge is a difficult problem because they depend on each other and neither is directly measurable. This paper proposes a particle filtering approach for estimating the battery state-of-charge and a statistical method for estimating the battery capacity. Two different methods and time scales are used for these estimates in order to reduce their dependency on each other. The algorithms are validated using experimental data from A123 graphite/LiFePO4 lithium-ion commercial-off-the-shelf cells, aged under the partial depth-of-discharge cycling encountered in low-earth-orbit satellite applications. The model-based method is extensible to battery applications with arbitrary duty-cycles.

INTRODUCTION
Health and lifetime uncertainty presents a major barrier to the deployment of lithium-ion (Li-ion) batteries in large-scale aerospace, electric vehicle, and electrical grid applications with stringent life requirements. In the satellite industry, for example, the high cost of launch and the inability to make repairs once in orbit dictate the use of mature battery technologies with conservative duty-cycles to reduce risk. If battery health could be precisely tracked on orbit, the duty-cycle might be tailored to best utilize the remaining life and maximize the value of the investment. Similar opportunities may exist for electric vehicles to maximize battery lifetime by intelligently selecting driving routes and charging strategies. Markets for used electric vehicles and batteries also require accurate battery health assessment to mature to their full potential.

The field of prognostics and health management offers general approaches for combining real-time measurements, models, and estimation algorithms to track the health and predict the remaining lifetime of batteries (Sheppard, Wilmering, & Kaufman, 2009; Goebel, 2010). Relevant performance/health metrics for battery applications are available power and energy. These can be expressed in terms of battery internal resistance and amp-hour (Ah) capacity, respectively. Battery models are needed to relate capacity and resistance to the current, voltage, and temperature measurement signals available in real time. For regular, predictable duty-cycles such as those of unmanned aerial vehicles (Goebel, Saha, Saxena, Celaya, & Christophersen, 2008), simple algebraic relationships between current and voltage may be sufficient. For uncertain duty-cycles such as those of electric vehicles, a dynamic model of the current-voltage relationship is necessary. Dynamic models can take the form of circuit analogs (Verbrugge & Koch, 2006; Plett, 2006) or reduced-order physics-based models (Santhanagopalan, Zhang, Kumaresan, & White, 2008; Smith, Rahn, & Wang, 2007; Smith, 2010; J. L. Lee, Chemistruck, & Plett, 2012). Physics-based approaches remain their own active subject of research, and thus the simpler circuit model is applied in this work. State-of-charge (SOC) is usually formulated as a reference model state and can be estimated using various state estimation methods such as the extended Kalman filter (Plett, 2004; J.
Lee, Nam, & Cho, 2007; Charkhgard & Farrokhi, 2010; Kim & Cho, 2011; Hu, Youn, & Chung, 2012), the unscented Kalman filter (Plett, 2006; Sun, Hu, Zou, & Li, 2011) or the cubature Kalman filter (Chen, 2012). These SOC estimation methods work well in certain situations but may not perform properly in others. Extended Kalman filters are prone to linearization errors, and both extended and unscented Kalman filters are limited to systems with Gaussian noise distributions. Like Kalman filters, particle filters belong to the class of Bayesian estimation methods, but they can deal with nonlinear systems with non-Gaussian noise without linearization (Sanjeev Arulampalam, Maskell, Gordon, & Clapp, 2002). They have been successfully applied to many problems with nonlinear dynamics, such as computer vision (Isard & Blake, 1998), speech recognition (Vermaak, Andrieu, Doucet, & Godsill, 2002), and robotics (Schulz, Burgard, Fox, & Cremers, 2001). Furthermore, very little work has been done on SOC estimation in conjunction with simultaneous estimation of time-varying battery capacity. This paper proposes a method to estimate both SOC and battery capacity using a particle filtering approach.

Unlike in the laboratory, in an application environment it is infeasible to completely discharge the battery to obtain a full "ground-truth" measurement of the battery's total capacity. A key question explored in this paper is to what extent the battery's total amp-hour (Ah) capacity can be estimated from partial discharge data alone. In addition, estimation of battery capacity using partial discharge data is particularly challenging for Li-ion chemistries with a flat open-circuit voltage relationship versus SOC (Plett, 2011). Such is the case for the Li-ion graphite/iron-phosphate chemistry investigated in the present work.

CIRCUIT MODEL
For the reference model, a second-order circuit model is used in this work, as shown in Figure 1. While the battery is an infinite-dimensional system, the two time constants of the second-order circuit model provide a reasonable approximation of the voltage/current dynamics for the present application. With $V_1$ and $V_2$ denoting the voltages across the two RC branches and $I$ the applied current (positive on discharge), the state-space equations of this circuit model are

$$\dot{V}_1 = -\frac{V_1}{R_1 C_1} + \frac{I}{C_1}, \quad \dot{V}_2 = -\frac{V_2}{R_2 C_2} + \frac{I}{C_2}, \quad \dot{\mathrm{SOC}} = -\frac{I}{Q}, \qquad (1)$$

$$V = V_{ocv}(\mathrm{SOC}) - V_1 - V_2 - R_s I, \qquad (2)$$

where Q denotes the battery capacity. The values of the parameters $R_1$, $R_2$, $R_s$, $C_1$ and $C_2$ depend on SOC and time, and Q depends on time. (Since the satellite battery considered in this work operates under nearly isothermal conditions, temperature dependency is neglected.) Measurements of resistance versus SOC exhibit a bathtub shape, with small resistance at mid-SOC increasing to larger values at the low and high SOC extremes. This parametric dependence of $R_1$, $R_2$ and $R_s$ on SOC is captured in Eqs. (3)-(5). As the battery ages, the value of Q slowly decreases and the resistance values slowly increase over time. Since the battery may not be exercised over its entire SOC range in an actual application, only the three relative resistance parameters $a_{r1}$, $a_{r2}$ and $a_{rs}$ are estimated along with the battery capacity. The dynamics of these time-varying parameters are formulated in Eq. (6), where $\varepsilon_1$ and $\varepsilon_2$ are small positive constants. We assume $n_2$ to be constant. We can reformulate a state-space equation by combining Eq. (1) and Eq.
(6). Let x denote the augmented state (a column vector combining the circuit states with the time-varying parameters) and ∆t the sampling time. Then the discrete-time augmented state-space equation of the second-order circuit model of a battery is given by Eqs. (7)-(8).

PARTICLE FILTER
Particle filtering is a method used to approximate the probability density $f_k$ of the state $x_k$ conditioned on the observations $y_0, \cdots, y_k$. Consider the following nonlinear system:

$$x_k = f(x_{k-1}, n_{k-1}), \qquad (9)$$

$$y_k = h(x_k, v_k), \qquad (10)$$

where $x_k$ is the state, $y_k$ is the measurement, $n_k$ is the process noise, and $v_k$ is the measurement noise. The a priori density is propagated as

$$p(x_k \mid y_0, \cdots, y_{k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_0, \cdots, y_{k-1})\, dx_{k-1},$$

where $p(x_k \mid x_{k-1})$ represents the state transition over time and is determined by the process model (9) and the distribution of the process noise. This step is called prediction or time propagation. When the observation $y_k$ at time k is made, the a priori distribution is updated using Bayes' rule:

$$p(x_k \mid y_0, \cdots, y_k) \propto p(y_k \mid x_k)\, p(x_k \mid y_0, \cdots, y_{k-1}).$$

This step is called the measurement update, as the measurement data $y_0, \cdots, y_k$ are used to obtain the a posteriori distribution. The likelihood $p(y_k \mid x_k)$ can be obtained from the measurement equation (10) and the distribution of the measurement noise $v_k$.

Particle filters approximate $f_k$ by a set of weighted samples or particles $\{x_k^i, w_k^i\}_{i=1}^{N}$, where N is the number of particles. For more details about particle filters and sequential Monte Carlo methods, refer to (Sanjeev Arulampalam et al., 2002). In this paper, sampling importance resampling is used for the resampling step of the particle filter to reduce degeneracy; an illustrative sketch of the resulting loop is given at the end of this section.

CAPACITY ESTIMATION
The simultaneous estimation of the battery capacity and SOC is difficult because they are dependent on each other through the relation

$$\mathrm{SOC}(t) = \mathrm{SOC}(0) - \frac{1}{Q}\int_0^t I(\tau)\, d\tau. \qquad (13)$$

Therefore, if changes in the battery capacity Q are not reflected properly, the calculation of SOC based on Eq. (13) is subject to errors even when the measurement of I(t) is accurate. This paper proposes a novel method to estimate the battery capacity and SOC simultaneously using a particle filter and a statistical approach.

The actual value of Q in real situations changes very slowly over time. This paper utilises past statistical information to estimate Q over a longer interval than the sampling time. Let m ≫ 1 be an integer and T = m∆t. The battery capacity is estimated every T, and the value of Q in Eq. (1) is reset to the estimated battery capacity at every T rather than at every ∆t.

The estimate of $x_k$ by the particle filter is the weighted sample mean of the particles, $\hat{x}_k = \sum_{i=1}^N w_k^i x_k^i$, and the (i, j)-th element of the weighted covariance matrix is

$$q_k(i, j) = \sum_{n=1}^{N} w_k^n \big(x_k^n(i) - \hat{x}_k(i)\big)\big(x_k^n(j) - \hat{x}_k(j)\big),$$

where $x_k^n(i)$ and $\hat{x}_k(i)$ are the i-th elements of the vectors $x_k^n$ and $\hat{x}_k$, respectively. The value of $q_k(4, 4)$ reflects the estimation error of $x_k(4) = Q(k)$, and the degree of confidence can be represented by the reciprocal of $q_k(4, 4)$. Thus, the paper uses as the estimate of Q the confidence-weighted average

$$Q(\ell T) = \frac{\sum_k W_k\, \hat{x}_k(4)}{\sum_k W_k}, \qquad (15)$$

where $W_k = 1/q_k(4, 4)$, the sum is taken over the sampling instants of the most recent interval of length T, and the value of Q(ℓT) is reset as the new value of Q in Eq. (1) for every ℓT, ℓ = 1, 2, · · ·. This formulation can be interpreted as $W_k$ being a confidence weight and Eq. (15) a weighted time average together with a re-initialization of the state variable.
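To make the sampling-importance-resampling loop above concrete, the following minimal sketch shows the predict-reweight-resample-estimate cycle in generic form. This is not the paper's implementation: the process and measurement models are stubs, the state dimension and all numbers are invented, and the resampling trigger here is the common effective-sample-size heuristic, whereas the paper switches between stratified and importance-weight resampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles):
    """Stub for the process model (9): here a plain random walk. The battery
    application would instead push each particle through the discrete-time
    augmented circuit model driven by the measured current."""
    return particles + rng.normal(0.0, 0.01, size=particles.shape)

def likelihood(y, particles):
    """Stub for p(y | x), Eq. (10): Gaussian measurement noise around a
    placeholder output map (the real model predicts terminal voltage)."""
    y_pred = particles[:, 0]
    return np.exp(-0.5 * ((y - y_pred) / 0.05) ** 2) + 1e-300  # avoid all-zero

def pf_step(particles, weights, y):
    # Prediction: draw each particle through the process model.
    particles = propagate(particles)
    # Measurement update: reweight particles by the observation likelihood.
    weights = weights * likelihood(y, particles)
    weights = weights / weights.sum()
    # Resample when the weights degenerate (effective-sample-size test).
    n = len(weights)
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    # Point estimate: the weighted sample mean, as in the text.
    return particles, weights, weights @ particles

# Toy run with a stand-in 4-dimensional state; in the paper, x(4) is Q.
n = 3000
particles = np.array([0.0, 0.0, 0.9, 2.2]) + rng.normal(0.0, 0.05, size=(n, 4))
weights = np.full(n, 1.0 / n)
particles, weights, x_hat = pf_step(particles, weights, y=0.02)
print("state estimate:", x_hat)
```

The same weighted mean and weighted covariance are what feed the capacity estimate of Eq. (15) at the slower time scale T.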
Low Earth Orbit Satellite Application
For the simulations, we used battery data generated at the Jet Propulsion Laboratory, where experiments were performed to evaluate the cycle-life performance of A123's 26650 LiFePO4-based commercial off-the-shelf cells for potential low-earth-orbit satellite applications. This testing consists of partial depth-of-discharge (DOD) cycling, with 30%, 40%, 50%, and 60% DOD selected. The testing was performed at room temperature (23 °C) and consisted of a 30-minute discharge period and a 60-minute charge period. The charge and discharge rates were scaled proportionately to the corresponding DOD (e.g., the 30% DOD test used a 0.4C charge rate and a 0.60C discharge rate). For operational capacity checks (OPCAPS), a full charge and discharge of the battery was conducted every 250 cycles. Plots of battery capacity with respect to cycle number are shown in Figure 2; the degradation of battery capacity is clearly observed from the plot.

The analysis contained in this paper focuses on the 50% DOD data from cycle 2723 to 2815. The battery capacity is reduced to about 2.05 Ah from the initial 2.2 Ah over this range of cycles. Least-squares regression was used to provide an initial parameter set (Danzer & Hofer, 2008). The beginning-of-life values parameterizing $R_1$, $R_2$ and $R_s$ as functions of SOC are shown in Table 1, and the plots of each resistance and eigenvalue are illustrated in Figure 3. Several nonlinearities arise in the model. Values of the open-circuit voltage, $V_{ocv}(\mathrm{SOC})$ in Eq. (2), were taken at 10% increments in SOC following each one-hour rest period of the HPPC test and were implemented in the model as a look-up table. The nonlinearity in Eq. (1) lies in the time-varying parameters $R_1$, $R_2$ and $R_s$, which also depend on SOC.

The values of the parameters in the particle filter were tuned with simulations. We set the values of $\varepsilon_1$ and $\varepsilon_2$ to 0.00001. The value of the measurement noise v changes adaptively depending on SOC, with a scaling constant $m_v$ that accounts for the larger measurement error when the value of SOC is very high or low. The process noise $n_2$ is set to a constant and $n_1$ to a function of SOC. The number of particles used in the simulations is 3,000. The sampling time of the filter depends on the interval between measurements. The measurement intervals used for the simulations in Section 5.1.1 are shown in Figure 4: measurements were mostly sampled every 10 minutes, and the largest sampling interval is 15 minutes. The particle filter used in the simulations performs stratified resampling (Kitagawa, 1996) when a degeneracy condition on the N particles is met; otherwise, the particle filter resamples using the normalized importance weights described in Section 3.

First, we performed simulations with the data from cycle 2773 to 2815, which include OPCAPS. The battery capacity was estimated every 20 hours, that is, T = 20 hr in Eq. (15). Figure 5 shows the plot of the battery capacity estimate. The initial value of Q is set to 2.2 at time 0, the initial battery capacity before degradation, and this value is kept until 20 hr. At T = 20 hr, the battery capacity is estimated to be about 2.1831 Ah, the state variable Q in the particle filter is re-initialized to this value, and so on. The plots of the weight $W_k = 1/q_k(4, 4)$ and the estimate of $x_k(4) = \sum_i w_k^i x_k^i(4)$ that are used for the battery capacity estimation using Eq.
(15) are shown in Figure 6a. The proposed two-time-scale method performs better than particle filtering with an augmented state, which is usually used for the simultaneous estimation of states and parameters.

The SOC estimate and estimation error are illustrated in Figure 7. The blue solid line in Figure 7a shows the SOC value calculated from the input current data and the true battery capacity (2.05 Ah) using Eq. (13), and the black dash-dotted line illustrates the SOC estimated by the particle filter. It can be observed in this plot that the peak value of the SOC calculated from Eq. (13) increases over time. This is because measurement errors accumulate through the integration in Eq. (13), whereas the particle filter estimate is more robust to measurement errors. The estimation error in Figure 7b is the difference between the particle filter estimate and the value calculated from the input current data and the true battery capacity (2.05 Ah) using Eq. (13). The solid line in Figure 7b indicates the SOC estimation error using the proposed two-time-scale method, and the dotted line represents the error of particle filtering without the two-time-scale method. The error drops below 0.01 (1%) after about 100 hr with the proposed method, while the error does not decrease without the two-time-scale method.

Simulation without Operational Capacity Checks (OPCAPS)
The second simulation was performed with the data from cycle 2723 to 2806, which do not include OPCAPS and consist only of repeated charge and discharge at 50% DOD. The simulation results are shown in Figures 8, 9 and 10. In this case, the accumulated error in the SOC calculated by Eq. (13) is more noticeable. However, the SOC estimated by the particle filter oscillates between 0.5 and 1, which is the expected SOC range with 50% DOD. The estimate of the battery capacity converges to about 2.025 Ah, slightly less than the actual capacity of 2.05 Ah. The errors in the SOC and battery capacity estimates without OPCAPS are greater than those with OPCAPS, and the SOC estimate took longer to converge. However, the error is about 1.2%, and the estimate can be considered accurate even without OPCAPS.
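For illustration, the slower-time-scale capacity update of Eq. (15) can be read as the confidence-weighted average sketched below. This is our reading of the equation, with invented numbers: $W_k = 1/q_k(4,4)$ comes from the particle filter's weighted covariance, and the re-initialization of Q in the model happens once per long interval T.

```python
import numpy as np

def capacity_update(q_hats, q44_vars):
    """Weighted time average over one long interval T (Eq. 15, as we read it):
    each short-interval capacity estimate q_hats[k] is weighted by the
    reciprocal of its estimated variance q_k(4, 4)."""
    weights = 1.0 / np.asarray(q44_vars)          # W_k = 1 / q_k(4, 4)
    q_hats = np.asarray(q_hats)
    return np.sum(weights * q_hats) / np.sum(weights)

# Illustrative use: noisy capacity estimates near the true 2.05 Ah.
q_hats = [2.12, 2.06, 2.03, 2.04]
q44_vars = [0.04, 0.01, 0.005, 0.004]             # confidence grows over time
Q_new = capacity_update(q_hats, q44_vars)          # re-initialises Q in the model
print(f"Q = {Q_new:.3f} Ah")
```

With these invented values the update lands at about 2.043 Ah: later, lower-variance estimates dominate, which is the behaviour the convergence plots above exhibit.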
CONCLUSION
A method to simultaneously estimate both the capacity and SOC of a Li-ion battery has been proposed, using a particle filtering method for SOC estimation and a statistical approach for the battery capacity. The battery capacity estimation is performed on a different time scale from the SOC estimation and uses accumulated past data from both measurements and the particle filter outputs. The estimated value of the battery capacity is used to update the corresponding parameter of the battery state-space model. Simulation results showed the robust performance of the algorithm in simultaneous estimation, with or without operational capacity checks. The proposed method has been shown to perform better than particle filtering with an augmented state.

Due to the high cost of launch, satellite batteries are expected to operate until the end of the satellite's life. Unlike the OPCAP used in laboratory tests, in space the battery can never be fully discharged, and hence the battery's total capacity must be estimated indirectly. Trending of the battery's total capacity over its lifetime is important for satellite health management, to ensure that no regular partial discharge cycle ever exceeds the present capability of the battery, which would cause loss of the satellite. The proposed method is adequate for satellite applications, since it estimates the battery capacity and SOC robustly even without OPCAPS, and measurement errors do not accumulate in the SOC estimate, unlike in the Coulomb-counting method; this makes it suitable for applications with long operation times.

Notes: (1) The subscript k denotes discrete time k for simple notation, i.e., $x_k = x(k)$. (2) $p(x \mid y)$ means $p(X = x \mid Y = y)$ for simplicity of notation, where X and Y are random variables and x and y are their realizations.

Figure 2. Capacity loss during partial DOD cycling of A123 LiFePO4-based cells (courtesy Jet Propulsion Laboratory).
Figure 3. The values of resistances and eigenvalues identified at beginning of life by minimizing the squared voltage error between the model and HPPC test data.
Figure 4. The time intervals between data samples from the JPL experimental data.
Figure 6. Weight $W_k$ and estimate of $x_k(4) = \sum_i w_k^i x_k^i(4)$.
Figure 7. Battery SOC estimation and error with OPCAPS.
Figure 9. Weight $W_k$ and estimate of $x_k(4) = \sum_i w_k^i x_k^i(4)$.
Table 1. Beginning-of-life parameter values in Eqs. (3)-(5) for $R_1$, $R_2$ and $R_s$.
4,096.4
2020-10-18T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
A Survey on Machine Learning Software-Defined Wireless Sensor Networks (ML-SDWSNs): Current Status and Major Challenges

Wireless Sensor Networks (WSNs), which are enablers of Internet of Things (IoT) technology, are typically used en masse in widely physically distributed applications to monitor the dynamic conditions of the environment. They collect raw sensor data that is processed centrally. With current state-of-the-art WSNs programmed for specific tasks, it is hard to react to dynamic changes in environmental conditions beyond the scope of the intended task. To solve this problem, a synergy between Software-Defined Networking (SDN) and WSNs has been proposed. This paper aims to present the current status of Software-Defined Wireless Sensor Network (SDWSN) proposals and to introduce readers to the emerging research topic that combines Machine Learning (ML) and SDWSN concepts, also called ML-SDWSNs. ML-SDWSN provides an intelligent, centralised and resource-aware architecture to achieve improved network performance and solve the challenges currently found in the practical implementation of SDWSNs. This survey provides helpful information and insights to the scientific and industrial communities and professional organisations interested in SDWSNs, covering the current state of the art, ML techniques, and open issues.

I. INTRODUCTION
The IoT (as a general IoT ecosystem including middlewares, servers, cloud, and edge) is an emerging technology that has caught tremendous attention from the scientific and industry communities and professional organisations due to its diverse benefits, including financial, efficiency and management gains. It is a key enabling technology of so-called Industry 4.0, and IoT stakeholders (e.g., governments, industry) have recently acknowledged that IoT is a real business opportunity. Forecasts estimate that the IoT business can grow into a market worth USD 7.1 trillion by 2025 [1] and that the number of connected "things" can exceed the 75 billion device barrier [2]. The exponential growth of connected devices fosters the creation of a large variety of IoT vendors and protocols. Despite this variety of vendors and protocols, the IoT ecosystem must, somehow, deliver seamless services to users. Emerging IoT applications such as smart agriculture, transportation systems and health systems expand the scope of the internet to include sensing technologies such as WSNs. WSNs, enablers of IoT technology, are built upon the interconnection of a large number of Networked Embedded Systems (NESs). An NES, often called a wireless sensor node, is a tiny energy-constrained device comprising a processing unit, a memory unit, a communication transceiver, and some form of power supply. They are usually deployed to measure physical variables such as humidity, temperature, pressure and air quality, and they work cooperatively to achieve a common goal. The main characteristics of NESs are low cost, small size, and limited resources [22], [23]. WSNs are used in a range of applications that enable integration of the physical world into the computer-based world, resulting in benefits and improvements in remotely managing the physical world, keeping electronic records of physical variables, early detection of potential threats, predictions, and economic benefits.
Their low cost and ease of deployment make WSNs attractive for the practical implementation of the IoT. However, their small size and low cost lead to limitations on resources such as energy supply, memory size, computational speed and communication bandwidth. Therefore, the limited resources of WSNs need to be managed effectively so that they can run for the longest time possible.

The SDN paradigm has been proposed to alleviate the management complexity currently found in wired networks. A simple representation of an SDN architecture is shown in Fig. 1. SDN breaks the vertical integration of the network by separating it into application, control and data planes. The application plane hosts user applications and programs that explicitly, directly, and programmatically convey the network requirements and desired network behaviour to the SDN controller. The control plane consists of a logically centralised entity that processes requirements from the application plane, deploys them in the data plane, and provides the application plane with a global view of the network. The data plane is the network infrastructure, consisting of networking devices that become forwarding devices with no intelligence.

FIGURE 2. Simple representation of an SDWSN architecture [24].

The introduction of SDN abstractions into the WSN forms what we call SDWSNs. The SDWSN paradigm emerges to solve the management complexity in current WSN deployments. This new paradigm allows new functionalities to be added to the network, no differently from adding another application to the control plane [10]. In large WSNs, with thousands of sensor nodes, it is critical to consider and implement management solutions [25]. SDWSNs centralise the network intelligence in an SDWSN controller, leaving sensor nodes to act as simple forwarding devices (see Fig. 2). Sensor nodes forward packets to the destination based on a reprogrammable forwarding table managed by the controller (a minimal sketch of this match-action abstraction is given below). The SDWSN controller leverages global information about the network (e.g., network statistics, energy levels, interference) to devise new, powerful and intelligent protocols that achieve the desired network performance. Although SDWSNs have been demonstrated to improve network performance compared with traditional WSNs, there is a need for novel architectures that make the most of the global view of network assets and balance the expenditure of network resources when making the WSN programmable. ML-SDWSN has been devised as a potential network architecture to exploit the centralised WSN asset information and enhance overall network performance. The ML component has at hand real-time data including network statistics (Received Signal Strength Indicator (RSSI), Packet Delivery Ratio (PDR), etc.), network resources (sensor nodes' remaining energy, application load, etc.), and network topology. This makes it the ideal environment in which to deploy ML algorithms tailored to user and application requirements. ML-SDWSN is also seen as a prominent solution to alleviate the communication overhead introduced, thus making the most of SDWSNs. ML-SDWSN is discussed in detail in Section VI.

A. CONTRIBUTION
Despite the diverse benefits SDN brings to WSNs, without proper countermeasures to minimise the management overhead it introduces, it can negatively impact the network performance of the WSN and lead to high energy costs.
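To make the controller/data-plane split described above concrete, the sketch below models the reprogrammable forwarding table as a simple match-action structure that a controller could install on a sensor node. This is a conceptual illustration only; it is not any specific SDWSN protocol (real rule formats, e.g., in SDN-WISE or OpenFlow, are considerably richer), and all names here are ours.

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict          # e.g. {"dst": 7}: packet fields an incoming packet must match
    action: str          # "forward", "drop", or "to_controller"
    next_hop: int | None = None

@dataclass
class SensorNode:
    """Data-plane device: no routing intelligence, only installed rules."""
    node_id: int
    table: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        # Invoked by the (logically centralised) SDWSN controller.
        self.table.append(rule)

    def handle(self, packet: dict) -> str:
        for rule in self.table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                if rule.action == "forward":
                    return f"forward to node {rule.next_hop}"
                return rule.action
        # Table miss: ask the controller for instructions (control overhead).
        return "to_controller"

# The controller, holding the global view, computes routes and installs rules.
node = SensorNode(node_id=3)
node.install(FlowRule(match={"dst": 7}, action="forward", next_hop=5))
print(node.handle({"src": 1, "dst": 7}))   # -> forward to node 5
print(node.handle({"src": 1, "dst": 9}))   # -> to_controller (table miss)
```

The table-miss branch is where the management overhead discussed in this survey originates: every miss costs a round trip to the controller, which is precisely the cost that ML-assisted rule placement aims to reduce.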
This paper conducts an extensive literature review by exploring relevant research articles on SDWSN and ML-SDWSN approaches. Research works that have reviewed papers on SDN are listed in Table 1. Topics in these surveys include SDN basics, SDN for IoT, SDWSNs, SDN for Smart Grids (SG), SDN for underwater WSNs (UWSNs), and ML-SDWSNs. As can be seen from the table, existing surveys have paid little attention to the use of ML techniques in SDWSNs. In particular, the article in [12], published in 2017, briefly discusses the use of ML algorithms in SDN, while SDWSN papers were not surveyed; it mostly covers the use of ML algorithms in SDN in general, and papers that take advantage of the global view of the controller in SDWSNs to improve network performance were not discussed. The survey in [18], published in 2019, briefly reviews papers that use Artificial Intelligence (AI) for intrusion detection in SDWSNs; it mainly discusses how the security vulnerabilities of SDWSNs can be counteracted by combining cryptography schemes and AI techniques. A survey paper on ML-WSNs published in 2020 is presented in [21]; it mainly focuses on the use of Deep Learning (DL) in WSNs and also discusses the energy expenditure of the ML training phase. The survey papers in [5], [19] discuss the design challenges of WSNs due to their inherently dynamic behaviour, and the power of ML techniques to improve the ability of WSNs to adapt to the changing behaviour of their surrounding environment. Due to the distributed nature of traditional WSNs, ML techniques are laborious to apply when operating and controlling traditional WSNs. However, the design concepts of SDN (e.g., a centralised architecture) form the perfect environment in which to apply ML techniques more easily. The survey paper in [16], published in 2018, principally focuses on how ML techniques are applied to SDN architectures, mainly for traffic classification, routing, Quality of Service (QoS) prediction, security and resource management; the paper also briefly discusses SDWSNs and directions for the use of ML in WSNs. The survey in [20], published in 2019, presents network applications that combine SDN and ML concepts; it provides thorough discussions of ML methods and SDN-concept networks and their applications, and gives directions on the future of ML in SDN.

In contrast, the contributions of this survey article are as follows.
1) We first provide a comprehensive background on WSNs, including the evolution of Microcontroller Unit (MCU)-based sensor nodes, networking and standards, and the challenges of WSNs.
2) We provide a systematic review of SDWSN proposals that have not previously been covered by other survey papers. We categorise them into general frameworks; proposals that seek to improve Key Performance Indicators (KPIs) (QoS-related works); research works that reprogram both the hardware and software of sensor nodes (fully programmable mechanisms); scientific articles that leverage the global view of the controller to devise new routing and management protocols (network topology and management proposals); and research papers that seek to solve the controller placement problem (controller placement works).
3) The nature of the centralised SDWSN architecture opens up new research opportunities to experiment with AI/ML algorithms embedded in the SDWSN controller to improve overall WSN performance. Therefore, we perform a systematic review of research papers that have combined ML and SDWSN research efforts to improve network performance.
4) We discuss open issues and research directions in SDWSNs. This review will serve to produce a better understanding of, and clarify, the current status and the potential research directions regarding the open issues of SDWSNs.

To the best of our knowledge, there does not exist a survey that covers in depth the state of the art of ML techniques used in SDWSNs. Fig. 3 provides a visual representation of the organisation of this paper. Section II provides a detailed background on WSNs, including networking standards, embedded operating systems and challenges. Section III provides background on SDN and SDWSNs and presents the early adopters of SDWSNs. Section IV presents the current status of research works that have expanded the state of the art of SDWSNs. Section V presents an overview of the most commonly used ML algorithms in supervised, unsupervised, semi-supervised, reinforcement and deep learning. Section VI presents a survey of research efforts that have applied ML techniques in SDWSNs. Section VII summarises both SDWSN and ML-SDWSN research works. Section VIII discusses major challenges and future directions for both SDWSNs and ML-SDWSNs. Finally, in Section IX conclusions are drawn. Acronyms used throughout this paper are summarised in Table 2.

II. BACKGROUND
The introduction of WSNs has opened new opportunities for monitoring applications. These can be summarised as follows.
• Home monitoring: This is an example of a Wireless Sensor and Actuator Network (WSAN). This kind of network can collect sensed data such as temperature, humidity and the states of other sensors such as magnetic sensors or switches, and is also capable of changing the environment and physical world through actuators such as servos, motors or switches.
• Environmental monitoring: The goal of this WSN is to keep the sink informed of any environmental changes at the deployed location and its surroundings. This term has evolved to cover many monitoring applications of the environment, such as sea, volcano and forest monitoring.
• Event detection: Thousands of sensor nodes can be deployed in a specific field to detect early hazards to the ecosystem. For example, sensor nodes embedded with temperature, humidity and gas sensors can be used to detect the presence of fire. Early detection of hazards can prevent the loss of lives and valuable resources.
• Physical variable monitoring: WSNs can also be used for simple tasks such as data-logging a physical variable of interest, from keeping track of simple things like the temperature of a refrigerator all the way up to monitoring the water level and flow of a nuclear power plant [39].

As mentioned above, the use of WSNs covers a range of applications that enable the integration of the physical world into the computer-based world, resulting in benefits and improvements in our quality of life. A wide variety of wireless sensor devices have also been developed to enable wireless connectivity and sensing capabilities in tiny objects; the historically most popular WSN platforms available on the market are shown in Table 3.

A. NETWORKING AND STANDARDS FOR WSNs
Networking technology sets the form of communication between sensor nodes. Here, the most widely used communication protocols in WSNs are presented. Other forms of wireless communication are surveyed in [40]. The most commonly used communication transceiver for WSNs is the low-power radio and the most popular frequency band is 2.4 GHz, as shown in Table 3.
2.4 GHz radios are popular, low-cost and well-supported, and the frequency band is standardised in IEEE 802.15.4 [41]. Among the communication protocols used in this frequency band are ZigBee [42], Bluetooth [43], [44], and IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) [45].

1) ZIGBEE
ZigBee was originally designed by the ZigBee Alliance under the specifications of the IEEE 802.15.4 standard [42]. Among its features are low power consumption and support for different network topologies such as mesh, star and tree, which make ZigBee a good candidate for the Industrial Internet of Things (IIoT). However, ZigBee does not meet all the requirements of industrial applications, as it cannot serve a large number of sensor nodes and suffers from interference [43], [44].

2) BLUETOOTH
Bluetooth was originally designed to achieve medium data rates over short distances (typically up to 10 m). Due to power consumption concerns, the Bluetooth Low Energy (BLE) specification was proposed. BLE was conceived for embedded systems with low-power requirements and limited processing power. This extension provides up to 1 Mbps over a 5-10 m range [43], [44].

3) 6LoWPAN
6LoWPAN was established by the Internet Engineering Task Force (IETF) [45]. It was conceived under the premise that the Internet Protocol should be applied even to the smallest devices, and that resource-constrained embedded systems should be able to participate in the IoT. Therefore, 6LoWPAN is a lightweight protocol that uses an adaptation layer, with a set of functions, to enable transmission of Internet Protocol version 6 (IPv6) packets over IEEE 802.15.4 radios. The great advantage of 6LoWPAN is that it enables direct communication with other Internet Protocol (IP) devices locally or via an IP network.

There also exist other communication methods that are only used by a subset of sensor nodes in WSNs. (i) WiFi is a wireless networking technology based on the IEEE 802.11 family of standards [46]. It is commonly used for Local Area Networks (LANs) and to provide wireless high-speed Internet access. It is common to find WiFi modules in gateways or border routers to enable Internet connectivity for WSNs. Sensor nodes rarely use WiFi modules, as WiFi imposes high power requirements and shortens the network lifetime. (ii) General Packet Radio Service (GPRS) was introduced as a wireless packet-switched service that promises data rates from 56 to 114 kbps [47]. GPRS offers a best-effort service that is often used in gateways to communicate with an online monitoring centre. Similar to WiFi, GPRS was not designed for WSN applications, as it also imposes higher power requirements than IEEE 802.15.4. (iii) Long Range Wide Area Network (LoRaWAN) is a technology that enables long-range transmissions (more than 10 km) with low power consumption. LoRaWAN is a cloud-based Media Access Control (MAC) protocol that uses Long Range (LoRa) in its physical layer. Features of LoRaWAN include low bandwidth (250 bps up to 11 kbps), long range, low cost and low power consumption [48]. Thus, LoRaWAN deployments make more sense in applications that use small payloads and transmit data a few times a day over long distances than having hundreds of IEEE 802.15.4 radios interconnected to cover the same area, which would result in increased energy consumption and management complexity.

Overall, there is no such thing as the best communication technology for WSNs, as the optimum communication protocol largely depends on the application.
For home monitoring or smart homes, ZigBee and 6LoWPAN can be the appropriate technologies, as they provide good data rates and support multiple network topologies. For industrial monitoring, 6LoWPAN or LoRaWAN are good solutions; however, 6LoWPAN works better when frequent measurements are needed, while LoRaWAN fits better for large fields, multiple sources of interference, or infrequent interaction with the gateway.

B. EMBEDDED OPERATING SYSTEM (EOS)
Due to the limited resources available, sensor nodes require a lightweight Operating System (OS) [9], [49]. The two Embedded Operating Systems (EOSs) that have received the most attention from the SDWSN research community so far are the following. (i) Contiki is an open-source OS for low-power IoT networks, designed for resource-constrained sensor nodes [50]. At its core it uses the C language and it has three network stacks: RIME, Internet Protocol version 4 (IPv4) and IPv6. Contiki-NG [51] has been presented as a new version of the Contiki project. Contiki-NG started as a fork of the Contiki project and preserves part of its original characteristics. It provides an overall clean-up, updated support for IPv6 over the TSCH mode of IEEE 802.15.4e (6TiSCH), a streamlined RPL implementation, and other features for resource-constrained IoT devices. (ii) TinyOS is also designed for resource-constrained sensor nodes, but at its core it uses the nesC programming language [52] and supports IPv6 in its protocol stack, namely Berkeley Low-power IP (BLIP).

There exist some EOSs that have not yet been used in SDWSNs. FreeRTOS [53] is an open-source real-time OS kernel for NESs, designed to be small and simple. Its footprint can be as low as 9 KB and it supports over 40 MCU architectures. Key features include a small memory footprint, low overhead, and very fast execution. Zephyr [54] is a stable and open-source real-time OS for resource-constrained embedded systems. It supports multitasking, multiple network stacks, and multiple architectures. One of the network functions provided by Zephyr is a dual stack that enables simultaneous use of IPv4 and IPv6. OpenWSN is not an operating system but an open-source implementation of a fully standards-based protocol stack for short-range networks, such as the IEEE 802.15.4e Time-Slotted Channel Hopping standard [55]. IEEE 802.15.4e, along with low-power IoT protocols such as 6LoWPAN, the Routing Protocol for Low-Power and Lossy Networks (RPL) and CoAP, allows ultra-low-power and highly reliable mesh networks that are fully merged into the Internet. RIOT is presented as an open-source real-time multi-threading OS that supports a wide range of IoT devices such as low-power sensor boards and microcontrollers, including 8-, 16- and 32-bit architectures, that are normally used in IoT networks [56]. The RIOT design principle is to be energy-efficient and reliable while supporting real-time and small-memory applications. It also provides API access that is independent of the hardware. Multiple open standard protocols have been ported to RIOT, such as the IPv6 network protocol stack that includes the IETF protocols for connecting constrained systems to the Internet (6LoWPAN, IPv6, RPL, Transmission Control Protocol (TCP) and User Datagram Protocol (UDP)).

A brief comparison of the above-mentioned operating systems is presented in Table 4. The table compares the MCUs supported, the memory footprint, and support for RPL, UDP and TCP.
Although the memory footprint is platform-dependent, the memory values given in the table can be used as references to gauge how low the memory footprint can be for the specified operating system to run. It shows that Contiki, Contiki-NG, OpenWSN, RIOT and Zephyr are the only operating systems that provide full support for TCP over 6LoWPAN, and that FreeRTOS and Contiki support the largest range of MCUs. TinyOS currently supports 8- and 16-bit CPU architectures and its support for TCP is still in the experimental phase, which limits sensor nodes in supporting higher application protocols such as HTTP.

C. CHALLENGES IN WSNs
The challenges associated with WSNs and the IoT can be divided into three different categories: sensor node hardware, heterogeneity and inflexibility.

1) SENSOR NODE HARDWARE
As mentioned before, the main challenges presented in sensor nodes are due to their constrained resources.
• Energy source: because sensor nodes communicate wirelessly, most applications require them to operate in harsh environments or areas with limited access [57], [58]. Thus, it is envisaged that sensor nodes operate without any battery renewal or human intervention for a long time. The power source and individual energy consumption are vital for the Network Lifetime (NL) of WSNs.
• Memory size: the memory of a sensor node stores information regarding the protocol stack and the applications running on the node. The integration of the protocol stack, routing protocols and applications into the node poses a challenge when adding new features to the already constrained memory. The memory has to be managed effectively to ensure that all applications and program code run efficiently and that the node can host new features as required.
• Computational speed: the nature of WSNs is to use low-power microcontrollers, which work well for non-resource-intensive tasks such as sensing and radio communications. The use of more powerful processing units directly affects the sensor node's size, power consumption and price. However, the use of low-power microcontrollers limits the sensor node when executing tasks of significantly different intensities, as occurs with most IP stacks, which require a scheduler and run on top of the firmware. On top of this, sensor nodes, considered to be autonomous systems, use complex routing algorithms that add a processing cost to the already constrained device.
• Communication bandwidth: when sensor nodes need to transmit in real time, bandwidth limitations impose restrictions on how many sensor nodes can transmit and the rate at which they can post their data in real time [59]. Furthermore, wireless communication can take up to 75% of the total energy in some applications [60]. The communications between sensor nodes have to be managed in a way that sensor nodes transmit their data reliably and the energy consumption does not compromise the NL.

2) HETEROGENEITY
The IoT ecosystem enables the interconnection of a large number of heterogeneous devices, which creates new user applications that improve our quality of life. However, engineers working on the development of new applications face challenges when setting up a network of heterogeneous devices and systems. These heterogeneous devices involve a variety of networking hardware, manufacturers and software. The wide variety of networking connectivity technologies, protocols and communication methods can present difficulties to engineers and developers when implementing new network designs or protocols.
Thus, the IoT must bring all heterogeneous devices together seamlessly to provide services to users.

3) INFLEXIBILITY
Since the IoT enables the interconnection of objects to the Internet, the number of connected devices increases dramatically. WSN technology provides the IoT with new sensing capabilities, integrating the physical world into the digital world. State-of-the-art WSNs are deployed with inflexible firmware: after deployment, any modification to the firmware (e.g., the tasks or behaviour of sensor nodes) requires an on-site visit or Over-The-Air (OTA) programming technology to reprogram the sensor nodes' firmware. For an on-site visit, such as the example given in [4] of a WSN comprising 100 sensor nodes that measure pollution in a lake, a demand for task reprogramming would require taking the sensor nodes out of the lake and reprogramming their firmware, which is not practical and increases the management costs. Whereas OTA permits firmware updates without taking sensor nodes out of the environment and without interrupting their normal operation, the time required to update an entire WSN is an issue in time-sensitive applications: a smart building application with 69 end devices needs on average seven hours to finish transferring a 125 KB image file to all sensor nodes [61].

Overall, WSNs enable a range of applications from home monitoring to hazard detection in remote areas with difficult access and strict operational requirements such as NL. Wireless sensor nodes are designed to be small, cheap and wireless, so they can be easily embedded even into the smallest things and used en masse in widely physically distributed applications. Such design requirements impose several constraints on the power supply, memory size, processing power and communication bandwidth, making smart management of these resources a high priority in the design of practical and cost-efficient WSN applications. The WSN has to work seamlessly with other network devices independently of the vendor who produced them. Furthermore, it must also manage limited resources and provide easy updates of real-time applications. Hence, there is a genuine, real-world need for innovative research efforts into the smart management of resources in wireless sensor networks. Solutions should be independent of the practical application, and the behaviour of sensor nodes and the software running on them should be easily modifiable. Therefore, there is a need to tackle the above-mentioned challenges inherent to WSNs and the IoT. SDN has been proposed as a prospective solution to overcome these challenges.

III. SOFTWARE-DEFINED WIRELESS SENSOR NETWORK (SDWSN)
The SDWSN paradigm is inspired by SDN, a network management approach that enables the network to be reconfigured dynamically and programmatically, and which is introduced below.

A. SOFTWARE-DEFINED NETWORKING
SDN is a network paradigm that addresses the limitations of current wired networks. It first breaks the vertical integration of the network by separating the control plane, or ''control logic'', from the underlying networking devices such as routers and switches. The networking elements then become forwarding devices with little or no intelligence. The intelligence is instead logically centralised in a controller, facilitating policy enforcement and network reconfiguration [7]. A simple representation of an SDN architecture is shown in Fig. 1.
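To make the match-action idea behind SDN forwarding concrete, the following minimal Python sketch shows how a forwarding device might apply controller-installed rules; the field names, rule format and table-miss behaviour are illustrative assumptions of ours, not the OpenFlow wire format.

def lookup(flow_table, packet):
    # Return the action of the first rule whose match fields all equal
    # the corresponding packet fields; a miss is deferred to the controller.
    for rule in flow_table:
        if all(packet.get(field) == value
               for field, value in rule["match"].items()):
            return rule["action"]
    return ("send_to_controller",)

# Rules as the controller might install them over the southbound API.
flow_table = [
    {"match": {"dst": "sink"}, "action": ("forward", "node_7")},
    {"match": {"traffic": "ctrl"}, "action": ("forward", "node_2")},
]

packet = {"src": "node_12", "dst": "sink", "traffic": "data"}
print(lookup(flow_table, packet))  # -> ('forward', 'node_7')

The design point the sketch illustrates is that the matching logic is fixed while the rules are data, so the controller can reprogram forwarding behaviour without touching the device firmware.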
SDN is an approach to network management that enables dynamic network configuration, improves network performance and oversees the network status. SDN is currently widely used in wired networks, where architectures are decentralised and complex, and emerging network applications require more flexibility and easy troubleshooting. Although SDN centralises the network intelligence in the control plane, this does not necessarily mean that the data plane depends on a single controller: the control plane can be built upon multiple controllers which are physically distributed but logically centralised.

Apart from the three SDN layers (the data or infrastructure plane, the control plane and the application plane), multiple Application Programming Interfaces (APIs) also exist: northbound, southbound, eastbound, and westbound. The northbound API enables communication between the application and control planes; using this API, the control plane provides a global view of the network to the application plane. The southbound API is the communication channel between the data and control planes. This API is used by the controller to deploy different policies and network management configurations in the devices of the data plane, and network devices of the data plane report network status to controllers over it. The eastbound and westbound APIs are responsible for orchestrating the communication channel between multiple controllers, so they can make coordinated decisions [11]. The most well-known protocol used in the southbound API is OpenFlow [62].

Researchers have recently applied SDN concepts to WSNs to perform network management, policy enforcement and network reconfiguration functions. The synergy between WSNs and SDN forms the so-called SDWSN paradigm.

B. SOFTWARE-DEFINED WIRELESS SENSOR NETWORK PARADIGM
The SDWSN paradigm emerges to solve the management complexity currently found in state-of-the-art WSNs. This new paradigm allows adding new functionalities into the network, no different from adding another application to the control plane [10]. In large WSNs, with thousands of sensor nodes, it is critical to consider and implement management solutions [25]. A simple representation of an SDWSN architecture is shown in Fig. 2. The SDWSN architecture differs from the SDN architecture mainly in the data plane, which is based upon wireless sensor nodes, i.e., NESs with constrained resources. SDWSNs centralise the network intelligence in an SDWSN controller, leaving sensor nodes acting as simple forwarding devices that forward packets to the destination based upon the reprogrammable forwarding table managed by the controller.

1) CHALLENGES OF SDWSNs
The main challenges of SDWSN architectures are the shared communication medium and the constrained resources. SDN was initially conceived for wired networks, where control packets typically flow through a dedicated communication channel, whereas in WSNs the control packets flow through the same medium as data packets. Since control packets share the bandwidth with data packets, the bandwidth has to be managed smartly to prevent congestion in the SDWSN. The flexibility of changing the behaviour of sensor nodes implies the introduction of control overhead in the network, which may lead to increased energy consumption and a decrease in the PDR, a KPI that discloses the amount of data delivered successfully.
The principal requirement of most WSN applications is to prolong the NL; thus, the constrained resources of sensor nodes have to be managed in a way that the NL is not drastically reduced. Control packets flowing in the network increase network energy consumption; therefore, novel control overhead reduction techniques are required to minimise the amount of control overhead and the interaction between sensor nodes and the controller, such as the work presented in [63]. Readers interested in a detailed background on the SDWSN paradigm, including a comprehensive analysis of challenges, architectures, benefits and design requirements, can refer to [9], [10], [15].

C. PIONEERS OF SDWSNs
As the SDWSN paradigm is still in its infancy, few researchers have started exploring potential architectures for SDWSNs. SDN abstractions were first introduced into WSNs by two early adopters: SOF [64] and SDWN [65].

1) SENSOR OpenFlow (SOF)
Luo et al. [64] introduced SOF as a southbound API to facilitate the communication between the control and data planes. The main objective is to make the WSN infrastructure reprogrammable by customising the flow tables. SOF is motivated by the standard SDN protocol for wired networks, namely OpenFlow [62]. Since WSNs are usually thought of as attribute-based and data-centric networks, in comparison to conventional address-centric networks, SOF offers two approaches for flow creation: (i) compact network-unique addresses (ZigBee addressing) and concatenated attribute-value pairs that route packets based on data attributes, and (ii) the use of IP in WSNs, for which two IP stacks are suggested: micro Internet Protocol (µIP) or micro Internet Protocol version 6 (µIPv6) [50], and BLIP [52]. In comparison to OpenFlow, SOF provides in-network processing functionalities, but there is no evidence of any improvement in network performance with the proposed protocol. The paper mainly presents SOF as the first research effort that synergises SDN and WSNs; therefore, it lacks specification and detail.

2) SDWN
Costanzo et al. [65] introduced SDWN. Their approach differs from SOF in several ways: (i) it proposes a southbound API, namely a flow table, (ii) it states the requirements for the SDWN, such as support for duty cycling and in-network data aggregation, to minimise the overall energy expenditure of the network, (iii) it presents the protocol architectures for the generic and sink nodes, and (iv) it describes the packet format for all packets flowing in the network. Generic nodes are sensor devices in the data plane that forward packets as instructed by the centralised controller; the sink node is the SDN controller, which defines the rules for forwarding packets. Their paper analyses the benefits of SDN in WSNs with emphasis on Wireless Personal Area Networks (WPANs).

A brief comparison of the two early adopters is shown in Table 5. SOF and SDWN are considered the first step towards reprogrammable WSNs, and since then multiple research papers have used them as the foundation for new research works.

IV. EXISTING SDWSN PROPOSALS
To tackle the shortcomings of SOF and SDWN, including the lack of performance evaluation, several authors have proposed SDWSN approaches that aim to improve the overall SDWSN architecture design and performance. This section provides a systematic review of research works found in the current state of the art of SDWSNs. We group them into five different categories.
• General frameworks: This category contains SDWSN research papers that have been proposed to advance the state of the art of SDWSNs but lack any form of evaluation.
• QoS-related works: Here, we group research works that guarantee a certain level of service. These works aim to improve KPIs, including energy consumption, control overhead, delay, traffic congestion, packet loss, throughput, etc.
• Fully reprogrammable mechanisms: SDN provides the flexibility to reprogram individual sensor node functionalities or behaviour; however, some research works extend this to a fully programmable sensor node, including both hardware and software.
• Network topology and management proposals: This category presents research works that leverage the global view of the controller to devise new topology and management protocols.
• Controller placement works: Research works that seek to solve the controller placement problem in SDWSNs are grouped in this category.

A. GENERAL FRAMEWORKS
It is worth mentioning that the works below are general frameworks that represent the first step towards synergising the research efforts of SDN and WSNs, but they lack performance evaluation. However, some authors have extended these frameworks into mature and tested frameworks, which we discuss later in this review. The previously discussed research works SOF [64] and SDWN [65] fit in this category. A brief comparison of general frameworks is shown in Table 6. The table compares general frameworks stating their advantages and disadvantages, the EOS used, the type of controller architecture, their availability to the research and professional community, and the surveys where they have been previously discussed. We can see that they are also the first research works towards SDN-based WSNs, as they seek to provide a practical, fully functional SDWSN architecture and implementation, but with little or no evidence of evaluation. These research works have evolved and been used by the research community to further investigate SDWSNs.

B. QoS-RELATED WORKS
1) ENERGY CONSUMPTION
This is a well-studied metric in WSNs. Sensor nodes are usually deployed in harsh environments where physical access to them is difficult; therefore, WSNs must manage their energy resources smartly to achieve the longest possible lifetime. Table 7 presents and compares research works currently found in the literature whose main objective is to reduce energy consumption in WSNs by employing SDN. Works that fit in this category but have been previously discussed in other SDWSN survey papers are [58] (discussed in [14], [15]), [78] (discussed in [9], [15]), [83] (discussed in [8], [9], [11]), [84] (discussed in [15]), and [85] (discussed in [9], [14]).

TABLE 6. Report on the advantages and disadvantages of general SDWSN frameworks including the type of operating system used, control plane architecture, code availability to the public, and references for a thorough discussion. The checkmark (✓) and cross (✗) symbols depict whether the code is available to the public or not. The dash (-) symbol indicates that no information was found for the specified cell.

We can see that new research works consider SDN as a viable solution to improve energy consumption in traditional wireless sensor-based networks; however, a common drawback is the lack of demonstrated improvement over traditional WSNs and of viability in real-world deployments, i.e.,
the study of control overhead, the WSN architecture setup including all protocol stack layers, and computational complexities. They also lack evaluation against other SDWSN protocols, which can be closely related to the limited number of publicly available SDWSN approaches. Moreover, the development of energy consumption algorithms involves a large number of mathematical models, and their evaluation is frequently made using mathematical tools rather than network simulators. Network simulators allow capturing all the physical events happening in a real network, e.g., collisions, packet loss, etc., including events at the hardware level.

2) SECURITY
Security is a concern in IoT networks, and also in centralised architectures such as SDN. This is especially true in SDWSN architectures with a single controller, whereby an attacker may compromise the entire network by targeting it. Also, securing a large WSN is a highly energy-intensive task that can lead to sensor nodes depleting their energy faster. However, SDWSN permits the controller to build a global view of the network, which helps in identifying malicious devices and activities. Table 8 details research works that aim to identify and address security issues in SDWSNs. Cybersecurity in the IoT is surveyed in [95].

Security is a critical aspect to consider when designing low-power IoT solutions. As seen from Table 8, security in SDWSNs has not received proper attention, as much of the research effort focuses on discussing security through survey papers rather than designing and implementing security schemes in SDWSNs. Also, most research works discuss security from the SDN and WSN perspectives, where some of these concepts can be easily adapted, whereas others might be unfeasible to apply. In WSNs, security solutions are mainly implemented at the sensor level, where resources are scarce; therefore, such protocols, which tend to be energy-hungry, are not practical. Security aspects in SDWSNs can be addressed individually at each API. At the northbound API, a misconfiguration can open up new channels of attack, execute a command that leads to abnormal behaviour of the target application, or expose the information flowing between the controller and the application [96]. At the southbound API, most WSN applications share raw environmental data that can be easily secured centrally at the controller. However, if sensitive data need to be secured at the data plane level, then secure communication schemes such as SSL/TLS should be considered, at the expense of an increase in energy consumption. At the west- and east-bound APIs, we find networked devices with ample resources, e.g., controllers; therefore, secure communication channels can be easily created using traditional security schemes. However, this needs to be studied in detail. Readers interested in an extended discussion of SDN and WSN security from the SDWSN perspective can refer to [10], whereas SDN security is discussed in [97].

3) DELAY
This metric is of great importance in sensitive applications such as health monitoring, target tracking, control systems and fire hazard monitoring, which require prompt reactions to prevent loss of lives and valuable resources. Table 9 compares research works that strive to reduce the delay in SDWSNs. We can see that few papers address the delay in SDWSNs directly; it is addressed indirectly in other works.
Overall, it has been demonstrated that SDN-based WSNs have the potential to reduce the network delay in comparison with traditional WSNs, as most of the processing has been removed from the sensor nodes. However, it has also been demonstrated that SDWSN works better for static or quasi-static WSN deployments than in dynamic environments, owing to the increased overhead. There is a call for research efforts to make the most of SDWSNs and take advantage of the global view of the network to create new approaches that minimise the delay even in dynamic environments while maintaining a low control overhead.

4) RELIABILITY
This metric ensures that the collected data is delivered correctly to the receiver. Table 10 compares research works that aim to improve the reliability of SDWSNs. Similar to the network delay, network reliability has also been addressed indirectly in other research works. SDWSN architectures grant centralised network monitoring to anticipate potential issues that may negatively impact the network reliability. We can see that an increase in network reliability compromises the performance of other key network metrics: there exists a trade-off between network reliability and other KPIs (this also applies to traditional WSNs) such as energy consumption, control overhead, delay, etc. This has to be studied in detail to evaluate and quantify the impact on network performance when increasing network reliability. However, it is expected that centralised architectures such as SDWSN bring more advantages over traditional WSNs in devising new innovative algorithms to predict network performance indicators and make better network decisions.

5) CONTROL OVERHEAD
Since control packets in SDWSNs share the same communication medium with data packets, it is of great importance to maintain a low level of control packets to avoid negatively impacting KPIs such as the residual energy of sensor nodes and the PDR. Many research works [66], [102], [103] have indirectly addressed this metric. Control overhead is a key performance metric to consider when designing SDN-based WSNs. From Table 11, we can see that there exist multiple approaches to minimise the control overhead. They range from architectural designs, such as cluster-based architectures, intra-cluster routing and SDN control routing, to techniques that avoid extra control overhead, such as route checksums, FSMs, threshold functions, etc. The best technique for control overhead reduction is closely related to the application requirements, as there exist evident performance trade-offs between them. The overall benefit that SDN brings to WSNs can be overshadowed by the unmanageable control overhead that can be generated if proper design measures are not put in place.

C. FULLY REPROGRAMMABLE MECHANISMS
Other research works have considered alternative architectures where the WSN is fully reprogrammable, including both software and hardware. Portilla et al. [109] proposed a modular architecture for wireless sensor nodes using a microcontroller and a Field-Programmable Gate Array (FPGA) for the processing layer, and a Bluetooth radio for communications. The microcontroller manages the radio communications and the analog and digital sensors, whereas the FPGA processes complex operations. Natheswaran and Athisha [110] proposed a remotely reconfigurable wireless sensor node with a soft processor, i.e., a microprocessor core that can be implemented using logic synthesis. Miyazaki et al.
[111] proposed an SDWSN that uses a role generation and delivery system in a reconfigurable WSN. They used a combination of FPGA and MCU to avoid overloading the MCU: the MCU handles the network behaviour while the FPGA performs energy-intensive functions. Although these works bring the flexibility to reconfigure sensor nodes, the utilisation of reprogrammable hardware increases the design complexity and cost. Besides, energy consumption in FPGAs is an issue, as discussed in [112]. However, recent advances in FPGAs with ultra-low-power consumption characteristics have extended their use to WSNs [113]-[115]. To achieve the full promise of SDWSNs, wireless sensor nodes should allow top-layer applications to reconfigure their functionalities by executing different programs. In this way, sensor nodes can be seen as small-scale computers with multiple sensing capabilities.

D. NETWORK TOPOLOGY AND MANAGEMENT PROPOSALS
Network management is complex and challenging. Some of its functionalities include network provisioning, configuration, and maintenance [116]. The implementation of management tasks can lead to a steep increase in the use of sensor resources. One of the main goals of SDN is to facilitate network management, and it is envisaged that SDN architectures can help make smarter decisions and improve the management of vital WSN resources. From Table 12, we can see that implementing network management solutions implies an increase in control overhead. For example, add-on systems on top of 6LoWPAN grant a global view of network resources, but large and complex processing functions still remain in the protocol stack. Also, some works lack a control overhead analysis and a study of the implications for network performance of making the WSN manageable using SDN concepts.

E. CONTROLLER PLACEMENT WORKS
The placement of the controller directly influences the WSN performance. Among the most important performance metrics to optimise are energy consumption and NL. The SDN controller can be placed in such a way that it minimises the energy consumption of sensor nodes; however, this is not always the optimal solution to prolong the NL, because the solution to this optimisation problem can lie in a low-density area, resulting in inefficient resource management in the neighbourhood of the controller [130]. In that case, sensor nodes that lie in the proximity of the controller drain their energy first, resulting in a shorter NL. Table 13 presents research works that aim to solve the controller placement problem to improve network performance in SDWSNs. As we can see, controller placement in SDWSNs has not been widely studied in the current state of the art; this is largely because SDWSN is still at the proof-of-concept stage, where most research efforts lie in its conceptualisation. Besides, although controller placement has been extensively studied in SDN, it should be studied in detail for SDWSNs, as they impose different resource requirements. A survey on controller placement in SDN can be found in [131], [132], and a study on performance evaluation in [133].

V. MACHINE LEARNING OVERVIEW
ML is the part of AI that studies computer algorithms which mimic human learning and gradually improve their accuracy. ML is a growing field that has caught tremendous attention among IoT stakeholders. ML algorithms are trained to perform prediction and classification tasks, uncovering vital characteristics within the data.
Typical tasks involved in the solution of an ML problem are the following. (i) Data collection: this usually requires a considerable amount of time to complete. It can consist of data acquisition tasks, data labelling and adding new data to already existing datasets. (ii) Data preparation: this is a key step to process raw data and turn it into meaningful and clean data before any training is performed (training is explained below). Feature engineering is often used to make the collected data better suited to the problem at hand; tasks include data normalisation, dealing with missing values, data transformation, etc. (iii) Choosing a model: this step consists of selecting the right model for the problem. There exist multiple ML models for different purposes; some are introduced in this section. (iv) Training: training the model is the bulk task in ML. This is an iterative task that uses the training set to improve the prediction of the model at each cycle; supervised learning, for example, uses labelled sample data during training. The above are generic steps to follow when solving ML problems; however, some ML techniques such as AutoML and DL automate many of these tasks.

This section briefly introduces the reader to the most widely used ML techniques currently found in the state of the art of ML. Readers interested in thorough discussions of ML theory should refer to [134]. ML techniques can be grouped into four different groups: supervised, unsupervised, semi-supervised and Reinforcement Learning (RL). Given its current widespread usage, in a separate subsection we introduce DL, which can be employed in the supervised, unsupervised and semi-supervised paradigms.

A. SUPERVISED LEARNING
Supervised learning uses a set of input data X and a set of labels Y. For every sample x a label y has been assigned, where x ∈ X and y ∈ Y, and these can be represented as pairs (x_1, y_1), ..., (x_n, y_n). The goal of supervised learning is to learn a mapping function that matches a given input x_{n+1} to a label y_i. Since the labels in the training set are known, this set of algorithms is called supervised learning. Supervised learning imposes a heavy burden when it comes to data labelling, but there are efforts to reduce this burden by relying, for instance, on weak supervision. This set of algorithms can be further classified into regression and classification, depending on the type of output label. Regression algorithms are used to predict continuous values such as salary, cost, etc., whereas classification algorithms are used to assign a class label to a given input. Among the most popular supervised learning algorithms we can find K-Nearest Neighbour (k-NN), Naive Bayes, Decision Tree (DT), Neural Networks (NNs), and Support Vector Machines (SVMs), which are discussed in [19], [21].

B. UNSUPERVISED LEARNING
In comparison with supervised learning, unsupervised learning algorithms rely only on the input data X. The input data is presented to the algorithm without any tags or labels (unlabelled examples). The goal of unsupervised learning is to create a model that automatically learns from the sample data and identifies patterns (features) in order to classify the data into groups. Data points within a group share similar characteristics (e.g., highest energy level, malicious nodes, etc.). Unsupervised learning aims to infer the a priori probability distribution P(x), whereas supervised learning aims to infer the conditional probability distribution P(x|y) conditioned on the target vector y.
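To make the contrast between the two paradigms concrete, the following minimal Python sketch runs both on the same synthetic per-node data: k-NN learns a mapping from labelled pairs (x_i, y_i), while k-means groups the very same points with no labels at all. The features, values and scikit-learn usage are our illustrative assumptions, not taken from any surveyed work.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# columns: residual energy (%), offered load (packets/min)
X = np.vstack([rng.normal([80, 5], [5, 1], (50, 2)),
               rng.normal([30, 20], [5, 3], (50, 2))])
y = np.array([0] * 50 + [1] * 50)  # labels, used only by the supervised model

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)          # supervised: uses y
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # unsupervised: X only

print(knn.predict([[75.0, 6.0]]), km.predict([[75.0, 6.0]]))

Note that the k-means cluster indices carry no semantics of their own; mapping clusters to meaningful classes (e.g., "lightly loaded" nodes) is left to the network operator, which is exactly the labelling effort supervised learning pays for up front.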
Unsupervised learning is often applied to three main applications: (i) clustering, which groups data points that share similar characteristics, (ii) outlier detection (anomaly detection), which predicts how far a given feature vector is from the unlabelled examples, and (iii) dimensionality reduction, which aims to reduce the number of features in the input vector. The most widely used unsupervised learning algorithms are k-means clustering and Principal Component Analysis (PCA). A thorough discussion of unsupervised learning techniques and applications can be found in [19], [135].

Overall, supervised learning uses labelled data to train the model. Labelling the data may be a complex and time-consuming task, as it requires human intervention, special instrumentation, experiments, etc. It also requires more computing resources for training, especially for large datasets. Unsupervised learning, in contrast, learns from the data, classifying it and making inferences without any labels (unlabelled data is easy to collect). It is less complex than supervised learning, as it is not required to fully understand the data, and it is very useful for finding patterns, but it achieves lower accuracy than supervised learning.

C. SEMI-SUPERVISED LEARNING
Semi-supervised learning is an ML technique built upon a synergy between supervised and unsupervised learning. In its feature space, semi-supervised learning uses a small set of labelled data (x_1, ..., x_n ∈ X) along with a large set of unlabelled data (x_{n+1}, ..., x_{n+u} ∈ X). The use of labelled and unlabelled data together can significantly improve learning accuracy. The collection of labelled data is often a costly task, as it requires skilled human intervention, which can make large, fully labelled training sets infeasible. In contrast, the collection of unlabelled data is relatively inexpensive. In such applications, the use of semi-supervised learning is a good choice. Semi-supervised learning strategies focus on extending either supervised or unsupervised learning by using information known by the other learning paradigm. It can be used in two main settings:
1) Semi-supervised classification: this can be seen as an extension of the supervised classification problem that assumes there is much less labelled data than unlabelled data. The main goal is to train a model from both data types (labelled and unlabelled) such that the resulting accuracy is much better than that of a supervised model trained on the labelled data only.
2) Constrained clustering: this can be seen as an extension of unsupervised clustering. It uses some supervised information about the clusters as well as unlabelled data. The main goal is to form better clusters than those obtained using unlabelled data only.
There exist other semi-supervised learning settings such as regression, dimensionality reduction, etc. [136]. Overall, semi-supervised learning may achieve the same or better performance than supervised learning while using a smaller amount of labelled data, leading to a reduction in costs, and better clustering than other clustering algorithms that rely on unlabelled data only. However, semi-supervised learning may increase the required computational resources, as it processes more data and requires more memory. In addition, the resulting accuracy may deteriorate with the use of unlabelled data, as using more data does not necessarily mean that the algorithm will perform better. More detailed information on semi-supervised learning can be found in [136].
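As a minimal sketch of the semi-supervised classification setting just described, the following self-training loop starts from a handful of labelled link samples, pseudo-labels the unlabelled pool when the classifier is confident, and retrains; the data, the confidence threshold and the number of rounds are illustrative assumptions, not a specific method from the literature.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
# synthetic link features: RSSI (dBm), packet reception rate
X = np.vstack([rng.normal([-60, 0.9], [5, 0.05], (100, 2)),
               rng.normal([-85, 0.5], [5, 0.10], (100, 2))])
y = np.array([1] * 100 + [0] * 100)
labelled = np.r_[0:5, 100:105]            # only 5 labels known per class
X_lab, y_lab = X[labelled], y[labelled]
mask = np.ones(len(X), bool); mask[labelled] = False
X_unl = X[mask]                           # large unlabelled pool

for _ in range(5):  # self-training rounds
    clf = LogisticRegression().fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unl)
    confident = proba.max(axis=1) > 0.95  # accept only confident pseudo-labels
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unl[confident]])
    y_lab = np.concatenate([y_lab, proba.argmax(axis=1)[confident]])
    X_unl = X_unl[~confident]

print(clf.score(X, y))  # held-out labels used only to check the final model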
D. REINFORCEMENT LEARNING (RL)
In contrast with supervised and unsupervised learning, RL uses Intelligent Agents (IAs) that take actions in an environment so as to maximise a notion of cumulative reward. It does not need labelled examples as supervised learning does. RL uses a trial-and-error approach, where decisions are made sequentially (one after the other). RL is typically modelled as a Markov Decision Process (MDP), where the set of environment and agent states is defined as S, the set of actions taken by the IA is defined as A, the probability of transition from state s to state s' under action a is defined as P_a(s, s'), and the immediate reward after that transition is defined as R_a(s, s'). The main goal of RL is to learn an optimised policy that maximises the reward function [137]. More detailed information on RL can be found in [138].

E. DEEP LEARNING (DL)
DL can be seen as an extension of NNs. In general, an NN with an input layer, multiple hidden layers with non-linear activation functions and an output layer is considered a DL network. Here, the use of non-linear activation functions is key, as it allows the network to solve complex non-linear problems. As in NNs, each layer in DL contains units (neurons). These can have multiple inputs and make weighted associations that are updated based on the error and learning rules. DL architectures that have been applied to WSN applications include Convolutional Neural Networks (CNNs) [139], Recurrent Neural Networks (RNNs) [140], and Autoencoders (AEs) [141]. Readers interested in thorough discussions of DL algorithms, techniques and applications should refer to [142].

VI. MACHINE LEARNING SOFTWARE-DEFINED WIRELESS SENSOR NETWORK (ML-SDWSN)
A typical ML-SDWSN architecture comprises the three SDN planes and a machine learning module. The ML module works as an add-on system that can be easily installed within the SDWSN architecture, as shown in Fig. 4. It can be found in two distinct locations: at the control plane (1) or the application plane (2). The location of the ML module within the SDWSN architecture depends on the network designer, on user- and application-specific requirements, and on the available network resources. Installing the ML module at the control plane, which can be built upon multiple controllers, requires that plane to supply all the resources needed for the correct functioning of the network, such as enough CPU power to cope with the ML processing needs and memory requirements. The module then relies entirely on a single plane, minimising system failure and network latency, as it removes eventual communication outages at the upper layers and reduces communication bottlenecks. In contrast, installing the ML module at the application plane frees computing resources at the control plane. It also permits computing highly processing-intensive functions at a remote location with greater processing resources, therefore reducing the processing delay. However, a network outage at the upper layers can prevent the ML-SDWSN system from reacting immediately to changes in the data plane, thereby negatively impacting the network performance.

This section reviews relevant research efforts in theoretical works and strategies for adopting ML techniques in the context of SDWSNs. The nature of the centralised SDWSN architecture opens up new research opportunities to experiment with ML techniques embedded in the SDWSN architecture to improve the overall WSN performance.
Here, we first group research works based on the specific network problem they address. At the end of this section, we discuss and compare the surveyed ML-SDWSN approaches. Readers interested in ML techniques applied to SDN should refer to [16].

A. MOBILITY
Technological advances and the introduction of the IoT have enabled new emerging mobile IoT applications, such as monitoring and tracking systems for a variety of everyday human activities including sports, health care and entertainment [143]. The current routing protocols of choice for the IoT have not been designed for such applications. Researchers have lately used ML techniques to tackle mobility in WSNs through SDN.

Theodorou and Mamatas [144] proposed SD-MIoT, an SDN-based solution for mobile low-power IoT applications. SD-MIoT aims to reduce the control overhead by detecting the mobility behaviour of sensor nodes. The mobility detector uses network adjacency matrices built upon sensor data collected at the controller. Given a simple mobility scenario, as shown in Fig. 5, the mobility detector builds a connected graph G = (N, E), where N is the set of sensor nodes and E the set of communication links between them, and then builds the adjacency matrix A_t of G at time t, where element A_t(i,j) is 1 if nodes i and j are connected at time t and 0 otherwise. To detect connectivity changes, a square transition matrix is calculated from two subsequent adjacency matrices. The transition matrix contains rows, representing sensor nodes, with connectivity changes. If all elements of a particular row have a zero value, there are no changes for that row (node); therefore, the sensor node is assumed to be a fixed node. When multiple connectivity changes are detected in a row, the node is assumed to be mobile. When a single connectivity change is detected, the mobility status of the sensor cannot be decided; however, a simple moving average is tuned to find the best window that allows early connectivity detection while minimising the number of false positives. The mobility detector then applies the k-means clustering algorithm to separate static nodes from mobile nodes. The routing protocol proactively and constantly deploys forwarding rules to mobile nodes, therefore reducing the control overhead. The ML-based decision module is placed in the application plane of the SDWSN architecture.

SDN-(UAV)ISE is introduced in [145] for WSNs with data mules. The network architecture, shown in Fig. 6, comprises a data plane based on low-power sensor nodes, a cellular network base station to enable communication with the UAV, and the control plane that hosts the ML module. The drone, which acts as a mobile node, serves as a relay node to the SDN controller. The 'set cover problem' is used to find the optimal position that reduces the number of destinations to visit, thus minimising energy consumption and time. A DT algorithm is used to predict the medium- to long-term mobility of the drone, with the training dataset constantly updated using the data collected from sensor nodes. The forecasted movements of the drone permit forecasting the topology changes, so the flow table for reaching the drone is created beforehand, thus reducing the number of control packets generated. SDN-(UAV)ISE reduces the control overhead especially when the topology changes.
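A minimal sketch of the SD-MIoT-style connectivity-change test described above follows. Since the paper's exact transition-matrix formula is not reproduced here, the element-wise absolute difference of consecutive adjacency matrices is assumed, and the mobile-node threshold is an illustrative parameter of ours.

import numpy as np

def transition(A_prev, A_curr):
    # Element-wise connectivity changes between two adjacency snapshots.
    return np.abs(A_curr - A_prev)

def classify(A_prev, A_curr, mobile_threshold=2):
    T = transition(A_prev, A_curr)
    changes = T.sum(axis=1)             # connectivity changes per node (row)
    static = changes == 0               # all-zero row: assumed fixed node
    mobile = changes >= mobile_threshold
    return static, mobile               # single changes stay undecided

A_t0 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
A_t1 = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]])
print(classify(A_t0, A_t1))  # node 1 changed twice and is flagged mobile

In the surveyed system the undecided cases are resolved with a tuned moving average over several snapshots before k-means separates static from mobile nodes; the sketch keeps only the per-snapshot row test.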
Roy et al. [146] proposed an RL-based adaptive topology control approach, used in a WSN with mobile nodes to improve network latency, PDR and energy efficiency. They demonstrate that RL yields poor overall QoS when mobility is erratic, and then discuss the use of supervised learning algorithms (e.g., RNNs) to identify nodes with low periodicity and mitigate their impact on QoS.

Table 14 compares research works that have tackled current mobility challenges in WSNs by combining ML algorithms with SDWSN concepts. These research works are the starting point for new innovative approaches to solving mobility issues in SDWSNs and traditional WSNs.

B. SECURITY
The broadcast nature of WSNs imposes unique challenges, and traditional security solutions cannot be applied directly. Sensor nodes are resource-constrained devices, while most traditional techniques require processing-intensive functions. Sensor nodes are also deployed in harsh environments, making them susceptible to physical attacks. Finally, sensor nodes often interact closely with the physical environment and people, creating new security issues [147]. A simple representation of an ML-SDWSN architecture with watermarking enabled is depicted in Fig. 7. SDN-based approaches open up new opportunities to solve the above-mentioned challenges in WSNs.

Miranda et al. [151] proposed a collaborative security framework for SDWSNs. It includes an Intrusion Detection System (IDS) in the data plane and an anomaly detection solution near the data plane. A smart monitoring system along with an SVM algorithm is used to improve anomaly detection and mitigation by isolating malicious nodes. At the data plane, CHs generate and embed a watermark into the data, and the sink node runs a watermark detection algorithm to ensure the accuracy of recurrent authentications while implementing data integrity inspections. Kgogo et al. [148] also proposed an ML-based IDS, investigating which ML algorithm performs better in the detection of threats and attacks. The algorithms tested were DT, SVM, and logistic regression. Results demonstrated that the SVM model is the most effective in detecting both normal and anomalous instances, followed by DT. However, DT is the most efficient and effective at detecting network intrusions in real time, so the SDWSN can react to any intrusion instantaneously. A comparative study of three AI approaches for IDSs in SDWSNs is presented in [152]. The SDWSN controller comprises three main functions: (i) the flow collector, which collects the network information, (ii) the anomaly detector, which detects any abnormal behaviour in the network, and (iii) the anomaly mitigator, which serves to counteract the detected anomaly. The three AI-based approaches used are DT, Naive Bayes, and DL. Results show that the Naive Bayes approach is best suited for SDWSN applications where the controller has restricted memory capabilities, e.g., when the controller is embedded in one of the sensor nodes, and it also shows lower energy consumption requirements. For SDWSN applications where the controller memory size is not a concern, e.g., external or cloud-based controllers, the DL or DT anomaly detector can be used. However, the DT approach presents the best overall performance for detecting anomalies, especially for delay-sensitive applications. Chen et al. [149] presented an ML-based DDoS attack detection system. They deployed various wireless sensor nodes on eight poles to collect the data and extracted features based on the execution of multiple DDoS attacks, including ICMP flood, SYN flood, and UDP flood, with different periods and durations. Results show that DT achieved over 97% accuracy.
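In the spirit of the IDS comparisons above (e.g., [148], [152]), the following minimal sketch trains a DT and an SVM on a synthetic labelled traffic dataset and reports their test accuracy; the two features and the flood-like attack profile are illustrative assumptions, not the datasets used in the surveyed works.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# columns: packet rate (pkts/s), mean payload size (bytes); label 1 = attack
X = np.vstack([rng.normal([10, 60], [3, 10], (200, 2)),    # normal traffic
               rng.normal([120, 20], [20, 5], (200, 2))])  # flood-like traffic
y = np.array([0] * 200 + [1] * 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for clf in (DecisionTreeClassifier(max_depth=3), SVC(kernel="rbf")):
    print(type(clf).__name__, clf.fit(Xtr, ytr).score(Xte, yte))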
Zhao et al. [150] proposed a trusted link-separation method for SDWSNs in adversarial environments that considers both routing efficiency and security. They use a Bayesian-based model to evaluate sensor nodes' trustworthiness based on their communication interactions, formulate a multi-objective optimisation problem for trusted link-separation multipath routing, and solve it using a greedy algorithm.

Table 15 presents a qualitative comparison of research works that aim to tackle security vulnerabilities in WSNs using ML-SDWSNs. These works have demonstrated that ML is a good candidate for overcoming the security vulnerabilities currently present in traditional WSNs and SDWSNs without putting valuable and scarce network resources at risk.

C. ENERGY EFFICIENCY
This metric was previously introduced in Section IV-B1. Here, we group research works that use ML techniques to improve energy efficiency in SDWSNs. Huang et al. [153] proposed an SDWSN prototype to improve energy efficiency in environmental monitoring applications. They use RL to perform value-redundancy filtering and load-balancing routing that can adapt to environmental variations and network status, improving the energy efficiency and adaptability of WSNs for environmental monitoring applications. Banerjee and Sufian [154] proposed an RL approach to control the transmission range in SDWSNs with moving nodes. Sensor nodes have multiple transmission power levels, and an epsilon-greedy algorithm is used to decide the optimum power level. This RL approach gains knowledge from the velocities of successors and link quality metrics such as RSSI, packet reception rate, and attenuation. Younus et al. [155] combined RL and SDN concepts to devise a new routing algorithm for SDN-based WSNs that enhances the overall network performance. For the RL algorithm, they used the Q-learning [156] approach to choose the best routing path from the routing list obtained by the Spanning Tree Protocol (STP). Simulation results show a prolonged NL and an improved PDR. To prolong the NL of the SDWSN, an RL approach that trains the SDN controller to optimise the routing paths is proposed in [159]. The controller receives rewards in terms of the estimated path lifetime loss. The RL uses four reward functions aimed at extending the NL and reducing energy consumption. Results show an NL improvement of 23%-30% compared to an RL-based WSN. Training the SDWSN controller to find alternative energy-efficient routing paths has been studied in [160]. They used a Deep Reinforcement Learning (DRL) approach that configures routing paths avoiding the use of sensor nodes with low energy levels. The reward expected for forwarding packets to the next hop is estimated using a deep neural network, mainly a CNN. Results demonstrated that the proposed approach achieved a prolonged NL compared to existing state-of-the-art methods. This approach increases the number of hops a packet needs to travel to reach the destination by finding alternative paths, rather than the traditional shortest path, to avoid exhausting the energy of sensor nodes with low remaining energy. Abdolmaleki et al. [157] proposed a fuzzy topology discovery protocol for SDWSNs. They implemented a fuzzy-logic-based SDN controller to improve network performance. The fuzzy logic controller considers the neighbours, traffic, workload level, and remaining energy of each sensor node to choose the best forwarding node. Results show that the proposed approach extended the NL by 45% and improved the PLR by 50%.
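Several of the RL-based works above (e.g., [155], [159]) share a tabular Q-learning core. The following sketch shows the standard update rule with an epsilon-greedy policy on a toy environment; the states, actions and reward are placeholders of ours, not a model of any surveyed SDWSN.

import numpy as np

n_states, n_actions = 5, 3          # e.g., node contexts x next-hop choices
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(3)

def step(s, a):
    # placeholder environment: random next state, reward favours action 0
    return rng.integers(n_states), (1.0 if a == 0 else -0.1)

s = 0
for _ in range(5000):
    # epsilon-greedy: explore at random, otherwise exploit the best estimate
    a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # Q-learning update
    s = s2

print(Q.argmax(axis=1))  # learned greedy action per state

In the surveyed routing works, the states and actions would instead encode path or next-hop choices, and the reward would be built from metrics such as residual energy or estimated path lifetime.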
A reduced energy consumption and control overhead can be achieved by using a model that predicts the energy consumption of each sensor node. Rahimifar et al. [158] proposed a Markov-based model to predict the future energy consumption of sensor nodes. The controller predicts the individual energy consumption of sensor nodes; thus, sensor nodes avoid reporting energy levels to the controller. Nunez Segura and Borges Margi [161] proposed a Markov chain prediction mechanism for SDWSNs. They compared the prediction model by running it on every sensor node of the WSN and solely in the controller. Experiments show that running the prediction algorithm on the controller (moving the prediction out of sensor nodes) increases the prediction accuracy and PDR while reducing the delay, energy consumption, control overhead, and sensor nodes' processing overhead. Table 16 presents a qualitative comparison of research efforts that have used ML-SDWSN concepts to further improve the energy efficiency of traditional WSNs. These works took advantage of the global view of the network granted by the controller and the power of ML to discover energy-efficient paths, optimal transmission ranges, and energy consumption predictions to extend the NL of WSNs. D. RELIABILITY To minimise power outages of electrical distribution systems, which are due to persistent faults and over-utilisation of distribution transformers (DTs), a remote IoT monitoring and fault prediction system is proposed in [162]. Their approach is a low-cost implementation of a distributed controller architecture with wireless sensor nodes attached to transformers. The LoRa sensor nodes are equipped with temperature, oil level, humming noise, and overloading sensors; they act as health trackers for the transformers. The prediction system uses an NN algorithm, which runs on the management plane for prediction on real-time sensor traffic, to improve smart-grid reliability, transformer health checks, and maintenance practices. This is a practical implementation of SDN-based WSNs and of the use of ML to improve the overall system performance. Leveraging the global view of the controller, monitoring the network infrastructure allows employing suitable traffic engineering techniques to improve network performance. An SDN-based IoT architecture is presented in [168] to perform a time-granular analysis of network traffic for efficient network management. They used different supervised learning algorithms, including DT, SVM, and k-NN, to examine the network traffic. Results showed an overall accuracy rate of over 90%, with k-NN achieving 98% accuracy. Other research work that addresses network traffic by means of unsupervised DL, but from the wireless medium perspective in general, can be found in [170]. With the advent of Internet technologies, new applications have emerged, each imposing different bandwidth requirements. It is of great importance to balance network resources to comply with strict QoS requirements. The research work presented in [167] aims to minimise the number of unsatisfied user equipment (UEs) while maximising the throughput of the network through load balancing. They used an NN, improved with the fruit fly optimisation algorithm (FOA), to solve this problem.
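The load-balancing objective of [167] (minimise unsatisfied UEs while maximising throughput) can be illustrated with a plain greedy baseline, sketched below. This is explicitly not the NN/FOA method of [167]; the AP capacities and UE demands are made-up numbers for illustration.

caps = {"ap1": 10.0, "ap2": 6.0, "ap3": 6.0}     # hypothetical AP capacities
demands = [3.0, 3.0, 2.5, 2.5, 2.0, 2.0, 4.0]    # per-UE bandwidth demands

load = {ap: 0.0 for ap in caps}
unsatisfied = 0
for d in sorted(demands, reverse=True):          # place big demands first
    # pick the AP with the most remaining headroom
    ap = max(caps, key=lambda a: caps[a] - load[a])
    if caps[ap] - load[ap] >= d:
        load[ap] += d
    else:
        unsatisfied += 1

print("loads:", load, "unsatisfied UEs:", unsatisfied)

A learned model replaces the greedy rule when the assignment must anticipate future demand rather than react to the current snapshot, which is the gap [167] targets.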
To comply with strict network reliability requirements, a link quality prediction model for SDWSNs is presented in [169]. The model focuses on predicting the link quality between neighbouring nodes, thereby improving the overall stability of the routing paths. They use multiple ML models, such as regression, DT, SVM, and NN, with physical and logical parameters as inputs. The physical parameter is the RSSI metric, whereas the logical parameter is the reception history of discovery packets. The trained model is then run at the sensor node level. Simulation results show that combining SDWSN and ML at the link-layer level improves network reliability by avoiding the use of unstable wireless communication links. Since the network infrastructure should dynamically adapt to user requirements, there should be a decision-making stage that chooses the routing protocol meeting the user-specific requirements. Misra et al. [165] proposed situation-aware protocol switching for SDWSNs. They designed an adaptive controller that deploys the appropriate routing protocol based on the network conditions and application-specific requirements. The decision-making stage is based on a supervised learning algorithm that trains the SDN controller so it can dynamically switch among routing protocols, as per user-specific requirements. As the location of SDWSN controllers is key to enhancing network performance, it is of paramount importance to find the best location that satisfies the user requirements. ML has recently been used to solve the multi-controller placement problem in SDWSNs. In [166], an energy-aware multi-controller placement solution using Particle Swarm Optimisation (PSO) for minimising energy consumption is presented. Moreover, a DRL-based resource allocation strategy is conceived to reduce the waiting time of tasks. Researchers have realised that cognitive radio technology can be effectively used along with SDN abstractions to enhance the utilisation of spectrum resources. In [164], a sustainable SDWSN architecture with cognitive radio technology for efficient power management, channel handoffs, and spectrum utilisation is proposed. The proposed work uses an RL algorithm for efficient spectrum utilisation. The network performance is improved by introducing new capabilities such as dynamic adaptation to spectrum and interference conditions. Orfanidis [163] also intended to improve the robustness of the network by identifying multiple sources of interference affecting the network. They planned to use a supervised statistical ML approach: a multivariate linear regression algorithm running in the SDN controller. A testbed with multiple sources of interference, such as Bluetooth [41] and WiFi [46] networks, was proposed. The feature vector for the proposed statistical model includes PDR, energy consumption, interference, RSSI, end-to-end delay, and noise. The use of ML in SDWSNs has already been explored in the agriculture industry. To enhance the quality of grain sold to customers, an ML-based approach is proposed in [171]. The main objective is to classify the quality of the stored grain. In the deployment, key environmental factors, including temperature, moisture, and CO2 concentration levels, are considered and used as input for the ML models. The SDWSN controller runs the ML models, including k-NN, random forest, and linear regression. Experimental results show that the random forest algorithm performs better than the other classifiers in separating high-quality grains from infested ones.
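As a hedged illustration of the grain-quality classification in [171], the sketch below trains a random forest on temperature, moisture, and CO2 readings. The synthetic sensor data and class means are assumptions, not the paper's dataset.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Columns: temperature (C), moisture (%), CO2 (ppm)
healthy = rng.normal([22, 12, 450], [2, 1.5, 60], size=(400, 3))
infested = rng.normal([27, 16, 900], [2, 1.5, 120], size=(400, 3))
X = np.vstack([healthy, infested])
y = np.array([0] * 400 + [1] * 400)  # 0 = high quality, 1 = infested

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())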
Table 17 presents a qualitative comparison of research works that aim to improve WSN reliability using ML-SDWSN concepts. These research papers have demonstrated that, by having real-time network information (e.g., statistics) and using the power of ML, the controller can promptly react to any network change (e.g., interference, traffic, etc.) by setting up a new network configuration. This allows the ML-SDWSN architecture to proactively provision optimal resources to deal with potential threats that hamper network performance. E. A CASE STUDY: IN-VEHICLE WSNs AND 6G Due to the increasing number of sensors deployed in modern cars, a growing interest has emerged in reducing the number of wires connecting sensors to cars' microcontrollers [172]. One way to minimise the wiring in modern cars is to use WSN technology. Wireless sensor nodes in small environments such as cars are usually within one-hop distance of the sink, so a star topology may be used to connect all sensor nodes. However, the high density of sensor nodes can lead to high network interference and latency in a contention-based MAC protocol [173]. The TSCH protocol provides both time and frequency diversity for transmissions, boosting network reliability [55]. TSCH reduces the communication and power overhead. TSCH relies on a scheduler that sets the communication links for each cell (a specific time and channel) in the slotframe. The transmission schedules highly impact the performance of the WSN. They are usually designed and scheduled to meet a specific requirement (e.g., reliability, latency, energy, etc.). A star topology (case 1 in Fig. 8) in TSCH leads to a large slotframe, increasing the network latency as the network density grows, whereas a tree topology (case 2 in Fig. 8) enables parallel transmissions, reducing the network latency (see the toy scheduling sketch below). 1) THE ROLE OF ML-SDWSN AND 6G In TSCH networks, the communication schedules are assigned autonomously (e.g., Orchestra [174]) or centrally. SDWSN technology enables new ways to assign communication schedules. Network data such as packet loss, link qualities, and energy is collected centrally. The control plane has a global view of the network, which makes it the ideal place, given that it holds all the network information and resources, to decide on the best communication schedules that satisfy the user or application requirements. Having the network data at hand permits engineers and scientists to deploy bespoke scheduling algorithms. These communication schedules can also be assigned with the aid of ML algorithms. ML offers the potential to anticipate communication links that will suffer from interference when the car passes a specific road or source of interference (e.g., interference coming from a motorbike in Fig. 8). ML can set schedules that reduce the latency of a sensor sending frequent critical data. ML can also dynamically update the schedules based on the remaining energy of sensor nodes. ML-SDWSN technology can be applied to intra-car WSNs utilising either the car's own technology or the 6G infrastructure. Modern cars have powerful processing units on which the control plane can run complex computational operations with strict time and processing requirements.
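The following toy scheduler makes the star-versus-tree slotframe comparison above concrete: it greedily assigns (timeslot, channel) cells so that links sharing a node cannot share a timeslot, while independent links can. This is an illustrative sketch with assumed topologies, not a standards-compliant TSCH scheduler.

def schedule(links):
    """Greedily assign (timeslot, channel) cells; links sharing a node
    cannot share a timeslot, but independent links can."""
    cells, slots = {}, []          # slots[i] = set of nodes busy in slot i
    for src, dst in links:
        for t, busy in enumerate(slots):
            if src not in busy and dst not in busy:
                break
        else:
            t = len(slots)
            slots.append(set())
        slots[t] |= {src, dst}
        ch = sum(1 for l in cells if cells[l][0] == t)  # next free channel
        cells[(src, dst)] = (t, ch)
    return cells, len(slots)

star = [(f"s{i}", "sink") for i in range(6)]                 # all via sink
tree = [("s0", "sink"), ("s1", "sink"), ("s2", "s0"),
        ("s3", "s0"), ("s4", "s1"), ("s5", "s1")]
for name, links in [("star", star), ("tree", tree)]:
    _, length = schedule(links)
    print(name, "slotframe length:", length)

With six leaf sensors, the star needs six slots (every link contends for the sink), while the tree completes in three, matching the latency argument made for case 2 in Fig. 8.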
However, the design and planning of the upcoming sixth-generation (6G) communication network has already begun. 6G is seen as a disruptive technology that will go beyond the mobile internet and will support ubiquitous AI technology at the edge of the network. 6G is envisioned to offer computationally efficient dedicated hardware capable of running AI/ML algorithms locally at the edge (see Fig. 8). The 6G infrastructure creates the perfect computational environment to deploy ML-SDWSN applications that impose stringent computational requirements. Offloading the control plane from cars to the 6G network can significantly improve the processing and communication latency, which is a high priority for delay-sensitive applications. F. DISCUSSION ML-SDWSN is a new paradigm that has emerged due to (i) the increasing popularity and demonstrated capability of SDWSNs to enhance network performance, (ii) the potential of ML to further improve the network performance of SDWSNs, and (iii) the potential of ML to overcome the concerns raised when introducing SDN concepts in WSNs. From the research works that adopted ML in the context of SDWSNs, we can observe that ML-SDWSNs are still at an early development stage. However, notable exploration has already been achieved. ML techniques have been applied to a range of network issues. In particular, ML has shown great ability to reduce the amount of control overhead (packets) flowing in the network and to improve network security and energy efficiency. 1) CONTROL OVERHEAD SDWSN has shown great performance in solving challenges currently present in traditional WSNs (see Section IV) and in reacting to dynamic changes in the condition of the environment (issues that cannot be handled with the traditional techniques of state-of-the-art WSNs). However, it has also shown that the amount of control overhead needed to implement SDN abstractions in WSNs requires appropriate attention. The ML-SDWSN paradigm is seen as a promising solution to reduce the number of control overhead packets required to implement SDN abstractions in WSNs. The global view granted through the SDWSN architecture permits the ML module to make accurate predictions, allowing the controller to act promptly on changes in the network and to provision network resources proactively, thus reducing the control overhead and energy consumption. An example can be found in [144], where the SDWSN architecture collects network information such as reports of neighbouring nodes (also known as Neighbour Advertisements (NAs)), and the ML module separates static nodes from mobile nodes. The use of both SDWSN and ML technologies permits the controller to configure optimal routes for mobile nodes, at a precise time, so that they avoid generating flow requests to the controller to find the path to their destination. 2) SECURITY WSNs and SDWSNs are susceptible to security threats due to their broadcast nature and centralised architectures. Intruders can tamper with sensors and the overall network, putting valuable network assets and systems at risk. Traditional security solutions applied to wired networks cannot be applied directly in WSNs, as most of these solutions require processing-intensive functions. The use of both SDWSN and ML technologies creates a new pathway to solve security issues that are inherent to WSNs and hard to address in state-of-the-art WSNs due to their limited resources. The SDWSN realises the network data collection (e.g., sensor behaviour, energy levels, raw data) and reconfiguration, while the ML module runs a suite of algorithms that can easily classify problematic nodes, identify network pitfalls, etc. Together, both technologies enable the execution of appropriate actions to mitigate impacts promptly. ML-SDWSN grants an intelligent, centralised, and resource-aware mechanism to protect the network against cyber-physical attacks. It frees up the processing and communication load of sensor nodes to implement security countermeasures.
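Returning to the control-overhead example of [144] above, the following minimal sketch separates static from mobile nodes using Neighbour Advertisement history, so that routes for mobile nodes can be pushed proactively. The churn metric, the threshold, and the toy NA reports are illustrative assumptions.

na_history = {                      # node -> neighbour sets from recent NAs
    "n1": [{"a", "b"}, {"a", "b"}, {"a", "b"}],
    "n2": [{"a", "b"}, {"b", "c"}, {"c", "d"}],
}

def neighbour_churn(reports):
    """Fraction of neighbours that change between consecutive NA reports."""
    changes = [len(p ^ q) / max(1, len(p | q))
               for p, q in zip(reports, reports[1:])]
    return sum(changes) / len(changes)

for node, reports in na_history.items():
    label = "mobile" if neighbour_churn(reports) > 0.3 else "static"
    print(node, label)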
3) ENERGY EFFICIENCY Monitoring applications are often deployed in harsh environments with difficult access to the electrical network. These types of networks aim to run the programmed task for the longest time possible, and multiple solutions have been proposed to achieve the longest NL [175]. SDWSN offers innovative mechanisms for devising new solutions to such problems (see Section IV-B1). The centralised architecture creates a new setting to run novel algorithms at the logically centralised control plane. ML has been used in SDWSNs to balance the overall energy consumption to prolong the NL. ML learns and identifies patterns from the information collected from the sensor nodes. This data is used by the ML module to configure, e.g., new routing paths, at a precise time, to minimise the main objective (e.g., overall energy consumption, individual energy consumption, etc.). Research results of ML-SDWSN works aiming to minimise energy consumption show that ML-SDWSN technology is a good candidate to further extend the NL of traditional WSNs and SDWSNs. ML-SDWSN has not only been used for finding routing paths that reduce energy consumption; it has also been used at the individual sensor node level. ML can tune the ideal transmission range for sensor nodes, thus minimising the transmission energy. Also, one of the performance metrics to consider when devising a new routing path for SDWSNs is the individual remaining energy of sensor nodes. ML plays a role in predicting sensor nodes' remaining energy, minimising the need for sensor nodes to report their energy levels and, therefore, reducing the control traffic and energy consumption in the network. The reduction of energy consumption is a key performance metric to consider when deploying monitoring applications, and ML has shown great potential to achieve this goal. However, care must be taken with the frequency of network configuration tasks, as this can negatively affect network performance. 4) NETWORK RELIABILITY ML plays a big role when it comes to improving the network reliability of traditional WSNs. ML uses the centralised information collected through the SDWSN architecture to identify patterns. These patterns (e.g., traffic congestion times, interference, node failures, task loads, etc.) are then used by the control plane to reconfigure (provision) the network and avoid a drop in network reliability. For example, ML in SDWSNs has been used to detect sources of interference and to trigger timely actions to mitigate them. ML has also been used to detect periodically heavily loaded links and to anticipate them by setting up new routing paths. Network reliability is a key objective when designing WSN applications. There exist multiple solutions to enhance network reliability in WSNs, ranging from verification at individual layers of the protocol stack up to end-to-end verification. Although these are state-of-the-art mechanisms to improve network reliability, they struggle to overcome network-level traffic issues.
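As a hedged illustration of the last point, periodically heavily loaded links can be spotted from per-link load time series with a simple autocorrelation test, sketched below. The synthetic series, the period, and the threshold are made-up assumptions, not a method from the surveyed works.

import numpy as np

rng = np.random.default_rng(2)
t = np.arange(240)                                  # e.g. minutes of samples
load = 5 + 4 * (np.sin(2 * np.pi * t / 60) > 0.8) + rng.normal(0, 0.5, t.size)

x = load - load.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]  # one-sided autocorrelation
acf /= acf[0]

lag = 1 + int(np.argmax(acf[1:121]))                # strongest lag up to 120
if acf[lag] > 0.5:
    print(f"periodic congestion suspected, period ~{lag} samples")

A controller that detects such a period can pre-provision an alternative path just before each congestion window instead of reacting after packet loss occurs.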
Overall, ML-SDWSN is built upon a multidisciplinary area that puts together the best of communication networks, software-defined networking, and machine learning to go beyond the current state-of-the-art knowledge in SDWSNs and facilitate WSN programmability without putting network performance at risk. However, there is still room to explore ML techniques in SDWSNs and, most importantly, to evaluate the benefits that ML brings to SDWSNs, especially against traditional WSNs. Besides, a comparison of the two locations of the ML module is needed (see Fig. 4) to appreciate the significance and the applications of both architectures. ML-SDWSN is a promising technology envisioned to evolve along with the deployment of 5G and 6G networks, including SDN, ML, cloud computing, and Network Function Virtualization (NFV). VII. SUMMARY OF SDWSN PROPOSALS In this section, we provide simple statistics on the previously discussed SDWSN proposals. This will allow us to uncover open research issues and future trends in SDWSNs. A. SUMMARY Fig. 9a shows the percentage of research works in each category. This lets us discover where most of the research effort in SDWSN has focused. Most of the proposed research works leverage SDN concepts to reduce the energy consumption and management complexities currently found in WSNs. In contrast, the fewest research works focused on making the sensors fully reprogrammable. The most popular EOS used in SDWSN is Contiki, as shown in Fig. 9b. Research works that have not used any type of operating system are largely those aiming to reduce energy consumption in SDWSNs, most of which used a numerical tool such as MATLAB. It is of great importance to identify the most used performance metrics, as they also help to pinpoint where most of the research effort resides. Similar to WSNs, the most popular performance metric to improve is energy consumption, as shown in Fig. 9c. The control overhead, which is among the most important metrics, is considered in 11% of the surveyed works. Packet delivery metrics such as PDR and PLR are considered in 8% of the proposals. Fig. 9d shows the percentage of research works that have used each type of evaluation. Even though most of the research efforts in SDWSNs aim to reduce energy consumption, which largely favours numerical evaluation methods, the most popular network simulator is Cooja, the Contiki network simulator. Mininet and NS-3, which offer add-on modules (e.g., WiFi, OpenFlow, etc.) that reduce the time needed to design a simulation environment, were used in 6% and 4% of the surveyed works, respectively. 10% of the research works did not have any form of simulation or experimental evaluation. Overall, 41% of the surveyed works were evaluated using simulation tools, 22% through testbeds, and 21% employing numerical approaches. The remaining 16% of the works did not use any evaluation method, or it is unknown. B. POPULARITY OF SDWSN AND VENUES OF PUBLICATION The first research works exploring the use of SDN concepts in the WSN architecture appeared around 2012. Several research works then started appearing, extending the use of SDWSNs to a vast variety of IoT applications. However, exponential growth has been perceived since 2017. This agrees with when research works on ML techniques in SDWSNs started to emerge. In 2019 and 2020+, the growth continued exponentially.
This is influenced by the number of research works that have built on previous works whose code is freely available to devise new solutions for improving network performance. This exponential growth shows that the research community sees SDWSNs as a potential pathway to overcome the management complexity found in current state-of-the-art WSNs. The publication venues of scientific publications reporting on SDWSNs are shown in Fig. 10. As can be seen from the figure, the most popular dissemination method, by far, is journals, followed by conference proceedings. Workshops and forums are the least popular dissemination methods. The journal publications are spread across different venues. Looking at specific journal venues, not shown here due to space constraints, the most popular journals are the IEEE Internet of Things Journal with 10 publications, followed by IEEE Access with 8, the IEEE Systems Journal with 6, and Sensors (MDPI), the IEEE Sensors Journal, and the Journal of Ambient Intelligence and Humanized Computing (JAIHC) with 5 publications each. VIII. MAJOR CHALLENGES AND FUTURE DIRECTIONS SDWSN is a relatively new and continuously evolving research area. Previous sections provided a comprehensive review and discussion of SDWSN and ML-SDWSN research works. The objective of this section is to group and discuss open issues currently found in state-of-the-art SDWSNs. A. STANDARDISATION SDWSNs have to deal with the exponential growth of wireless sensor devices and a vast variety of manufacturers and protocols. The creation of standards for such a rapidly evolving technology, with various groups of stakeholders, is not an easy task [176]. Some SDWSN papers share similar architectural designs and protocols, while others have their own new architectures and protocols. There is currently no established technical standard for SDWSNs that defines the set of functions and protocols for sensor nodes and controllers [15], [177], [178]. The standardisation of SDWSN should be approached as a holistic architecture that covers all layers involved in the model. The exponential growth of scientific articles calls for urgent standardisation. Otherwise, the result will be incompatible architectures and protocols that go against the SDN principles [10], affecting the rate at which new SDWSN proposals emerge. B. CONTROL OVERHEAD One major concern when adopting SDN principles in WSNs is the control overhead. SDN was originally designed for wired networks, where control packets flow through a dedicated control channel. In contrast, SDWSNs share the same communication medium for both control packets and data packets. Even though control overhead has been indirectly addressed in many research works (see Fig. 9c), the number of papers that specifically focus on reducing the control overhead is still low, as shown in Fig. 9a. Minimising the number of control packets is of great importance to avoid impacting network performance negatively. Research works have applied multiple techniques to reduce the control overhead, as shown in Table 11. Research works that synergise all those techniques simultaneously with ML techniques could lead to a significant improvement in control overhead. For example, the use of ML techniques to tackle mobility in WSNs can greatly reduce the control overhead by proactively and constantly setting the path for packets generated by mobile nodes.
This reduces the number of packet-in messages, which are flow setup requests sent by sensor nodes to the controller to seek instructions on how to handle an incoming packet that is not present in their forwarding tables. 1) NEIGHBOUR ADVERTISEMENT AND NETWORK CONFIGURATION SDWSNs have two main functions that generate control packets [73]. (i) Neighbour Advertisement (NA) is a key function in the initial phase of SDWSN setup. Sensor nodes use NA messages to advertise their current and neighbour status. The SDN controller builds a global view of the network using NA messages. Sensor nodes also use NA messages to keep the controller updated on any change in the network. The frequency of NA messages directly affects the network performance: frequent NA messages immediately warn the controller about any change in the network (e.g., a dead node, interference, battery depletion, etc.), but at the cost of increased control overhead and energy consumption, while infrequent NA messages reduce the impact on network performance but prevent the controller from reacting immediately to changes in the network. (ii) Network Configuration (NC) is used by the controller to manage and control the overall behaviour of the network. The literature review reveals that NC packets are mainly used to dynamically program the forwarding tables of sensor nodes. Overall, there are still research gaps in reducing control overhead in SDWSNs: what is the optimal frequency of NA messages that does not affect network performance, and how can NC messages be delivered effectively and at the right time while minimising the impact on network performance? C. SECURITY Along with control overhead, security is one of the main concerns in SDWSNs. Security in WSNs, in general, is one of the research areas that has caught most of the researchers' attention. WSNs impose unique challenges due to the dynamic behaviour of communication links. Moreover, sensor nodes have limited resources that restrain the use of traditional security solutions. However, the centralised architecture of SDN brings advantages when devising new countermeasures for security threats. The global view of the network at the controller facilitates constantly and proactively detecting changes in the network. Also, the centralised network information invites the use of ML-based solutions. Security in SDWSNs is still at an initial stage, as shown in this survey, but it makes sense to use ML algorithms in SDWSNs because of the centralised architecture. The centralised architecture offloads the power-intensive computational tasks from the network infrastructure, so security applications can be easily implemented at the controller. The advantages and disadvantages of centralised versus distributed security solutions based on ML need to be studied in detail. Centralised architectures have an overall view of the network, facilitating the detection of abnormal behaviours, but at the expense of more network information. In contrast, in distributed architectures, sensor nodes can also perform some amount of processing to run lightweight ML solutions, which minimises the control overhead but may increase the energy consumption due to the processing.
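A back-of-the-envelope model of the NA-frequency trade-off raised in Section VIII-B1 is sketched below: frequent NAs shorten the expected change-detection delay but increase control packets and energy. The per-packet energy figure and the ranges are assumed values for illustration only.

NODES = 50
E_PKT = 0.3e-3          # assumed energy per NA packet (J), illustrative

print(f"{'NA period (s)':>14} {'pkts/hour':>10} {'detect delay':>13} {'J/hour':>8}")
for period in (10, 30, 60, 300, 900):
    pkts_per_hour = NODES * 3600 / period
    mean_detection_delay = period / 2      # change seen after ~half a period
    energy = pkts_per_hour * E_PKT
    print(f"{period:>14} {pkts_per_hour:>10.0f} "
          f"{mean_detection_delay:>11.0f} s {energy:>8.2f}")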
D. CONTROLLER PLACEMENT The location of the controller in the network directly affects the network performance. Controller placement has been widely studied in SDN [131], whereas controller placement in SDWSNs is still in its infancy. Although SDWSN is inspired by SDN, the communication medium differs. Therefore, the optimal placement of the SDWSN controller can build on previous research on SDN; however, the placement has to be subject to the specific characteristics of the transmission medium, in this case, wireless. Controller placement is also tightly related to scalability problems in SDWSNs. The use of distributed and dynamic SDWSN controllers (embedded in the sensor nodes) can potentially balance the expenditure of key network resources, e.g., energy. The use of ML algorithms to predict and pinpoint the best locations where sensor nodes can run key controller functionalities may lead to an overall network improvement and a reduced control overhead. E. EOS Fig. 9b reveals that most of the research works in this survey did not adopt any type of EOS. In fact, there are still a number of EOSs that have not yet been used in SDWSNs. For instance, there is no evidence of any SDWSN solution that has used a Real-Time Operating System (RTOS). An RTOS works under strict processing time requirements, which can serve SDWSN applications that require some level of reliability. In general, the use of EOSs aligns with SDN principles. It brings flexibility when adding new applications to sensors' programs. The use of EOSs allows sensor nodes to be seen as small-scale computers with multiple sensing capabilities, and EOSs are supported on a variety of sensor platforms, shrinking the interoperability gap. F. SCALABILITY This is another big concern in centralised architectures such as SDN. It is known that the management overhead increases as the network grows. Several techniques have been proposed to address scalability issues in SDWSNs. Among the most widely used techniques is the use of multiple controllers. The control plane may include physically distributed controllers. The location of the controllers directly affects the network performance, as discussed in Section IV-E. The network management load can be balanced across multiple controllers, with each controller overseeing a specific zone of the network topology. However, one concern that arises is finding the optimal number of controllers required before network performance is affected, as well as how to cope with the dynamic nature of WSNs. The use of static controllers can directly affect the NL. G. MACHINE LEARNING (ML) 48% of the research works surveyed here adopted ML techniques in their proposals. The number of ML-SDWSN research works has been increasing exponentially, with a steep increase in 2020. The first ML-SDWSN articles started appearing in 2015; however, ML-based works took off in 2018. The year with the largest number of publications in ML-SDWSN was 2020+ with 17 publications. This increasing popularity shows that ML is seen as an attractive solution to improve network performance in SDWSNs. The adoption of ML in SDWSNs has shown good performance in reducing control overhead, prolonging the NL, and detecting intrusions. However, there are still areas to explore and ML techniques to use. For example, the dynamic nature of WSNs unfolds new opportunities to envision ML techniques for automatic and continuous learning, including AutoML and transfer learning. The use of an online AutoML structure would allow the system to continuously adapt to new situations while reducing the need for a long training phase on a big dataset that might not even be available.
Transfer learning will permit learning from simulations or controlled environments and deploying the resulting models in real-world applications, which might improve the learning rate and accuracy or reduce the amount of training data needed. DL could be useful in unveiling which kinds of features or parameters are actually more relevant to the specific user application. Besides, the use of multiple architectures, such as centralised or distributed ML techniques, should be studied in depth. The time complexity of algorithms should also be considered, especially for real-time applications with strict time constraints and resource-constrained IoT devices. H. TESTBEDS FOR SDWSNs SDWSNs have different network topologies. Some topologies have the controller embedded in one of the sensor nodes. This imposes strict hardware requirements, such as sensor nodes with enough resources to run centralised protocols and store network information, and with access to mains power. Other topologies require multiple embedded controllers; therefore, the network infrastructure must provide multiple sensor nodes with large resources. In contrast, SDWSN topologies with the controller connected directly to the sink node (e.g., via a serial interface or USB) require fewer resources from sensor nodes but require a more capable computing machine connected to the sensor node, such as a PC, Raspberry Pi, etc. Therefore, a testbed for SDWSNs needs to account for different network topologies and provide an accurate and highly dynamic range for power measurements, CPU resources, multiple sensor platforms and EOSs, and debugging tools, including a packet sniffer. IX. CONCLUSION The SDWSN paradigm is built upon the synergy of research efforts between SDN and WSNs. SDWSN has been envisioned to solve the management complexities found in current state-of-the-art WSNs. Overall, SDWSN will help industrial and research organisations accelerate the designing, building, and testing of emerging IoT applications by simplifying the introduction of new abstractions and removing management complexities and costs. This paper presented a comprehensive review of SDWSN research works and of ML techniques for network management, reconfiguration, and policy enforcement. Additionally, we provided helpful information and insights to stakeholders interested in state-of-the-art SDWSNs, ML techniques, testbeds, and open issues. This survey has unveiled that, although the introduction of SDN abstractions into WSNs is a relatively new topic, notable exploration has already been achieved. The surveyed scientific articles have demonstrated that SDWSN is an effective solution for improving network performance and management, which would not have been possible with traditional WSN architectures. Despite these major achievements, there are several open issues, such as standardisation, control overhead, scalability, and security, that need to be addressed adequately to reach the real promise of a fully reprogrammable network for IoT applications. This survey also reveals that the use of ML algorithms on top of SDWSNs is becoming popular and shows good performance in tackling the major issues in SDWSN. Based on the surveyed articles and the statistics compiled, we believe that the synergy between ML and SDWSNs can make networking decisions smarter and more robust, and that ML will play a major role in the creation of new applications and protocols for SDWSNs.
DL, for example, will be useful in reducing the complexity of model training, especially for large-scale WSN deployments, due to its ability to uncover patterns in the data and build more efficient decision rules. Some ML-SDWSN applications may have strict latency requirements; for such applications, DL could be useful in shortening the training phase and allowing the controller to react fast enough to changes in the network. Lastly, the advent of 6G, mainly its architecture and resources, together with the flexibility gained in SDWSN architectures, sets the perfect environment to run state-of-the-art ML algorithms and support upcoming ML approaches. 6G provides a powerful, flexible, multi-node architecture to run, deploy, and manage ML-based distributed control architectures and logically centralised control schemes for large-scale SDWSNs. ACKNOWLEDGMENT The document reflects only the authors' view, and the Commission is not responsible for any use that may be made of the information it contains.
Three point functions in higher spin AdS_3 holography with 1/N corrections

We examine three point functions with two scalar operators and a higher spin current in the 2d W_N minimal model to the next non-trivial order in the 1/N expansion. The minimal model was proposed to be dual to a 3d higher spin gauge theory, and 1/N corrections should be interpreted as quantum effects in the dual gravity theory. We develop a simple and systematic method to obtain three point functions by decomposing four point functions of scalar operators with Virasoro conformal blocks. Applying the method, we reproduce known results at the leading order in 1/N and obtain new ones at the next leading order. As confirmation, we check that our results satisfy relations among three point functions conjectured before.

Introduction

Holography is expected to offer a way to learn quantum corrections of gravity theory from 1/N corrections in the dual conformal field theory. In this paper, we address this issue by utilizing one of the simplest holographies, proposed in [1], where the 2d W_N minimal model is dual to Prokushkin-Vasiliev theory on AdS_3 given by [6]. We examine three point functions with two scalar operators and one higher spin current in the minimal model up to the next leading order in the 1/N expansion. They should be interpreted as one-loop corrections to three point interactions between two bulk scalars and one higher spin gauge field in the dual higher spin theory. We develop a simple and systematic method to compute the three point functions by decomposing four point functions of scalar operators with Virasoro conformal blocks. Among others, we expect that this way of computation makes the dual higher spin interpretation easier. Applying the method, we reproduce known results at the leading order in 1/N obtained by [7,8]. Exact results are available only up to correlators with the spin 5 current [9][10][11], but a simple relation was conjectured for generic s in [11]. We obtain the 1/N corrections of correlators with spin s ≤ 8 currents, and the results for s = 6, 7, 8 should be new. We check that they satisfy the conjectured relation as confirmation of our results. We would like to examine the W_N minimal model in the 1/N expansion, but we should specify the expansion in more detail. The minimal model has a coset description

su(N)_k ⊕ su(N)_1 / su(N)_{k+1} ,   (1.1)

whose central charge is given by

c = (N - 1) [ 1 - N(N + 1) / ((N + k)(N + k + 1)) ] .   (1.2)

The model has two parameters N, k. For our purpose, it is convenient to define the 't Hooft coupling

λ = N / (N + k) .   (1.3)

The minimal model is argued to be dual to the higher spin theory of [6], which includes higher spin gauge fields ϕ^{(s)} (s = 2, 3, 4, ...) and complex scalar fields φ_± with mass m² = −1 + λ². The large N limit of the minimal model with λ in (1.3) kept finite corresponds to the classical limit of the higher spin theory, where λ is identified with the parameter in the bulk scalar mass. The higher spin gauge fields ϕ^{(s)} and bulk scalars φ_± are dual to higher spin currents J^{(s)} and scalar operators O_±, respectively. Here different boundary conditions are assigned to the bulk scalars φ_±, and the dual conformal dimensions are given by Δ_± = 2h_± = 1 ± λ at tree level.
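As a quick sanity check on these definitions, the central charge (1.2) can be expanded in the 't Hooft limit. The sketch below is our own check, not part of the paper: it assumes the formulas for c and λ quoted above (standard in this duality) and uses sympy to confirm the leading behaviour c ≈ N(1 − λ²).

import sympy as sp

# Our sketch: coset central charge c = (N-1)[1 - N(N+1)/((N+k)(N+k+1))]
# at fixed 't Hooft coupling lambda = N/(N+k), i.e. k = N/lambda - N.
N, lam = sp.symbols("N lambda", positive=True)
k = N / lam - N                                    # inverts lam = N/(N+k)
c = (N - 1) * (1 - N * (N + 1) / ((N + k) * (N + k + 1)))

# Leading large-N behaviour: c/N -> 1 - lambda**2
print(sp.limit(sp.simplify(c / N), N, sp.oo))      # prints 1 - lambda**2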
Basic data of a conformal field theory may be given by the spectrum and three point functions of primary operators. Since the higher spin symmetry of the minimal model is exact, the spectrum does not receive any corrections in 1/N; namely, there is no anomalous dimension for the higher spin currents J^{(s)}. Therefore, as simple but non-trivial examples, we examine three point functions, and specifically focus on those with two scalar operators and one higher spin current,

⟨ O_± Ō_± J^{(s)} ⟩ ,   (1.4)

with s = 2, 3, 4, .... Here Ō_± are the complex conjugates of O_±. In [7,8], the three point functions in the large N limit of the minimal model have been computed from classical higher spin theory. They were reproduced with a conformal field theory approach in [8,12,13], but these methods are applicable only to the leading order analysis in 1/N. Since the W_N minimal model is solvable, for instance, by making use of the coset description (1.1), we can obtain the three point functions (1.4) with finite N, k in principle. However, in practice, the computation would be quite complicated, and explicit expressions are available only with spin 3, 4, 5 currents [9][10][11] (see also [15] for an alternative algebraic method). In this paper, we develop a different way to compute the three point functions (1.4) from the decomposition of scalar four point functions by Virasoro conformal blocks. Our method may be explained as follows. Let us consider a generic operator product expansion of scalar operators O_i with conformal weights (h_i, h̄_i),

O_1(z, z̄) O_2(0, 0) = Σ_p C_{12p} ( A_p(0, 0) + ··· ) ,   (1.5)

where the coefficient C_{12p} includes the information of the three point function. Moreover, A_p has conformal weights (h_p, h̄_p), and the dots denote contributions from descendants. Using the expansion, we can decompose a scalar four point function as

Σ_p C_{12p} C_{34p} F(c, h_i, h_p, z) F̄(c, h̄_i, h̄_p, z̄) ,   (1.6)

where F(c, h_i, h_p, z) is the Virasoro conformal block, which can be fixed only from the symmetry in principle. Once we know scalar four point functions and Virasoro conformal blocks, we can read off coefficients such as C_{12p} by solving constraint equations coming from (1.6). For our case with O_i = O_± or Ō_±, the four point functions can be computed exactly with finite N, k, for instance, by applying the Coulomb gas approach as in [16]. On the other hand, Virasoro conformal blocks are quite complicated, but explicit forms may be obtained by applying Zamolodchikov's recursion relation [17]; see also [18,19]. Other works on the 1/c expansion of Virasoro conformal blocks can be found in, e.g., [20][21][22][23]. Gathering this knowledge, we shall obtain the coefficients such as C_{12p} up to the next leading order in the 1/N expansion. The paper is organized as follows. In order to study the decomposition (1.6), we need to examine scalar four point functions and Virasoro conformal blocks. In the next section we expand scalar four point functions in terms of the cross ratio z, and in section 3 we give the explicit expressions of Virasoro conformal blocks in expansions both in 1/N and z. After these preparations, we compute the three point functions (1.4) by solving constraint equations coming from (1.6) in section 4. In subsection 4.1 we reproduce known results at the leading order in 1/N. In subsection 4.2 we obtain the 1/N corrections of three point functions for s = 3, 4, ..., 8, and check that they satisfy the relation conjectured in [11]. Section 5 is devoted to conclusion and discussions. In appendix A we examine Virasoro conformal blocks in expansions of 1/c and z by analyzing Zamolodchikov's recursion relation. In appendix B we compute three point functions with higher spin currents of double trace type.

Expansions of four point functions

We would like to obtain the coefficients such as C_{12p} by solving (1.6). For this purpose, we need information on both sides of the equation, i.e., scalar four point functions and Virasoro conformal blocks. In this section we examine scalar four point functions.
We are interested in three point functions of the two scalar operators O_± and a higher spin current J^{(s)} as in (1.4). We consider the four point functions G_{++}(z), G_{--}(z), and G_{-+}(z) of the scalar operators O_±, defined in (2.1), (2.2), and (2.3); exact expressions with finite N, k may be found in [16]. From the expansions in z, we can read off which operators are involved in the decomposition by Virasoro conformal blocks. In the rest of this section, we obtain the explicit forms of the four point functions in the z expansion for the parts relevant to the later analysis. Let us first examine the z expansion of G_{++}(z) in (2.1), and see the generic properties of the four point functions. The expression with finite N, k is given in (2.5) [16]. Here the exact value of the conformal dimension Δ_+ = 2h_+ is given in (2.6), which is expanded in 1/N up to the N^{-2} order. In the expansion in z, we would like to pick up the terms corresponding to the three point function (1.4). The operator product of O_+ may be expanded as in (2.7). Here the J^{(s_1,s_2;s')}(z) are higher spin currents of double trace type, with s' ≥ 6 since s_1, s_2 ≥ 3 and s' − s_1 − s_2 ≥ 0. If we use the normalization ⟨J^{(s)} J^{(s)}⟩ ∝ N, then the two point function of this type of operator becomes ⟨J^{(s_1,s_2;s')} J^{(s_1,s_2;s')}⟩ ∝ N². This is related to the 1/N scaling of the corresponding coefficients C^{(s_1,s_2;s')}. There could be currents of other multi-trace types, but their contributions are more suppressed in 1/N. Furthermore, the A^{(n,m)}(z) are double trace type operators of the form given in (2.9), and their conformal weights are (h_{n,m}, h̄_{n,m}) = (2h_+ + n, 2h_+ + m). The dots in (2.7) include, for instance, the operators dressed by the higher spin currents J^{(s)}(z), J̄^{(s)}(z̄). The operator product expansion in (2.7) suggests that the contributions from J^{(s)} or its descendants are included in terms like z^{s+l}/|z|^{2Δ_+}, where l = 0, 1, 2, ... corresponds to the level of the descendant. In (2.4), such terms appear as in (2.10). Note that they also include effects from the higher spin currents of double trace type J^{(s_1,s_2;s')}(z), among others. For the first term in (2.4), the other contributions involve at least one anti-holomorphic current J̄^{(s)}(z̄). For the second term in (2.4), the expansions become polynomials in z and z̄ at the leading order in 1/N, and this implies that the double trace type operators A^{(n,m)} should appear as A_p in (1.5). At the leading order in 1/N, we can expand (2.10) around z ∼ 0; this corresponds to the expansion by the identity operator in (2.7). Thus the non-trivial contributions to our three point functions come from the terms at least of order 1/N. At the next and next-to-next orders in 1/N, there are two types of contributions in (2.10). One comes from (2.12), which becomes (2.13); here we have used (2.14) for k ≥ 2 along with the definition of the harmonic number. The other comes from the hypergeometric function, which can be similarly expanded in 1/N as in (2.16). In total, we arrive at the expansion (2.17) in terms of coefficient functions f^{(n)}, whose first few expressions are given in (2.19). We now move to another four point function, G_{--}(z) in (2.2), whose expression with finite N, k can again be found in [16]. We use this four point function in order to obtain the three point function (1.4) with the other type of scalar operator, O_-. As for G_{++}(z), the relevant part is given in (2.24), and the first few coefficient functions are given in (2.26). From the four point functions G_{±±}(z) we can read off the squares of the coefficients, (C^{(s)}_±)², but the relative phase factor cannot be fixed. In order to determine it, we also examine G_{-+}(z) in (2.3), which can be computed with finite N, k [16]. For later arguments, we need its expansion in (1 − z) up to the 1/N order.
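Before turning to the Virasoro blocks, it is useful to recall the global-block ingredient that controls these z expansions. The snippet below is our illustrative sketch: assuming the standard sl(2) global block for pairwise-equal external weights, g(h_p, z) = z^{h_p} 2F1(h_p, h_p; 2h_p; z), its Taylor coefficients can be generated from rising factorials; the case h_p = 4 reproduces the 2F1(4, 4; 8; z) expansion quoted in appendix A.

import sympy as sp

z = sp.symbols("z")

def hyp2f1_series(a, b, c, order=4):
    # Taylor series of the Gauss hypergeometric function 2F1(a, b; c; z)
    return sum(sp.rf(a, n) * sp.rf(b, n) / (sp.rf(c, n) * sp.factorial(n)) * z**n
               for n in range(order))

h = 4  # e.g. spin-4 exchange
print(sp.expand(hyp2f1_series(h, h, 2 * h)))
# -> 1 + 2*z + 25*z**2/9 + 10*z**3/3, matching 2F1(4,4;8;z) in appendix A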
Virasoro conformal blocks

In the previous section we analyzed the left hand side of (1.6). In order to obtain three point functions by solving the equations in (1.6), we further need information on F(c, h_i, h_p, z). In general, the forms of Virasoro conformal blocks are quite complicated. In practice, we actually do not need to know closed forms, but rather expansions in z up to some orders. For this purpose, a standard approach is to solve Zamolodchikov's recursion relation [17]. Following the algorithm developed in [18] (see also [19]), we obtain the expressions of Virasoro conformal blocks to several orders in z and 1/c in appendix A. Related works may be found in [19][20][21][22][23], and in particular, some closed form expressions were given, e.g., in [20]. Our findings agree with their results after minor modifications. Let us consider the four point function in the decomposition of (1.6) with h_1 = h_2 and h_3 = h_4. In the decomposition, the intermediate operator A_p can be the identity or another operator. As observed in the examples of the previous section, only the Virasoro conformal block of the identity operator (called the vacuum block) survives at the leading order in 1/N. This simply means that the four point functions factorize into products of two point functions at the leading order in 1/N. Virasoro conformal blocks with A_p of single trace type appear at the next leading order in 1/N. We would like to examine 1/N corrections to three point functions, so we need 1/N corrections to the Virasoro block of A_p. This also implies that we need the expression of the vacuum block up to the next-to-next leading order in 1/N. Let us first examine the vacuum block with h_1 = h_3 = h_±. As explained in appendix A, the 1/c expansion of the vacuum block is given by (3.1). The 1/c order term corresponds to the exchange of the spin 2 current (the energy momentum tensor) in terms of the global block. We need to rewrite the expansion in 1/c as an expansion in 1/N, as in (3.3); the first two terms can be easily read off. Since there are two types of contributions to V_0, we separate it into two parts. One comes from the 1/c order term with the next leading contribution from h_±²/c, as in (3.6), where the coefficients C^{(s)}_{±±} were given in (2.19) and (2.26). Here we have used (3.7), which is obtained from the 1/N expansions of h_± in (2.6) and (2.23) and of c in (1.2). The other comes from the 1/c² order terms in (3.1), as in (3.9), with k_a(z), k_b(z), and k_c(z) in (3.2). We also need the Virasoro blocks of A_p up to the next non-trivial order in 1/N. It is known that the Virasoro block is expanded in 1/c as in (3.10) (see, e.g., [23]). Here g(h_p, z) is the global block of A_p, and the expressions of f_a(h_p, z), f_b(h_p, z), and f_c(h_p, z) were obtained in [23]; see also appendix A. For our application, we set h_1 = h_3 = h_± and h_p = s. We need the expansion in 1/N instead of 1/c, as in (3.11). The leading term V_p^{(0)}(z) is given by the global block, as in (3.12). The next order contributions in 1/N are given in (3.13), where the functions f_a(s, z), f_b(s, z), and f_c(s, z) are given in (3.14).

Three point functions

After the preparations in the previous sections, we now work on the decompositions of the four point functions by Virasoro conformal blocks as in (1.6). In the current case, the decompositions take the form (4.1). Here G_{±±}(z) are the four point functions defined in (2.1) and (2.2), and their expansions in z were obtained in (2.17) and (2.24). Moreover, V_0(z) is the vacuum block and V_s(z) is the Virasoro block of the higher spin current J^{(s)} (or J^{(s_1,s_2;s)}). Their expansions in z can be found in the previous section.
Solving the constraint equations from (4.1), we read off the coefficients C^{(s)}_±, which are proportional to the three point functions (1.4). It is convenient to expand the coefficients in 1/N as in (4.2). Then we can see that the constraint equations from (4.1) at order N^0 are trivially satisfied as 1 = 1. The non-trivial conditions arise from the order 1/N terms, and they determine the leading order expressions C^{(s)}_{±,0}, as seen in the next subsection. The main purpose of this paper is to compute C^{(s)}_{±,1}, which are the 1/N corrections to the leading order expressions. We derive them by solving the order N^{-2} conditions up to s = 8 in subsection 4.2. Notice that we should take care of C^{(s_1,s_2;s')}_± in (4.1) for s ≥ 6, which may be expanded in 1/N in the same way; the leading coefficients C^{(s_1,s_2;s')}_{±,0} are obtained in appendix B.

Leading order expressions in 1/N

We start from three point functions at the leading order in 1/N. We examine the constraint equations from (4.1) up to the 1/N order. Up to this order, the vacuum block takes a simple form (see (3.1)), where we have defined (4.5). The Virasoro block of J^{(s)} is as in (3.11) with (3.12). Therefore, the expansion in (4.1) can be written out explicitly up to the order of 1/N, with the four point functions G_{±±}(z) expanded as in section 2. Comparing the coefficients in front of z^n, we obtain (4.10); these are the constraint equations for (C^{(s)}_{±,0})² with s = 3, 4, .... In order to fix the relative phase factor, we examine G_{-+}(z) in (2.3) as well, for which the decomposition in (1.6) takes a modified form. The extra phase factor (−1)^s may require explanation: here we need to use a slightly different expression of the operator product expansion, as in (4.12). Then the coefficients in front of the global blocks involve the factor (−1)^s, which can be obtained from the coordinate dependence of the three point function, completely fixed by conformal symmetry; see (4.15) below. Therefore, by comparing the coefficients in front of z^n, we obtain the constraint equations (4.14) for the three point functions. Now we have three types of constraint equations, as in (4.10) and (4.14), and we would like to show that the known results satisfy these equations. At the leading order in 1/N, the three point functions have been computed as in (4.15) [8]. The phase factors η^{(s)}_± depend on the convention for the higher spin currents, but we may fix them as in (4.18). The first few coefficients are given explicitly, along with (4.5) for s = 2. Using these explicit expressions, we can check that the constraint equations (4.10) and (4.14) are indeed satisfied.

Next, let us examine the equations (4.20) from the low order terms in z. There are no z^0 and z^1 order terms on either side. We can see that the equality in (4.20) is satisfied at order z² from (3.6). Non-trivial constraint equations appear at order z³, and the z⁴ order constraints involve f^{(4)} and receive a contribution from (3.9). The constraints lead to (4.24), and we then find (4.25) by solving them. For s ≥ 6, the contributions from higher spin currents of double trace type should be considered; they are given in (4.26)-(4.28). In appendix B we also compute the three point functions with these double trace currents. The constraint equations for C^{(6)}_{±,1} involve the effect of J^{(3,3;6)} in (4.26). Solving these equations, we find an expression for C^{(6)}_{±,1} containing the terms −1273/(84(λ + 1)) + 11/(3(λ + 2)) + 9/(2(λ + 3)) among others. For spin 7, another double trace operator, J^{(3,4;7)} in (4.26), should be considered; the constraints involve combinations such as C^{(7)}_{±,0} C^{(7)}_{±,1} + 2 C^{(6)}_{±,0} C^{(6)}_{±,1} · (10/3) + ..., which lead to an expression containing −8117/(132(λ + 1)) + 556/(33(λ + 2)) + 9/(2(λ + 3)) among others. The constraint equations for C^{(8)}_{±,1} take a similar form; here we have taken care of the double trace operators J^{(4,4;8)}, J^{(3,5;8)}, and J^{(3,3;8)} in (4.27).
We then have the corresponding results (4.34) for s = 8. Since the three point functions were already obtained with finite N, k in [9-11] for s = 3, 4, 5, they can be compared to our results in principle. Instead of doing so, we utilize a simpler relation for the ratio of three point functions (see (4.52) of [11]), which involves the factor

(nk + (n + 1)N + n) / (nk + (n − 1)N) .   (4.35)

The relation was derived for s = 2, 3, 4, 5 by using the explicit results and conjectured for generic s based on them. The expression up to the 1/N order becomes (4.36), and we can easily check that (4.18) satisfies this condition. The relation in (4.36) at the next leading order in 1/N implies a further condition on the 1/N corrections. We have confirmed our results (and the conjectured relation in (4.36)) by showing that our results on C^{(s)}_{±,1} satisfy it. Finally, let us comment on the three point functions with the energy momentum tensor T ∝ J^{(2)}. They do not appear in the decomposition by Virasoro conformal blocks, but they can be fixed by the conformal Ward identity. In particular, they lead to (4.5) and to a relation for 2C^{(2)}_{±,0} C^{(2)}_{±,1} with (3.7). As a consistency check, we can show that they satisfy (4.38) as well.

Conclusion and open problems

In this paper, we have developed a simple and systematic method to obtain three point functions with two scalar operators and one higher spin current by decomposing scalar four point functions with Virasoro conformal blocks. The four point functions can be computed exactly with finite N, k [16], and Virasoro conformal blocks can be obtained including 1/N corrections, say, by analyzing Zamolodchikov's recursion relation [17]. Solving the constraint equations from the decomposition, we can obtain three point functions including 1/N corrections. At the leading order in 1/N, we can easily reproduce the known results in [8], because Virasoro conformal blocks reduce to global blocks in this case. At the next leading order, we have obtained 1/N corrections to the three point functions up to spin 8. Previously, exact results were known for s = 3, 4, 5 [9][10][11], and our findings for s = 6, 7, 8 are new. We have confirmed our results by checking that the conjectured relation in (4.38) is satisfied. We have evaluated 1/N corrections only up to the spin 8 case because of the following two obstacles. One comes from the 1/c corrections to Virasoro conformal blocks. Up to the required order in 1/c, closed forms can be obtained, for instance, by following the method in [23], except for f_c(s, z) in (3.13). In (3.14) (or in [23]), the function f_c(s, z) is given up to order z^{5+s}, but we need the term at order z^{6+s} with s = 3 for the spin 9 computation. We have not tried to do so, but it should be possible to obtain the terms at higher orders in z without much effort. The other is related to the contributions from higher spin currents of double trace type, as analyzed in appendix B. In order to obtain primary operators of this type, we have used the commutation relations in (B.1), which are borrowed from [24]. For spin 9, a current of the form J^{(3,6;9)} ∼ :J^{(3)} J^{(6)}: would give some contributions. However, in order to find its primary form, we need the commutation relation between W and Y, which is currently not available. At the orders in 1/c which do not vanish at c → ∞, we can derive the commutation relations involving more higher spin currents, for instance, from the dual Chern-Simons description as in [25][26][27][28]. The computation is straightforward but might be tedious. In any case, it is definitely possible to obtain the 1/N corrections of three point functions for s ≥ 9, and it would be desirable to have expressions for generic s. There are many open problems we would like to think about. Because of the simplicity of our method, it is expected to be applicable to more generic cases. For example, it is worth generalizing the current analysis to supersymmetric cases.
Recently, it has become possible to discuss relations between 3d higher spin theories and superstrings by introducing extended supersymmetry into the duality of [1]. Higher spin holography with N = 3 supersymmetry has been developed in a series of works [29][30][31], while large or small N = 4 supersymmetry has been utilized through the well-studied holography with the symmetric orbifold in [32,33]. Previous works on the subject may be found in [34][35][36]. As mentioned in the introduction, the main motivation to examine 1/N corrections in the 2d W_N minimal model is to learn about quantum effects in the dual higher spin theory. We would like to report on our recent progress in a separate publication [37].

A Recursion relations and Virasoro conformal blocks

In this appendix we derive the expressions of Virasoro conformal blocks in expansions of 1/c and z by solving Zamolodchikov's recursion relation [17], and we compare our results to those previously obtained, especially in [23]. We decompose a four point function by Virasoro conformal blocks F(c, h_i, h_p, z) as in (1.6). In the following we set h_1 = h_2 and h_3 = h_4. The recursion relation for Virasoro conformal blocks [17] starts from the term z^{h_p} 2F1(h_p, h_p; 2h_p; z). Here the poles in c are located at c = c_{mn}(h_p), with the corresponding residua. For our purpose, it is enough to obtain the first several terms of the Virasoro blocks in the z expansion, and we obtain them by following the strategy of [18]; see also [19]. We decompose Virasoro conformal blocks by global blocks as in (A.6). The generic expressions of χ_q are given in (2.28) of [18]. With h_1 = h_2 and h_3 = h_4, it can be shown that χ_q = 0 for odd q. The explicit expressions for q = 2, 4, 6 can be found in (C.1) of that paper. Inserting these expressions into (A.6), we can obtain the Virasoro conformal blocks up to the order of z^{h_p + 7}. Let us start from the vacuum block. As discussed in the main text, we need its expression up to the 1/c² order. For h_p = 0 the coefficients χ_q can be found in (2.15) of [18], and they are expanded in 1/c; note that there is no 1/c correction to χ_2(c, h_i, 0). We also use 2F1(4, 4; 8; z) = 1 + 2z + (25/9) z² + (10/3) z³ + ···.

B.1 Higher spin algebra

In order to find higher spin currents of double trace type which are primary with respect to the Virasoro algebra, we utilize commutation relations among higher spin currents given in [24] (see also [15,27,28]). The currents are denoted as W, U, X, Y, which are proportional to J^{(s)} with s = 3, 4, 5, 6. In order to obtain the leading order expression (C^{(s_1,s_2;s')}_{±,0})², we only need the commutation relations up to the terms vanishing at c → ∞, such as [W_m, X_n] = (4m − 2n) Y_{m+n} + ···. The constants are fixed in the current notation. With these conventions, the higher spin charges are given by the zero modes acting on |O_±⟩ ≡ O_±(0)|0⟩ at the leading order in 1/c.

B.2 Three and two point functions

We start from the spin 6 current J^{(3,3;6)}, where the prescription of the normal ordering is as in (B.8) (see, e.g., (6.144) of [39]), with h_A the conformal weight of A. We then obtain (C^{(3,3;6)}_{±,0})² from an expansion whose terms include (n + 4) ··· (n + 7) U_n z^{n+8} and (d/6!) n (n + 2) ··· (n + 7) L_n z^{n+8}. Here we have applied the normal ordering prescription as in (B.8). For instance, we may make a choice of conventions under which there is no contribution from J^{(3,4;8)}.
Studies on external genitalia of seven Indian species of the genus Spilarctia Butler (Lepidoptera: Arctiidae: Arctiinae) along with the description of a new species

INTRODUCTION

Genus Spilarctia was established in 1875 by Butler on the type species Phalaena lutea Hufnagel, 1766, from Germany. This genus was synonymised under Spilosoma Stephens, 1828, by Hampson in 1894. However, in 1901, Hampson described the genus Diacrisia Hübner, 1819, in a broader concept and synonymised 31 genera under it, which also included both the genera, i.e., Spilarctia Butler and Spilosoma Stephens. Seitz (1910) introduced the division of the family Arctiidae into eight subfamilies and transferred the genus Spilarctia Butler under Spilosominae. Later, Daniel (1943) followed this division in spite of the fact that Strand (1919) treated Spilosominae as a synonym of Arctiinae. Arora & Chaudhary (1982) and Arora (1983) followed the classification given by Seitz (1910). Holloway (1988) used Spilosoma Curtis (=Spilosoma Stephens) as a valid generic name. Koda (1988) brought out an important publication on the generic classification of subfamily Arctiinae of the Palearctic and Oriental regions based on the male and female genitalia. He re-characterized the genera Spilosoma Curtis and Spilarctia Butler and provided suitable status to both these genera in this publication. Kirti & Singh (1994) studied the genitalic structures of four Indian species, i.e., Spilarctia multiguttata Walker, S. casignata Kollar, S. obliqua Walker and S.
comma Walker. In the present study a large sample of 43 representatives was collected from different localities of the Western Ghats of India. On close examination of morphological characters, seven species were separated. Out of these, six species were identified from the relevant literature and by comparison with the collections preserved in different national museums, viz., Indian Agricultural Research Institute (IARI), New Delhi, Forest Research Institute (FRI), Dehradun and Natural History Museum (NHM), London. One species could not be identified from these sources. This species is described here as new to science.

MATERIALS AND METHODS

The material for the present study, i.e., the adult moths of family Arctiidae, was collected exclusively at fluorescent lights during night hours from different localities in the Western Ghats of India. The collected moths were killed with ethyl acetate vapors in the killing bottle. The freshly killed specimens were pinned and stretched on adjustable wooden stretching boxes. The pinned specimens were dried for 2-3 days in improvised drying chambers. The properly dried specimens were then preserved in airtight wooden boxes containing naphthalene balls as fumigants.

To study wing venation, permanent slides of fore and hind wings were made. For this, the methodology given by Common (1970) and advocated by Zimmerman (1978) was followed. For the study of external male and female genitalia, the entire abdomen of the preserved moths was removed, as cutting only the last few segments often damages constituent parts of the male and female genitalia (Robinson 1976). The detached abdomen was put in 10% KOH for 12-14 hr in order to soften chitin and dissolve muscles and other soft parts. The KOH-treated material was washed in distilled water, and residual traces of KOH were later removed by putting it in 1% glacial acetic acid. The abdomen was dissected in 50% alcohol for taking out the genitalia, and adhering unwanted material was cleared in the subsequent grades. After proper dehydration, the material was cleared in clove oil and preserved in a 3:1 mixture of alcohol and glycerol. The diagrams were drawn with the help of a graph eyepiece fitted in a zoom binocular.

Genus Spilarctia Butler

Butler, 1875, Cistula Entomologica, 2: 39. Type species: Phalaena lutea Hufnagel, 1766, Germany: Berlin; type deposited in Natural History Museum (NHM), London; subsequent designation by Kirby, 1877, in Rye, Zool. Rec., 12: 431.

Diagnosis: Labial palpi porrect or porrectly rostriform. Antennae bipectinate in males, ciliated in females. Forewing with vein R1 arising from cell; R2, R3, R4 and R5 from a common stalk; M1 from upper angle; M2 from or slightly beyond lower angle of cell. Hindwing with vein Sc+R1 originating towards base of cell; Rs and M1 from upper angle; M2 from lower angle or towards middle of discocellulars. Hind tibia with two pairs of spurs. Male genitalia with uncus moderately long, broad at base and gradually narrowing towards tip; acrotergite well developed; fenestrula absent; saccus present; valvae simple with costa narrow and linear, sometimes produced at proximal end; sacculus present, valvula and cucullus not clearly differentiated; juxta trapezoid; aedeagus moderately long and broad; vesica membranous with irroration of small spines; ductus ejaculatorius entering subapically. Female genitalia with corpus bursae membranous, signum present or absent; ductus seminalis entering ductus bursae.
Spilarctia mona (Swinhoe) (Figure 1)

Remarks: Only three female specimens of mona Swinhoe were collected, in 1885, by Swinhoe from Bombay and Mahabaleshwar. To date, no male representative of this species had been studied and associated with mona Swinhoe except by Kaleka (2005). It seems that this Indian worker wrongly identified the above-said species collected from northeastern India because, in the female genitalia of mona Swinhoe, three signa are present, whereas Kaleka (2005) mentioned that the signum is missing in this species. Kaleka (2005) not only shifted it under genus Thanatarctia Butler on the basis of external male and female genitalic structures, but also provided wrong information in his publication that the species mona Swinhoe was studied by Koda (1988) under the genus Spilarctia Butler.

In the present study only two female representatives were collected, from Mahabaleshwar and Matheran, which clearly points out that the species is geographically very much restricted. The detailed study of morphological and female genitalic structures of the species under reference confirms that it is better to place it under the genus Spilarctia Butler rather than under Thanatarctia Butler or Diacrisia Hübner. Hence, the proper status of mona Swinhoe has been provided in the present work.

(Figure 4)

Head with frons and vertex ochreous. Antennae bipectinate in males; scape and pedicel ochreous; shaft and branches dark brown. Eyes fuscous green with black spots or patches. Labial palpi porrect; laden with crimson scales; third segment brown. Thorax, collar and tegula ochreous, thorax with a small black streak. Forewing with ground colour ochreous, slightly irrorated with crimson scales; costa suffused with crimson scales; a basal black spot; antemedial spot on vein 1A; a black speck at end of cell; an oblique series of postmedial spots on both sides of veins, not reaching costa; traces of submarginal series of black spots; underside with irroration of crimson scales; a black spot at end of cell; fringe ochreous; vein R1 from cell; R2, R3, R4 and R5 from a common stalk; M1 from upper angle; M2 slightly beyond angle; M3 from angle of cell; Cu1 near angle of cell; Cu2 from middle of cell. Hindwing with ground colour ochreous; inner margin suffused with crimson scales; a black spot at end of cell; more or less complete series of submarginal spots; underside same; fringe ochreous; vein Sc+R1 originating before middle of cell; Rs and M1 from upper angle; M2 towards middle of discocellulars; M3 and Cu1 from lower angle; Cu2 from middle of cell. Legs black brown; coxae and trochanter suffused with crimson scales; hind tibia with two pairs of spurs.

Remarks: Morphologically the species under reference is closely allied to obliqua Walker. But the perusal of external male genitalic structures reveals that it is a different species. Its distinct male genitalic characters, like the shape of the vinculum, juxta and valvae, justify its status.

Etymology: The name of the species refers to the district of its type locality, i.e., Coorg (Kodagu).

Remarks: The species has been discussed in considerable detail by many previous authors like Koda (1988) and Kirti & Singh (1994). Hence, the description is omitted in the present study, whereas the illustrations are given for the sake of comparison.
Remarks: The species under reference was shifted as a 'form' of obliqua Walker under genus Diacrisia Hübner by Hampson in 1901. The present work deals with the detailed study of its male and female genitalic structures, which confirms its status as a distinct species. Hence, the original combination of todara Moore with genus Spilarctia Butler has been revived.

Spilarctia castanea (Hampson) comb. nov. (Figure 2)

castanea Hampson, 1893: 9, male, type locality Sri Lanka, type depository NHM, London, and examined by the junior author. The species castanea Hampson was described under genus Diacrisia Hübner by Hampson in 1901. Critical examination of its external genitalic structures revealed that it does not conform to the characterization of the type species of Diacrisia Hübner. Therefore, the new combination for this species has been proposed by transferring it under Spilarctia Butler. The male genitalia of this taxon have been described and illustrated in detail for the first time. The species has also been recorded for the first time from India.

Uncus strongly built, sickle shaped, setosed with fine setae, sclerotized, tip pointed; acrotergite well developed; fenestrula absent; tegumen longer than uncus, U-shaped; vinculum shorter than tegumen, V-shaped; saccus well developed. Valvae with costa narrow and weakly sclerotized; sacculus sclerotized, produced into an outgrowth towards distal end; harpe+ampulla a simple plate; cucullus rounded; valvula simple, setosed with long setae. Transtilla a sclerotized bar; juxta well developed, plate like; aedeagus long and moderately broad, almost straight; carina penis like a convex lens, with the convex surface bearing small spines; vesica membranous with irroration of small spines, two patches of large spines present; ductus ejaculatorius entering subapically.

The above-said species bifascia Hampson was shifted under genus Diacrisia Hübner as a 'form' of Diacrisia obliqua Walker by Hampson in 1901. But the detailed examination of genitalic structures, like the shape of the uncus, juxta, valvae and aedeagus of bifascia Hampson, clearly confirms that it is better to place this taxon under genus Spilarctia Butler. Therefore, the original status of this species has been revived in the present work, and the male genitalic structures are discussed and illustrated in detail for the first time.
Post-partum testosterone administration partially reverses the effects of perinatal cadmium exposure on sexual behavior in rats

This study investigated the effects of perinatal cadmium exposure on sexual behavior, organ weight, and testosterone levels in adult rats. We examined whether immediate postpartum testosterone administration is able to reverse the toxic effects of the metal. Forty pregnant Wistar rats were divided into three groups: 1) control, 2) 10 mg kg-1 cadmium chloride per day, and 3) 20 mg kg-1 cadmium chloride per day. These dams were treated on gestational days 18 and 21 and from lactation days 1 to 7. Immediately after birth, half of the offspring from the experimental and control groups received 50 μl (i.p.) of 0.2% testosterone. Male sexual behavior, histological analysis and weight of organs, as well as serum testosterone levels, were assessed. Results showed that both cadmium doses disrupted sexual behavior in male rats, and postnatal treatment with testosterone reversed the toxic effects of 10 mg kg-1 cadmium and attenuated the effects of 20 mg kg-1 cadmium. Body weight and absolute testis, epididymis, and seminal vesicle weights were decreased by the higher cadmium dose, and testosterone supplementation did not reverse these effects. Serum testosterone levels were unaffected by both cadmium doses. No histological changes were detected in any of the organs analyzed. The effects of maternal cadmium exposure on sexual parameters of male rat offspring were explained by altered masculinization of the hypothalamus. We suggest that cadmium damaged cerebral sexual differentiation through its actions as an endocrine disruptor, supported by the changes discretely observed from early life (during sexual development) to adult life, as reflected by sexual behavior. Testosterone supplementation after birth reversed some crucial parameters directly related to sexual behavior.

Steroid hormones play a significant role in the brain and neuroendocrine system both pre- and neonatally, resulting in gender dimorphism in the behavioral and metabolic aspects of reproduction in adulthood (Jacobson & Gorski, 1981; MacLusky & Naftolin, 1981). Brain sexual differentiation occurs during the perinatal period after an abrupt discharge of testicular testosterone in males. In male rats, testosterone surges occur markedly on days 18-19 of gestation (Ward & Weisz, 1984) and again during the first few hours following parturition (Corbier, Kerdelhue, Picon, & Roffi, 1978). Thus, early exposure to androgens from the developing testes results in masculinization and defeminization of the brain. The former entails permanent actions that support typical male copulatory behaviors and patterns of gonadotropin secretion. Testosterone per se is not responsible for masculinizing the brain (Roselli & Klosterman, 1998). This process requires the conversion of androgen to estrogen, and the neural aromatization of androgens to estrogens is known to be a critical step in the development and adult expression of male sexual behavior in various species (Freeman & Rissman, 1996; Lephart, 1996). During this period of brain sexual differentiation, testosterone or its metabolites are fundamental for the masculinization and defeminization of sexual behavior, the establishment of gonadotropin secretion patterns, and various morphological indices.
Alterations in the process of hypothalamic sexual differentiation, if present, are generally perceived only at puberty or during adult reproductive life (Gerardin, Bernardi, Moreira, & Pereira, 2006; Piffer & Pereira, 2004). We previously examined whether immediate postpartum testosterone administration is able to reverse the toxic effects of perinatal cadmium treatment on physical and reflexologic development in rat pups (Couto-Moraes, Felicio, & Bernardi, 2010). Testosterone administration was not able to reverse the effects of cadmium, even on those parameters more directly related to the androgenic system, such as delays in the descent of the testis and in anogenital distance. The present study examined two aspects of cadmium exposure: 1) the long-term effects of perinatal exposure to 10 and 20 mg/kg cadmium on sexual aspects, and 2) whether postnatal testosterone treatment can reverse the disruptive effects of cadmium on male sexual parameters. Thus, sexual behavior, serum testosterone levels, weight, and reproductive organ histology were evaluated.

Animals

Adult male and female Wistar rats obtained from the Department of Pathology, School of Veterinary Medicine, University of São Paulo, Brazil, weighing approximately 310 and 230 g and aged 100 and 75 days, respectively, were used. The animals were housed in polypropylene cages (40 × 50 × 20 cm) under controlled temperature (20 ± 2°C) and humidity (70 ± 5%) with a 12 h/12 h light/dark cycle (lights on at 6:00 AM). Food (Nuvilab CR1, species-specific ration) and water (filtered in porcelain) were provided ad libitum throughout the study. All procedures were approved by the Animal Care Committee (protocol no. 435/2004-FMZ-USP) in accordance with the guidelines of the Committee on Care and Use of Laboratory Animal Resources, National Research Council (USA). Nulliparous female rats (75 days) were handled daily by the investigators for 14 consecutive days to avoid stress interference with the female rats' estrous cycle (Lovick, 2012). After this time, estrus was determined by vaginal cytology. Estrous rats were randomly divided into pairs and placed with one fertile male overnight. On the following morning, the success of mating was confirmed by the presence of spermatozoa in vaginal smears. This day was considered gestational day 0 (GD0). Two pregnant female rats were maintained in each cage until GD18, after which time they were isolated to build their nests.

Drugs

Cadmium chloride (CdCl2, J. T. Baker) was diluted in distilled water at 1% and 2% concentrations and orally administered in a volume of 1 ml kg-1 body weight, adjusted to the females' body weight fluctuation. Cadmium doses were chosen based on previous studies by our group showing that maternal exposure to the higher dose used here promoted changes in the offspring's sexual sphere (Salvatori et al., 2004). Testosterone propionate (Sigma) diluted in almond oil (Lederc) to a concentration of 0.2% was administered i.p. in a 50-µl volume according to Grattan & Selmanoff (1994). Nankeen black paint (Acrilex) was used to mark littermates.
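To make the dosing arithmetic explicit: a 1% w/v solution contains 10 mg of CdCl2 per ml, so gavage at 1 ml kg-1 delivers 10 mg kg-1, and the 2% solution likewise delivers 20 mg kg-1. The helper below is an illustrative sketch of this conversion, not part of the original study protocol:

```python
def delivered_dose_mg_per_kg(percent_w_v, volume_ml_per_kg):
    """Dose in mg/kg from a w/v concentration and a per-weight gavage volume.

    percent_w_v: solution strength in % w/v (1% w/v equals 10 mg/ml).
    volume_ml_per_kg: administered volume in ml per kg body weight.
    """
    mg_per_ml = percent_w_v * 10.0
    return mg_per_ml * volume_ml_per_kg

# The two cadmium groups in this study: 1% and 2% CdCl2 given at 1 ml/kg.
print(delivered_dose_mg_per_kg(1.0, 1.0))  # -> 10.0 mg/kg
print(delivered_dose_mg_per_kg(2.0, 1.0))  # -> 20.0 mg/kg
```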
Formation of the experimental groups

Forty dams were divided into three groups. Two experimental groups (n = 16 per group) were treated orally by gavage once daily with 10 or 20 mg kg-1 cadmium according to the following regimen: on GD18 and GD21 and daily from postnatal day 1 (PND1) to PND7. These treatment periods correspond to male brain sexual differentiation in rats. The control group was treated with 0.9% NaCl according to the same schedule (1 ml kg-1, gavage, n = 8). Immediately after birth, half of the pups from the experimental groups received an i.p. injection of testosterone propionate, and the other half received almond oil under the same conditions as the control group. Thus, the following experimental groups were formed: SS, perinatal saline + saline solution in adult age (n = 8/litter); CDS10, perinatal cadmium 10 mg kg-1 + saline solution in adult age (n = 8/litter); CDT10, perinatal cadmium 10 mg kg-1 + testosterone (n = 8/litter); CDS20, perinatal cadmium 20 mg kg-1 + saline solution in adult age (n = 8/litter); CDT20, perinatal cadmium 20 mg kg-1 + testosterone (n = 8/litter).

Offspring studies

Immediately after birth, all pups were examined externally for the presence of gross abnormalities. They were sexed and weighed, leaving a total of eight pups (four males and four females) with each dam until weaning on PND21. On this day, the littermates were separated, housed together by gender, and kept under the same laboratory conditions as their parents. Three male pups from eight litters per treatment were marked on the right foreleg and used to evaluate sexual behavior, body and organ weights, histological analysis and serum testosterone levels. The remaining male and female offspring were used in other experiments in our laboratory.

Sexual behavior

On PND100, sexual behavior in male rat offspring was evaluated according to Felicio, Palermo-Neto, & Nasello (1989). The apparatus was a gray-painted wooden box (56 × 35 × 31 cm) with a movable lid on its upper portion and one glass front wall. Animals were maintained under controlled conditions on a partially reversed light/dark cycle (lights on at 10:00 PM and off at 10:00 AM) for at least 24 days before the test. All sexual behavior tests were held 4 to 8 h after the beginning of the dark period. To minimize the possible influence of circadian changes on sexual behavior, control and experimental animals were alternated. A layer of sawdust served as bedding, composed of new sawdust mixed with sawdust originating from the animals' home cages. During all of the tests, two 40-W red lamps illuminated the test room. For the observations, each male rat was individually placed in the box for 5 min before the mating test to adapt to the new environment. One sexually receptive female was then placed in the box, and the following parameters were measured for a period of 40 min: latency to first mount, latency to first intromission, latency to first ejaculation, number of mounts until first ejaculation, number of intromissions until first ejaculation, latency to post-ejaculatory mount, latency to post-ejaculatory intromission, and total number of ejaculations. The frequency of mounts per minute (the number of mounts until the first ejaculation divided by the time elapsed between the first mount and the first ejaculation) and the frequency of intromissions per minute (the number of intromissions until the first ejaculation divided by the time elapsed between the first intromission and the first ejaculation) were calculated, as formalized in the sketch below.
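In other words, mount frequency = number of mounts / (time of first ejaculation - time of first mount), and intromission frequency = number of intromissions / (time of first ejaculation - time of first intromission). A minimal illustrative helper (the variable names are ours, not the authors'):

```python
def mount_frequency(n_mounts, t_first_mount, t_first_ejac):
    """Mounts per minute over the interval from first mount to first ejaculation.

    Times are in minutes from the start of the 40-min test.
    """
    return n_mounts / (t_first_ejac - t_first_mount)

def intromission_frequency(n_intromissions, t_first_intro, t_first_ejac):
    """Intromissions per minute from first intromission to first ejaculation."""
    return n_intromissions / (t_first_ejac - t_first_intro)

# Hypothetical example: 12 mounts between minute 2 and minute 14 of the test.
print(mount_frequency(12, 2.0, 14.0))  # -> 1.0 mount per minute
```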
Body and organ weight and histological evaluation of reproductive organs

On PND100, adult rats were weighed. After euthanasia by decapitation, the rats were carefully dissected, and the following reproductive organs were collected: testes, epididymis, seminal vesicle, and ventral prostate. These organs were then washed in saline solution and dried on hygienic paper. The measurement of organ weight was performed using a digital scale (Mettler), and the absolute weights were calculated. Finally, organ fragments were collected and fixed with 10% phosphate-buffered formalin until histology was performed. For the histological evaluation, after fixation, the organ fragments were passed through the usual stages of paraffin inclusion. Paraffin-embedded tissues were then sectioned (5 μm) and stained with hematoxylin and eosin (HE) for examination by light microscopy.

Serum testosterone levels

After euthanasia, trunk-blood samples were taken to measure testosterone levels. Serum testosterone in adult rats (on PND100) was measured using a solid-phase radioimmunoassay and a commercial kit (Coat-A-Count) purchased from Diagnostic Products (Los Angeles, CA, USA). One 50-μl aliquot of serum was dispensed into each assay tube, and then 1 ml of buffer that contained the tracer (40000 cpm/tube) was added. A standard curve was set with minimal concentrations of testosterone from 1.24 ng/dl. The mixture was incubated for 3 h at 37°C. The tubes were then decanted, and the radioactivity bound to the tube was measured in a gamma counter with a built-in computer, which calculated the final values of testosterone in ng/dl serum. A single assay was performed. The intra-assay coefficient of variation ranged from 7.33% to 7.66%, and the inter-assay coefficient of variation was less than 3.7%. The hormone assay was performed in the Laboratory of Hormonal Dosages, Department of Animal Reproduction, School of Veterinary Medicine, University of São Paulo.

Figure 1. Diagram of experiments. Cadmium (10 and 20 mg kg-1) was administered to dams on gestational day 18 (GD18) and from GD21 to postnatal day 7 (PND7). The pups received testosterone supplementation (50 µl of a 0.2% solution, i.p.) or almond oil immediately after birth. On PND100, sexual behavior, organ weight, and testosterone levels were evaluated.

Statistical analysis

One-way analysis of variance (ANOVA) followed by the Tukey multiple comparison test was used for data analysis. In all cases, p < 0.05 was considered statistically significant. Statistical analyses were performed using Instat software, version 3.01 (GraphPad, San Diego, CA, USA).
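For readers who want to reproduce this analysis pipeline, the combination of a one-way ANOVA and Tukey's multiple comparison test at p < 0.05 can be run in Python with scipy and statsmodels. The latencies below are invented placeholders for three of the five groups, not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical first-mount latencies (in seconds), n = 8 per group.
ss = np.array([55, 60, 52, 58, 63, 57, 61, 54], dtype=float)
cds10 = np.array([95, 102, 88, 110, 97, 105, 92, 99], dtype=float)
cdt10 = np.array([60, 66, 58, 71, 63, 68, 59, 65], dtype=float)

# One-way ANOVA across the groups.
f_stat, p_value = f_oneway(ss, cds10, cdt10)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey multiple comparison test with alpha = 0.05.
values = np.concatenate([ss, cds10, cdt10])
labels = ["SS"] * len(ss) + ["CDS10"] * len(cds10) + ["CDT10"] * len(cdt10)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```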
Sexual behavior

The mean latencies to first mount (F(4,35) = 8.57, p < 0.0001) and first intromission (F(4,35) = 9.28, p < 0.0001) increased in pups perinatally exposed to 10 mg kg-1 cadmium without testosterone supplementation. Testosterone supplementation reversed these effects of cadmium. Pups exposed to 20 mg kg-1 cadmium exhibited increased latencies to first mount and first intromission, and testosterone supplementation attenuated these effects. The latency to first ejaculation increased (F(4,35) = 14.23, p < 0.0001) with exposure to the higher dose of cadmium. Testosterone supplementation did not reverse this effect. Compared with the control group, no effects of either cadmium treatment or testosterone supplementation were found on post-ejaculatory latency (F(4,35) = 2.15, p = 0.09; Figure 2). The number of mounts until first ejaculation (F(4,35) = 2.97, p < 0.03), number of intromissions until ejaculation (F(4,35) = 0.51, p < 0.0001), and number of ejaculations (F(4,35) = 5.30, p < 0.001) were different between groups (Figure 3). The Tukey multiple comparison test indicated that the number of intromissions until ejaculation and the total number of mounts increased in pups perinatally exposed to the higher cadmium dose. Testosterone supplementation attenuated the perinatal effects of cadmium. The total number of ejaculations in 40 min (F(4,35) = 8.57, p < 0.0001) decreased in experimental pups perinatally exposed to the lower and higher doses of cadmium. In both cases, testosterone supplementation reversed the perinatal effects of cadmium. With regard to the derived sexual parameters (Table 1), the frequency of intromissions per minute was different between groups (F(4,25) = 4.41, p = 0.007). The post hoc test revealed an increase in this parameter in pups perinatally exposed to the higher dose of cadmium with testosterone supplementation compared with the control group. No significant differences were found between groups in the frequency of mounts per minute.

Figure 2. First mount, first intromission, first ejaculation, and post-ejaculatory mount latencies in male rats perinatally treated with cadmium that received or did not receive testosterone supplementation immediately after birth. SS group: maternal treatment with saline and the pups were treated with almond oil. CDS10 and CDS20 groups: maternal treatment with cadmium and the pups were treated with almond oil. CDT10 and CDT20 groups: maternal treatment with cadmium and the pups were treated with testosterone (one-way ANOVA followed by the Tukey multiple comparison test). *p < 0.05, **p < 0.01, ***p < 0.001 compared with control group or between pairs. Differences between groups treated or not with testosterone are indicated by the bar on the groups. Data are expressed as mean ± SD. n = 8/group.

Weight and histological evaluation of reproductive organs

In adults, the absolute weights of the body (F(4,25) = 25.05, p < 0.0001), testes (F(4,25) = 22.90, p < 0.0001), epididymis (F(4,25) = 164.30, p < 0.0001), and seminal vesicle (F(4,25) = 43.84, p < 0.0001) were significantly different between groups. No differences were found in prostate weight (F(4,25) = 1.75, p = 0.16) or serum testosterone levels (F(4,45) = 0.04, p = 0.99) between groups (Figure 4). The Tukey multiple comparison test revealed that the weights of the body, testes, epididymis, and seminal vesicle decreased similarly in the postnatally testosterone-treated and -untreated groups compared with the control group. Additionally, testosterone treatment in 10 mg kg-1 cadmium-exposed rats reversed the decrease in epididymis weight. Histological evaluation of the reproductive organs did not reveal changes in the normal pattern of the reproductive organs in experimental rats (data not shown).

Discussion

The present study found that perinatal cadmium exposure disrupted sexual behavior in male rats, and postnatal treatment with testosterone reversed or attenuated these effects. Descriptions of male sexual behavior in nonhuman animals have distinguished two separate phases: a highly variable sequence of behaviors that involves attracting and courting a female, followed by a highly stereotyped copulatory sequence. The initial variable phase is often referred to as the appetitive (motivational) phase, whereas the highly stereotyped copulatory phase is often referred to as the consummatory phase.
The consummatory (copulatory) aspects of male sexual behavior must be expressed in a coordinated manner to produce a functionally adapted behavioral sequence that can result in the fertilization of the female and successful reproduction (Balthazart & Ball, 2007). The method used here to analyze the sexual behavior of male offspring allowed us to distinguish between sexual motivation (i.e., the ease with which sexual behavior is activated, or "libido") and the execution of the copulatory acts (i.e., performance or "potency") (Meisel & Joppa, 1994).

Figure 3. See Figure 2 for group descriptions. *p < 0.05, **p < 0.01, ***p < 0.001 compared with control group or between pairs (one-way ANOVA followed by the Tukey multiple comparison test). Differences between groups treated or not with testosterone are indicated by the bar on the groups. Data are expressed as mean ± SD. n = 8/group.

Figure 4. Absolute weights of the body, testes, epididymis, seminal vesicle, and prostate and testosterone levels in adult male rats perinatally treated with cadmium that received testosterone supplementation or almond oil immediately after birth. See Figure 2 for group descriptions. *p < 0.05, **p < 0.01, ***p < 0.001 compared with control group or between pairs (one-way ANOVA followed by the Tukey multiple comparison test). Differences between groups treated or not with testosterone are indicated by the bar on the groups. n = 8/group.

Mount latency is a measure of sexual motivation. The same is valid for intromission latency but, in this case, it requires penile erection and the coordinated activity of the striated penile muscles; therefore, it is not entirely determined by sexual motivation (Agmo, 1999). The number of mounts or intromissions reflects sexual motivation, but it may be confounded by other intervening factors and should be interpreted with caution (Agmo, 1997). Rats prenatally treated with cadmium showed a decreased number of total mounts and consequently a greater number of intromissions. Compared with the control group, rats prenatally treated with cadmium did not show significant differences in mount frequency, an index of motor activity, suggesting that the motor aspects of sexual behavior were not affected by prenatal treatment with the metal. Additionally, the increased latency to the first ejaculation and the lower number of ejaculations appeared to reflect a decrease in sexual behavior potency. Thus, perinatal exposure to cadmium reduced both the motivational and performance aspects of male sexual behavior. Cadmium binds to estrogen receptor-α (ER-α) and androgen receptors (ARs) and activates them (Martin et al., 2002; Stoica, Katzenellenbogen, & Martin, 2000). Therefore, the steroidal endocrine-disrupting effect of cadmium (Piasek, Laskey, Kostial, & Blanusa, 2002) could explain these results. This metal can interact with the estrogen nuclear receptor and trigger cellular mechanisms in estrogen-dependent processes in the central nervous system, particularly in the pituitary gland, in both the male and female reproductive systems (Sonnenschein & Soto, 1998). Temple, Scordalakes, Bodo, Gustafsson, & Rissman (2003) showed that ER knockout mice exhibited a tendency toward an increase in serum testosterone levels during puberty, an increase in the numbers of complete and incomplete mounts, and an increase in ejaculation latency.
Additionally, the same authors proposed that, during puberty, estrogen plays a critical role in the establishment of sexual behavior by regulating not only sexual motivation but also the ejaculatory process. Therefore, there was damage to the motivational and copulatory aspects of sexual behavior. Previously, our group found that prenatal exposure during the embryogenic period disrupted male sexual behavior (Salvatori et al., 2004). These effects were attributable to the endocrine disruptor-like properties of this metal. The present results also showed that postnatal testosterone supplementation reversed or attenuated the deleterious effects of perinatal cadmium exposure on the appetitive and consummatory parameters of sexual behavior. The increases in the latency to the first mount and in the number of intromissions induced by perinatal cadmium treatment were reversed or attenuated by postnatal testosterone treatment. The latency to ejaculation was not affected by the lower cadmium dose but was increased by the higher dose. In the latter case, postnatal testosterone administration did not reverse the effect. Although both cadmium doses did not alter the post-ejaculatory latencies, testosterone treatment reduced this sexual parameter. The parameters linked to male sexual behavior were also influenced by postnatal hormone treatment. Thus, the increases in the number of intromissions and the total number of mounts induced by the higher cadmium dose were attenuated. Moreover, the reduced number of ejaculations observed after both cadmium doses was completely reversed. Although the effects of androgens, particularly testosterone, on male sexual behavior are indirect (i.e., exerted after aromatization to estrogen), the present data showed that postnatal testosterone supplementation is critical in the prevention of the endocrine-disrupting effect of cadmium on male sexual behavior. The absolute weights of the body, testes, epididymis, seminal vesicle, and ventral prostate were also evaluated. Compared with the control group, 20 mg kg-1 cadmium decreased the absolute weights of the body, testes, epididymis, and seminal vesicle. Cadmium (10 mg kg-1) also reduced the weight of the epididymis. We previously observed that, in infancy, the weight gain between birth and weaning was reduced by perinatal exposure to 10 and 20 mg kg-1 cadmium (Couto-Moraes et al., 2010). Testosterone did not reverse this effect. In adults, this effect persisted only with the higher dose, suggesting that during development this damage was attenuated in rats exposed to 10 mg kg-1. At this age, testosterone supplementation did not reverse this effect. Perinatal exposure to the higher cadmium dose produced severe toxicity in reproductive organs, reflected by the decreased weights of the testes, epididymis, and seminal vesicle. Cadmium accumulates in reproductive organs, leading to damage (Al-Azemi et al., 2010; Li et al., 2010; Mendiola et al., 2011). Cadmium-induced testicular dysfunction is initially mediated by its effects on the occludin/ZO-1/focal adhesion kinase (FAK) complex at the blood-testis barrier (BTB), causing a redistribution of proteins at the Sertoli-Sertoli cell interface. This leads to BTB disruption.
The damaging effects of this toxicant on testicular function are mediated by downstream mitogen-activated protein kinases (MAPK) which, in turn, perturb actin bundling and accelerate actin-branching activity, causing a disruption of Sertoli cell tight junction (TJ)-barrier function at the BTB and perturbing spermatid adhesion at the apical ectoplasmic specialization (apical ES), a testis-specific anchoring junction type, which leads to the premature release of germ cells from the testis (for review, see Cheng et al., 2011). The epididymis and seminal vesicle were also affected by perinatal exposure to cadmium. Sperm maturation occurs in the epididymis. This process permits the progressive motility, survival, and fertilization success of spermatozoa (Hinton, Meadowcroft, & Wardle, 1995). Zenick, Blackburn, Hope, Oudiz, & Goeden (1984) showed that toxic agents can affect spermatozoa maturation, function, and survival. Cadmium can act directly on male gametes or indirectly by compromising epididymal function. In the present study, the decreased weight indicated organ damage. This suggests interference with fertility, which has been reported by several studies. The same facts could explain the reduction of seminal vesicle weight. However, lesions at the microscopic level were not confirmed by histological examination. Several studies showed that cadmium treatment in adult rodents induced histological changes in all of the organs collected in the present study, including the ventral prostate (Arriazu, Pozuelo, Martin, Rodriguez, & Santamaria, 2005) and testes (Hew, Heath, Jiwa, & Welsh, 1993; Yu, Hsiao, Yang, Lin, & Chen, 1997; Zhou et al., 1999). Some testicular alterations caused by cadmium can be observed only at certain exposure levels, such as 0.1% (Liao et al., 2006) or 1 mg/kg daily for 4 weeks (Haffor & Abou-Tarboush, 2004). Cadmium-induced histological changes in other organs can be consistently observed using specific techniques. Perinatal or neonatal exposure to some endocrine disruptors impacts testicular weight and steroidogenesis only during infancy, and spermatogenesis can be completely restored during puberty (Kuwada et al., 2002). The lack of differences found in the histology of the testes may be explained by this fact. Testosterone is the most important and sensitive hormone in the male reproductive system, and the overall evaluation of androgen levels was accomplished by measuring this hormone. No difference was found in serum testosterone levels. Serum testosterone levels, together with the weight measurements and histological evaluation of the reproductive organs, permitted the establishment of morphofunctional correlations and an indirect evaluation of the integrity of the hypothalamic-hypophysis-testes axis, aiding the evaluation of the toxic effects of a single agent (i.e., cadmium) on the male reproductive system. Altogether, these results indicate that perinatal maternal exposure to cadmium under the present conditions altered sexual parameters in male rat offspring. The results may be explained by alterations in the masculinization of the hypothalamus. Thus, we suggest that the damage to cerebral sexual differentiation was caused by cadmium, reflecting its actions as an endocrine disruptor and supported by the changes discretely observed from early life (i.e., during sexual development) to adulthood (i.e., sexual behavior). Additionally, testosterone administration immediately after birth was able to reverse some of the crucial parameters that are more directly related to the androgenic system.
Exosomes in the tumor microenvironment of cholangiocarcinoma: current status and future perspectives

Cholangiocarcinoma (CCA) refers to an aggressive malignancy with a high fatality rate and poor prognosis. Globally, the morbidity of CCA has been increasing over the past few decades, and it has progressed into a disease that gravely endangers human health. Exosomes belong to a class of extracellular vesicles (EVs) with diameters ranging from 40 to 150 nm that can be discharged by all living cells. As communication messengers of the intercellular network, exosomes carry a diverse range of cargoes such as proteins, nucleic acids, lipids, and metabolic substances, which are capable of conveying biological information across different cell types to mediate various physiological activities or pathological changes. Increasing studies have demonstrated that exosomes participate in regulating tumorigenesis and progression via multiple approaches in the tumor microenvironment. Here, we review the current research progress on exosomes in the context of cancer and particularly highlight their functions in modulating the development of CCA. Furthermore, the potential value of exosomes as diagnostic and therapeutic targets in CCA is overviewed as well.

EVs are lipid bilayer-enveloped vesicles that can be discharged by numerous cell types. According to their size and formation pathway, they can be broadly classified into two major subsets, called ectosomes and exosomes [5]. The former are vesicles with a diameter of 50 nm to 1 μm formed by direct outward budding of the plasma membrane, while exosomes are EVs ranging from 40 to 150 nm in diameter generated in the opposite way, which involves plasma membrane invagination and endosome formation [5]. Exosomes contain multiple substances and are broadly distributed in different body fluids like plasma, urine, bile, and cerebrospinal fluid (CSF), and they play important roles in a variety of normal or abnormal biological behaviors [6-8]. Recently, research on cancer exosomes has received tremendous attention. Intercellular communication in the microenvironment plays a significant role in regulating tumor development, where exosomes are key messengers that mediate this cell-to-cell communication [5,9]. Previous studies have illustrated that exosomes participate in tumorigenesis or metastasis in multiple ways, and their potential uses in cancer diagnosis and prognosis have also been deeply explored [9]. Although certain studies have reviewed the roles of EVs in the progression of CCA [10], a comprehensive summarization of exosomes in CCA has remained insufficient up to now. In this article, we systematically summarize the research status of exosomes in the tumor field. Based on the existing studies of exosomes in CCA, we specifically emphasize their significant roles in regulating tumor development and their potential value in diagnosis and treatment.

Biogenesis, secretion and internalization

As a type of EV, the synthesis of exosomes involves three major phases: 1) plasma membrane invagination and early endosome formation; 2) generation of intraluminal vesicles (ILVs) and intracellular multivesicular bodies (MVBs); and 3) fusion of MVBs with the plasma membrane, leading to exosome secretion [9]. Generally, the biogenesis of MVBs mainly depends on the following two pathways: endosomal sorting complexes required for transport (ESCRT)-dependent or ESCRT-independent mechanisms, and the former is the most classic pathway [6].
Once mature, MVBs can integrate with autophagosomes and then be degraded through the lysosomal pathway, or be secreted into the extracellular space as exosomes by fusing with the plasma membrane [11]. In this biogenesis and secretion process, other components such as tumor susceptibility gene 101 (TSG101), the Rab family of GTPases (like Rab27A and Rab27B), soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) complexes, apoptosis-linked gene 2-interacting protein X (Alix), ceramide, tetraspanins (CD63, CD9, CD81), and phospholipids are also involved [5,12,13]. Considering the differences in their origin and microenvironment, exosomes show strong heterogeneity, which is mainly reflected in how they regulate target cell functions [5]. Once exosomes are secreted by host cells, they can be absorbed by target cells through various approaches like endocytosis, plasma membrane integration, and specific protein interactions [14]. Among these internalization routes, endocytosis is the most widely studied. According to the characteristics of the components involved in endocytosis, several subtypes are broadly distinguished, including phagocytosis, macropinocytosis, clathrin-mediated endocytosis (CME), and caveolin-dependent endocytosis (CDE), as well as lipid raft-mediated internalization [15]. Moreover, several proteins, such as tetraspanins, integrins, proteoglycans, and lectins, also participate in the internalization of exosomes through unique ligand-receptor interactions [9]. However, on account of the heterogeneity of exosomes, whether exosome uptake is specific remains controversial [15]. Therefore, it is essential to further investigate the detailed routes of exosome uptake (Fig. 1).

Isolation and identification

Currently, frequently used isolation strategies include centrifugation (differential or density gradient centrifugation), particle size separation, size-exclusion chromatography, microfluidic techniques, and immunoaffinity capture [16]. To date, the most common method is still differential centrifugation, owing to its high exosome yield and relatively low cost. However, it also has some deficiencies, like complicated procedures, low separation efficiency, and susceptibility to contamination by soluble substances in cell culture medium or other body fluids [17]. Other isolation methods include size-exclusion chromatography, which offers a relatively high yield but is difficult to scale up to mass production, and immunoaffinity capture, which is advanced in specific separation yet costly with low yields [16,17]. So far, there is no standardized method that achieves both economy and high purity at the same time. Therefore, the exploration of better purification methods remains a major challenge in exosome-related fields.

In terms of identification, the International Society for Extracellular Vesicles proposed to identify exosomes mainly from the following three aspects: 1) exosomal morphology identification, 2) exosomal size detection, and 3) exosomal biomarker identification [18,19]. Among them, transmission electron microscopy (TEM), cryo-electron microscopy (Cryo-EM), and atomic force microscopy (AFM) are the most direct methods for visual observation of exosomes [20]. Real-time nanoparticle tracking technology, based on the principle of Brownian motion, can be used to obtain the size distribution of exosomes [21].
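Nanoparticle tracking relates each particle's measured Brownian diffusion coefficient D to its hydrodynamic diameter d through the Stokes-Einstein relation d = kT / (3πηD), where k is Boltzmann's constant, T the absolute temperature, and η the viscosity of the medium. A minimal illustrative calculation (the temperature, viscosity, and diffusion coefficient below are assumed values, not instrument output):

```python
import math

BOLTZMANN = 1.380649e-23  # J/K

def hydrodynamic_diameter(diffusion_m2_s, temp_k=298.0, viscosity_pa_s=0.89e-3):
    """Stokes-Einstein: d = kT / (3 * pi * eta * D), result in meters.

    Defaults assume water at roughly 25 degrees C.
    """
    return BOLTZMANN * temp_k / (3.0 * math.pi * viscosity_pa_s * diffusion_m2_s)

# A particle diffusing at about 4.9e-12 m^2/s in water comes out near 100 nm,
# i.e., within the 40-150 nm size range expected for exosomes.
d = hydrodynamic_diameter(4.9e-12)
print(f"{d * 1e9:.0f} nm")  # -> 100 nm
```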
In addition, enzyme-linked immunosorbent assay (ELISA), flow cytometry (FCM), and western blotting (WB) are available means to detect specific proteins or other markers expressed on exosomes [20,22]. Reportedly, several transmembrane proteins like CD9, CD63, and CD81 are considered to be representative hallmarks; however, a recent study suggested that, compared with other tetraspanins, CD63 is the unique biomarker, while CD9 and CD81 are not specific for exosomes [23]. Moreover, other components related to the formation of exosomes, such as Alix, TSG101, and heat shock proteins (HSP), can also serve as classical hallmarks [5].

The roles of exosomes in malignancies

Since exosomes play an essential role in multiple pathological changes through mediating intercellular communication, they have also received enormous attention in the cancer field over the past few years [9]. Related studies have pointed out that cancer-cell-derived exosomes can modulate tumor progression through a variety of mechanisms [9,24]. Besides, as mentioned above, exosomes contain complex cargoes that are widespread in various body fluids and partly represent the heterogeneity of their parental cells, making them useful for cancer diagnosis and prognosis by serving as novel biomarkers [25].

Fig. 1 Biogenesis, secretion, and internalization of exosomes. The formation of exosomes initially depends on the invagination of the plasma membrane, followed by the generation of ILVs and MVBs. Once mature, MVBs can fuse with lysosomes and be degraded, or integrate with the plasma membrane and finally get released, i.e., as exosomes. During this process of synthesis and secretion, ESCRT-dependent and ESCRT-independent mechanisms are two common approaches; other components like the Rab family of GTPases, SNARE, ceramide, and tetraspanins are also involved. Exosomes can be taken up by receptor cells to perform specific functions through various mechanisms, such as phagocytosis, macropinocytosis, ligand-receptor interaction, CME, and CDE. As plasma membrane-derived vesicles with a lipid bilayer structure, exosomes carry a variety of components, including RNAs (mRNA, miRNA, lncRNA, and circRNA), proteins (TSG101, Alix, HSP, CD9, CD63, and CD81), metabolites, etc.

Moreover, recent studies have focused more on the tumor microenvironment (TME), where the signal interactions mediated by exosomes also make a difference in tumor development [5,26]. Exosomes induce or accelerate tumorigenesis. Exosomes secreted by HCC cells promoted tumorigenesis through the Hedgehog pathway [27]. MiR-224-5p-enriched exosomes secreted by non-small cell lung cancer (NSCLC) cells accelerated neoplasia by directly binding with the androgen receptor (AR) [28]. On the contrary, exosomes distributed in the plasma of patients with medulloblastoma inhibited tumorigenesis by directly targeting FOXP4 (forkhead box protein 4) and EZH2 (enhancer of zeste 2 polycomb repressive complex 2 subunit) through their miRNA cargoes [29]. Exosomes have also been shown to be involved in tumor angiogenesis, which is a critical step in tumor progression. Exosomes loaded with miR-205 secreted by tumor cells induced angiogenesis via the PTEN (phosphatase and tensin homolog)/AKT pathway in ovarian cancer [30]. Exosomal miR-25-3p derived from colorectal cancer (CRC) cells could be absorbed by endothelial cells to facilitate angiogenesis and increase vascular permeability by targeting KLF2 (kruppel like factor 2) and KLF4 (kruppel like factor 4).
Moreover, both in vitro studies and clinical data suggested that exosomal miR-25-3p is also related to the formation of the pre-metastatic niche, making it a promising indicator of CRC metastasis [31]. Likewise, soluble E-cadherin (sE-cad)-enriched exosomes were potent stimulators of angiogenesis and may relate to the formation of malignant ascites and widespread peritoneal metastasis in ovarian cancer patients [32]. The TME consists of a group of cellular and noncellular components, including fibroblasts, immune cells such as macrophages, neutrophils, and lymphocytes, as well as cytokines, blood vessels, and extracellular matrix [26]. The crosstalk among different cell types mediated by exosomes has proven to be strongly related to tumor progression and therapeutic response. From one perspective, cancer-cell-derived exosomes regulate the function of stromal cells in the microenvironment. For example, exosomes from HCC cells induced the activation of cancer-associated fibroblasts (CAFs) to promote lung metastasis through their miRNA cargoes [33]. Exosomes secreted by epithelial ovarian cancer (EOC) cells under hypoxic conditions could regulate macrophage polarization by transferring their miRNAs (miR-21-3p, miR-125b-5p, miR-181d-5p) to promote tumor proliferation and metastasis [34]. From another perspective, exosomes originating from other infiltrating cells in the TME can also modulate the biological behavior of cancer cells. Exosomal miR-34a-5p could transfer from CAFs to cancer cells and subsequently induce epithelial-mesenchymal transition (EMT) via the AKT/GSK-3β (glycogen synthase kinase 3 beta)/β-catenin pathway in oral squamous cell carcinoma (OSCC) [35]. Exosomes derived from M2 macrophages mediated an intercellular transfer of the integrin αMβ2 and promoted HCC metastasis through activating the MMP9 (matrix metalloproteinase 9) signaling pathway [36]. Moreover, the functions of exosomes in tumor immunity have also been explored to some degree. Hypoxia-induced tumor exosomes are abundant in chemokines and cytokines like CSF-1 (colony stimulating factor 1), MCP-1 (monocyte chemoattractant protein-1), and TGFβ (transforming growth factor beta), which can modify the host immune microenvironment and enhance tumor progression by influencing macrophage recruitment and polarization [37]. Circ-UHRF1 (ubiquitin-like with PHD and ring finger domain 1) exists in plasma exosomes of HCC patients and leads to immunosuppression by inhibiting the activity of natural killer (NK) cells via the circUHRF1/miR-449c-5p/TIM-3 (T cell immunoglobulin domain and mucin domain 3) axis [38]. However, such exosome-mediated signal transmission can also exert antitumor effects under certain circumstances. For example, exosomes derived from dendritic cells (DCs) were reported as a possible novel vaccine for tumor immunotherapy. Exosomes secreted by α-fetoprotein (AFP)-positive DCs could effectively improve the immune microenvironment of mouse models with HCC, making them a hopeful new strategy for the immunotherapy of HCC [39]. In addition to the components contained in exosomes, the molecules on their surface also participate in tumor immunoregulation. PD-L1 (programmed death 1 ligand), known as a natural ligand of PD-1 (programmed death 1), can suppress the immunocompetence of T cells, B cells, and monocytes by directly binding with PD-1 on their surface to promote tumor immune escape [40].
It has been demonstrated that anti-PD-1/PD-L1 therapeutics have achieved great success in multiple cancers, including metastatic melanoma, NSCLC, glioblastoma, and colon cancer, while the problem of drug resistance largely limits their clinical application [41]. Recent studies have indicated that exosomes derived from cancer cells also express PD-L1. PD-L1+ exosomes can impair immune functions and promote tumor growth in a similar way as above, which may also result in a low response to anti-PD-L1 therapy. Therefore, targeting the PD-L1 expressed on exosomes is expected to improve the present situation of cancer immunotherapy [40,42]. Palliative treatments like radiotherapy, chemotherapy, and targeted therapy are recommended choices for late-stage tumor patients, but their outcomes are often disappointing because tumors are prone to become tolerant of these treatments. Cargo transmitted between cancer cells in exosomes enhanced docetaxel resistance in lung adenocarcinoma (LUAD) by inducing autophagy and regulating macrophage polarization [43]. Tumor-associated macrophage (TAM)-derived exosomes were capable of inducing drug resistance in multiple cancers through their miRNA or lncRNA cargoes as well [44-47]. Moreover, miRNA-522, abundant in exosomes released from CAFs, could promote acquired chemotherapy resistance and inhibit ferroptosis, thus supporting tumor progression in gastric cancer [48]. In addition, other researchers have noted that exosomes can play a unique role in overcoming drug resistance. For example, miR-567-enriched exosomes reversed trastuzumab resistance by suppressing autophagy through targeting ATG5 (autophagy related 5), promising to serve as a potential therapeutic target for breast cancer patients [49]. As for the value of exosomes in tumor diagnosis and prognosis, there is a considerable body of research to back it up. Previous studies mainly focused on their nucleic acid cargoes, especially small RNA molecules such as the classical oncogenic miR-21 and miR-155 and the anti-oncogenic miRNAs like miR-146a and miR-34a, which are differentially expressed between tumor cells and non-tumor cells, enabling them to diagnose multiple cancers at an early stage, as in pancreatic, colorectal, liver, and breast cancers [50]. Besides miRNAs, exosomal proteins also have clinical significance [51]. Glypican-1 (GPC1)-positive exosomes could be used as an early diagnostic tool for patients with pancreatic cancer and also performed better in prognosis prediction compared with CA19-9 [52]. By constructing mouse liver damage models and performing proteomic analysis of their urinary exosomes, twenty-eight novel proteins were identified, and four of them are promising non-invasive indicators of hepatic disease [53]. Based on the evidence from these studies, combining multiple components of exosomes may help enhance the specificity and sensitivity of cancer diagnosis, although further research is still needed. To sum up, exosomes are closely related to tumor progression, and this has been well explored in a wide range of digestive malignancies. However, relevant studies in CCA are still insufficient, and the existing research has not been systematically described either.
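To illustrate the point above that combining multiple exosomal components can enhance diagnostic specificity and sensitivity, the sketch below fits a simple logistic panel on synthetic two-marker data and compares areas under the ROC curve. The marker names and distributions are entirely hypothetical, and scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic exosomal marker levels in 100 controls and 100 cancer cases.
controls = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(100, 2))
cases = rng.normal(loc=[1.6, 1.8], scale=0.5, size=(100, 2))
X = np.vstack([controls, cases])
y = np.array([0] * 100 + [1] * 100)

# AUC of each marker on its own.
for j, name in enumerate(["marker_A", "marker_B"]):
    print(name, round(roc_auc_score(y, X[:, j]), 3))

# AUC of the combined two-marker logistic panel (fit on the same data,
# so this is optimistic; a real study would cross-validate).
panel = LogisticRegression().fit(X, y)
print("combined", round(roc_auc_score(y, panel.predict_proba(X)[:, 1]), 3))
```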
Several dysregulated exosomal miRNAs have been detected between CCA cells and normal biliary epithelial cells through miRNA profiling analysis. Among these differentially expressed miRNAs, miR-205-5p, miR-200c-3p, and miR-200b-3p were significantly abundant in exosomes, while miR-34c-5p and the miR-199 clusters were distinctly down-regulated. Subsequently, KEGG (Kyoto Encyclopedia of Genes and Genomes) enrichment analysis of the target genes predicted from the differentially expressed miRNAs suggested that the dysregulated miRNAs were closely related to multiple cancer-associated pathways, and down-regulating the expression of miR-205-5p effectively suppressed the invasion and metastasis of CCA cells in vitro, indicating that it might play an essential part in CCA progression [54]. In the CCA microenvironment, EVs derived from cancer cells could induce bone marrow mesenchymal stem cells (BMSCs) to differentiate into fibroblasts, accompanied by a significant up-regulation of myofibroblast markers like α-SMA (alpha-smooth muscle actin), FAP (fibroblast activation protein alpha), and vimentin. Such differentiation enhanced the migratory capability of BMSCs and contributed to tumor extracellular matrix formation, ultimately facilitating the development of CCA. In response, some soluble mediators such as IL-6 (interleukin-6) could be selectively released from BMSCs as well, in turn enhancing the proliferative capacity of tumor cells by stimulating the STAT3 (signal transducer and activator of transcription 3) signaling pathway [55]. Another study focusing on the interaction between CCA cells and fibroblasts observed that down-regulation of miR-34c in exosomes derived from CCA cells could mediate the activation of CAFs to promote cancer development [56]. Interestingly, certain miRNAs transferred in the form of exosomes can also act as tumor suppressors. In a co-culture system of the CCA cell line HuCCT1 and the hepatic stellate cell line LX2, several markedly down-regulated miRNAs in LX2 cells were identified, among which miR-195 received the most attention. Further functional experiments showed that the growth and invasion capacities of CCA cells decreased significantly after up-regulating miR-195 in LX2 cells, indicating that miR-195 might act as a tumor suppressor in this cell culture model. Subsequent mechanistic studies demonstrated that miR-195 was loaded into EVs secreted by LX2 cells and exerted this anti-tumor effect by direct transfer to CCA cells [57]. Similarly, intercellular transmission of miR-30e in EVs can inhibit EMT of recipient cells by targeting Snail and thereby suppress invasion and metastasis of CCA [58]. Besides miRNAs, other exosomal ncRNAs like circRNAs and lncRNAs have been reported in CCA as well. Circ-0000284 was markedly up-regulated in CCA cells compared to normal bile duct cells and promoted CCA development as a competitive endogenous RNA via the miR-637/LY6E (lymphocyte antigen 6 family member E) pathway. The study also pointed out that circ-0000284 transmission via exosomes could induce malignant transformation of normal cells adjacent to the cancer [59]. Circ-CCAC1 (cholangiocarcinoma-associated circular RNA 1) facilitated CCA growth and migration by sponging miR-514a-5p to elevate the expression of YY1 (Yin Yang 1) and CAMLG (calcium modulating ligand).
Moreover, a high level of circ-CCAC1 was detected in CCA-derived EVs; these EVs could be transmitted to endothelial cells to disrupt the continuity of the vascular endothelial barrier and stimulate the formation of new blood vessels, in turn promoting tumor growth and metastasis [60]. In addition to nucleic acid molecules, the protein components contained in exosomes are also related to the malignant progression of CCA. For example, FZD10 (frizzled class receptor 10) proteins, related to the Wnt signaling pathway, were detected in CCA-derived exosomes; they could promote cell proliferation and might be involved in mediating cancer reactivation and distant metastasis [61]. Several common cancer-related proteins, like integrin α/β, lactadherin, and vitronectin, were identified in CCA-derived exosomes and could induce invasion and migration of cholangiocytes by up-regulating the expression of β-catenin [62]. As mentioned above, exosomes also participate in regulating tumor progression by modulating the immune microenvironment. In CCA, Chen et al. observed that cancer-related exosomes could reduce the population of cytokine-induced killer cells (CIKs), leading to reduced secretion of TNF-α (tumor necrosis factor-α) and perforin, thus inhibiting anti-tumor activity and ultimately promoting tumor immune escape [63]. In conclusion, these studies have confirmed that exosomes are closely related to the progression of CCA, providing a new perspective for understanding the regulatory mechanisms of tumor development (Fig. 2). Exosomes in CCA diagnosis Due to the stealthiness and heterogeneity of CCA, traditional imaging examinations or laboratory tests cannot achieve early diagnosis or uncover characteristics that reflect tumorigenesis. Recently, an emerging diagnostic method called "liquid biopsy" has attracted attention, as it may make up for the deficiencies of traditional detection methods in cancer diagnosis. Several explorations aiming to develop exosomes as objects for liquid biopsy have also been conducted in CCA. Studies of bile-derived exosomes have identified several circulating exosomal lncRNAs, closely related to oncogenic signaling pathways (like the p53 and RAS signaling pathways), that may be conducive to the diagnosis of CCA [67]. Another high-quality study identified four miRNAs (miR-96-5p, miR-191-5p, miR-151a-5p, and miR-4732-3p) enriched in exosomes isolated from blood samples of CCA patients, which are promising for early diagnosis of CCA, especially for stage II patients [68]. In addition, some researchers have paid more attention to the proteins contained in CCA exosomes. Using proteomics approaches, heat shock protein 90 (HSP90) was confirmed to be differentially phosphorylated in invasive CCA cells and was expected to serve as an indicator of CCA metastasis [69]. Besides, proteomic analysis showed that Claudin-3 was enriched in human bile-derived exosomes and might become a novel biomarker for CCA as well [70]. In comparison with serum exosomes isolated from normal individuals, several specific proteins were observed to be concentrated in exosomes of CCA patients. In detail, a combination of three of them, including AMPN (aminopeptidase N), VNN1 (pantetheinase), and PIGR (polymeric immunoglobulin receptor), performed better in diagnosis and might become alternative serum biomarkers in CCA.
In the same study, another three exosomal proteins, FIBG (fibrinogen gamma chain), A1AG1 (alpha1-acid glycoprotein 1), and S100A8 (S100 calcium binding protein A8), were expected to become an effective basis for differential diagnosis against PSC [71]. In summary, these studies have emphasized the potential value of exosomes in CCA diagnosis and prognosis, while large-cohort validation is still needed in the future (Table 1).
(Table 1 abbreviations: MALAT1, metastasis associated lung adenocarcinoma transcript 1; Cripto-1, teratocarcinoma-derived growth factor 1 (TDGF-1); UBE2C, ubiquitin-conjugating enzyme E2C; SERPINB1, serine protease inhibitor B1; CMIP, c-Maf inducing protein; GAD1, glutamate decarboxylase 1; NDKP1, nucleoside diphosphate kinase 1; CDS1, CDP-diacylglycerol synthase 1; CKS1B, cyclin-dependent kinase regulatory subunit 1; AMPN, aminopeptidase N; VNN1, pantetheinase; PIGR, polymeric immunoglobulin receptor; FIBG, fibrinogen gamma chain; A1AG1, alpha1-acid glycoprotein 1; S100A8, S100 calcium binding protein A8; PSC, primary sclerosing cholangitis; UC, ulcerative colitis; PHCCA, perihilar cholangiocarcinoma.)
Progression in adjuvant therapy of CCA and the potential application of exosomes At present, for CCA patients at an advanced stage who cannot undergo radical surgery, gemcitabine combined with cisplatin is the preferred recommended option. A median survival time more than three months longer was observed in patients who received this combination compared to gemcitabine treatment alone (gemcitabine + cisplatin group: 11.7 months; gemcitabine: 8.1 months) [72]. In addition, Rachna et al. have also reported that treatment with gemcitabine-cisplatin plus nab-paclitaxel displayed a better survival benefit versus gemcitabine-cisplatin alone in a phase II clinical trial [73], promising to become a new first-line therapeutic regimen for CCA. However, due to the significant heterogeneity of CCA, chemotherapy-based systemic treatments often have limited efficacy [74]. In recent years, with the maturity of genomics approaches, the molecular pathological mechanisms of cholangiocarcinoma have been revealed gradually, providing new possibilities for individualized and targeted therapies [75]. Studies have found that the three CCA subtypes show different genomic profiles. To be more specific, patients with iCCA show a high mutation frequency in IDH1/2, KRAS, BAP1, and TP53, as well as FGFR fusions, while PRKACA, PRKACB, and ELF3 mutations are more likely to appear in pCCA/dCCA patients [76,77]. According to these mutations and fusions, CCA can be further divided into different molecular subtypes, and prognostic assessment becomes available on this basis. Furthermore, several targeted drugs aimed at these mutations have already entered clinical trials [75]. For example, infigratinib (NVP-BGJ398) and erdafitinib, known as pan-FGFR inhibitors, displayed excellent anti-tumor capacity, with a positive therapeutic response and controllable safety observed in relevant phase I and phase II clinical data [78,79]. Moreover, immunotherapeutic strategies represented by anti-PD-1/anti-PD-L1 monoclonal antibodies have shown a significant remission rate in multiple human malignancies [80]. In advanced biliary tract cancers, several clinical trials have also reported progress in patients who received pembrolizumab (an immune checkpoint inhibitor targeting PD-1), with an objective response rate varying from 5.8% to 13% and a median progression-free survival of about 2 months [81].
However, relevant clinical exploration is still insufficient and unsatisfactory in CCA. Exosomes are lipid bilayer-enveloped extracellular structures that can protect their cargoes from degradation. Most importantly, exosomes secreted by different cells can partly represent the heterogeneity of their parental cells, conferring on them excellent biocompatibility compared to other drug delivery vehicles like liposomes or lipid-based nanoparticles. Therefore, exosomes appear particularly suitable as drug carriers for cancer treatment [5]. Compared to direct therapy, exosomes as drug carriers offer better uptake efficiency and lower toxicity. Engineering exosome content to transport therapeutic nucleic acids, proteins, or drugs directly to cancer cells shows exciting potential in cancer treatment [82]. For example, miR-122, which acts as an anti-tumor miRNA in HCC, could inhibit cancer development by suppressing EMT and angiogenesis and enhancing chemosensitivity [83]. It was reported that exosomes derived from adipose tissue-derived mesenchymal stem cells (AMSCs) could effectively transfer miR-122 to HCC cells, thereby increasing HCC chemosensitivity by modulating the expression of downstream molecules [84]. Similarly, transferring miR-199a-enriched exosomes from modified AMSCs to HCC cells enhanced the sensitivity to doxorubicin treatment by inhibiting the mTOR (mechanistic target of rapamycin kinase) signaling pathway [85]. Moreover, patients with pancreatic ductal adenocarcinoma are susceptible to becoming chemoresistant, leading to a dismal therapeutic response and a poor prognosis. To improve this status of treatment, Zhou et al. loaded paclitaxel and gemcitabine monophosphate (an intermediate product of gemcitabine metabolism) into BMSC-derived exosomes, which can be absorbed by tumor cells. Further experiments showed excellent penetration of this exosome delivery platform, contributing to favorable anti-tumor efficacy and relatively mild systemic toxicity [86]. These findings have opened up the way for exosomes to be used in tumor-targeted therapy. Nevertheless, the application of exosomes as therapeutic targets in CCA is still in its infancy; current research still focuses on their roles in tumor diagnosis and development, as mentioned before. As for their therapeutic potential, existing studies have found that transferring several tumor-suppressive miRNAs (miR-195 and miR-30e) through exosomes can effectively inhibit CCA development [57,58]. Vaccination of hamsters with Opisthorchis viverrini EVs and their surface recombinant tetraspanins has been demonstrated to reduce the burden of infection by inducing antibody responses, exerting a protective role against the appearance of CCA [87]. A relevant study reported that methotrexate-loaded EVs derived from cancer cells could effectively relieve malignant biliary obstruction by inducing pyroptotic cell death in patients with CCA [88]. Additionally, the latest research focused on the TME of CCA also brings forceful support for EV-based therapeutics. Exosomal miR-183-5p derived from cancer cells suppressed immune responses and promoted iCCA progression by up-regulating the expression of PD-L1 in macrophages via the miR-183-5p/PTEN/AKT axis, promising to be a novel target for overcoming therapeutic tolerance of immune checkpoint inhibitors in iCCA [89].
Moreover, circ-0020256, produced by TAMs and loaded in exosomes, significantly facilitated the proliferation and metastasis of CCA cells; cutting off the crosstalk mediated by exosomal circ-0020256 between CCA cells and TAMs might be a hopeful therapeutic strategy [90]. All these findings provide strong evidence to support the application of exosomes in the treatment of CCA. However, most of them are based only on animal experiments, meaning that there is still a long way to go before clinical trials. Conclusions and perspectives A growing number of studies support the thesis that the microenvironment plays a significant role in the development of multiple human cancers. Given that exosomes are key components of the microenvironment and act as messengers of intercellular communication, relevant studies have become hot topics of cancer research. Exosomes can be separated from various body fluids, and certain RNAs, proteins, or metabolites contained in them have advantages in stability and abundance, giving them unique value in tumor diagnosis and prognosis. Moreover, since exosomes have excellent histocompatibility and can partly reflect the heterogeneity of their parental cells, modification of exosomes for cancer treatment also shows great potential. However, there are still many problems to be solved in the clinical application of exosomes. First, current exosome isolation and purification techniques need to be further optimized to achieve better efficiency and convenience. Next, the molecular mechanisms of exosomes in CCA malignant progression remain unclear, hence the exploration of internal details is still needed to develop potential therapeutic targets. Last but not least, techniques for artificial exosome synthesis are immature at the present stage and related clinical trials are inadequate, thus developing exosomes as drug carriers in cancer treatment still faces many challenges. Nevertheless, with the continuous progress of characterization, separation, purification, and modification technologies, we firmly believe that exosomes will eventually be applied in the diagnosis and treatment of CCA.
Visual Prompting based Incremental Learning for Semantic Segmentation of Multiplex Immuno-Fluorescence Microscopy Imagery Deep learning approaches are state-of-the-art for semantic segmentation of medical images, but unlike many deep learning applications, medical segmentation is characterized by small amounts of annotated training data. Thus, while mainstream deep learning approaches focus on performance in domains with large training sets, researchers in the medical imaging field must apply new methods in creative ways to meet the more constrained requirements of medical datasets. We propose a framework for incrementally fine-tuning a multi-class segmentation of a high-resolution multiplex (multi-channel) immuno-fluorescence image of a rat brain section, using a minimal amount of labelling from a human expert. Our framework begins with a modified Swin-UNet architecture that treats each biomarker in the multiplex image separately and learns an initial "global" segmentation (pre-training). This is followed by incremental learning and refinement of each class using a very limited amount of additional labeled data provided by a human expert for each region and its surroundings. This incremental learning utilizes the multi-class weights as an initialization and uses the additional labels to steer the network and optimize it for each region in the image. In this way, an expert can identify errors in the multi-class segmentation and rapidly correct them by supplying the model with additional annotations hand-picked from the region. In addition to increasing the speed of annotation and reducing the amount of labelling, we show that our proposed method outperforms a traditional multi-class segmentation by a large margin. Image segmentation, transformers, incremental learning Introduction Accurate image semantic segmentation is the foundation of many automated clinical diagnosis tools, and improvements in semantic segmentation have immediate downstream benefits to computer-aided diagnosis systems that rely on robust segmentation of organs or tissue.
In recent years, deep learning based semantic segmentation has shown promise for medical image segmentation tasks. The CNN-based UNet [1], for example, is the de facto standard for medical segmentation [2]. By using an encoder-decoder architecture with skip connections to combine hierarchical features, UNet has been shown to provide good semantic segmentation performance for many tasks. Due to its success and widespread adoption, many variants have been developed for different segmentation domains [3][4][5], such as Attention UNet [6], DenseUNet [7], and UNet++ [8]. Attention UNet filters features passed through skip connections at each scale with a mechanism called an attention gate. The function of the attention gate in the context of Attention UNet is to suppress feature activations in irrelevant regions of the image, improving segmentation accuracy. Attention coefficients at each gate select only the feature responses that preserve activations relevant to the specific task. Existing features are multiplied with these coefficients element-wise to produce the skip connection output. DenseUNet uses dense blocks (a densely-connected set of convolutions) to extract features rather than single convolutions, with the aim of bringing the benefits of DenseNets to the segmentation of microscopy images. The addition of these dense blocks has benefits such as reducing the number of trainable parameters without reducing network depth, and inherent regularization, which is particularly useful in the usual medical imaging situations where few annotated training cases are available. UNet++ adds nested and dense connections between the encoder and decoder at each scale, in addition to the skip connections found in a plain UNet. By convolving feature maps from the encoder prior to fusion with the decoder, the two feature maps are made more semantically similar, improving segmentation. More recently, inspired by the UNet and the recent success of vision transformers for computer vision tasks, Swin-Unet [9] was developed as a transformer-based variant of the UNet. Originally used in natural language translation [10], transformers pass an input sequence to an encoder, where features are extracted with alternating self-attention and fully-connected layers, with residual connections bypassing each layer. This feature sequence is then decoded in a similar fashion with groups of self-attention, fully-connected layers, and residual connections to form an output representation. By using positionally embedded patches of an image as the input sequence and adding a classifier, the language transformer has been adapted to successfully classify images. Where a standard UNet uses convolutions at different scales for hierarchical feature extraction, Swin-Unet uses Swin Transformers [11] at different scales to accomplish the same. Because transformers and self-attention overcome the inherent locality of Convolutional Neural Networks (CNNs), Swin-Unet has been shown to outperform the standard convolutional UNet architecture in publicly-available semantic segmentation benchmarks [9], as well as in private datasets [12,13], making it the state-of-the-art in medical semantic segmentation.
Brain segmentation is most commonly performed on magnetic resonance (MR) and computed tomography (CT) images due to the relative ease of data collection [14][15][16][17]. A different modality, immuno-fluorescence (IF) [18], is a nucleus-staining technique that allows visualization of many cells and synapses within a tissue segment. IF microscopy has been paired with semantic segmentation algorithms to better understand the underlying tissue structure [19]. Today, this work is dominated by CNN-based methods that have been adapted to microscopy images [20,21]. Among these is Cellpose [22], a general segmentation framework designed to segment many different types of cells. Cellpose uses a variant of the watershed algorithm [23], a pre-deep-learning routine for cellular segmentation, to preprocess an input image into a topological map based on a human annotator. A custom UNet then classifies the gradients of the map to segment the image. Though competitive at instance segmentation, this method is not optimal for semantic segmentation, and may not be suitable for multi-channel data. By leveraging a multitude of biomarkers when acquiring IF images, highly multiplexed IF microscopy [24] has the potential to provide a rich multi-channel representation of the tissue for accurate identification of all relevant cell phenotypes, which constitute the tissue and exhibit specific cell patterning distributions that define the unique anatomical regions making up an organ structure (e.g., within the rat brain), and which can be used in downstream segmentation models. Despite the promising developments in semantic segmentation algorithms, there are some pitfalls to consider when deploying them for biomedical imaging tasks (such as multi-channel IF microscopy). These deep learning models often require a large amount of labeled data (even more so for vision transformers, which lack inductive bias and need a lot of data to learn representations via self-attention). Additionally, recent developments such as the traditional UNet as well as the recent Swin-UNet were originally designed only for color and grayscale images and do not intrinsically leverage the rich multi-channel information provided by highly multiplexed IF microscopy imagery. Finally, traditional semantic segmentation frameworks consider the segmentation of the entire image as a single "global" task. This does not take into consideration the observation that analyzing "local" anatomical structures within the context of only their surrounding anatomical regions can enhance the discrimination potential of segmenting out regions of interest. In this work, we propose a multi-channel Swin-Unet (a Swin-Unet that has been adapted to extract information from highly multiplexed immuno-fluorescence imagery) as well as a framework to incrementally fine-tune a multi-class segmentation network with a domain expert in the loop. The domain expert can help steer the fine-tuning of the network by providing a few representative patches ("clicks") on the image per region. We show that exceptional segmentation can be produced with only a single training dataset and minimal additional labelling from the image where segmentation is desired, overcoming inherent limitations of state-of-the-art semantic segmentation models.
This paper is organized as follows: Section II describes related work in the area of medical image segmentation. Section III presents both our multi-channel Swin-Unet and our proposed framework of incrementally fine-tuning a global semantic segmentation network. In Section IV, we discuss the experimental setup and results of applying this approach to IF-stained rat brain imagery. Finally, we provide concluding remarks and possibilities for future work in Section V. Related Work Medical image segmentation is unique among image processing problems in that most supervised learning paradigms require a large quantity of labeled data - something that is difficult to come by and often prohibitively expensive to attain. Thus, recent approaches to the medical segmentation problem have focused on making robust predictions from small datasets. The CNN-based U-Net has been applied with great success to many sparsely labeled biomedical datasets, and has since spawned a number of variants, owing to both its wide adoption in the biomedical research community and the diversity of medical imaging applications [2,25,26]. UNet is composed of an encoder, decoder, and skip connections that share information from the former to the latter after each downsampling operation. The encoder follows the structure of a typical CNN, composed of a collection of repeated 3×3 convolutions and max pool downsampling operations. The decoder has the same structure, but up-convolutions are used to upsample the data whereas max pooling was used for downsampling. Skip connections pass shallow, non-downsampled features from the encoder to the decoder, where they are concatenated with deep features which have passed through the entire model. The combination of upsampling and skip connections allows for unprecedented localization capability for a CNN model, even when applied to a small dataset. Recently, transformer- and attention-based approaches to image classification and segmentation have gained popularity [27][28][29], outperforming CNN-based models at many tasks [30,31]. The first transformer was designed for language translation, and eschewed convolutions and recurrence entirely in favor of an exclusively self-attention-based architecture [10]. Self-attention is competitive due to its ability to efficiently model long-term dependencies, which CNNs and recurrent networks inherently struggle with. Dosovitskiy et al. [32] developed the first competitive transformer-based vision model, Vision Transformer (ViT), and designed it to be as similar as possible to the preceding Natural Language Processing (NLP) Transformer architecture. Whereas a sequence of words forms the input to the NLP Transformer, a ViT input is formed by partitioning an image into non-overlapping patches and feeding them to the transformer sequentially. Thus, in terms of the preceding NLP Transformer, each patch of the image is analogous to a word in a sentence.
Because sentences tend to be less informationally dense than images, and due to the quadratic time complexity of self-attention, ViT is not practical for use on large or high-resolution images. Out of the need for scaling came the Shifted-Window (Swin) Transformer. Instead of computing self-attention across the entire set of patches, the Swin Transformer partitions the set of patches into groups of non-overlapping windows, and computes attention for each window. To account for the lack of attention at the edges of the windows, a second transformer is applied with shifted windows that model connections between the first transformer's windows, eliminating any "blind spots". Thus, Swin Transformers always come in pairs. The shifted window method makes self-attention linear with respect to input size rather than quadratic, making the Swin Transformer feasible for processing large images. The transformers described to this point are standalone image classifiers, but have been applied competitively to segmentation problems by inclusion of another module [33][34][35]. The Swin Transformer, for example, gives competitive semantic segmentation results when used as an encoder for the CNN-based UperNet [36]. However, the semantic segmentation challenge of medical imaging is uniquely defined by a lack of training data, and the models described so far are not designed to account for this. Many models build on the existing CNN-based UNet architecture by incorporating transformers in creative ways. TransUnet, for example, uses ViT as an encoder within a UNet-like CNN architecture [37]. TransFuse [38] runs a CNN and transformer encoder in parallel and fuses the results. One very well-performing medical segmentation model is the Swin-Unet [9]. Swin-Unet is unique among the other mentioned architectures in that it is completely self-attention based, convolutions are used only for patch partitioning and embedding (rather than feature learning), and recurrence is eschewed entirely. The Swin-Unet achieves state-of-the-art performance by using a series of Swin Transformers in conjunction with a transformer-specific downsampling operation to encode deep representations. Encoded features are then upsampled and combined with features from the encoder via skip connections, yielding a segmentation in a very similar fashion to the CNN-based UNet. Proposed Work In this section we introduce our incremental learning approach to semantic segmentation of multi-channel images. We begin by describing a custom Swin-Unet model and its advantages over an off-the-shelf model in the context of our multiplex immuno-fluorescence microscopy imagery. We then describe our proposed method to incrementally fine-tune the "global" model for image sub-regions, aided by a human domain expert. Finally, we provide a strided inference method to retain smoothness between image patches. Multi-Channel Swin-Unet Our custom Swin-Unet is shown in Fig. 1.
We opt to use a 48 × 48 input size rather than the standard 224 × 224 for our model. This window size provides the spatial context needed to capture region-specific phenomena such as cells and neurons and their immediate neighborhood. Patch Embedding: Both the standard Swin-Unet and our version begin with partitioning of the image into patches and embedding them. To adapt our model to multi-channel data, patch partitioning and embedding is performed by a single, separable 2D convolution layer, whereas the original Swin-Unet uses a non-separable convolution. Multi-Head Self-Attention: The transformer computes MSA from three equivalent copies of the input sequence x. By convention, these copies are known as the query, key, and value vectors, denoted Q, K, and V, where Q, K, V, x ∈ R^(n×C); n is the number of patches in the input sequence and C is the latent channel depth of each patch. For multi-head attention, each of Q, K, and V is partitioned channel-wise into h sub-vectors, such that each sub-vector lies in R^(n×C/h); the vectors are each linearly projected and an attention head is assigned to each triplet (Q_i, K_i, V_i). Having multiple heads work on different segments of the feature space in parallel enables the model to efficiently attend to multiple representations of the input sequence. For example, one attention head may model relationships between adjacent patches, while another models relationships between distant patches. For each head, self-attention itself is computed as follows: Attention(Q_i, K_i, V_i) = softmax(Q̂_i K̂_i^T / sqrt(C/h)) V̂_i, where Q̂_i = Q_i W_i^Q and W_i^Q is the parameter matrix of the linear projection of Q_i (and analogously for K̂_i and V̂_i). Self-attention models the relationship between features of each patch and features of other patches; the output matrix gives the strength of the relationship between patches. Finally, the outputs of the attention heads are concatenated and the result is linearly projected back to the input dimension. Windowing: Because the time and space complexity of MSA grow quadratically with the number of patches [9], the patch set is split into non-overlapping windows, and MSA is computed for each window. The default Swin-Unet uses a window size of 7; we choose to keep as close to this number as possible while conforming to the shape of our input by using a window size of 6. With this windowing method alone, there would be no attention context at the borders of the windows, since MSA has only been computed within each window. For this reason, Swin Transformers come in pairs - the second transformer's windows are shifted such that they build connections between the windows of the previous layer, eliminating any "blind spots" (see Fig. 2). Patch Merging and Expansion: Patch merging blocks serve as the network's downsampling operation, similar to pooling in CNNs. In a patch merging block, each group of 2×2 adjacent depth-d patches is stacked into a single patch of depth 4d, then linearly projected down to 2d. As in CNN pooling, the set of patches has been downsampled by a factor of 4, and the remaining patches have greater depth. This enables a hierarchical representation of learnable features.
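To make the mechanics above concrete, the following PyTorch sketch implements multi-head self-attention over non-overlapping windows together with the patch merging step. This is our own minimal illustration under the definitions in the text, not the authors' implementation: the names WindowMSA and PatchMerging are assumptions, and Swin details such as the cyclic-shift masking and relative position bias are omitted for brevity.

```python
# Minimal sketch of windowed MSA and patch merging (illustrative, not the
# authors' code). Assumes a square patch grid whose side is a multiple of
# the window size, e.g. a 12x12 grid with window size 6.
import torch
import torch.nn as nn

class WindowMSA(nn.Module):
    def __init__(self, dim, num_heads, window_size):
        super().__init__()
        assert dim % num_heads == 0
        self.h, self.w, self.d = num_heads, window_size, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)   # joint projection to Q, K, V
        self.proj = nn.Linear(dim, dim)      # output projection after concat

    def forward(self, x):                    # x: (B, H, W, C) patch features
        B, H, W, C = x.shape
        w, n = self.w, self.w * self.w
        # Partition the patch grid into non-overlapping w x w windows.
        x = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(-1, n, C)              # (B * num_windows, n, C)
        qkv = self.qkv(x).reshape(-1, n, 3, self.h, self.d).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]     # each: (windows, heads, n, d)
        # Scaled dot-product attention within each window and head.
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(-1, n, C)
        out = self.proj(out)
        # Reverse the window partition back to the full patch grid.
        out = out.view(B, H // w, W // w, w, w, C).permute(0, 1, 3, 2, 4, 5)
        return out.reshape(B, H, W, C)

class PatchMerging(nn.Module):
    """Stack each 2x2 group of depth-C patches to 4C, then project to 2C."""
    def __init__(self, dim):
        super().__init__()
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)

    def forward(self, x):                    # x: (B, H, W, C), H and W even
        groups = [x[:, i::2, j::2, :] for i in (0, 1) for j in (0, 1)]
        return self.reduction(torch.cat(groups, dim=-1))  # (B, H/2, W/2, 2C)
```

The shifted-window counterpart of WindowMSA can be obtained by cyclically rolling the patch grid (e.g., with torch.roll) by half the window size before partitioning and rolling back afterwards; patch expansion is the inverse of PatchMerging, a linear projection followed by a pixel-shuffle-style rearrangement.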
Similarly, patch expansion is the upsampling procedure of Swin-Unet, similar to transposed convolution upsampling in a vanilla UNet. The operation returns the set of patches to the same feature dimension that they had prior to patch merging. The set of patches is passed to a linear projection, then rearranged such that the height and width of each patch is doubled whilst the depth is halved. Finally, each of these spatially expanded patches is partitioned into a 2 × 2 group of smaller patches, returning them to their pre-merging dimension and quantity. As in a CNN-based UNet, shallow features from the encoder are passed along via skip connections, and concatenated to deep features processed through the decoder. As with a vanilla UNet, this allows learning of shallow features unaffected by downsampling in addition to deep semantic features passed through the entire network. The choice of a 48 × 48 input size for our Swin-Unet entails a reduction in the depth of the network. Because our initial image size is small, patch merging becomes impossible at the bottom of the full network, as there is no deeper representation possible. We remedy this by simply removing one Transformer and Patch Merging block from the encoder, and one Transformer and Patch Expanding block from the decoder. As a result, there is no place for the usual skip connection at the deepest point of the network, so our model contains only two. Consequently, we have only 4,990,032 learnable parameters, about a quarter of the 20,074,092 learnable parameters in a standard Swin-Unet. Incremental Learning In this section we describe the proposed incremental learning framework. We begin by training our multi-channel Swin-UNet to segment a large, multi-class, multi-channel image. Specifically, we chose half of a section to train this model, and tested/deployed the learned model on the other half. In a typical use of the Swin-Unet, train and test images are shaped and resampled to fit the 224 × 224 input size, such that every train and test case processed is a "full image/frame" of what is being segmented. In our case, we consider a multi-channel brain slice that is too large and high-resolution to conform to this input size. Increasing the input size of the Swin-Unet substantially increases computation cost [9], so the large image is partitioned into patches to form the training set. Segmentation must similarly be performed patch-by-patch. Our proposed incremental learning framework is centered on the following observation: a "global" multi-class segmentation is bound to be sub-optimal relative to each region that is being segmented. On the other hand, if the segmentation model is focused on a smaller subset of the image (i.e., if the problem is defined as segmenting a specific anatomical region from its surrounding regions), the complexity of the underlying task reduces. With this in mind, our framework uses the global segmentation task as a prior for fine-tuning the segmentation of individual anatomical regions. Specifically, we pre-train the network on the global segmentation task using all the available labeled training data, and then fine-tune the model for each anatomical region in the test region using a few labels (such as could be provided by a domain expert).
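In code, this seed-and-refine workflow amounts to copying the pre-trained multi-class weights, swapping the classification head for a two-class head, and briefly training on the expert-provided patches, as the following minimal sketch illustrates. The names (make_binary_model, output_head) and the training-loop details are our own assumptions; the paper's concrete procedure is described in the next paragraphs.

```python
# Illustrative sketch of seeding a binary model from the global multi-class
# model and fine-tuning it on a handful of expert-labeled 48x48 patches.
# Names and hyperparameters here are assumptions, not the authors' API.
import copy
import torch
import torch.nn as nn

def make_binary_model(multiclass_model, feature_dim):
    model = copy.deepcopy(multiclass_model)            # keep pre-trained features
    # Replace the multi-class output layer with a freshly initialized
    # two-class head (class of interest vs. background).
    model.output_head = nn.Conv2d(feature_dim, 2, kernel_size=1)
    return model

def fine_tune(model, patches, epochs=100, lr=1e-4):
    # patches: list of (x, y) pairs, x: (1, channels, 48, 48) float tensor,
    # y: (1, 48, 48) long tensor with values {0: background, 1: class}.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                            # typically ~100 epochs
        for x, y in patches:                           # often fewer than 10 patches
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```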
Given a segmentation of a large, multi-class, multi-channel image from the Swin-Unet model, an under-performing class is selected, and its region is isolated from the rest of the image. We then re-frame the segmentation as a binary problem rather than multi-class, considering only the class of interest and the background of the subregion. Using a GUI that we developed for this purpose, a human domain expert may now visually form a new small binary training set by choosing patches from inside and outside the class region, and labelling each patch as belonging either to the class of interest or to its background. Note that because of the global pre-training, very few "clicks" are needed to fine-tune the model to any anatomical region of interest. For our GUI, we choose to build a plugin with Napari, a multi-dimensional image viewer [39]. In essence, we form a new binary Swin-Unet model seeded from the learned weights of the multi-class Swin-Unet model. We change the output layer to predict 2 classes instead of several, and train (fine-tune) the model with the human-defined "clicks" (set of patches). This can then be repeated until all regions of interest have been incrementally fine-tuned. Training (fine-tuning) this binary model is much faster than training the multi-class model because (1) the feature extraction part of the network has already been pre-trained on a very similar (global segmentation) task using a large quantity of labeled training data, and (2) the incremental training patches (no more than 30 training patches, and often fewer than 10) from the test image provided by the domain expert steer the deep network to optimize itself for segmenting the specific region of interest. The speed and ease of this procedure lends itself well to an incremental approach. The segmentation from the first pass of the binary model often contains inaccuracies that are clear to the human eye. However, it is straightforward to add or remove a few patches to account for the model's mistakes, and resume training. We repeat this procedure, fine-tuning and editing the training set repeatedly, until the segmentation can no longer be improved in a clear way by adding or removing patches. In a practical scenario, a domain expert can utilize visual cues and other context to provide these minimal labels from the test domain for this task. The result of this approach is a set of binary fine-tuned models, one for every class in the dataset. We achieve better segmentation performance with these models than could have been hoped for with a multi-class model, and have done so with a minimal amount of labelling effort during testing/deployment. Furthermore, by using this method it is not necessary to spend time and resources seeking the best possible multi-class segmentation; only a model good enough to seed the binary networks is required. Sliding-Window Inference Smoothing Because the image we would like to perform segmentation on is larger than can reasonably fit into the input of a Swin-Unet model, the image must be split into patches, which are segmented individually. A consequence of this approach is distortion of the segmentation at edges where the patches meet. This can result in a "blocky" or "pixelated" segmentation that hurts performance.
To remedy this, we propose a sliding-window inference method. Our goal is to eliminate artifacts where patches meet by forming additional, overlapping patch sets at different 2D offsets, computing inference (segmentation prediction) over the patches at each of these offsets, then combining the set of overlapping offset inference windows in a way that will smooth the segmentation. We choose some integer f which is a factor of the patch size, 48. We define the number of unique inference windows as n = (48/f)^2, and compose each of the offset values x and y from the set {0, f, 2f, ..., 48 - f}. Finally, we form the output inference map W_smoothed by taking the statistical mode of each predicted pixel from the offset windows. That is, W_smoothed(i, j) = mode over all offsets (x, y) of W_{x,y}(i, j), where W_{x,y} is the inference window at offset (x, y) and its pixels are indexed by i, j. Smaller values of f give a greater number of windows and therefore a smoother inference, at the expense of more computational resources. Note that setting f to 48 (the patch size) is equivalent to standard (non-windowed) inference. In this work, we choose the smallest value, f = 4, that can be accommodated with our computational resources, given the large size of the image. Dataset Description In this work, we use imagery of a rat brain section, acquired using our large-scale highly multiplexed immunofluorescence imaging framework [24]. The original imagery comprises 50 high-resolution (29398 × 43054 pixels) biomarker channels scanned from a single rat brain slice, with each biomarker identifying a specific resident cell type and its unique cellular distributions in different anatomically identifiable parts of the brain, which match the classical atlas regions previously mapped using traditional low-plex Nissl and Hematoxylin-Eosin brightfield imaging techniques [40]. In this work, we downsampled the slice by a factor of 5 to 5880 × 8622 pixels to fit our computational environment (GPU memory). We identify 10 anatomical regions of interest which we seek to accurately segment, including:
• Fimbria of the hippocampus (fi)
• Stria medullaris of the thalamus (sm)
The 7 biomarkers for which the regions of interest are most visible are selected to form a 7-channel image. We chose the following 7 biomarkers to form our channels:
• CNPase (oligodendrocytes, soma and processes-specific)
• GFAP (astrocytes, processes-specific)
• NeuN (neurons, nucleus and soma-specific)
• OLIG2 (oligodendrocytes, nucleus-specific)
• Parvalbumin (interneurons, soma and processes-specific)
• S100 (astrocytes, nucleus and soma-specific)
• Tyrosine Hydroxylase (catecholaminergic neurons, soma and processes-specific)
Because the brain sections are roughly symmetrical, the training data for the Swin-Unet model is formed from selected patches on the left half of the brain image, while the right half is reserved for testing the model's performance. To form the training set, the image is partitioned into 8 × 48 × 48 slices. Any slice in the left half of the image that contains one of the 10 regions of interest is added to the training set, plus 10% of all other slices, randomly sampled (see Fig. 3). A threshold is applied to prevent training on the dark background of the brain image. We test the model on the right half of the section using the sliding-window inference described previously. We use the Intersection over Union (IoU) metric to quantify and compare the performance of the resulting segmentations. IoU is defined as IoU = |A ∩ B| / |A ∪ B|, where A is the model segmentation and B is the ground truth annotation.
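The offset-window voting and the IoU metric follow directly from these definitions. The sketch below is our own illustration: predict(image, x, y) is an assumed helper that tiles the image into 48 × 48 patches starting at offset (x, y) (padding at the borders as needed) and returns a full-size label map, and SciPy >= 1.9 is assumed for the keepdims argument of stats.mode.

```python
# Sketch of mode-based sliding-window smoothing and IoU, following the
# definitions in the text; `predict` is an assumed helper, not paper code.
import numpy as np
from scipy import stats

def smoothed_inference(image, predict, f=4, patch=48):
    offsets = range(0, patch, f)          # {0, f, 2f, ..., patch - f}
    maps = np.stack([predict(image, x, y) for x in offsets for y in offsets])
    # W_smoothed(i, j) = per-pixel statistical mode over the (48/f)^2 maps.
    return stats.mode(maps, axis=0, keepdims=False).mode

def iou(a, b):
    # a, b: boolean masks (model segmentation A and ground-truth B).
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```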
Experimental Setup and Results We first show the multi-class inference results of our custom Swin-Unet on the dataset, and ablate with a comparable UNet model (see Fig. 5). We use an off-the-shelf UNet for this comparison with a single modification: the convolutions in the UNet model have been made separable for a better comparison with our modified Swin-Unet model on multi-channel data. Table 1 shows the IoU scores for each region and the average IoU across all regions, for each model. The custom Swin-Unet performs better than the separable UNet by a large margin on most classes. Using our multi-channel Swin-Unet on the multiplex IF microscopy imagery produces the segmentation shown in Fig. 5. Some regions are well-segmented, while others are poorly segmented (e.g., over-segmented or under-segmented). One of the classes was not predicted at all. Fig. 4 shows the results of deploying two shots of the proposed incremental learning framework for each of the 10 regions of interest from the original segmentation, and Table 2 shows the IoU for each of these segmentations. When applying the multi-class model to each individual region, we frame the problem as a binary one rather than multi-class, i.e., any segmentation made for a region besides the region of interest is treated as a common background class. This allows for a straightforward comparison between the multi-class model and the fine-tuned binary models. The fine-tuned models outperformed the multi-class model in every region but two, which lagged by 0.02 and 0.03 IoU points. In the case of the Rt region, the multi-class segmentation was so exceptional (0.97 IoU) that there was not much improvement to be gained by applying our method in the first place. To show global performance, we consider the IoU of every region and take the mean. The mean IoU of the multi-class model was 0.62, while the mean of the second and final iteration of the fine-tuned binary models was 0.81, a global IoU improvement of 0.19. For analysis of the performance of individual regions, it is useful to consider these regions in terms of the following three types:
1. Regions that have textures and edges plainly visible to the human eye in at least one channel (biomarker). These regions can be easily segmented by an untrained human.
2. Regions that have few or no human-visible edges, but have a texture or cellular structure that differs from the surrounding regions. These regions may have enough context between all channels of the image for a model to make a good segmentation, but would be difficult for a human to precisely discern.
3. Regions that are not visible to the human eye, and cannot be distinguished from surrounding regions. These regions are practically impossible for a human to segment through visual cues.
Type 1 regions include MHb, Rt, and sm. These regions were segmented well by the multi-class model, so local fine-tuning yielded little or no improvement. Type 2 regions include MGP, VMH, opt, and fi. All of these regions saw major improvement from incremental learning. Type 3 regions include mt and Stg. Both of these very small regions saw large improvements from incremental learning. cc is a particular region of interest, because its inference region practically spans the entire image. One might expect a large number of false-positive segmentations due to having so many different regions in the same domain, but the fine-tuned model is able to identify the region of interest reasonably well in two rounds with plenty of labelling. Additionally, the fine-tuned model captures the lower part of the cc region, which the multi-class model was unable to segment. This demonstrates the scalability of this approach to large regions. The multi-class model takes about 30 minutes to train for 1000 epochs, whereas fine-tuning each region takes 1 to 4 minutes, requiring at most 400 epochs, but more typically around 100 epochs. All training is done on a cluster of 3 Nvidia GeForce GTX Titan X GPUs. Inference time varies by region. Inference for the full multi-class image and the large cc region both take about 25 minutes on the above hardware. For the remaining regions, inference time is under 4 minutes. As a supplemental demonstration of the efficacy of our incremental fine-tuning method, we apply the multi-class model, without any additional fine-tuning, to an adjacent brain slice. Predictably, the initial segmentation is poor, as the variability between sections can negatively impact segmentation performance. There are large swaths of mis-segmented background regions, and, though there are many regions that are localized correctly, they are labelled incorrectly. Once again, local fine-tuning of the multi-class model provides better segmentations. Although there is no ground truth atlas for this brain slice, the improvement is visually apparent in Fig. 6. Conclusion In this work, we introduced an incremental learning framework that leverages the representation power of a multi-channel Swin-UNet for semantic segmentation of multi-channel IF microscopy images. We validated this approach on a multiplex IF image representing a rat brain section. Our model improved the segmentation of 8 out of 10 regions, and improved the average overall IoU by 0.19. Further, starting from an appropriate pre-training, our work shows that this framework can be used to incrementally (and efficiently) adapt the model to different images and different regions within an image (e.g., different subjects, different sections) with minimal user input in terms of additional labeling. This framework can also be readily utilized for other biomedical semantic segmentation tasks.
Fig. 2: An example of shifted window partitioning for a Swin Transformer block. The window partition is cyclically shifted such that blind spots between the two are minimized. In this case, the partition is shifted two patches leftwards and two patches upwards.
Fig. 3: One channel of the brain image overlaid with the ground truth labels for all 10 classes. The patches forming the training set are shown as white squares.
Fig. 6: Segmentation of a new rat brain slice before and after local fine-tuning. The fine-tuning result is achieved with an average of about 25 user-provided mouse clicks per region, excluding the large cc region (111 clicks).
Table 1: Multi-class IoU scores for the separable UNet and full-parameter Swin-Unet.
Table 2: Improving segmentation results with human-aided binary fine-tuning of local regions. The segmentations in the left column are multi-class; segmentations of the class of interest are colored brown. The other two columns show the patches used to fine-tune the binary model that generated the shown segmentation.
Strategies towards Targeting Gαi/s Proteins: Scanning of Protein-Protein Interaction Sites To Overcome Inaccessibility Abstract Heterotrimeric G proteins are classified into four subfamilies and play a key role in signal transduction. They transmit extracellular signals to intracellular effectors subsequent to the activation of G protein-coupled receptors (GPCRs), which are targeted by over 30% of FDA-approved drugs. However, addressing G proteins as drug targets represents a compelling alternative, for example, when G proteins act independently of the corresponding GPCRs, or in cases of complex multifunctional diseases, when a large number of different GPCRs are involved. In contrast to Gαq, efforts to target Gαi/s with suitable chemical compounds have not been successful so far. Here, a comprehensive analysis was conducted examining the most important interface regions of Gαi/s with its upstream and downstream interaction partners. By assigning the existing compounds and the performed approaches to the respective interfaces, the druggability of the individual interfaces was ranked to provide perspectives for selective targeting of Gαi/s in the future. Introduction G protein-coupled receptors (GPCRs) represent the largest family of transmembrane receptors, with more than 800 members controlling the signal transduction of physiologically important processes. Through extracellular stimuli of the GPCRs, the signal is transmitted via membrane-bound, intracellularly localized heterotrimeric G proteins to intracellular effectors. [1][2][3] The indisputable importance of GPCR-mediated signal transduction is demonstrated by the fact that over 30% of FDA-approved drugs target GPCRs (Figure 1A). [4,5] The attractiveness of addressing GPCRs lies in easily accessible druggable sites at the cell surface. [4,6] GPCRs are targeted for numerous diseases, including Alzheimer's disease and cancer. In particular, oncogenic mutations of GPCRs and G proteins have been identified in a significant number of tumors. [4,[7][8][9][10] As GPCRs can harbor random mutations, it is difficult to develop drugs that respond to each of these mutations. Furthermore, multiple GPCR signaling pathways may be involved in multifactorial diseases, such as asthma or cancer, making it unsuitable to address the GPCRs individually. [1,2,11] Therefore, targeting the downstream G proteins may be an appropriate alternative, further strengthened by the fact that overexpression, abnormal activation, mutations, and dysregulation of G proteins are associated with diseases such as cancer (Figure 1B, C). [7,8,10] Besides cancer, G proteins are also associated with cardiovascular diseases, for example, heart failure, diabetes, and chronic inflammatory diseases like asthma. [1,12,13] G proteins are often referred to as "undruggable" because they cannot be adequately targeted pharmacologically. [14] The intracellular location and the consequent lack of accessible sites on the cell surface is one of the reasons. Thus, molecules addressing G proteins need to pass the cell membrane to influence their activity. Of particular interest is the Gα subunit, which acts as a molecular switch by binding guanosine diphosphate (GDP, inactive) or guanosine triphosphate (GTP, active).
With respect to Gα, the four existing G protein subfamilies, Gαs, Gαi, Gαq/11, and Gα12/13, and their subtypes (Gαs: Gαs, Gαolf; Gαi: Gαi1-3, GαoA/B, Gαt1-2, Gαgust, Gαz; Gαq/11: Gαq, Gα11, Gα14-16; Gα12/13: Gα12, Gα13) have a high sequence and structural similarity, making it difficult to selectively address only one subfamily. [16][17][18] The development of selective and efficient G protein activators or inhibitors ("modulators") is of crucial importance, as they can be used as tools to gain deeper insights into G protein-mediated signaling and as lead structures to design therapeutic drugs. In this regard, various strategies have been applied to identify and develop modulators of G protein activity. For example, the investigation of natural compounds led to the discovery of G proteins in 1980, for which A. G. Gilman and M. Rodbell were awarded the Nobel Prize for Physiology and Medicine in 1994. [19][20][21] Another possibility for the identification of G protein modulators are high-throughput screening techniques, which are commonly used to identify small molecules and peptides. Due to the structural similarity of the G protein subfamilies, small molecules might have only moderate target specificity, as can be exemplified with the imidazopyrazine derivatives BIM-46174 and BIM-46187. [22] Nevertheless, small molecules are able to interact with proteins specifically at protein "hot-spots". [23] G proteins generally communicate through protein-protein interactions (PPIs) to regulate cellular processes. [24] In this context, the disruption of PPIs can lead to a specific modulation of the protein activity. [25,26] Thus, (macrocyclic) peptides are now regarded as suitable medium-sized molecules to interrupt PPIs, while the requirement for cell penetration can be met by incorporation of cell-penetrating peptide (CPP) sequences, as demonstrated for Cyclorasin 9A5, targeting the small G protein KRas. [25,[27][28][29][30][31] Today, peptidic modulators can be identified by several methods, including (computational) structure-based design or combinatorial approaches. [32][33][34][35] Concerning Gα proteins, only the Gαq subfamily can be addressed sufficiently by the two naturally occurring cyclic depsipeptides YM-254890 and FR900359, which selectively inhibit the Gαq-mediated signaling pathway and are widely used in pharmacological studies, such as in uveal melanoma or asthma research. [1,[36][37][38][39][40][41] As modulators like FR900359 and YM-254890 are still missing for Gαi and Gαs, we examined the existing strategies and developments to provide a comprehensive analysis of Gαi/s as targets for chemical tools as well as their interface regions (to GPCRs, Gβγ, effectors, accessory proteins), which are crucial for the respective signal transduction pathways. Thus, this review aims at establishing the essential prerequisite for the future development of highly specific and potent modulators and tools for the investigation of G proteins and their involvement in diseases. Gαi/s Interfaces: Determinants of G Protein Signaling For the development of Gαi/s modulators, it is essential to understand their different signaling determinants (Figure S1 in the Supporting Information). A ligand binding to a GPCR results in conformational changes of the GPCR and the associated G protein and thus the GDP dissociation from the Gα subunit. The resulting empty-pocket conformation has a very short lifetime due to the high GTP concentration within the cell, which facilitates rapid GTP binding to Gα. [42]
The latter induces the dissociation of the heterotrimer into GTP-bound Gα and Gβγ, which can address different intracellular effectors (Figure S1). [16,17,42] The signaling is terminated by the intrinsic GTPase activity of Gα, which causes GTP hydrolysis to GDP and phosphate. Following reformation of the heterotrimer, the GDP-bound G protein is restored to its original inactive state. [16,17] Further accessory proteins such as AGS proteins (activators of G protein signaling) or RGS proteins (regulators of G protein signaling) can stimulate G protein signaling or accelerate its deactivation. [43,44] AGS or RGS proteins can act as 1) GDIs (guanine nucleotide dissociation inhibitors), which stabilize the inactive, GDP-bound state and thus inhibit the activation of G proteins, [45] 2) GEFs (guanine nucleotide exchange factors), which can accelerate the exchange of GDP for GTP, [45] 3) GEMs (guanine-nucleotide exchange modulators), which have a bifunctional activity (GDI or GEF) depending on the G protein substrate, [46] and 4) GAPs (GTPase accelerating proteins), which enhance GTP hydrolysis and thus terminate the Gα signaling (Figure S1). [45,47] Concerning the intracellular effectors (Figure S1), the Gαs subfamily stimulates the membrane-bound adenylyl cyclase (AC), which catalyzes the formation of cyclic adenosine monophosphate (cAMP) from adenosine triphosphate (ATP). On the contrary, the Gαi subfamily members Gαi1-3 and Gαz inhibit AC and consequently the formation of cAMP. [48] Subsequently, cAMP can stimulate various downstream signaling pathways. Furthermore, Gαt1-2 stimulates photoreceptor phosphodiesterase (PDE), Gαgust is thought to stimulate PDE activity, and loss of Gαo has been linked to altered regulation of ion channels. [16,48,49] In order to map out possible directions for future strategies of Gα protein-targeted compound design based on the proteins' interface regions, it is required to analyze the structures of Gαi/s in the different activation states and ligand-complexed forms. Several X-ray and NMR structural analyses have been reported in the past decades, [16,50] starting from the crystal structure analysis of Gαt in the active, GTPγS (guanosine-5'-O-(γ-thio)triphosphate)-bound state (1993), and the inactive, GDP-bound state (1994). [51,52] The Gα subunit has a conserved protein fold consisting of two domains: the GTPase domain (or Ras domain; a six-stranded β-sheet motif (β1-6) surrounded by five helices (α1-5)), which is structurally homologous to small G proteins and elongation factors of the G protein superfamily, and the helical domain (a six-α-helix bundle, with a large central helix (αA) surrounded by five smaller helices (αB-F)), which is unique for heterotrimeric G proteins (Figure S2). [51,52] Both domains are connected by two polypeptide segments, linker 1 and linker 2, resulting in the following sequence of structural elements starting from the N-terminal α-helix (αN): αN, β1, α1, linker 1, αA-F, linker 2, β2, β3, α2, β4, α3, β5, αG, α4, β6, α5. [51,52] Only the α3-β5 loop and the α4-β6 loop of Gαi1 and Gαs differ in their sequence and structural conformation within the conserved GTPase domain, which possibly influences the Gα binding to GPCRs and effectors. [53] The Gαi subfamily exhibits a high degree of conservation in sequence and structure, mostly distinguishable by minor differences in the helical domain. [53]
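To keep the many moving parts straight, the activation cycle and the regulator classes described above can be condensed into a minimal state machine. The sketch below is our own illustration (state and event names are ours, not from the cited literature); it deliberately ignores kinetics and the bifunctional GEM case:

```python
from enum import Enum, auto

class GaState(Enum):
    HETEROTRIMER_GDP = auto()  # inactive: Galpha-GDP bound to Gbetagamma
    EMPTY_POCKET = auto()      # nucleotide-free, short-lived
    ACTIVE_GTP = auto()        # Galpha-GTP, dissociated from Gbetagamma
    GDP_PI = auto()            # after hydrolysis, before phosphate release

# Allowed transitions of the cycle; GPCRs and GEFs catalyze "gdp_release",
# while GAPs merely accelerate "gtp_hydrolysis" (an intrinsic Galpha activity).
TRANSITIONS = {
    (GaState.HETEROTRIMER_GDP, "gdp_release"): GaState.EMPTY_POCKET,
    (GaState.EMPTY_POCKET, "gtp_binding"): GaState.ACTIVE_GTP,
    (GaState.ACTIVE_GTP, "gtp_hydrolysis"): GaState.GDP_PI,
    (GaState.GDP_PI, "pi_release"): GaState.HETEROTRIMER_GDP,  # Gbetagamma reassociates
}

def step(state: GaState, event: str, gdi_bound: bool = False) -> GaState:
    """Advance the cycle by one event; a bound GDI blocks GDP release."""
    if gdi_bound and event == "gdp_release":
        return state  # GDI stabilizes the inactive, GDP-bound state
    return TRANSITIONS.get((state, event), state)

if __name__ == "__main__":
    s = GaState.HETEROTRIMER_GDP
    for ev in ("gdp_release", "gtp_binding", "gtp_hydrolysis", "pi_release"):
        s = step(s, ev)
        print(f"{ev:>15} -> {s.name}")
```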
Between the two domains lies a deep cleft, where the respective guanine nucleotide is bound (Section 2.2). [51,52] Upon G protein activation, conformational changes occur in three adjacent regions, namely Switch I (linker 1, beginning of β2), Switch II (C-terminus of β3, α2, α2-β4 loop) and Switch III (β4-α3 loop, Figure S2), which are mainly located in the GTPase domain. [16,51,52] All Gα subunits, except Gαt, are reversibly post-translationally modified (PTM) with palmitate on an N-terminal cysteine. [16] Gαi subfamily members are additionally irreversibly myristoylated on an N-terminal glycine, which has a significant influence on αN: the unmodified αN is disordered and becomes ordered upon Gβγ binding, whereas the αN of a myristoylated Gαi is already ordered, so that Gβγ binding causes no further structural change. Furthermore, myristoylation might affect the effector interaction (Sections 2.4 and 3.4). Overall, PTMs are important for the regulation of membrane association and PPIs. [16,17,50] The knowledge about the Gα structure supports the development of artificial modulators and the identification of natural products that influence the Gα protein activity. Therefore, it is helpful to know that it is mostly the surface of the GTPase domain that mediates interactions with GPCRs (Section 2.1), Gβγ (Section 2.3), downstream effectors (Section 2.4), and accessory proteins (Section 2.5, Figure 2). [50,53] The composition of the nucleotide binding pocket and the GTPase mechanism (Section 2.2) essentially contribute to the development of new Gα protein modulators. [44] In the following, we describe the individual interface regions and their impact on the G protein-mediated signaling as well as the nature of the guanine nucleotide binding pocket in more detail. Our aim is to provide a more specific classification of the already known modulators (Section 3) by understanding the interface areas (Section 2), to assess the druggability of individual protein regions and thus to develop strategies for the identification of novel modulators.

Figure 1. B) Putative primary Gα protein coupling, based on the classification of GPCR signaling according to Sriram et al. [5] C) Involvement of Gαi/s subfamilies in multiple disorders such as cancer, heart failure, endocrine disorders or thrombosis, adapted from Li et al. [1]

Gαi/s-GPCR

For their pioneering work on GPCRs, Robert J. Lefkowitz and Brian K. Kobilka were awarded the Nobel Prize in Chemistry in 2012, [56,57] which stresses the importance of G protein-mediated signaling. GPCRs are characterized by seven transmembrane-spanning α-helices (TM1-7), which are connected by three intracellular (ICL1-3) and three extracellular loops (ECL1-3). The N-terminus is extracellular and the C-terminus, which contains an α-helix (HX8) in class A GPCRs, is located intracellularly (Figure S3). [50] The TMs connect the extracellular ligand binding site with the intracellular binding site for the heterotrimeric G protein. Interestingly, the GPCR-G protein interface is about 30 Å apart from the GDP binding pocket; thus, allosteric conformational changes within the interface and Gα result in the receptor-mediated GDP release. During reorganization of the cytoplasmic GPCR region upon receptor activation, the rotation and large outward movement of TM6 together with the rearrangements of TM1, TM4, TM5 and TM7 is characteristic. [58-60]
This results in a cytoplasmic cavity, which can be occupied by the C-terminus of the Gα subunit, especially the "wavy hook" (distal C-terminus) and α5, after rotation and translation (Figure S3). [50,60-62] The resulting GPCR-Gα interface is formed predominantly by hydrophobic interactions between TM3, TM5-7, ICL3, HX8, and the Gα C-terminal part (α4, α4-β6 loop, β6, α5). A second, less extensive interface is established between αN, the αN-β1 hinge, β1, the β2-β3 loop, α5, and ICL2 (Figure S3). In addition, further Gα interactions (α3-β5 loop, α2, α2-β4 loop) with the GPCRs have been described. [24,50,58,60,63] Regarding the GPCR-G protein coupling selectivity, a significant difference between Gi- and Gs-GPCR complexes is the relative position of α5 (different rotation and orientation within Gαi/s) and TM6 (outward movement less intense for Gi- than for Gs-coupled GPCRs). This results in a wider open G protein binding pocket for Gs-coupled receptors and enables the binding of the sterically larger C-terminus of Gαs (α5 tilted up), whereas α5 of Gi binds relatively further down in the TM pocket, allowing capping interactions with TM7/HX8. [58-64] Consequently, the Gα C-terminus is mainly responsible for the affinity and specificity of the G protein-GPCR interaction. [50,65,66] Beside α5, an impact of αN, the αN-β1 loop, the α4-β6 region, and α4 on the specificity of G protein coupling has been suggested, due to specificity-determining residues within these regions. [24,50] Furthermore, TM6, ICL2 and ICL3 have been related to the mediation of coupling selectivity. [50,59,61,63]

Gαi/s-nucleotide

G proteins are called molecular switches, switching between the GDP-bound ("off") and the GTP-bound ("on") state to regulate the downstream signaling. [1,16] The determinants of nucleotide binding are based on the architecture of the binding pocket (Figure 3), which is structurally altered during 1) GDP release and formation of the empty-pocket conformation, 2) GTP insertion and heterotrimer dissociation, 3) the GTPase reaction, and 4) the phosphate release together with the heterotrimer reassociation. In the following, the Common Gα Numbering system in the D.S.P. format (D: domain, with G: GTPase domain, H: helical domain; S: consensus secondary structure, with S: strand, H: helix; P: position within the secondary structure element; all in superscript in the original) according to Flock et al. [67] is used to describe the involved Gα residues and to facilitate a comparison between the different Gα subtypes and subfamilies. Loops are written as lower-case letters of the flanking secondary structure elements. [67] The guanine nucleotide binding pocket is located deep in the core of Gα between both domains (Figure 3). [51,52] The nucleoside contacts are formed by interactions with both domains, whereas the phosphate contacts are mainly established with linker 2 and the GTPase domain. [52,68] Two conserved motifs, the NKXD G.S5.7-G.HG.2 motif and the TCA(T/V)DT G.s6h5.1-G.H5.1 motif ("TCAT motif"), are crucial for the binding of the guanine base and the stabilization of GDP in the binding pocket. [16,69] The phosphate binding is mediated by the highly conserved P-loop, GXGESGKST G.s1h1.1-G.H1.3, which connects β1 with α1. Furthermore, the RXXTXGI G.hfs2.2-G.S2.1 motif and the DXXG G.S3.7-G.s3h2.2 motif are important for Mg2+ binding, whereby the latter motif connects the Mg2+ binding site with Switch II. [16,67,69-72]
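Because CGN labels such as Cys G.H5.23 or Arg174 G.hfs2.2 recur throughout the following sections, a small parser can make the D.S.P. convention explicit. The minimal sketch below is our own illustration of the format and not code from Flock et al. [67]:

```python
from typing import NamedTuple

class CGNLabel(NamedTuple):
    domain: str    # 'G': GTPase domain, 'H': helical domain
    element: str   # e.g. 'H5' (helix), 'S1' (strand), 'hfs2'/'s3h2' (loops)
    position: int  # position within the secondary structure element

def parse_cgn(label: str) -> CGNLabel:
    """Parse a Common Galpha Numbering label in D.S.P. format, e.g. 'G.H5.23'."""
    domain, element, position = label.split(".")
    return CGNLabel(domain, element, int(position))

def element_kind(element: str) -> str:
    # Loops are written in lower case; helices start with 'H', strands with 'S'.
    if element[0].islower():
        return "loop"
    return "helix" if element[0] == "H" else "strand"

if __name__ == "__main__":
    for lab in ("G.H5.23", "G.hfs2.2", "G.s3h2.3", "H.HA.14"):
        parsed = parse_cgn(lab)
        print(lab, "->", parsed, element_kind(parsed.element))
```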
Mg2+ is octahedrally coordinated, surrounded by four water molecules, Ser43 G.H1.2 (P-loop) as well as the β-phosphate in the inactive state. [51,52,73]

Figure 2. …, effectors (yellow) and accessory proteins (red, most common areas depicted) within the GDP-bound (violet) Gαi1 homology model (from PDB IDs: 3UMS [54] and 5JS8 [55]).

GDP release and formation of the empty-pocket conformation. For GDP dissociation, domain separation is required along with the destabilization of the GDP-binding contacts mediated by GPCR-induced conformational changes inside the G protein. [58,72,74-77] The conformational changes in α5 cause structural rearrangements of the adjacent β6-α5 loop (contains the TCAT motif, Figure 3) and the reduction of hydrophobic interactions between α5 and α1, β2, and β3, and thus a destabilization and structural change of α1 (contains the P-loop, Figure 3). As a consequence, the interface between the helical domain and the GTPase domain is disrupted and the GDP affinity is reduced. [58,76,78-80] However, the reduced contacts of α5 with β1-3 are compensated by new interactions to β4-β6, which stabilize the receptor-bound complex. [80] Beyond that, the αN-β1 loop contributes significantly to GDP dissociation by disturbing P-loop contacts to GDP. [17,58,72,76] The GDP release is favored as a result of the reduced GDP contacts along with a higher structural dynamic in the nucleotide-binding region. [58,72] In the resulting ternary complex, the helical domain exhibits increased dynamics and moves away from the GTPase domain. [76] In addition, the structure of the nucleotide binding pocket, especially the β6-α5 loop, is more dynamic and exhibits a larger solvent-accessible surface area, which promotes fast GTP binding induced by the high intracellular GTP concentration. [81]

Figure 3. … (A) [52], GTPγS-bound (PDB ID: 1TND [51], C; nucleotides in violet), domain arrangement [84] of Gα proteins (B) and contacts of nucleotides (D) to the P-loop (blue), RXXTXGI (yellow), DXXG (orange), NKXD (green), TCAT (red) and the helical domain (cyan) are shown. Dotted lines indicate hydrogen bonds and grey bars van der Waals interactions. Residues are named according to the crystal structures.

GTP binding and dissociation of the heterotrimer. GTP binding leads to stabilization of α1 and the interdomain interface and induces the reclosure of both domains to a more rigid Gα structure. [55,63,76,80] Herein, Mg2+ and GTP are deeply buried in the binding pocket due to rearrangements of Switch I (Arg174 G.hfs2.2, Thr177 G.hfs2.5, RXXTXGI motif), Switch II (Gly199 G.s3h2.2 and α2), and Switch III (Figure 3A, C, F). [69] The structural changes within Switch I are induced by hydrogen bond formation between the γ-phosphate of GTP with Thr177 G.hfs2.5 and Arg174 G.hfs2.2, and the replacement of two water ligands on Mg2+ by Thr177 G.hfs2.5 and the γ-phosphate. [52,68] The conformational change of Switch I towards the Mg2+ binding site causes the interruption of Gα-Gβγ interactions and thus contributes to the dissociation of the heterotrimer. The structural changes in Switch I and Switch II are connected through a newly formed hydrogen bond network. [52,68] Rearrangements in Switch II are initiated by a hydrogen bond formation between Gly199 G.s3h2.2 and the γ-phosphate of GTP, which is coupled to conformational changes of α2 conveyed by a hydrogen bond of Gly198 G.s3h2.1 with Trp207 G.H2.7.
During this process, contacts of the conserved Arg201 G.H2.1, Arg204 G.H2.4 (ion pairs with Glu241 G.H3.4, Switch III) and Trp207 G.H2.7 to conserved residues in α3 are formed. [52,68] Switch III (e.g., Glu232 G.s4h3.10, Glu241 G.H3.4) responds to the conformational changes of Switch II by forming a network of polar interactions with Arg201 G.H2.1, Arg204 G.H2.4, and Gly199 G.s3h2.2. [52,73] Additional residues within the β4-α3 loop and α3 stabilize the active conformation of Switch III through interaction with the helical domain. [73] The GTP binding leads to a destabilization of the heterotrimer, mainly by changes within Switch II, and initiates dissociation into Gα and Gβγ (Section 2.3). [73]

GTPase reaction. During GTP hydrolysis, the highly conserved Arg174 G.hfs2.2 ("arginine finger", Switch I, RXXTXGI motif) decisively stabilizes the pentavalent transition state by interacting with the β- and γ-phosphates of GTP (Figure 3D). [68,82] Additionally, the highly conserved Gln200 G.s3h2.3 (Switch II) is essential for the hydrolysis by interacting with the γ-phosphate and the nucleophilic water, which initiates the in-line attack on the γ-phosphate. [68,83] Indeed, mutations of Arg174 G.hfs2.2 or Gln200 G.s3h2.3 have been observed in a number of human tumors, demonstrating the importance of these residues and of the GTPase reaction for the G protein signaling. [82] Within the hydrolysis mechanism, the water molecule is further stabilized by Thr177 G.hfs2.5. [68-70,83] RGS proteins are able to accelerate the GTPase activity (Section 2.5).

Dissociation of phosphate and heterotrimer reassociation. In the resulting Gα·GDP·Pi complex, Switch I moves marginally away from the catalytic site, leading to a weaker Mg2+ binding and to hydrogen bond formation of Arg174 G.hfs2.2 with the β-phosphate and Pi, as well as with Thr177 G.hfs2.5 and Lys176 G.hfs2.4. Switch II undergoes a significant structural change, which breaks the ionic interactions with Switch III, resulting in a disordered Switch III. Thereby, Gln200 G.s3h2.3 is shifted away from the active center, a transient phosphate binding site is formed and the Pi release is enabled. [83] The latter results in disordered parts of Switch II; thus, Switch I shifts away from the nucleotide binding site, whereby Lys176 G.hfs2.4 rotates out of the active center, along with Mg2+ and Thr177 G.hfs2.5. Then, Arg174 G.hfs2.2 is only weakly associated with the α- and β-phosphate. [83] As Switch II is crucial for effector recruitment and Gβγ binding (Sections 2.3, 2.4), the structural changes in Switch II reduce the affinity towards the effectors and promote Gβγ binding. [73] The binding of Gβγ rearranges Switch II; furthermore, the conformational changes within Switch I and Switch II seal the GDP in the nucleotide binding pocket. [83]

Gαi/s-Gβγ

Gβγ is composed of two polypeptide chains, Gβ and Gγ, which can only be separated under denaturing conditions. [18,85] Crystal structure analyses revealed that Gβ exhibits an N-terminal α-helix and a seven-bladed propeller structure composed of seven WD40 sequence repeats with four twisted β-strands per propeller blade (Figure S4). Gγ comprises two α-helices, with the N-terminal helix binding to the N-terminal helix of Gβ via coiled-coil interactions, while the C-terminal helix engages with the propeller. The membrane association is controlled by prenylation of the Gγ C-terminus. [85-88]
The contacts between Gα and Gβγ are primarily made via two interface regions between Gα and Gβ (Figure S4). The first interface is formed between the top of the Gβ propeller and the hydrophobic pocket of Gα created by Switch I and Switch II (especially β2, β3, β3-α2 loop, α2, Figure S4), mainly through hydrophobic interactions. This interface is additionally stabilized by hydrophilic/ionic interactions. The second interface is located between blade 1 of the Gβ propeller and αN of Gα. There is no structural evidence for direct interactions of Gα and Gγ. [53,85-88] The structure of Gα in the heterotrimer differs from free Gα. [86,87] In the heterotrimer, the αN helix is continuous, whereas in the free state the N-terminus can exhibit various structures. [86,87] The myristoylation of the N-terminus increases the affinity of Gα to Gβγ (Section 2). [89] The GTP-induced conformational changes, especially in Switch II (Section 2.2), lead to the heterotrimer dissociation by interruption of the stabilizing contacts within the first interface. [85-88]

Gαi/s-effector proteins

Crystal structure experiments of Gα-effector complexes showed that the effectors insert hydrophobic side chains into a pocket formed by the N-terminus of α2 (Switch II) and α3. The effector specificity is defined by contacts with the C-termini of α2 and α3 as well as interactions with the α2-β4 loop and the α3-β5 loop. [16,49,53,90-92] Since the α3-β5 loop differs in sequence and structure between the subfamilies, it was assumed that it plays the key role in effector selectivity. [49,53] A further contribution of the α4-β6 loop was also reported. [16,53,90,93] The Gαi and Gαs subfamilies can interact with different effectors; however, both subfamilies have an opposite effect on the AC, whereby Gαs can bind to and activate all membrane-bound isoforms of AC (ACI-IX), while Gαi1 and its near paralogs can only address certain AC isoforms (ACI, V, VI). [90,94-96] The AC consists of a cytosolic N-terminus, two transmembrane domains separated by the cytosolic domain C1 (C1a-b), and followed by a further cytosolic domain C2 (C2a-b, Figure S5). The active site is located in the interface between C1 and C2. [97] The Gαs-AC interface is established between Switch II (α2 and α2-β4 loop), by insertion of α2 into the groove of AC (formed by C2), and the α3-β5 loop with C1 and C2. At the same time, Phe991 (C2) binds into the Switch II/α3 cleft. [91-93,95] Mutagenesis experiments and molecular docking studies indicate that the Gαi-AC interface is located between C1 and Switch I-III as well as αB, which is opposite to the Gαs binding site on AC (Figure S5). Thus, the binding of Gαs and Gαi to the AC is not competitive. [53,90,93,98] Further studies with Gαs and Gαt showed that the N-terminus is crucial for effector binding. In the Gαs subfamily, no PTM is necessary for the stimulatory function, whereas myristoylation of the Gαi subfamily is required for AC inhibition. [16,53,97,99,100] After GTP hydrolysis, Gα dissociates from AC due to a lower affinity of Gα·GDP compared to Gα·GTP. Although Gα·GDP still has the ability to interact with effectors, its potency is lower than that of Gα·GTP. Reassociation with Gβγ terminates effector signaling since the Gα binding site for Gβγ (inactive state, Section 2.3) largely overlaps with the effector binding site (active state). [16]

Gαi/s-accessory proteins
GDIs. These proteins comprise one to four GPR motifs (G protein regulating motif, TMGEEDFFDLLAKSQSKRMDDQRVDLAG, [105,106] also known as GoLoco motif; consensus XXΦΦXΩΩX[+]XQπXRΩXXQR, [107,108] Φ: hydrophobic, Ω: aromatic, π: small, X: any amino acid, [+]: basic). The GPR motifs bind to and stabilize Gαi·GDP, thereby inhibiting the nucleotide exchange and the accompanying G protein activation (Figure S6). GDIs can prevent the association of Gα with Gβγ through overlapping interface regions, which may lead to prolonged Gβγ signaling. [45,103,108,109] The binding of the GPR motif is directed to Switch II/α3, where the Arg of the Asp/Glu-Gln-Arg triad of the GPR motif is oriented towards the GDP binding pocket and directly interacts with the α- and β-phosphate of GDP. [45] The insertion of the Arg is enabled by the conformation of the Gln (triad), which interacts with Gln147 H.hdhe.2 and Asn149 H.hdhe.4 of Gαi. The GPR motif also establishes contacts to Switch I and changes its conformation; for example, Arg178 G.hfs2.2 (RXXTXGI motif, Section 2.2) is displaced by a salt bridge with Glu43 G.s1h1.1 (P-loop) and forms contacts to the GDP ribose entity. Further conformational changes occur in Switch II and Switch III. The C-terminal part of the GPR motif binds along the interdomain region, thus possibly restricting interdomain movements and preventing GDP dissociation. [102,108-111] Gαi specificity is assumed to be mediated by contacts with the helical domain (αA-αB loop, αB-αC loop), [102,108-111] and/or an acidic residue in the GTPase domain that influences the orientation of Glu43 G.s1h1.1. [112]

GEFs. The chaperones for nucleotide-free Gα subunits, Ric8A (resistance to inhibitors of cholinesterase; Gαi/q/12/13-specific) and Ric8B (Gαs/olf-specific), also function as GEFs through partial Gα unfolding (in the absence of Gβγ). [43,113,114] They bind preferentially to Gα·GDP, cause GDP dissociation by domain separation and stabilize the empty-pocket conformation, although GTP binding leads to Ric8 dissociation due to a lower binding affinity (Figure S6). [114,115] Three Gα contact sites for Ric8 proteins have been reported: α5, β4-6 and Switch II/α3 together with the P-loop. [113,114,116] Similar to GPCRs, Ric8 interaction leads to major structural changes of α5 and its detachment from the hydrophobic β-sheet core (β4-6), which also rotates and is then stabilized by Ric8. The α5 movement disrupts the nucleotide contacts of the TCAT motif and the NKXD motif and destabilizes the purine binding site (Section 2.2). The antiparallel β2-β3 hairpin moves away from the GTPase core, which destabilizes and disorders α1, thus leading to domain separation of Gα, destabilization of the P-loop contacts to GDP and enhanced GDP dissociation. [113,116-118] The interaction of Ric8A probably shifts Switch II to the binding position of the γ-phosphate, which is associated with conformational changes in Switch I and promotes GTP binding. [116-118] The interruption of the contacts between Switch II and Ric8A during GTP binding leads to the reorganization of β2 and β3, and to Ric8A dissociation. The selectivity determinants of Ric8 are probably family-specific residues of Gα (α5), whereby the majority of Ric8A and Ric8B residues are conserved in the Gα contact region. [113,116-118]
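Consensus patterns like the GoLoco motif above map directly onto regular expressions, which is a quick way to scan candidate sequences for GPR-like sites. The sketch below is our own illustration; the amino-acid class definitions are our assumptions, and, as the output shows, an individual GPR sequence may deviate from the strict consensus at single positions, so a relaxed hydrophobic class is also tried:

```python
import re

# GoLoco/GPR consensus from the text: X X Φ Φ X Ω Ω X [+] X Q π X R Ω X X Q R
CONSENSUS = ["X", "X", "Φ", "Φ", "X", "Ω", "Ω", "X", "+", "X",
             "Q", "π", "X", "R", "Ω", "X", "X", "Q", "R"]

# Class definitions are our assumptions (not taken from refs [107,108]).
CLASSES_STRICT = {
    "X": "[A-Z]",        # any amino acid
    "Φ": "[ACFILMVWY]",  # hydrophobic
    "Ω": "[FWY]",        # aromatic
    "π": "[AGCST]",      # small
    "+": "[KR]",         # basic
}
# Relaxed variant: treat Ω as generically hydrophobic instead of strictly aromatic.
CLASSES_RELAXED = dict(CLASSES_STRICT, **{"Ω": "[ACFILMVWY]"})

def scan(sequence: str, classes: dict) -> list:
    """Compile the consensus into a regex and return (start, match) pairs."""
    pattern = "".join(classes.get(token, token) for token in CONSENSUS)
    return [(m.start(), m.group()) for m in re.finditer(pattern, sequence)]

# Reference GPR motif sequence as quoted in the text
GPR_REF = "TMGEEDFFDLLAKSQSKRMDDQRVDLAG"

print("strict: ", scan(GPR_REF, CLASSES_STRICT))
print("relaxed:", scan(GPR_REF, CLASSES_RELAXED))
# strict:  []
# relaxed: [(4, 'EDFFDLLAKSQSKRMDDQR')]
```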
GEMs. GEMs are the most recently discovered class of G protein-affecting proteins, with GIV (Gα-interacting, vesicle-associated protein) being first described as a GEM (GEF for Gαi, GDI for Gαs). [46,119] GEMs possess a common motif (~30 amino acids, core consensus ΦTΦX[D/E]FΦ motif, [120] Φ: hydrophobic, X: any amino acid) that selectively binds to the GDP-bound or empty-pocket conformation and affects monomeric Gα (Figure S6). [84,121] So far, only the GEM motif binding to Gαi3 has been structurally analyzed. The binding of the GEM motif to the cleft formed by Switch II (mainly contacts with Gln204 G.s3h2.3, Trp211 G.H2.7, Phe215 G.h2s4.1), α3 and the α3-β5 loop induces conformational changes in Switch I (RXXTXGI motif), β1, and the P-loop and thus in the phosphate binding, which is sufficient for Gα activation. [84,121] An allosterically induced conformational change of the β2-β3 loop with associated α5 movement and disturbances in the interdomain interface (Switch III, αD-αE loop) is also observed, with the latter potentially resulting in domain separation. [84,121] The binding site of the GEM motif partially overlaps with the GDI and the Gβγ binding sites. [84,121]

GAPs. GAPs interact with Gα·GTP and are able to catalyze GTP hydrolysis by stabilizing the transition state. The respective RGS proteins contain a functionally conserved RGS domain (~120 amino acids, "RGS box"), which is responsible for the Gα interaction and the catalytic activity. [45,103,122] The RGS domain forms an interface to Gα, recognizing and stabilizing mainly residues in Switch I-III (Figure S6). Three critical contacts have been reported: 1) A hydrogen bond between Asn128 (RGS4) and Gln204 G.s3h2.3 (Switch II), which orients Gln204 G.s3h2.3 (Section 2.2) to stabilize the γ-phosphate and the nucleophilic water molecule. Asn128 also interacts with Switch II, thus stabilizing the conformation of Switch I and II. 2) A hydrogen bond between Asn88 (RGS4) and Thr182 G.hfs2.6 (Switch I), which brings Switch I-II into the conformation of the transition state, whereby Thr182 G.hfs2.6 (Switch I) gets in contact with Lys210 G.H2.6 and Glu207 G.H2.3 (Switch II). 3) Asp163 (RGS4) stabilizes Thr182 G.hfs2.6 (Switch I), allowing the adjacent Thr181 G.hfs2.5 (Switch I) to stabilize the Mg2+ and to bring the nucleophilic water into an ideal position for GTP hydrolysis. [44,91,104,122,123] RGS contacts with Switch III and the helical domain (αA, αB-αC loop) are differently pronounced in the subtypes of the Gαi subfamily and possibly contribute to Gα selectivity and the potency of the GAP activity. [91,104,122,124-127] The binding site of RGS proteins is consistent with the fact that RGS proteins are antagonists for effectors. [122,127] The specificity for the Gαi subfamily compared to the RGS-GAP-incompetent Gαs subfamily can be explained by differences in the primary structure of the switch regions. [91,104,122,124,125]

Modulators Targeting Gαi/s Interfaces

The analysis of the Gα interface regions demonstrates that the contact regions are predominantly located in the GTPase domain (especially Switch I-III, the β-sheet core, α3, and the N- and C-termini). The helical domain is crucial for the nucleotide exchange and may serve as a specificity feature within the Gα subfamilies, as Gαi subfamily members are mostly distinguishable by minor differences in the helical domain. [53] The analysis also reveals which regions are exposed at the Gα surface and can be targeted by potential modulators. For example, Switch II/α3 may be regarded as "druggable" because it is addressed by Gβγ (Section 2.3), effectors (Section 2.4), and accessory proteins (Section 2.5).
These natural binding partners show that binding to this region may have a functional impact on Gα; the region therefore represents an interesting model for modulator development (Section 3.5). Additionally, α5 (important for G protein activation, allosteric connection to the nucleotide binding pocket) and αN (important in GPCR coupling, Gβγ binding and PTMs) are also interesting target structures (Sections 3.1, 3.3). In the following, the individual interfaces are examined for already known Gα binders and/or modulators as well as their identification methods. The classification of the individual interfaces according to their druggability provides important perspectives for future modulator development.

Gαi/s-GPCR

Within the Gα-GPCR interface, the C-terminus (wavy hook, α5) and the N-terminus (αN, αN-β1, β1) play significant roles in the allosterically induced GDP release (Figure S3). The essential function of the C-terminus for the GPCR coupling as well as its selectivity was recognized very early. For this reason, antibodies targeting the C-terminus of the Gα subunit were developed (Supporting Text in the Supporting Information, Figure S10).

Natural compounds

A number of natural compounds have been described for the Gα-GPCR interface. These include a bacterial exotoxin and numerous cationic amphiphilic substances, such as venom peptides from bees or wasps, whereby the latter can reversibly influence the Gα protein activity (Figure 4). Pertussis toxin (PTX, 105 kDa [128]) is an exotoxin from Bordetella pertussis and inhibits the Gαi subfamily (except Gαz, Figure 4A, B). It exerts a mono-ADP-ribosyl transferase activity, covalently and irreversibly transferring an ADP-ribose element from nicotinamide adenine dinucleotide (NAD+) to the C-terminal Cys G.H5.23 conserved in the Gαi subfamily. Consequently, Gi uncouples from the receptor, cannot be activated, and remains GDP-bound, leading to cAMP accumulation and various pathological effects in the host cell. [1,21,128-130] In addition, G protein-independent actions have also been described, which, together with its irreversible mode of modification, renders PTX unsuitable for clinical use. Nevertheless, PTX has been applied in numerous studies to analyze Gαi-specific effects. [1,129,131,132]

Figure 4. A) … [129] PTX transfers the ADP-ribose element from nicotinamide adenine dinucleotide (NAD+) to Gαi Cys G.H5.23. B) Crystal structure of PTX (gray, PDB ID: 1PRT [128]). The S1 subunit (magenta) is important for Gαi inhibition. C) G protein-bound NMR structure ensemble (14 structures) of mastoparan-X (H-INWKGIAAMAKKLL-NH2, PDB ID: 1A13 [133]).

A variety of cationic, amphiphilic substances, including neuropeptides, hormones, venom peptides, and polyamines, exhibit activating properties on purified G proteins. They have a high proportion of hydrophobic and basic groups, orient in an amphipathic α-helical structure in the presence of phospholipids (Figure 4C), and are thereby able to penetrate the cell membrane. [134,135] Prominent members of this group are the wasp venom 14mer peptide mastoparan (H-INLKALAALAKKIL-NH2) and the bee venom 26mer peptide melittin (H-GIGAVLKVLTTGLPALISWIKRKRQQ-NH2). Both venom toxins are able to disrupt cell membrane phospholipids and to cause lysis. [131,136-139] Mastoparan and related analogs (mastoparans) increase the rate of GTP binding in a GEF-like manner and the GTPase activity for Gi/o, but have only a weak effect on Gt and Gs (except mastoparan-S, H-INWKGIASM-α-aminoisobutyryl-RQVL-NH2). [131,133,134,136,140,141]
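The "cationic amphiphilic" character of these peptides can be quantified approximately. The sketch below is our own back-of-the-envelope illustration (not from the cited studies): it counts side-chain charges at neutral pH (treating the amidated C-terminus as uncharged and ignoring histidine's partial protonation) and computes a crude helical moment by placing residues 100° apart, as in an ideal α-helix:

```python
import cmath
import math

HYDROPHOBIC = set("AILMFWVY")
POSITIVE = set("KR")
NEGATIVE = set("DE")

def net_charge(seq: str, c_term_amide: bool = True) -> int:
    """Approximate net charge at pH 7: side chains plus free N-terminal amine."""
    charge = sum((r in POSITIVE) - (r in NEGATIVE) for r in seq)
    charge += 1                  # protonated N-terminal amine
    if not c_term_amide:
        charge -= 1              # free C-terminal carboxylate
    return charge

def hydrophobic_moment(seq: str, delta_deg: float = 100.0) -> float:
    """Normalized helical moment of a binary hydrophobicity indicator.
    Larger values indicate one hydrophobic face, i.e. amphipathicity."""
    delta = math.radians(delta_deg)
    vec = sum((r in HYDROPHOBIC) * cmath.exp(1j * n * delta)
              for n, r in enumerate(seq))
    return abs(vec) / len(seq)

for name, seq in [("mastoparan", "INLKALAALAKKIL"),
                  ("melittin", "GIGAVLKVLTTGLPALISWIKRKRQQ")]:
    frac = sum(r in HYDROPHOBIC for r in seq) / len(seq)
    print(f"{name:>10}: charge {net_charge(seq):+d}, "
          f"hydrophobic fraction {frac:.2f}, "
          f"helical moment {hydrophobic_moment(seq):.2f}")
```

Under these assumptions, mastoparan comes out at a net charge of +4 with roughly 70 % hydrophobic residues, consistent with the amphipathic helix shown in Figure 4C.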
Mastoparan has been shown to engage the Gα N- and C-termini and competes with GPCRs for G protein binding; it has thus been used as a low-molecular-weight GPCR mimetic. [133,142-147] Melittin comprises a predominantly hydrophobic N-terminus and a hydrophilic C-terminus. It stimulates Gi activity and inhibits Gs activity, which consequently leads to inhibition of AC activity. [139,148,149] Furthermore, activating effects on G proteins and their GTPase activity were reported for the neurokinin substance P (H-RPKPQQFFGLM-NH2), the synthetic polyamine compound 48/80 (C48/80, a mixed polymer of p-methoxy-N-methylphenylethylamine crosslinked by formaldehyde), the mast cell degranulating peptide (H-IKCNCKRHVIKPHICRKICGKN-NH2, MCD), and other cationic amphiphilic substances. [132,134,136,142,150-157] Altogether, these compounds are considered pharmacological tools and candidates with potential therapeutic applications. [137,158] In the context of Gα modulators, however, the broad use of compounds such as melittin and mastoparan is restricted because of their dose- and cell-type dependency, their nonspecific targeting, and the variety of biochemical effects they thereby induce. [159,160] In summary, the natural compounds interact mainly via the Gα C-terminus, which appears well exposed and druggable, and thus cause GPCR-G protein uncoupling. For PTX, this results in a permanent inhibition of Gi, whereas the cationic amphiphilic peptides lead to GPCR-independent activation and signaling. The latter is a valuable starting point for tool development at the G protein level, which circumvents the need to address many GPCRs in multifactorial diseases.

Synthetic compounds

The described modulators from natural sources revealed that cationic hydrophobic substances are able to act as G protein modulators. Thus, such compounds have been further investigated. One synthetic compound is the polyamine C48/80 (Section 3.1.1), which activates Gi/o and stimulates GTPase activity. [141,142] In addition, other cationic amphiphilic substances such as hydrophobic amines [136,157] or derivatives of the lead structure mastoparan [136,138,161] have also been described as Gα modulators. Quaternary hydrophobic amines have been described in the context of mastoparan and can affect the activity of purified recombinant G proteins. For example, benzalkonium chloride (BAC) antagonizes the Gi stimulation by mastoparan by inhibiting the GDP exchange, whereas BAC alone slightly increases the basal GDP exchange at high concentrations. In contrast, BAC and other quaternary amines have been suggested to stimulate the nucleotide exchange and the GTPase activity of Go in response to the phospholipid concentration. [136] Other quaternary long-chain alkylamines displayed similarly stimulatory properties on Go, whereas short-chain amines were ineffective. However, high concentrations of hydrophobic amines destabilize the G protein and might lead to denaturation. [136,157] Overall, these amines are considered unsuitable for the modulation of Gα protein activity, since they may also bind unselectively to other proteins and influence their activity. In numerous studies, various derivatives of mastoparan (synthetic and natural) were investigated to explore the structural determinants, including net charge, spacing, charge localization, and proportion of α-helical conformation (Figure 4C), which define activity and cytotoxicity of the lead structure. [136,138,147,161,162]
To reduce the cytotoxicity of mastoparan towards mammalian cells, [I5,R8]-MP was developed by replacing Ala5 with Ile and Ala8 with Arg, resulting in antimicrobial activity against bacteria and fungi but no cytotoxicity in HEK293 cells and no hemolytic effects towards human erythrocytes. [138] Consequently, mastoparan is a prototype substance for the derivation of valuable anti-infective agents from naturally occurring antimicrobial peptides. However, due to G protein-independent side effects, these compounds are less attractive as G protein modulators. [138] In addition to mastoparans, GPCR-derived peptides have been extensively studied in order to gain insight into G protein-GPCR coupling and coupling selectivity. [163-166] These GPCR-derived peptides, however, have a comparably low potential, since each peptide can only interfere with the G protein signaling of a few receptors possessing, for example, similar ICL regions. In summary, although the Gα-GPCR interface appears to be druggable, the existing modulators for this interface have many drawbacks for application as tool compounds. The interface might not be well suited for selective Gα targeting, due to the fact that multiple GPCRs address the same Gα subfamily. Thus, the selective modulation of one distinct Gα protein within the Gα-GPCR interface requires different modulators to affect one G protein signaling cascade entirely. Apart from that, this interface shows potential for exploiting the different coupling selectivities of a GPCR to a Gα protein in order to selectively affect a specific GPCR-Gα interaction. In this context, however, it appears easier to address the extracellular druggable sites of a GPCR.

Gαi/s-nucleotide

The nucleotide binding pocket is not a typical PPI interface like the other regions described; rather, different guanine nucleotides (GNPs, Figure S7) are able to bind there. As GNPs are not classical modulators and can bind unspecifically to other guanine nucleotide-binding proteins, we will only briefly discuss them here. More detailed information can be found in the Supporting Information. One application of GNPs is the ability to induce different activity states, as demonstrated by various crystal structure experiments and studies quantifying the percentage of active G protein. [51,52,68,167,168] Altogether, GNPs represent crucial tools for the analysis of G protein-affecting compounds, as they can be used, for example, in radioactively or fluorescently labeled form, to determine the impact of a tested compound on the nucleotide exchange as well as on the GTPase activity. [167,169,170] Consequently, GNPs have proved to be efficient for various applications. [51,52,68,167-170]

Gαi/s-Gβγ

There are not many modulators that address the Gα-Gβγ interface by approaching Gα; thus, we decided not to subdivide this section. As shown in Section 2.3, Gα contacts Gβγ via the switch regions and αN (Figure S4). [86] The G protein activation enables the heterotrimer dissociation, whereby upon reassociation the signaling is terminated, since the effectors and Gβγ share Gα binding sites (Sections 2.3, 2.4). [87,171,172] Furthermore, AGS class II proteins, such as AGS3 (containing four GPR motifs, Section 2.5), are able to dissociate the heterotrimer, since the GPR motif attaches to and changes the conformation of Switch II close to the Gα-Gβγ interface.
Consequently, modulators identified or developed for the Gα-accessory protein interface may also affect the Gα-Gβγ interaction (Sections 2.5, 3.5, Figure S6). [109,173-175] Moreover, Gβγ seems to compete with the fluorescently labeled Alexa532-RGS4 protein for binding with high affinity to Gαi·GDP·AlF4−, which implies that Gβγ can inhibit the action of GAPs by binding to Gα. [176] Apart from that, the prenylation of Gγ (Section 2.3) anchors Gβγ in the plasma membrane and is strictly required for the interaction with Gα and effectors. [177-179] Based on the G protein signaling partners, peptides that bind to Gα at the Gα-Gβγ interface have been developed. Kimple et al. [109] exploited the RGS14 GoLoco region to design R14GL (DIEGLVELLNRVQSSGAHDQRGLLRKEDLVLPEFLQ), derived from rat RGS14 (also accessory protein interface), which binds to Gαi between Switch II and α3 but not to Gαo; its interaction with Switch II overlaps with the Gβγ contact area of Gαi1·GDP. [109] Subsequently, Wang et al. [182] developed a Gβ-derived peptide exhibiting the Gαi1-binding sequence of a second Gβγ binding site on Gα, which was able to interrupt the Gαi1·GDP-Gβγ association. [182] In addition to the natural partners within G protein signaling, researchers intended to study PPIs by targeting the Gα-Gβγ interface via different screening approaches. In this regard, Gβγ modulators have also been developed; however, they are not described herein. [85,180] Suramin (1, Figure 5) is a drug discovered by Bayer in 1916 and used to treat African sleeping sickness. Initial studies implied that suramin binds directly to Gαs and hinders the heterotrimer reassociation and thus the G protein-receptor coupling. [1,183,184] Later experiments revealed that suramin inhibits the GDP release from Gα. However, suramin exhibits reduced selectivity, since it can inhibit both Gαi and Gαs. [1] Consequently, different suramin analogs have been developed, such as NF449 (2) and NF503 (3, Figure 5), which were superior to the parent compound, comprising a higher selectivity for Gαi and Gαs. [1,2,181,183,185-187] The structural basis and the pharmacological importance of these agents need to be further specified in the future. A further suramin derivative (NF023, Figure 7, Section 3.5.2.1) was identified to target the Gαi3-GIV binding site. [188] A major drawback of these compounds is their limited cell penetration due to the high negative charge of the sulfonic acid groups, thus decreasing their pharmacological potential. [2] Based on the aforementioned reports, it can be concluded that this interface overlaps with the Gα-effector and Gα-accessory protein interfaces, which hampers a clear distinction. Thus, these common sites might be valuable targets for future therapeutic applications. [180,189]

Figure 5. Chemical structures of suramin (1) and its analogues NF449 (2) and NF503 (3). [1,180,181]

Gαi/s-effector proteins

Effectors of Gα are enzymes, proteins or ion channels, with AC being among the most important effectors, as it can be affected by both Gαi and Gαs (Section 2.4, Figure S5). [48,53,190,191] As already mentioned, Gαi myristoylation is required for its inhibitory effect on distinct AC isoforms. [99] These findings provide a valuable opportunity to modulate the Gα protein activity with PTM-like modifications. Apart from that, natural molecules that impair the association of Gαi/s and their downstream effectors are rare.
Only accessory proteins, such as RGS16 (Section 2.5), can be mentioned here, since they may act antagonistically with respect to G protein-effector binding. In this regard, RGS16 was shown to bind to Gαt/o·GDP·AlF4−, affecting the Gαt/o signaling pathway by blocking the G protein-effector interaction. [104,192,193] Based on these observations, the discovery of natural compounds or PTMs is anticipated to broaden the knowledge about this interface. Likewise, there are only a few examples of synthetic compounds that address this interface, which is why we have not divided this section further. It was already known in the 1970s that forskolin (Fsk) activates AC in a receptor-independent way. [93,190] Strikingly, the Fsk-Gαs·GTPγS complex raises the binding affinity between the two AC analogs VC1 (ACV) and IIC2 (ACII) and their catalytic activity (Figure S8). [93] Furthermore, Yoo et al. [194] constructed AC-derived peptides and found that a peptide encoding C2-α'2 (899-926), and two more peptides, namely C1-β4-β5-α4 and C2-α3'-β4', possessed inhibitory features regarding the Gαs stimulation of full-length ACII and ACVI (69 % inhibition for the C1 peptide and 89 % for the C2 peptides). Beyond the aforementioned peptides, additionally tested peptides exhibited higher IC50 values, whereas others showed no inhibition. [194] In summary, although crystal structures have provided insights into the Gαi/s effector binding, [90,93] the availability of compounds acting on this interface is rather low. [194] A possible explanation could be that the Gα-effector interface is difficult, if not impossible, to manipulate. On the other hand, this interface overlaps partially with the interface for accessory proteins (Sections 2.5, 3.5), making it non-trivial to clearly separate these regions. In our opinion, this interface may not be the most critical for the study of G protein modulators; however, it should not be neglected.

Gαi/s-accessory proteins

Accessory proteins themselves are modulators of Gα protein activity, acting as GDI, GEF, GEM, or GAP (Section 2.5, Figure S6). [45,46] Therefore, they serve as important templates for modulator development, based on the motifs that are critical for their function and the interface that they bind to. Addressing the Gα-accessory protein interface and the GTPase activity, respectively, was of enormous importance in the past, as inhibition of the Gαs GTPase function by cholera toxin (CTX, Section 3.5.1) led to the discovery of G proteins. [21] Nowadays, accessory proteins have also been considered as drug targets, which is described in numerous excellent reviews. [44,103,195-197]

Natural compounds

Regarding natural compounds targeting the Gα-accessory protein interface, it is important to consider that Gβγ (inactive state) and effectors (active state) represent natural competitors for the binding of accessory proteins, since the interfaces within Gα overlap significantly (Sections 2.3, 2.4, 3.2, 3.4). [16,45,49,109,121,122,127] Furthermore, bacterial exotoxins directly affect the GTP hydrolysis. [198] Cholera toxin (CTX, 84 kDa, [199] Figure 6A, C) is an exotoxin from Vibrio cholerae, the bacterium responsible for the symptoms of cholera. [21] In early studies, it was observed that CTX increases the intracellular cAMP level by a permanent Gαs activation, which led to the discovery of G proteins. [21]
The activation is caused by a mono-ADP-ribosyl transferase activity of CTX (similar to PTX, Section 3.1.1), which irreversibly transfers an ADP-ribose element from NAD+ to Arg201 G.hfs2.2 (arginine finger, Section 2.2) of Gαs (Figure 6A). [1,21,193,198,200-202] As a consequence, the GTPase activity is inhibited and Gαs·GTP is prevented from being inactivated. [202-205] Using a similar mechanism, a heat-labile enterotoxin (HLT, 86 kDa, [206] Figure 6C) from Escherichia coli also selectively modifies and permanently activates Gαs. [1,201,206,207] Furthermore, a toxin from Pasteurella multocida (PMT, 146 kDa, [208] Figure 6B-C) modulates the Gα protein activity of Gαi/q/13. PMT catalyzes the deamidation of Gln205 G.s3h2.3 (Gαi) to Glu205 G.s3h2.3, thereby blocking the GTP hydrolysis (Section 2.2, Figure 6B). Consequently, Gαi remains in the active state, resulting in a decrease of the cAMP level. [1,82,209-211] Unlike PTX (Section 3.1.1), PMT preferentially interacts with monomeric Gα and can prevent the modification by PTX through Gαi deamidation. [211] In addition, the Photorhabdus asymbiotica protein toxin (PaTox, 335 kDa, UniProt: C7BKP9, Figure 6C) causes the Gln205 G.s3h2.3 (Gαi) deamidation of Gαi/q/11 analogously to PMT and is also capable of catalyzing tyrosine glycosylation of Rho. [212] However, all of these bacterial exotoxins have the disadvantage of irreversibly modifying Gα and thereby permanently affecting the G protein activity. Therefore, these modulators have little clinical utility and should rather be regarded as important pharmacological tools that can provide insights into immunological processes or different aspects of G protein signaling. [201] However, it cannot be denied that targeting the GTPase function is a reasonable approach for modulating the Gα activity, since an inhibition maintains the Gα subunit in the active state whereas a stimulation accelerates the termination of the signaling pathway.

Figure 6. … (PDB ID: 1XTC [213]), heat-labile enterotoxin (HLT, PDB ID: 1LTS [207]), P. multocida toxin (PMT, PDB ID: 2EC5 [214]) and the P. asymbiotica protein toxin (PaTox) glycosyltransferase domain (PDB ID: 4MIX [212]) in complex with UDP-GlcNAc (violet).

Synthetic compounds

The enormous potential of the Gα-accessory protein interface has been recognized, with the result that the development of novel tool compounds (small molecules and peptides) has primarily been directed towards this interface region. High-throughput techniques, but also virtual design, have been increasingly applied to identify or design novel modulators. Structure-activity relationships derived from crystal structures of complexes or from molecular modeling and docking were frequently employed, too. [188,215-217]

Small molecules

The development of small molecule modulators is a classical approach in medicinal chemistry. In 2006 and 2009, the imidazopyrazine derivatives BIM-46174 (BIM monomer, 4) and the disulfide-bonded BIM dimer BIM-46187 (5, both in short: BIM, Figure 7) were introduced, which showed antiproliferative and pain-relief effects, respectively, and have thus been proposed as potential anticancer drugs. [11,218-220] For the selection of G protein-directed modulators, a differential screening approach with human cancer MCF-7 cells was applied, comparing the influence of potential modulators on CTX-stimulated cAMP production (Gαs-mediated signaling) with the influence on Fsk-stimulated AC activity (Section 3.4). [218] Both compounds act as pan-inhibitors of Gα protein activity, preferentially silencing Gαq signaling in a cellular context-dependent manner. [22,220] At the molecular level, BIM reversibly binds to Gα·GDP and prevents GTP binding after GDP dissociation. [11,22,220]
Consequently, Gα is pharmacologically frozen in the empty-pocket conformation. [22] Using docking experiments and all-atom molecular dynamics simulations, Switch II, Switch III, and the αB-αC loop were postulated as BIM binding regions, which could explain the BIM-mediated inhibition through conformational changes in the switch regions that are crucial for GTP binding, as well as through a restricted separation of the helical and GTPase domains. [11,22] In further studies, BIM was analyzed in more detail with respect to Gαq targeting due to its Gαq preference. [221,222] In a computer-based approach performed in 2014, molecular docking was applied to identify potential small molecules with GDI activity that bind to and stabilize Gαi·GDP, assessed against Gαi·GTP, Gαq·GDP, and Gαq·GTP. [223] Two compounds (0990 (6) and 4630 (7); Figure 7) with GDI selectivity for Gαi1 over Gαq, three compounds (8005, 8770, 4799) with GDI selectivity for Gαq over Gαi1, and three compounds (2967, 6715, and 1026) with GDI activity towards both Gαi1 and Gαq were identified. [11,223] Some of these compounds were able to partially block the α2-adrenergic receptor-mediated cAMP regulation promoted by Gαi/o activation; however, none of the compounds showed the desired inhibitory activity even at high concentrations. [1,223] The quinazoline derivative 0990 was studied in more detail and was suggested to bind to Gαi·GDP (Arg178 G.hfs2.2/Val199 G.S3.6 or Glu43 G.s1h1.1/Gln79 H.HA.14 or Gln79 H.HA.14/Lys180 G.hfs2.4), all mimicking important Gαi1-GDI interactions. In structure-activity relationship studies, the basic hydrophobic phenyl-quinazoline-aniline core was shown to be crucial for the GDI activity. [11,223] In 2017, by an in silico ligand screening and a separate high-throughput screening, the Gαi3-GIV interface (Section 2.5) was addressed, and NF023 (9, suramin derivative, Section 3.3) and ATA (8, aurintricarboxylic acid, both Figure 7) were identified. Both compounds were confirmed as Gαi3 binders and inhibitors of the Gαi3-GIV binding. [188] NF023 binds to Switch II, α3 and the α3-β5 loop, a binding site that overlaps with the binding site of the GEM motif (Section 2.5). [84,120,121] However, no interference with Gαi3-Gβγ binding was observed, although the interface regions partially overlap (as suggested for suramin, Section 3.3). [188] The disadvantage of these small molecules is that NF023 (and suramin) are not cell permeable and can inhibit P2X receptors in addition to Gα subunits, and that ATA can also address other targets such as topoisomerase II. [1,188] Apart from that, the authors concluded that the Gαi-GIV interface is defined and druggable and thus of interest for modulator design. [188] The screening approaches employing small molecules demonstrate the possibility to develop Gα modulators. However, a clear drawback is the selectivity of the compounds for the individual subfamilies or G proteins themselves. This is exemplified by BIM, a pan-inhibitor of Gα protein activity obtained from a screening experiment directed towards Gαs, while the approach from 2014 identified compounds with Gαi/q selectivity that did not exhibit the anticipated inhibitory activity. NF023 and ATA also address other targets besides Gα and are therefore not specific.
Nevertheless, small molecules are important tools to study G protein signaling pathways and to explore the determinants for selectivity between the subfamilies.

Figure 7. … (5), [11] compounds 0990 (6) and 4630 (7), [223] aurintricarboxylic acid (ATA, 8) and the suramin derivative NF023 (9). [188]

Peptides

The approach of peptide engineering is of particular interest regarding the Gα-accessory protein interface. For example, peptide sequences derived from protein motifs, such as the GPR motif, [106,107] the GEM motif, [84,120] and the RGS domain, [104,122] which are important for the corresponding functions as GDI, [106-108] GEM [119,120] or GAP, [122] can serve as templates for the peptide design. [45,46] GPR proteins and GPR-derived peptides were shown to act as GDIs for Gαi in vitro. [1,102,224,225] Subsequently, CPPs such as a hydrophobic K-FGF-derived peptide sequence (AAVALLPAVLLALLA) or the basic TAT-derived sequence (GRKKRRQRRRPP) were attached N-terminally to a GPR motif (H-TMGEEDFFDLLAKSQSKRMDQRVDLAK-NH2) to increase the cell penetration of the GPR peptide. [223] The TAT-GPR construct maintained GDI activity, selectively blocked the Gαi regulation of α2-adrenergic-mediated AC activity in HEK293 cells, [223] and has therefore been proposed as a valuable pharmacological tool and a potential therapeutic. The authors, however, tended towards the development of small molecule inhibitors (Section 3.5.2.1) due to the relatively large size of the construct (40mer peptide). [223] In a similar approach, a GIV-derived peptide (GIV-CT, 210 amino acids), containing the GEM motif and an SH2-like domain, was N-terminally coupled to a TAT-PTD (protein transduction domain) sequence to increase cell permeability. [226] It has been shown that the construct can bind to Gi in a cellular context and activate it in a GEF-dependent manner. [226] Consequently, peptides derived from accessory protein motifs can affect the Gα protein activity, and intracellular modulation can be achieved by CPP attachment. The drawback of the described constructs is that they are relatively large for use as chemical tools (e.g., a 40mer peptide or a protein).
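Assembling such CPP-cargo fusions is mechanically simple, which the following sketch illustrates with the sequences quoted above (our own illustration; the sequence strings should be verified against ref. [223] before any use):

```python
# CPP sequences and GPR cargo as quoted in the text (one-letter code).
CPPS = {
    "TAT": "GRKKRRQRRRPP",       # basic, TAT-derived
    "K-FGF": "AAVALLPAVLLALLA",  # hydrophobic, K-FGF-derived
}
GPR_CARGO = "TMGEEDFFDLLAKSQSKRMDQRVDLAK"

def fuse(cpp_name: str, cargo: str) -> str:
    """N-terminal CPP attachment: CPP sequence followed by the cargo motif."""
    return CPPS[cpp_name] + cargo

for name in CPPS:
    construct = fuse(name, GPR_CARGO)
    basic = sum(construct.count(r) for r in "KR")
    print(f"{name}-GPR: {len(construct)} residues ({basic} K/R)")
```

For the TAT fusion this yields a 39-residue construct, in line with the "~40mer" size argument made above.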
mRNA display approach. Along with using the actual protein motifs to develop modulators, these motifs have also been used as templates for high-throughput techniques (peptide sequences in Table S1). For example, the Roberts group used a GPR consensus-derived mRNA display library for the screening against Gαi1·GDP, identified the Gαi·GDP-specific peptide R6A and minimized its sequence to the 9mer peptide R6A-1. Both peptides competed with Gβγ for Gαi1 binding. It was hypothesized that the GDI activity was conserved; however, this was contradicted for R6A-1 in later studies. [227,228] R6A-1 binds to Switch II/α3 of Gαi1 and also showed binding to the other Gα subfamilies in the GDP-bound state. [228,229] Therefore, R6A-1 was postulated as a core motif for Gα interaction [227,229] and was subsequently used for the development of Gαi·GDP·AlF4− binders [230] and of Gαs binders within Switch II/α3. [231] The first approach yielded AR6-05, which competes with Gβγ for Gαi1 binding and favors the GDP-bound over the GDP·AlF4−-bound state. [230] The second approach used a two-step selection process, identifying Gαs·GDP-specific peptides (GSPs), among them mGSP-1 and mGSP-2, which maintain specific contacts with Switch II/α3 and inhibit the formation of the heterotrimer. It was shown for GSP, mGSP-1, and mGSP-2 that they act as GDIs for Gαs, with GSP also acting as a GEF for Gαi1, thus showing bifunctional GEM-like properties. [231] Further optimization strategies for R6A-1 included N-methylations in order to increase its proteolytic stability. [232] By using an mRNA display with a macrocyclic peptide construct, the proteolytic stability towards chymotrypsin of the identified Gαi·GDP-selective cycGiBP (10, Figure 8) was significantly increased compared to its linear variant linGiBP. Both peptides compete with R6A for binding to Gαi1, and therefore an identical binding site was assumed. [233] Subsequently, the library was first digested with chymotrypsin, followed by mRNA display selection against Gαi1·GDP, leading to hits with increased chymotrypsin resistance and stability in human plasma. [234] The respective peptides were referred to as cyclic protease-resistant peptides (cycPRP-1 (11), cycPRP-3 (12), both Figure 8). Due to the similar core consensus, it was suggested that both peptides also bind to Gαi1 at Switch II/α3. [234] By using an mRNA display also containing unnatural amino acids, the Gαi·GDP-selective SUPR peptide (13, scanning unnatural protease resistant, Figure 8) was obtained, exhibiting a further improved stability in human serum, a half-life of 900 min in liver microsomes and a 35-fold better in vivo stability in mouse compared to cycGiBP. [235] Recently, in a modified mRNA display approach, the Gαs·GTP-selective GsIN-1 (14, Figure 8) was identified using a Random nonstandard Peptide Integrated Discovery (RaPID) system; it also addresses Switch II/α3 and inhibits Gαs. [217]

Phage display approach. The first phage display towards Gαi1 was performed with a commercially available peptide library, and two peptide families (consensus ΩPXXΩHP (peptide 1) and LPΩXXXH (peptide 3), with Ω: aromatic amino acids) with G protein-activating properties were identified; however, no structural information was described. [236] In another phage display experiment with Gαi1·GDP, the GDP-selective peptide KB-752 was discovered, showing GEM-like activity (GEF for Gαi1 and GDI for Gαs) and high similarity to the GEM motif. [215,237] In a crystal structure analysis with Gαi·GDP, the peptide was shown to bind into the hydrophobic cleft of Switch II/α3 (like the GEM motif of GIV, Section 2.5, Figure S6). [215] Altogether, KB-752 is able to inhibit the cAMP production through its bifunctional activity within the G protein-mediated AC regulation, which has been shown in cell membrane preparations. [237] In addition, a consensus with the previously described R6A-1 ([T/Y/F]-W-[WY]-[ED]-[FY]-L) was identified, from which the Switch II/α3 binding site of R6A-1 and of the subsequently developed mRNA display peptides was inferred. [228,231,233] In a second experiment, a phage display was performed with Gαi1·GTPγS, resulting in the active-state selective peptides KB-1753, KB-1746, and KB-1755. [216,238] KB-1753 is capable of inhibiting the interaction of Gαt with its effector cGMP PDEγ and the Gαt-mediated activation of cGMP degradation, as well as interfering with RGS protein binding. [216,238] Crystal structure analysis of KB-1753 in complex with Gαi1·GDP·AlF4− showed that KB-1753 also binds into a conserved hydrophobic pocket between Switch II and α3. [216] Based on results from competition binding assays, it was shown that the Gαi1 binding sites of KB-1753 and KB-1755 as well as of KB-1755 and KB-1746 partially overlap, whereas the binding sites of KB-1753 and KB-1746 do not.
Furthermore, KB-1755 was shown to interact with Gα at the effector and RGS protein binding region. Thus, KB-1746 was thought to predominantly interact with the RGS binding site of Gα, whereas KB-1753 predominantly addresses the effector binding site. [216,238] OBOC library screening. In a recent study, using a one-bead-one-compound (OBOC) library screening against Gαi1·GDP, we identified a peptide, GPM-1, with high sequence similarity to KB-752 [237] and the GEM motif, [119,120] which was further modified to increase cell permeability and proteolytic stability. The optimized peptides exhibited GDI activity towards Gαs and GEF activity towards Gαi1, in a GEM-like manner. Thus, the peptides may lower the cAMP concentration in the cellular context via the G protein-mediated AC activity. Using molecular modeling and docking analyses, the peptides were shown to bind to Gαi1·GDP similarly to KB-752 and the GIV-GEM motif within Switch II/α3. Such compounds may thus be considered valuable tools for the study of G protein-mediated signal transduction and pathogenesis (unpublished results). In summary, the peptides described predominantly address the Switch II/α3 region (Figure S9), which appears to be well exposed and well targetable/druggable. This is demonstrated by the fact that this region is targeted not only in directed approaches, but also in non-directed attempts. The binding cleft between the Switch II α2-helix and α3 is well accessible within both Gαi and Gαs, in either state of activity, as shown by the diverse peptides presented in this section. The variation in state selectivity and subfamily specificity is due to the varying conformation of the switch regions, which allows only peptides with certain structural features to bind. Thus, addressing the Switch II/α3 region is an interesting objective for future applications of both peptides, which allow more selective binding due to larger interaction areas, and small molecules. Summary and Outlook G proteins play a crucial role in signal transduction and in a variety of physiological processes. However, this also means that G proteins can be involved in the development and progression of diseases in cases of malfunction in the respective signaling cascades. GPCRs are already targeted by over 30% of FDA-approved drugs and are consequently well druggable through their extracellular ligand binding sites. [4,5] However, targeting G proteins is an attractive alternative to GPCR-directed drugs, for example, in cases of multifactorial diseases in which multiple GPCRs are involved, or in cases where the disease pathogenesis occurs downstream of the GPCR at the G protein level. To date, no drugs addressing G proteins have been approved or tested in clinical trials, rendering the development of tool compounds crucial for pharmacological research. [1,2,11,180] [Figure 8. Chemical structures of mRNA display-derived peptides targeting the Gα-accessory protein interface. The peptides cycGiBP (10), [233] cycPRP-1 (11), cycPRP-3 (12), [234] and SUPR (13) [235] are Gαi1·GDP selective. GsIN-1 (14) [217] is Gαs·GTP selective.] The Gα subunit of heterotrimeric G proteins has a high potential for manipulation by modulators, because of its various structural determinants and its role as a molecular switch.
Here, we examined the five different interaction sites of Gαi/s, namely the Gα-GPCR, the nucleotide binding pocket, the Gα-Gβγ, the Gα-effector, and the Gα-accessory protein interface, in more detail, highlighting the structural characteristics of these interactions. Subsequently, all modulators known so far from the literature were assigned to one of these interface regions, and the approach used to identify these modulators was analyzed for its potential to provide an important starting point for targeting these previously "undruggable" proteins in the future. [14] Regarding the Gα-GPCR interface, many natural compounds are known to address the Gα N- and C-termini, which are thus readily accessible to potential modulators, as evidenced for the N-terminus by its post-translational modifications and for the C-terminus by the ability to develop specific antibodies for this region (Supporting Information). However, the substances targeting this interface also exhibit non-G protein-specific activities, which renders them unsuitable for clinical studies and as leads. We consider this interface to be less attractive for modulator development, since the variety of GPCRs with their G protein coupling selectivities allows only a few specific receptor-mediated signaling pathways to be addressed simultaneously. Targeting the nucleotide binding pocket with modulators is a suitable tool to study G protein signaling and to evaluate novel modulators occupying different interface regions. GNPs are important for artificially inducing different activation states and thus distinct Gα conformations, for example within crystal structure analyses. Furthermore, GNPs are valuable in evaluating whether compounds affect the nucleotide exchange and exhibit GDI, GEF or GEM activity, or alter the GTPase function, which might be achieved by binding of the respective compound to the Gαi/s-accessory protein interface. Additionally, GNPs are also critical for determining the quality of recombinant G proteins. For modulator development, these compounds are less suitable because they can also target other guanine nucleotide-binding proteins. The assignment of modulators to the Gα-Gβγ and Gα-effector interface is not trivial, since the interaction regions overlap with the contact areas of accessory proteins, depending on the Gα activation state. Thus, these interface areas have potential for being addressed by tool compounds, although development starting from the accessory proteins is more promising. Finally, the Gα-accessory protein interface might possess the highest potential for modulator design, since accessory proteins themselves influence the Gα activity and can therefore be used as models or lead structures. This is evident from the fact that peptides derived from the GPR or GEM motif can affect the G protein activity in vitro or, in conjugation with CPPs, intracellularly. In addition to directed approaches that aimed to directly address this interface, non-directed high-throughput techniques also yielded compounds that were able to address this interface. These compounds were frequently associated with modulator properties. Overall, the analysis of this interface has shown that especially the Switch II/α3 region is well exposed and druggable, which has already been described by DiGiacomo et al. [188] in the context of small molecules, but can further be extended to the peptide level.
This region could therefore be approached experimentally on the basis of protein motifs or already identified binders/modulators, or theoretically by directed docking experiments using the above-described approaches. Comparing the potential of small molecules with that of peptides indicates that peptides can achieve higher selectivity through more specific contacts. In addition, the identified peptide modulators of the Switch II/α3 region demonstrate that state-selective or subfamily-selective modulators can be developed, as the conformation of the Switch II/α3 binding cleft differs accordingly. As a consequence for future investigations, novel modulators may be identified based on the conformation of the Switch II/α3 region, using especially directed high-throughput techniques, but also the already identified compounds, which can be further developed as lead structures. At the same time, the approach of identifying natural compounds should be considered a valuable strategy, although it might be time-consuming and non-directed. In conclusion, Gα proteins have an enormous potential for being targeted by pharmacological tools and drugs. Such compounds would provide a viable alternative to circumvent the necessity of targeting GPCRs in the future, especially in the context of multifactorial diseases or diseases associated with downstream defects of GPCR signaling.
The experience of ecological fiscal transfers In many countries, the state owns or manages forests in the national interests of economic development, ecosystem service provision or biodiversity conservation. A national approach to reducing deforestation and forest degradation and the enhancement of forest carbon stocks (REDD+) will thus most likely involve governmental entities at different governance levels, from central to local. Sub-national governments that implement REDD+ activities will generate carbon ecosystem services and potentially other co-benefits, such as biodiversity conservation, and in the process incur implementation and opportunity costs for these actions. This occasional paper analyses the literature on ecological fiscal transfers (EFTs), with a focus on experiences in Brazil and Portugal, to draw lessons for how policy instruments for intergovernmental transfers can be designed in a national REDD+ benefit-sharing system. EFTs can be an effective policy instrument for improving revenue adequacy and fiscal equalization across a country. They facilitate financial allocations based on a sub-national government's environmental performance, and could also partly compensate the costs of REDD+ implementation. We find that intergovernmental EFTs targeting sub-national public actors can be an important element of a policy mix for REDD+ benefit sharing, particularly in a decentralized governance system, as decisions on forest and land use are being made at sub-national levels. Given the increasing focus and interest in jurisdictional REDD+, EFTs may have a role in filling the shortfall of revenues for REDD+ readiness and for implementing enabling actions related to forest governance. If EFTs are to have efficient and equitable outcomes, however, they will require strong information-sharing and transparency systems on environmental indicators and performance, and on the disbursement and spending of EFT funds across all levels. Introduction A national approach to REDD+ implementation will most likely involve governmental actors at different governance levels to support and implement REDD+ activities, policies and measures targeting deforestation and forest degradation. This needs to be accounted for in REDD+ benefit-sharing mechanisms (Luttrell et al. 2013; Irawan and Tacconi 2016). In many REDD+ countries, the nation state owns or manages forests for the welfare of the people (Loft et al. 2015). The implementation of REDD+ activities, however, normally restricts the maximization of revenues from other types of land use. Sub-national governmental entities, such as states, municipalities or local communities, will thus face substantial costs in implementing REDD+ policies (Santos et al. 2012; Irawan and Tacconi 2016). This generates a need to compensate decentralized governments for the spatial spillovers of carbon ecosystem services and biodiversity conservation of forests. Thus far, however, economic instruments in natural resource management and conservation policies, such as payments for ecosystem services, have focused largely on land users and private actors (e.g. Ring 2008a). The implementation of positive incentives for REDD+ will require institutional arrangements that allow for effective operationalization of policies as well as accountability for performance and benefit distribution across multiple governance levels, from central to local governments.
Often, however, insufficient attention is paid to the design of such arrangements, in particular the question of how policy is translated into practical aspects of REDD+ implementation. Recognizing this gap, this brief aims to shed some light on the role ecological fiscal transfers (EFTs) can play as a positive incentive for REDD+ benefit sharing. The (re)distribution of public revenues within and between different levels of government and jurisdictions in nation states is a common public policy (Shah et al. 2007; Ring 2008a). These intergovernmental fiscal transfers (FTs) occur vertically from the national level to states and/or from the state level to local (municipality and community) levels, and horizontally between governments at the same level (horizontal equalization between states or between rich and poor municipalities). Their aim is to improve revenue adequacy, fiscal equalization and compensation for the costs of generating spillover benefits (positive externalities) to areas beyond jurisdictional boundaries. Usually such intergovernmental FTs are based on the ratio of fiscal needs and fiscal capacity. Public expenditure typically occurs for the provision of public goods and services, such as infrastructure, health and education or social welfare programs. Revenue for FTs often comes from public budgets, generated through taxes. As for public infrastructure, environmental protection contributes to the well-being of people within and beyond municipal and regional boundaries. Associated opportunity and implementation costs, e.g. through land-use restrictions and enforcement of the restrictions, are often borne by states and municipalities that provide these environmental public goods (Ring 2008a). EFTs follow the basic logic of FTs, and have been proposed as a positive incentive for environmental protection. Instead of providing compensation for the provision of public infrastructure and services, the idea of EFTs is to compensate public actors such as state or municipal governments for the costs of providing environmental public goods such as biodiversity and ecosystem services. To date, EFTs have been implemented in Brazil (May et al. 2002, 2012, 2013; Ring 2008a) and Portugal (Santos et al. 2012). In other countries, such as Indonesia and Germany, variations of EFTs are being discussed (Ring 2002, 2008b; Mumbunan et al. 2012; Schröter-Schlaack et al. 2014). The implementation of national REDD+ schemes faces a comparable situation. In many cases local governments, such as districts and municipalities, will be responsible for REDD+ activities on the ground (Santos et al. 2012; Irawan and Tacconi 2016), while the resulting provision of the environmental public goods, carbon sequestration/storage and biodiversity conservation, will produce benefits beyond state or municipal boundaries. The costs of implementing and enforcing conservation measures and the loss of public revenues, such as forest production taxes or agriculture lease fees, will, however, burden governments of federal states or municipalities that include large areas of forests (Irawan et al. 2013). The majority of funding for REDD+, so far, has been from the public sector. Most funding channels are focusing on investing in REDD+ readiness, preparing countries for the implementation of REDD+ (GCP et al. 2014).
However, there is still a huge gap between the supply of and demand for emissions reductions from REDD+, and it is unlikely that the funding available is sufficient to provide a strong enough economic incentive and ensure that forest countries continue to change their development pathways (GCP et al. 2014). Based on experiences with EFT implementation, this paper discusses EFTs and their potential for REDD+ benefit sharing. We conclude with lessons learned from EFTs, recognizing that they can provide a ready and functioning financial infrastructure for incentivizing REDD+ policies and actions at the sub-national level. Basic characteristics of ecological fiscal transfers Decisions about where measures for biodiversity conservation and the provision of ecosystem services are to be implemented in a state are often made at the level of the central government, as biodiversity and ecosystem services are generally considered national assets or public goods. Implementation and opportunity costs for providing these natural public goods are often borne by lower governmental levels, if natural resources do not cut across borders. Box 1. Ecological fiscal transfers in Brazil The Brazilian Federal Constitution (Article 155) empowers the states to impose a tax "on circulation of goods and services of interstate and inter-municipal transportation and communication (...) (ICMS)". The ICMS is, by far, the principal source of state and local fiscal revenues, constituting 84.5% of all states' revenues in 2010, and an even greater share of municipal revenues (IPEA Data, 2013). In Brazil, EFT revenues are generated by this tax, which corresponds to a value-added tax (Barton et al. 2011). Since its adoption in 1991 by the state of Paraná, the ecological ICMS (ICMS-E) has been increasingly legislated at the state level. The distribution of ICMS to municipalities is regulated by the federal constitution: "twenty-five percent of the total revenues (…) accrue to the municipalities" (Article 158, IV). The same norm states that "the portions of income accruing to municipalities, will be credited according to two criteria: (i) at least three quarters, on the proportion of added value in transactions involving the circulation of goods and the provision of services carried out in their territories, and (ii) up to one quarter, according to the state's legal provisions". Figure 2 illustrates the breakdown of ICMS taxation and the share that may be apportioned toward ICMS-E. The ICMS-E thus acts as a revenue-neutral tool, insofar as its apportionment does not affect total funds available, to promote conservation of biodiversity while compensating a municipality for the PA that exists in its territory. Recent analysis has demonstrated statistically that the introduction of ICMS-E increased the share of PAs; there is a significant positive correlation of ICMS-E with PAs (Droste et al. 2015). In some cases, environmental criteria reflected in the ICMS-E include, in addition to PAs, other factors such as primary sanitation investment and water resource protection. To date, the ICMS-E has been adopted by laws in 16 out of 26 Brazilian states, and most of them include a conservation factor in the allocation formula, ranging from 2 to 20% of the share (25%) of ICMS revenues constitutionally devolved to municipalities (a stylized numerical sketch of this apportionment logic is given after Box 2 below). Although the instrument was initially adopted in the south and southeast of Brazil, it is by no means restricted to the more economically well-off regions; several of the poorer states in the Amazon have also adopted it.
In many cases, the value of the transfer of the ecological ICMS represents a significant share of the municipal budget, ranging from 28% to 82% of total funds received (Campos 2000). The gross value of resources reallocated to municipalities benefiting from the EFTs by state attained a value of BRL 446 million in 2009 (USD 238 million at that time) in the 11 states for which data were available, most of which (BRL 406 million ≈ USD 215 million) was due to the PA criteria. [Figure 2. Source of fiscal revenues for ICMS-E (May et al. 2012)] To provide lower-tier governments with the revenue needed, EFTs redistribute public funds from central to decentralized governments (Schröter-Schlaack et al. 2014). Through cost compensation, the aim of EFTs is to set an incentive for local-level governments to implement conservation activities to provide natural capital for overall societal well-being (Ring 2008a). To date, the majority of intergovernmental FTs are based on lump sum payments that allow the receiving government to use the transfers in any way it wishes, thereby guaranteeing self-determination. These lump sum payments are mostly based on non-environmental indicators, such as population and area, as the majority of public goods and services are provided for the inhabitants of the relevant jurisdiction. EFTs include additional indicators such as total protected area coverage or environmental quality. To further differentiate the levels of FTs, some Brazilian states include quality indicators in addition to protected area coverage, for example, the type of protected area (PA) and the land uses allowed in these areas (Grieg-Gran 2000; May et al. 2002; Ring 2008a). The federal states Paraná and Minas Gerais have introduced additional qualitative indicators, such as the quality of planning, implementation and maintenance of the PA. Such a differentiation leads to higher payments for national parks, reserves and areas protected for conducting research, as compared to PAs that allow sustainable use of natural resources. Box 2. Proposals for EFTs as a REDD+ benefit-sharing vehicle in Indonesia Over recent decades, Indonesia has undergone a process of significant political, administrative and fiscal decentralization. The powers for managing natural resources and the environment were initially devolved to the district level¹, with significant shares of state revenues (including natural resource revenues) allocated to districts through intergovernmental FTs, with the objective of increasing local accountability, efficiency and effectiveness in natural resource management (Ardiansyah and Jotzo 2013). The bulk of district budgets (80-90%) are financed through the central government's Balancing Fund. This includes a General Allocation Fund, to deal with vertical and horizontal fiscal imbalances and to equalize fiscal capacities for public services across regions, a Special Allocation Fund, for specific programs under line ministries, and a Revenue Sharing Fund, which is derived from natural resource revenues (Mumbunan et al. 2012; Ardiansyah and Jotzo 2013). However, in reality, there is very little accountability, as deforestation has continued apace in the decentralization era, with local governments seeking to maximize their revenues from natural resource exploitation and allocations of the different FTs (Barr et al. 2006; Karyaatmadja 2006).
Policy interest in the use of EFTs to compensate local governments for foregone revenues from natural resource exploitation and/or costs incurred from conservation has increased with the political backing by former President Susilo Bambang Yudhoyono for cutting emissions by 26% by 2020, as compared to business-as-usual levels (Secretary of Cabinet of the Republic of Indonesia 2011). REDD+ features strongly in the overall strategy for emissions reduction. In its Green Paper on Climate Policy, the Ministry of Finance (2009, 65) argues that the best way forward for Indonesia's REDD+ program is to adopt a national-level policy of frameworks and targets (supplemented by selected regional- or project-level approaches, where applicable), with implementation of policy measures at the sub-national level. Several studies have looked at the potential of implementing EFTs for forest conservation and REDD+ in Indonesia (Mumbunan et al. 2012; Ardiansyah and Jotzo 2013; Irawan et al. 2013; Irawan and Tacconi 2016), most notably from the technical aspects of setting indicators, budget allocation sizes and distribution formulae. These studies identified several challenges: 1. The ability of local governments to absorb the potentially significant increases in financial transfers from REDD+ or climate funds is questioned, as some local governments in Indonesia have accumulated substantial unspent balances due to their low capacity for public service delivery (Alisjahbana 2005). 2. Given that REDD+ will be a performance-based incentive, as recently re-emphasized in the Paris Agreement, local governments need to be able to assess emissions outcomes together with other social and economic outcomes. Local governments' knowledge of emissions data will enable transparency in the allocation and distribution of finances, and the ability to differentiate between general FTs and EFTs is critical to inform their choice of behavior based on two incentive systems. 3. The effectiveness of EFTs in generating emissions reduction outcomes will ultimately depend on the broader political economy of land-use change. The allocation of one of the existing fiscal transfer vehicles in Indonesia (the Revenue Sharing Fund) is based on natural resource revenues generated at the local level, thus rewarding forest use and conversion behavior, as demonstrated by the high correlation between revenue levels and deforestation rates on the major Indonesian islands over the period up to 2012 (Nurfatriani et al. 2015). Shared revenues accounted for about 13.8% of local district budgets in 2012 on average (Irawan and Tacconi 2016), though there can be significant variation between districts depending on the level of resource extraction. How a (REDD+) EFT rewarding forest conservation can compete with this existing set of incentives to change the behavior of local public actors depends not only on the relative size of the incentives, but also, perhaps more crucially, on the socioeconomic and bureaucratic expectations that forest use and conversion will generate employment and development. Local values for forests tend to vary widely across different districts and provinces, and local government perspectives often differ from national objectives and priorities (Irawan and Tacconi 2016). ¹ Much of the power over forests and natural resources has been recentralized to the provincial level since the passage of Law 23 in 2014 (Ardiansyah et al. 2015).
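To make the apportionment logic of Box 1 concrete, the following is a stylized sketch of an ICMS-E allocation under a purely quantitative, area-based criterion. Every number here is an invented assumption for illustration; the total revenue, the 5% ecological fraction (chosen from within the 2-20% range cited above), and the municipalities and conservation factors do not correspond to any state's actual statute.

```python
# Stylized ICMS-E apportionment following the logic described in Box 1.
# Every number below is an invented assumption for illustration, not the
# statute of any Brazilian state.

TOTAL_ICMS = 1_000_000_000   # annual state ICMS revenue in BRL (assumed)
MUNICIPAL_SHARE = 0.25       # constitutionally devolved to municipalities
ECOLOGICAL_FRACTION = 0.05   # conservation criterion, within the 2-20%
                             # range that adopting states apply

municipal_pool = TOTAL_ICMS * MUNICIPAL_SHARE
ecological_pool = municipal_pool * ECOLOGICAL_FRACTION

# Quantitative criterion: conservation-weighted PA area relative to the
# municipal territory. Municipalities and factors are hypothetical.
municipalities = {
    # name: (municipal_area_ha, [(pa_area_ha, conservation_factor), ...])
    "Mun-A": (100_000, [(20_000, 1.0)]),   # e.g. a strictly protected park
    "Mun-B": (250_000, [(50_000, 0.5)]),   # e.g. a sustainable-use area
    "Mun-C": (80_000, []),                 # no protected areas
}

def ecological_index(area_ha, pas):
    """Share of the municipal territory under (weighted) protection."""
    return sum(pa_ha * factor for pa_ha, factor in pas) / area_ha

indices = {name: ecological_index(area, pas)
           for name, (area, pas) in municipalities.items()}
total_index = sum(indices.values())

for name, idx in indices.items():
    share = ecological_pool * idx / total_index if total_index else 0.0
    print(f"{name}: index = {idx:.3f}, ICMS-E transfer = BRL {share:,.0f}")
```

Under these assumptions, a municipality with no protected area receives no ICMS-E transfer, while the others split the ecological pool in proportion to their conservation-weighted PA shares; this is the revenue-neutral reallocation property discussed in Box 1.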
To date, EFTs have been implemented in Brazil (Grieg-Gran 2000; May et al. 2002, 2012, 2013) and recently in Portugal (Santos et al. 2012). The following assessment, therefore, largely draws on experiences in those countries. Assessment of EFTs Ecological effectiveness Since EFTs serve the purpose of compensating costs for implementing environmental protection and creating positive spillovers, generally no additional requirements are made on how these measures are implemented. Therefore, the current literature on EFTs seldom explicitly discusses a direct causal link between payments and improvements in environmental quality (Barton et al. 2011). Yet, an indication of the effectiveness of EFTs can be obtained by comparing the total area and quality of PAs within a jurisdiction prior to the introduction of EFTs and years after the EFTs have been implemented. However, it has been noted that other factors in a given context, which might also lead to an increase in total PA, must be taken into account. Therefore, a comparison with a business-as-usual scenario that includes information on historical trends in PA designation should be applied when assessing the effectiveness of a particular EFT program. Referring to the case of ICMS-E, Ring (2008a, 491) states it "has become an important stimulus for the creation of new conservation units and for improved environmental management and quality of these areas". Santos et al. (2012) show that EFT contributions in Portugal can be significant for municipalities with large parts of their area granted protection status. They state that, as a result, this may act as an incentive to keep or increase PA coverage. In Brazil, the ICMS-E was initially introduced as a mechanism for compensating land-use restrictions. Over the years, it started to be seen as an incentive mechanism for the establishment of new PAs (Loureiro 1997; May et al. 2002). Figures from the state of Paraná indicate that the total PA within the state has increased by 164.5% since the establishment of ICMS-E in 1991. The majority of the increased protection occurred within the first 10 years of the program, indicating a saturation effect and an increasing scarcity of areas with low opportunity costs in which new PAs can easily be established (Loureiro 2002; Ring et al. 2011). Other studies, however, show that the effectiveness of the ICMS-E in stimulating the creation of new PAs in Brazil is not straightforward. As an initial attempt to evaluate this, May et al. (2012) found that, in the majority of the states analyzed (10 out of 13), the average number of new PAs had declined in absolute terms in the period after the creation of the ICMS-E. Droste et al. (2015) recently found a direct relationship between the increase in PAs and the implementation of the ICMS-E. The authors state that, between 1991 and 2009, there was a significant positive correlation between ICMS-E and PA, meaning that there were, on average, higher shares of PAs with ICMS-E than without. They also found that gross domestic product (GDP) per capita correlates positively and significantly with PA: on average, richer states have higher PA shares (Droste et al. 2015). Besides the difficulties in proving a direct causal link between the provision of EFT payments and an increase in PA coverage, a major constraint in targeting and assessing the environmental effectiveness of EFTs is the lack of indicators for measuring environmental quality improvements within the PAs. For Portugal, Santos et al.
(2012) explicitly state that quantitative indicators on PA coverage should be complemented with quality criteria. Similarly, the distribution of ICMS-E revenues in most states in Brazil is currently based only on quantitative indicators, such as the area of the PA. These take into account the relationship between the size in hectares and the conservation factor of the PAs contained in the municipality and the overall area of the municipality. A qualitative index, which would stimulate efforts to improve local biodiversity protection and management, although included in some legislation, has yet to be regulated and implemented with success. In the case of the state of Paraná, for example, the initial implementation of the scheme was changed to adopt a quality index, which is sensitive to the efforts of municipalities toward PA establishment and maintenance. The index includes biological, physical and chemical indicators of PAs, as well as social and administrative indicators such as management, infrastructure and provision of basic needs to local communities, among others. According to Loureiro (2002), this is why the instrument acts as an incentive, rather than just compensation, and allows each municipality to influence outcomes according to its own conservation decisions and actions. Finally, the addition of qualitative criteria seems not to increase costs, once it is combined with an increase in the resources transferred, as established in the state of Paraná legislation. The implementation of actions that aim to improve the qualitative indicators of PAs also includes voluntary help from local communities in developing diverse types of actions ranging from management to education (Nascimento et al. 2011). Conditionality of payments The bulk of intergovernmental FTs is allocated as unconditional lump sum payments. This provides freedom for the recipient administration to decide upon their use, and thus preserves local autonomy; this is also a constitutional precondition of EFTs in Portugal (Santos et al. 2012). However, in their analysis of the ICMS-E, May et al. (2012) regard the non-earmarking of revenues as problematic. The authors see an important limitation of the ICMS-E for environmental management in the fact that the transfer to municipalities is not subject to strict application of resources to environmental matters, since the National Taxation Code provides that taxes not be bound to specific expenditures. According to the authors, it seems logical to assert that, in the absence of social control over the application of these resources, the likelihood of them being used to cover other expenses at the municipal level is high. Thus, some municipalities in Brazil are already considering the inclusion of results-based payments for ICMS-E revenues. The State Environmental Agency of Mato Grosso, for example, has proposed the adoption of a scoring system to evaluate the quality of conservation. Municipalities with a positive score would then receive a revenue increment (Mato Grosso 2009). This would have the potential to form a virtuous circle: the money received would be partly applied to PAs and indigenous lands or the zones surrounding them, thereby generating improvements in the quality of these areas and increasing the possibility of raising even greater financial resources for the municipality. According to Nascimento et al.
(2011), the experience of the state of Paraná in Brazil has shown that the ICMS-E contributes to the higher goals of ecosystem services provision, biodiversity conservation and climate mitigation. The creation of qualitative indicators played a key role in enhancing these objectives without further costs, as local communities have been fully involved in this process and the municipalities have increased their revenues by meeting the different indicators. Cost effectiveness Besides environmental effectiveness, cost effectiveness is a key requirement for conservation measures. In general, a policy option is more cost effective relative to another either if an equal conservation outcome is attained at lower total costs or if its conservation outcome is higher for given total costs (Wätzold et al. 2010). The policy and management costs for establishing and implementing EFTs are considered to be relatively low, because in many cases the administrative structures needed are already in place, and political buy-in may be easier as this leverages existing policy instruments. The transaction costs for determining EFT payments depend on the indicators and monitoring procedures selected in each specific case. If, as stated above for Portugal and most Brazilian states, EFT payments are based only on the quantity of the area under protection, these numbers are relatively easy to obtain and costs are low (Ring 2008a). However, if additional quality indicators are applied, monitoring costs may rise. Referring to the case of the European Natura 2000 network, Barton et al. (2011) argue that the effectiveness of EFTs is far greater if quality indicators, such as the type of PA and protection status, are used. However, this requires regular field validation of PA management quality. In most industrialized countries, such indicators are already being surveyed, with high-resolution monitoring procedures and corresponding capacity in place. In the context of forest-rich developing countries, however, the costs of establishing quality-based monitoring systems and the lack of capacity in responsible state agencies may pose a real challenge (Loft et al. 2014). Therefore, a compromise between easily available monitoring data and capacity, on the one hand, and indicators that include quality aspects, on the other, must be assessed at the place of implementation. Further, environmental protection can have high opportunity costs depending on geographic and socioeconomic factors (Börner et al. 2015). May et al. (2013) analyzed ICMS-E allocations in the northwest of Mato Grosso and compared the ICMS-E revenues from PA creation with the opportunity cost of conversion to pastures. They described the specific contribution of the livestock industry to municipal value added in the area, compared with ICMS-E revenues derived from PAs and indigenous lands. They found that the absolute values of municipal revenues derived from the PA criterion are significantly higher than those from livestock and logging in the municipalities analyzed. Therefore, under certain conditions, PAs can constitute a greater source of municipal ICMS revenue than livestock and logging, despite the predominance of these activities in the gross income of this frontier region. However, it is also probable that there are other sources of spillover municipal revenues derived from service and manufacturing enterprises associated with livestock and timber activity.
Equity When discussing the distributional equity implications of EFTs, it is important to highlight that the instrument does not provide 'fresh money' but rather redistributes existing public funds among different public actors. Thus, EFTs are subsidiary instruments in intergovernmental fiscal relations, and their distributional effects depend very much on how the EFT's budget is generated, i.e. the general tax structure as the primary source of the revenues that are being redistributed. In Latin America, for example, the income generated through value-added tax forms a major part of the public budget. Since VAT is imposed on traded goods, it has a substantial impact on the poor (Barton et al. 2011). However, as EFTs are financed through a fixed percentage of ICMS revenues, they can have significant distributional impacts among different sub-national actors. Santos et al. (2012) show that the introduction of ecological indicators in the fiscal transfer scheme in Portugal has greatly affected the distribution among municipalities. If the new fiscal transfer regulation were to be applied without recognizing the ecological indicator based on PA coverage, Santos et al. (2012) conclude, all municipalities with more than 70% of their territory under PA regimes would lose out. In Castro Verde municipality, in 2008, for example, the ecological indicator accounted for 38% of FTs allocated and 34% of overall revenues, while in the municipalities of Lisbon, Alerim and Aguiar da Beira the ecological component was zero. Although EFTs are payments between jurisdictions, they have indirect distributional effects on individual land users. EFTs serve the purpose of compensating municipalities (or provinces) for expenses made to supply public ecosystem goods and services, which ideally leads to more effective management and conservation of ecosystems. However, the other side of the coin is that more effective management and conservation may impact local land users in neighboring areas through land-use restrictions, even if their impact is low. In some municipalities in Brazil, ICMS-E payments may be further distributed to non-municipal stakeholders within municipal boundaries. May et al. (2013) find that one of the municipalities in northwest Mato Grosso, despite not having an explicit criterion for distributing ICMS-E resources for socioenvironmental purposes, transferred USD 34,000 (around 2.6% of its total ICMS revenues of USD 1.3 million) in 2012 to two indigenous tribes whose lands lie partially in the municipality (Mato Grosso 2012). The funds were administered by the Indigenous National Foundation with the active participation of the indigenous tribes, with the aim of guaranteeing procedural equity. As a consequence, resources were invested in different projects (health, land use, etc.) that benefited the Indian communities in the indigenous lands of the Enawenê-Nawê and Cinta Larga tribes, located within the municipality. According to respondents from the Enawenê-Nawê tribe who participated in the analysis, the ICMS-E resources transferred to the indigenous people are usually used to support ethnic customs and to monitor the indigenous land, which involves traveling throughout the territory to prevent intrusion and resource extraction by non-indigenous persons. In the case of the Cinta Larga, the funds are used for activities that increase productivity in nut collection and poultry farming, which also result in monitoring for illegal activities inside the indigenous lands (May et al. 2013).
With regard to procedural equity, the provision of EFTs for the establishment of PAs and environmental quality objectives has the potential to raise acceptance of environmental protection measures at the local level of implementation (Santos et al. 2012). This presupposes good communication of the relationship between the conservation indicators and the FTs received based on these new indicators. Based on the experience in Portugal, Santos et al. (2012) highlight the need to accompany EFT implementation with good information and communication strategies, as otherwise local-level policy makers may not know how much their budgets benefit from this source of funding. Similarly, May et al. (2013) show that in northwest Mato Grosso, most of the people managing ICMS-E resources inside the environmental secretariats of the municipalities studied do not know the exact amount that ICMS-E generates and how these benefits are distributed, since they are included in the general public budgetary allocations to the municipality. They find that the state government has made little effort to disseminate information on the share of funding that is distributed through ICMS-E, but relies on civil society organizations to promote its effectiveness. According to the environment secretary of one of the municipalities analyzed by May et al. (2013), "the ICMS-E was not a demand of the local population, it was a top-down initiative". This explains, to some extent, why municipal environment officers in Mato Grosso are unaware of the amounts that are transferred to the municipalities. Further, May et al. (2012) show that there is little transparency in the implementation of municipal budgets, although municipalities are legally obliged to report on the receipt and detailed expenditures of these funds (Ordinance 2759-01). Transparency is, however, crucial to identify EFT benefits for environmental management. The lack of transparency also results in difficulties in assessing distributional issues associated with the mechanism, such as social impacts. Finally, in terms of impact on the poor, qualitative evidence from the states of Paraná and Mato Grosso in Brazil shows that ICMS-E has a positive impact. For example, it increases access to basic needs such as education, subsistence, health and infrastructure (Nascimento et al. 2011; May et al. 2013). This impact, however, was due to political will, but also to the fact that ICMS-E resources were not earmarked for environmental conservation. This suggests that lump sum transfers can both affect the performance of environmental results and increase distributional equity. This research was carried out by CIFOR as part of the CGIAR Research Program on Forests, Trees and Agroforestry (CRP-FTA). This collaborative program aims to enhance the management and use of forests, agroforestry and tree genetic resources across the landscape from forests to farms. CIFOR leads CRP-FTA in partnership with Bioversity International, CATIE, CIRAD, the International Center for Tropical Agriculture and the World Agroforestry Centre.
Tractor Cab Ergonomics Optimization Based on the Simplified Model of Upper Limb from the Perspective of Public Health The study of tractor ergonomics is both an essential part of public health and a significant focus of the scientific community at present. It offers a foundation for the layout design of the tractor cab, making it possible to effectively prevent occupational diseases, minimize the number of safety accidents, and enhance the comfort of operation. According to the various modes of operation in the tractor cab, devices are categorized as control rod devices, knob-type devices, and steering wheels. The ease of handling of the different components was ranked according to the rapid upper limb assessment. Then, following the concept underpinning this evaluation approach, the comfortable range of motion of the human upper limb joints is evaluated under a variety of manipulation modes. In conjunction with the structure of the human body and the characteristics of its movement, a simplified point-line structure model of the human upper limb is constructed, with the H point serving as the reference point. The problem of distributing the control components in the best possible way is thereby solved, and the optimal distribution range diagram of the steering wheel is obtained. The ideal height for the distribution of control rod devices is around 300-400 millimeters, whereas the ideal height for the distribution of knob-type devices is approximately 200-500 millimeters. In conclusion, the cab design of the KAT2204 tractor is improved based on the above analysis. The validity of the research conclusion is confirmed by RULA values of no more than 2, obtained when the design results were verified with the Creo Manikin module. This research approach can serve as a reference for ergonomics studies of tractor cabs. Introduction The tractor is a kind of heavy machinery with a poor working environment and a complex control system. Every element of its cab design, and of the man-machine system design related to the cab, bears on the normal and efficient operation of the machinery and the safe and comfortable control by the driver. Therefore, in the industrial design of the tractor, the appeal of the humanization concept is particularly prominent [1]. In the study of tractor cabs, Henry Dreyfuss, an American industrial designer, is regarded as the first person to study tractor ergonomics. In 1950, when Henry Dreyfuss designed the tractor cab for the John Deere company, he applied the ergonomic design theory used in aircraft cockpits to the design of control rod devices in the tractor cab, distinguished the control handles by color and shape, and designed them as standardized parts produced on the assembly line [2]. With the development and improvement of ergonomics theory, the seat, display devices, control devices, driving environment, and safety performance of the tractor cab have been studied in depth [3-6]. Tractor cab ergonomics research is one of the key fields of public health. For example, in the U.S., one of the public health focuses is the effectiveness of rollover protective structures for preventing injuries associated with agricultural tractors, because agriculture ranks fourth among U.S. industries for work-related fatalities.
Relevant studies include seating discomfort and operating comfort of tractor cabs [7]; in addition, the design and development of tractors need to consider how to reduce safety accidents during operation [8] and prevent occupational diseases of tractor drivers [9]. The weakness of basic research on the man-machine system of the tractor cab and the lack of cab design criteria and optimization methods restrict the comfort of tractor operation. At the same time, it is necessary to reduce safety accidents during operation and prevent occupational diseases of tractor drivers. At present, international research has focused on cab and seat damping suspension [10,11]. Research on the ergonomics of tractor cabs in China started relatively late, mostly focusing on theoretical analysis [12,13]. Carrying out ergonomics research on the tractor cab with digital methods can provide a valuable reference for cab design [14]. At present, virtual reality assessment systems [15] and field-of-vision driving computer-aided systems [16] have been used to study cab ergonomics. RULA (Rapid Upper Limb Assessment [17]) is an important ergonomics research method based on the risk assessment of muscle imbalance, published in 1993 by Dr. Lynn McAtamney and Dr. Nigel Corlett of the Institute for Occupational Ergonomics, University of Nottingham. This method has been widely used in the development of various products, such as the ergonomics evaluation of aircraft maintenance tasks [18], efficiency research on tables and chairs [19], and research combining it with free modulus magnitude estimation [20], all of which provide ergonomic references for the design and development of products, as well as the development of an evaluation for finger disorders and an evaluation method based on upper limb load built on the core idea of RULA [21]. In this research, the RULA evaluation method is used for the spatial arrangement of the tractor cab control devices. In conjunction with the RULA evaluation system, an examination of the operating characteristics of each control part of the tractor cab, such as its mode of operation and frequency of use, is carried out. The range of motion of each joint was evaluated using the RULA assessment system, and a mathematical model of the human upper limb was constructed. The last step is to use MATLAB software to simulate the experiment so as to determine the ideal size range of each control device. This size range may be used to inform the design of the control platform within the tractor cab, improve the comfort and accuracy of the driver, and offer a reference for the study of ergonomics inside the cab. Factors Affecting the Status of Upper Limb Disorders. In RULA, the main factors influencing upper limb disorders are the angles of the joints; when each joint is in the right-angle position, muscular disorder of the human body is smallest. However, apart from the joint angles, the manipulation mode of the device will also affect the comfort of the driver [22]. At each point within its range, the human wrist is capable of a substantial degree of torsion. When working, the wrist is most comfortable in the middle of its torsional range. On the other hand, if the wrist is near the limit of its range, there is a greater risk of disorder.
In actual work, depending on the needs of the situation, staff may occasionally be required to maintain an action for a long period or to repeat an action frequently. Both of these requirements increase the possibility that workers will develop muscular disorders; hence, the various kinds of devices must be discussed separately, since the potential for developing a muscle problem differs for each type of device. Additionally, in the course of real work, staff may occasionally have to bear a certain load, and the condition of the legs will also affect how comfortable the task is. Because the driver is not required to bear a load under the usual working conditions of the cab, which are the focus of this study, load-carrying risk is not taken into account. Analysis of Cab Control Devices. The ergonomic design of the tractor cab is mainly based on the operator's physiological characteristics, movement characteristics, psychological characteristics, physical limits, habits, and other factors, combined with ergonomic design principles, to reasonably design the structure of control rod devices, display adjustment devices and seats in the tractor cab, minimize the noise and vibration in the cab [23], and provide a convenient and comfortable working environment for the driver [24,25]. This paper takes the high-power wheeled tractor (about 200 HP) as the main research object and analyzes the layout and design dimensions of the tractor cab. From the perspective of ergonomics, the components in the tractor cab can be divided into five categories: hand control components, foot control components, visual display components, the driving seat, and the driving space [26]. The composition of the cab of a general tractor is shown in Figure 1. According to the different modes of operation, and taking the above influencing factors into account, control devices in the cab are classified into lever devices, knob devices, and steering wheels. The risk of imbalance during operation is ranked from level 0 (minimum) to level 2 (maximum), and the operating devices are summarized in Table 1. Evaluation of Operating Comfort. RULA is a method that evaluates designs on the basis of muscular disorders; the final score is used to determine whether or not a design is reasonable, and if the final score is 1 or 2, the design may be accepted. Rating of the limb disorders, rating of the trunk disorders, and the final score are the three elements that make up the RULA evaluation technique. The final score section summarizes the first two parts and rates the disorders by synthesizing the risk level of muscular disorder in a given setting. When building the model of the upper limb and solving the problem, the trunk's backward-inclined angle is taken as less than 10°, since the body does not lean forward very often under tractor cab conditions. After score screening according to the rating criteria of each part, a comfortable range of motion for each joint can be obtained. Take the vertically downward direction as the 0° position. When the control object is the steering wheel,
the motion range of the shoulder for the upper lift is 0° to 20°, and the intersection angle of the upper arm and the forearm is 60° to 100°. When the control object is of the knob type, the motion range of the shoulder for the upper lift is 0° to 100°, and the upper arm and forearm intersection angle can be 0° to 100° as well. When the control object is a device of the joystick variety, the motion range of the shoulder for the upper lift is 0° to 45°, and the upper arm and forearm intersection angle can range from 0° to 100°. Establishment of the Upper Limb Model. In the analysis of the operating device size range, the placement range of operating devices must be calculated in combination with the characteristics of the structure of the human body and its movement; in considering the spatial arrangement of the control devices, the shoulder width and the turning angle range of the body should also be taken into consideration. Drawing on the D-H model commonly used in multi-rigid-body kinematics, research on wearable robots has established a D-H model of human upper limb motion, which is used here to analyze human upper limb motion [27]. Based on the basic idea of the D-H model, the human body model is transformed into a point-line structure in which each point represents a joint and each line a bone, as shown in Figure 2. Then, take the hinge point H between the human torso and the thigh as the axis origin, the tractor driving direction as the X-axis direction, and the Y-axis direction as upward perpendicular to the ground, establishing the plane coordinate system. The H point is the hip point, which corresponds to the hip joint in the human body template [28]. Under cab working conditions, when the driver operates the various devices, the hip joint position relative to the cab is stationary, so taking the H point as the origin of coordinates of the mathematical model is convenient for the analysis of human upper limb movement. Moreover, according to the results of the cab design, as long as the spatial position of the H point can be determined, it can conveniently be used to transform the Cartesian coordinate system and then guide the design work. The simplified point-line structure is shown in Figure 2. The coordinates of the points can be calculated according to the geometric relations, where θ1 represents the body's backward-inclined angle, θ2 represents the upper arm lift angle, and θ3 refers to the angle between the forearm and the extension line of the upper arm. Size Solution of the Joystick and Knob Device. Joystick and knob-type devices are generally placed to the side of the driver, and operating some of them even requires a slight turn of the driver's body. So, in the optimization calculation of these devices, the relevant parameters need to be added to the original mathematical model. Figure 3 is a top view of the human upper limb model, that is, the point-line model observed along the Y-axis direction, in which θ4 refers to the angle between the body's shoulder line and the Z-axis, namely the torsion angle of the human body, and θ5 refers to the arm spread angle. l1, l2, l3, and l4 refer to the shoulder height, upper arm length, forearm length, and unilateral shoulder width of the human body, respectively. According to the mathematical model established above, the position of the target point is calculated; the operating angle of each part of the body is shown in Table 2. In the solution of the three-dimensional coordinates of target points, the driver is first set at an incline angle of θ1 and then raises the upper arm and forearm when operating control rod devices and knob-type devices on the side. The angle between the upper arm and the plumb line is θ2, and the lifting angle of the forearm relative to the upper arm is θ3: x = l2·sin θ2 + l3·sin(θ2 + θ3) − l1·sin θ1, y = l1·cos θ1 − l2·cos θ2 − l3·cos(θ2 + θ3), z = l4. (2) Five equally spaced values were taken within the ranges of θ1, θ2, θ3, θ4, and θ5, and the values were fed into MATLAB for repeated calculation, with the number of repeated calculations being 5⁴. Then, the results of each calculation are plotted as points in the coordinate system; the joystick device size optimization results are shown in Figure 4, and the knob-type device size optimization results are shown in Figure 5.
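The grid-sampling experiment described above is straightforward to reproduce in outline. The following Python sketch implements Equation (2) and samples the RULA-screened joystick ranges (trunk incline below 10°, shoulder lift 0° to 45°, forearm angle 0° to 100°). The link lengths are assumed anthropometric placeholders rather than the values used in the paper, and the torsion angle θ4 and arm spread θ5 are omitted for brevity.

```python
from itertools import product
from math import sin, cos, radians

# Assumed link lengths in mm (illustrative anthropometric placeholders,
# not the paper's values): shoulder height above H, upper arm, forearm,
# unilateral shoulder width.
L1, L2, L3, L4 = 560.0, 310.0, 240.0, 200.0

def hand_position(t1, t2, t3):
    """Equation (2): hand coordinates relative to the H point.

    t1: trunk backward incline; t2: upper-arm lift from vertical;
    t3: forearm angle from the upper-arm extension line. Degrees in,
    millimeters out; the lateral offset z is fixed at the unilateral
    shoulder width l4 (torsion theta4 and arm spread theta5 omitted).
    """
    t1, t2, t3 = map(radians, (t1, t2, t3))
    x = L2 * sin(t2) + L3 * sin(t2 + t3) - L1 * sin(t1)
    y = L1 * cos(t1) - L2 * cos(t2) - L3 * cos(t2 + t3)
    z = L4
    return x, y, z

def grid(lo, hi, n=5):
    """n equally spaced samples over [lo, hi], mirroring the MATLAB runs."""
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

# RULA-screened joystick ranges: trunk <= 10 deg, shoulder lift 0-45 deg,
# forearm angle 0-100 deg; five values per angle as in the text.
points = [hand_position(t1, t2, t3)
          for t1, t2, t3 in product(grid(0, 10), grid(0, 45), grid(0, 100))]

ys = [p[1] for p in points]
print(f"{len(points)} sampled hand positions; vertical extent "
      f"{min(ys):.0f} to {max(ys):.0f} mm above the H point")
```

Projecting sampled points of this kind onto horizontal slices is what yields the height-segment scatter diagrams of Figures 6 and 7.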
According to the mathematical model established above, the position of the target point is calculated, with the operating angle of each part of the body as given in Table 2. In the solution of the three-dimensional coordinates of the target points, the driver is first set at an angle of θ1 and then raises the upper arm and forearm when operating the control rod devices and knob-type devices at the side. The angle between the upper arm and the plumb line is θ2. The lifting angle of the forearm relative to the upper arm is θ3. The target-point coordinates follow from the geometric relations: x = l2·sin θ2 + l3·sin(θ2 + θ3) − l1·sin θ1, y = l1·cos θ1 − l2·cos θ2 − l3·cos(θ2 + θ3), z = l4. (2) Five values were taken at equal spacing within the ranges of θ1, θ2, θ3, θ4, and θ5, and the values were then substituted into MATLAB for repeated calculation, with 54 repeated tests. Then, the results of each calculation are mapped as points in the coordinate system; the size optimization results for the joystick devices are shown in Figure 4, and the size optimization results for the knob-type devices are shown in Figure 5. The aforementioned data need to be processed in two dimensions so that the results of Figures 4 and 5 can be presented in a manner that is easier to understand. The value range of the vertical axis in the two figures can be estimated by inspecting Figures 4 and 5. The optimal size range of the control rod devices scattered along the vertical axis in Figure 4 is (0, 1000). Figure 5 illustrates that the optimal size range of the knob-type devices scattered along the vertical axis is (0, 600). Take the average of nine segments in the range of Figure 4 from 100 to 1000, and take the average of six segments in the range of Figure 5 from 0 to 600. After projecting the points from each section onto the XOZ plane, the scatter diagram for the corresponding height section can be obtained. The best range for the control rod devices of the side console at the equivalent height is the range that corresponds to the scatter dispersion range. Figures 6 and 7 display the scatter plots that were generated as a result. Within this height range, the ideal distribution space in the horizontal plane is reflected by the position of the points on the outermost and surrounding areas of the scatter diagram. By observing the area enclosed by the outermost points, it can be concluded that the optimal distribution height of the control rod devices is approximately 300-400 mm, followed by 200-300 mm and 400-500 mm, and that the optimal distribution height of the knob devices is 200-500 mm. Solution of Steering Wheel Size. According to the criteria of the forearm imbalance rating, the forearm imbalance rating level will increase when the current working position of the arm is beyond the human body's plane of symmetry or when the arm is working outside the body, because these positions cause the forearm to be out of alignment with the rest of the body. This criterion demonstrates that the steering wheel should ideally be located in the plane of body symmetry and that the wheel diameter should be approximately equal to the width of a human shoulder. As a result, there is a two-dimensional zone relative to the human body that constitutes the ideal placement for the steering wheel.
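Equation (2) can be evaluated over a grid of joint angles to reproduce the kind of scatter cloud shown in Figures 4 and 5. In the sketch below, the link lengths and the angle ranges are placeholders (the paper's actual values come from Table 2 and the RULA screening), and only the three angles entering Equation (2) are swept:

```python
import numpy as np

# Illustrative reconstruction of the MATLAB sampling loop: link lengths and
# joint-angle ranges are assumed placeholders, not the paper's Table 2 values.
l1, l2, l3, l4 = 600.0, 300.0, 250.0, 200.0   # mm, assumed anthropometric data

theta1 = np.deg2rad(np.linspace(0, 10, 5))    # trunk backward inclination
theta2 = np.deg2rad(np.linspace(0, 45, 5))    # upper arm lift
theta3 = np.deg2rad(np.linspace(0, 100, 5))   # forearm-to-upper-arm angle

points = []
for t1 in theta1:
    for t2 in theta2:
        for t3 in theta3:
            # Target-point coordinates from Equation (2)
            x = l2 * np.sin(t2) + l3 * np.sin(t2 + t3) - l1 * np.sin(t1)
            y = l1 * np.cos(t1) - l2 * np.cos(t2) - l3 * np.cos(t2 + t3)
            z = l4
            points.append((x, y, z))

points = np.array(points)   # scatter cloud analogous to Figures 4 and 5
print(points.shape)         # (125, 3)
```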
Given that the steering wheel is a special control device, the mathematical model can be simplified to improve the efficiency of the analysis and make the optimization results more intuitive. In order to facilitate the solution of the position of the target, the human body model is simplified into a point-line structure, where each point represents a joint and each line represents the skeleton; then take the H point as the origin of coordinates, the tractor driving direction as the X-axis direction, and the Y-axis direction upward, perpendicular to the ground, establishing the plane coordinate system. The simplified point-line structure is shown in Figure 8, and the coordinates of the target points can be obtained according to the geometric relations. The simulation test is conducted using MATLAB software programming, and the optimal range is covered as far as possible. The range of the motion angle of each joint while the driver is operating the steering wheel is as follows, where φ4 is set for the convenience of calculation. According to the geometric relations, the calculation formula for the coordinates of the target points can be deduced, in which l1, l2, and l3 are the shoulder height, upper arm length, and forearm length, respectively. We took 10 values at equal spacing within the ranges of φ1, φ2, and φ3 and then calculated repeatedly in MATLAB; the number of repeated tests was 10^3. Then, the results of each calculation are plotted in the coordinate system, and the results are shown in Figure 9. Layout Optimization of the Control Device. According to the optimization results obtained by the method described above, the cab layout of a KAT2204 type tractor was taken as the design object for optimization. The primary targets of the cab optimization were the reversing lever, hand throttle, gear handle, multifunction joystick, and steering wheel. Figure 10 demonstrates the establishment of a rectangular coordinate system with the H point serving as the origin. Table 3 provides information regarding the relative position and size of the various important components. As shown in Figure 11, each RULA value of the operating members is not greater than 2, so the analysis method is feasible and effective, and the results obtained can provide a valuable reference for the design of the cab. Conclusion. The conclusions of our work can be summarized as follows: (1) This work provides an analysis of the primary components and research methodologies used in the ergonomic design of the tractor cab. It also proposes a strategy for optimizing the ergonomics of the tractor cab layout based on a simplified model of the upper limb from the point of view of public health. (2) The angles of joint movement under specified situations were selected by the RULA evaluation method, and the simplified model of the human upper limb was constructed based on the D-H model of the human upper limb proposed in wearable robot research. MATLAB was used to carry out several rapid simulation experiments in order to obtain the optimal range of the operating parts, as well as the optimal design height of the side console. The H point was used as the origin of the rectangular coordinate system.
(3) The RULA results obtained by the Manikin module for the optimized layout are all less than or equal to 2, which verifies that the optimization results presented in this paper are feasible and effective. The RULA assessment method can be utilized for the purpose of conducting an ergonomic study under certain circumstances. The results of an ergonomic study conducted on a tractor cab using the Pro/E Manikin module demonstrate that the findings can be disseminated and utilized. This method not only serves as a reference for the investigation of the ergonomics of tractor cabs but also establishes the groundwork for the further improvement of tractor cabs. Data Availability. The dataset used to support the findings of this study is available from the corresponding author upon request. Conflicts of Interest. The authors declare no conflicts of interest.
5,149.2
2022-08-02T00:00:00.000
[ "Engineering" ]
Heat-Driven Synchronization in Coupled Liquid Crystal Elastomer Spring Self-Oscillators. Self-oscillating coupled machines are capable of absorbing energy from the external environment to maintain their own motion and have the advantages of autonomy and portability, which also contribute to the exploration of the field of synchronization and clustering. Based on a thermally responsive liquid crystal elastomer (LCE) spring self-oscillator in a linear temperature field, this paper constructs a coupling and synchronization model of two self-oscillators connected by springs. Based on the existing dynamic LCE model, this paper theoretically reveals the self-oscillation mechanism and synchronization mechanism of the two self-oscillators. The results show that adjusting the initial conditions and system parameters causes the coupled system to exhibit two synchronization modes: the in-phase mode and the anti-phase mode. The work done by the driving force compensates for the damping dissipation of the system, thus maintaining self-oscillation. Phase diagrams for different system parameters are drawn to illuminate the self-oscillation and synchronization mechanisms. For weak interaction, changing the initial conditions may yield either the in-phase or the anti-phase mode. Under conditions of strong interaction, the system consistently exhibits the in-phase mode. Furthermore, an investigation is conducted on the influence of system parameters, such as the LCE elastic coefficient and spring elastic coefficient, on the amplitudes and frequencies of the two synchronization modes. This study aims to enhance the understanding of self-oscillator synchronization and its potential applications in areas such as energy harvesting, power generation, detection, soft robotics, medical devices and micro/nanodevices. Introduction. Self-oscillation refers to the phenomenon where a system generates sustained oscillations or periodic changes without external excitation, due to internal coupling and feedback mechanisms [1-7]. As a result, self-oscillating systems do not require a continuous energy supply from external sources, reducing energy consumption and system complexity. These systems can be adjusted and controlled by tuning internal parameters and coupling methods. Additionally, self-oscillating systems exhibit great flexibility, being capable of displaying various oscillatory behaviors such as periodic oscillations [8,9] and chaotic oscillations [10-12]. Various feedback mechanisms have been suggested to counteract energy loss attributed to damping dissipation, including the coupling of chemical reactions and large deformations [13-15], as well as the self-shading mechanism [16]. Currently, self-oscillation systems are widely used in various scientific and engineering fields, such as sensor technology [10,17-22], soft robots [23,24] and so on. In recent years, the exploration of active materials has further expanded the possibilities of self-oscillating systems. Through ongoing research and development, scientists continue to discover new active materials with unique properties and enhanced performance, such as dielectric elastomers [25], hydrogels [26,27], ionic gels [13], thermally responsive polymers [28] and liquid crystal elastomers (LCEs) [29-31].
These active substances produce different responses when stimulated by light [7], heat [9], electricity [32], and other stimuli. Model and Theoretical Formulation. In the current section, a coupled self-oscillating system consisting of two LCE fibers and linear springs under a linear temperature field is proposed. Meanwhile, the governing equations and solution methods of the system are given. Dynamic Model of Two LCE Spring Oscillators. Figure 1 illustrates the coupled self-oscillating system within a linear temperature field, which consists of two identical LCE spring oscillators connected by two springs. In the non-stress state, the primary length of the LCE fiber is L1 and the primary length of the spring is L2, as shown in Figure 1a. According to Yakacki et al. [28], LC monomer (RM257), cross-linking agent (PETMP), etc., are used as raw materials, and LCE fibers can be made by a two-step cross-linking reaction. First, one end of each LCE fiber is fixed, while the other end is connected with a spring. The lower end of the spring is connected with another spring through a fixed pulley so that the two LCE fibers can be connected in series. To ensure that the system is force stabilized, the LCE fiber and the spring should be pre-stretched, where the pre-stretch amounts are λ1 and λ2, respectively. In the state of equilibrium, the lengths of the LCE fibers and the springs are λ1L1 and λ2L2, respectively, as shown in Figure 1b. Then, the equilibrium equation of the system in the non-stress state can be obtained, where Fs10 and Fs20 are the initial elastic forces of the two springs, respectively, and FL10 and FL20 are the initial elastic forces of the two LCE fibers, respectively, with Fs0 = k(λ2L2 − L2) and FL0 = K(λ1L1 − L1); k and K are the elastic coefficients of the spring and the LCE fiber, respectively. In this case, the relationship between λ1 and λ2 can be obtained in dimensionless form, with Fs10, Fs20, FL10 and FL20 normalized by mg, k and K normalized as kL1/mg and KL1/mg, and L2 normalized as L2/L1. When placed in the linear temperature field, the LCE fibers begin to oscillate along the vertical direction, in which the displacements of particles 1 and 2 are w1(t) and w2(t), respectively, as shown in Figure 1c. The force analysis diagram of the two particles is given in Figure 1d, where Fs1 and Fs2 are the elastic forces of the two springs, respectively (referred to as the spring force); FL1 and FL2 are the elastic forces of the two fibers (hereinafter referred to as the driving force); and Fd(ẇ1) and Fd(ẇ2) are the damping forces in the process of vibration. To simplify the analysis, we make the assumption that the damping force is dependent on the particle's velocity and always acts in the opposite direction to the particle's motion. The dynamic governing equations of the system can then be obtained and applied at any time, where ẅ = d²w/dt², and the spring force is determined by the elongation of the spring. Since the system can vibrate continuously without divergence, only nonlinear damping is studied, and it is assumed that the damping force takes a nonlinear form in which a0 and a1 represent the first and second damping coefficients, respectively. Figure 1. Schematic diagram of two identical LCE fibers connected by two identical springs within the linear temperature field. (a) Reference state; (b) pre-stretched state; (c) current state; (d) force analysis of the mass. Two coupled LCE oscillators can vibrate synchronously within the linear temperature field.
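The display equations of the dynamic model did not survive extraction. A minimal hedged reconstruction of the force balance described above, written per mass i = 1, 2, is

$$ m\ddot{w}_i = F_{L_i}(t) - F_{s_i}(t) - F_d(\dot{w}_i), \qquad i = 1, 2, $$

where the nonlinear damping is assumed to take the cubic form

$$ F_d(\dot{w}) = a_0\,\dot{w} + a_1\,\dot{w}^3, $$

with a0 and a1 the first and second damping coefficients. The cubic form is an assumption consistent with the two-coefficient description, not a quotation of the paper's Equation (4).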
Tension in the LCE Fibers. According to the non-uniform deformation of the LCE fiber in a linear temperature field, the Lagrangian coordinates X1, X2 and the Euler coordinates x1, x2 need to be established by taking the particle at the end of the LCE fiber as the origin, as shown in Figure 1a,b. When the LCE fiber vibrates, the instantaneous position and displacement of a particle can be described in these coordinates, and the displacements of the particles are represented by w1(t) and w2(t), respectively. We assume that the driving force of the LCE fiber is linearly dependent on the strain, where K is the elastic coefficient of the LCE fiber; the one-dimensional strains ε1(X, t) and ε2(X, t) are defined accordingly. We assume that the heat-induced strain εT(X, t) is linearly related to the temperature difference T(X, t) in the LCE fiber, where α represents the coefficient of thermal expansion; α < 0 represents thermal contraction, and α > 0 represents thermal expansion. Since the driving force FL(t) is uniform and constant along the LCE fiber, it can be obtained by integrating both sides of Equation (5) from 0 to X and combining with Equations (6) and (7); the driving force at the end X = L of the LCE fiber can then be obtained. Since the temperature field in the LCE fiber is unevenly distributed and changes with time, heat exchange occurs between the fiber and its surroundings, whose temperature distribution is denoted by Text(t). For simplicity, it is assumed that the radius R is much smaller than the length L, so that the temperature field in the LCE fiber can be regarded as uniform over the cross-section, i.e., T = T(X, t). In this case, the temperature in the fiber can be obtained, where τ = ρc/h indicates the characteristic time, ρc is the heat capacity per unit length of the fiber, and h is the heat transfer coefficient. Assume that the steady-state temperature field in the environment is linear, where Q refers to the temperature at x = 0 and β represents the temperature gradient. The following dimensionless quantities are defined: t = t/√(L/g), FL = FL/mg, u = u/L, w = w/L, X = X/L, x = x/L, τ = τ/√(L/g), K = KL/mg, α = αTL, T = T/TL, Text = Text/TL, β = βL/TL and Q = Q/TL, where TL is the temperature at x = L. Thus, the elastic force of the LCE fiber can be obtained. The solution of the temperature field is given in [77]. By substituting Equation (12) into Equation (11), the elastic force FL(t) of the LCE fiber can be obtained. Governing Equations. By defining the dimensionless damping force Fd = Fd/mg and the corresponding dimensionless damping coefficients a0 and a1, and combining with Equations (4) and (13), Equation (3) can be rewritten as Equation (14). Equation (14) is an ordinary differential equation with second-order variable coefficients, for which it is difficult to obtain an analytic solution. In this case, the classical fourth-order Runge-Kutta method is adopted to solve Equation (14) numerically, and the steady-state response of the LCE fiber is obtained, i.e., the time-history curve of the oscillation of the system.
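Equation (14) itself is not reproduced in this extraction, but the solution procedure the authors describe, classical fourth-order Runge-Kutta integration of the coupled second-order equations, can be sketched in Python. Everything other than the integrator (the stand-in driving force, the coupling form and all parameter values) is an illustrative assumption, not the paper's calibrated model:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative stand-in for Eq. (14): two oscillators coupled by a spring,
# each with nonlinear damping a0*v + a1*v**3 and a placeholder driving
# force F_L(w) mimicking a position-dependent thermal tension.
K, k, a0, a1 = 10.0, 2.0, 0.02, 0.5   # placeholder dimensionless parameters

def F_L(w):
    return -K * w + 0.5 * np.sin(w)   # assumed, NOT the paper's Eq. (13)

def rhs(t, y):
    w1, v1, w2, v2 = y
    coupling = k * (w2 - w1)          # assumed spring-coupling form
    dv1 = F_L(w1) + coupling - (a0 * v1 + a1 * v1**3)
    dv2 = F_L(w2) - coupling - (a0 * v2 + a1 * v2**3)
    return np.array([v1, dv1, v2, dv2])

t, h = 0.0, 1e-3
y = np.array([0.0, 0.1, 0.0, -0.5])  # initial velocities v10 = 0.1, v20 = -0.5
history = []
for _ in range(200_000):
    y = rk4_step(rhs, t, y, h)
    t += h
    history.append((t, y[0], y[2]))   # time histories of both masses
```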
Two Modes of Synchronization and Their Mechanisms. In the current section, two synchronization modes, namely the in-phase mode and the anti-phase mode, are identified from the dynamic Equation (14), and the self-oscillation mechanism and synchronization mechanism are elaborated in detail. To better study the synchronization behaviors of the two LCE spring oscillators, it is necessary to obtain typical values of the dimensionless system parameters. According to the existing experiments [52,54,78,79], the actual values of the system parameters are summarized in Table 1, and the dimensionless system parameters are calculated in Table 2. Two Synchronization Modes. The time histories of the mass displacements can be obtained by setting the system parameters K, α, β, a0, a1, τ, v10 and v20. The calculation results show that there are two synchronous modes of the system, namely the in-phase mode and the anti-phase mode, as shown in Figure 2. In the first calculation, the two LCE fibers with the same initial velocity first vibrate in the same direction within the linear temperature field. Then, under the influence of damping, the amplitude of the self-oscillation gradually decreases, and the motion finally stops on the upper side, as shown in Figure 2a,b. Although the fibers convert heat into kinetic energy when heated, the converted kinetic energy does not keep them oscillating. When a0 = 0.02, v10 = 0.1 and v20 = 0.5, the fibers continue to vibrate in the temperature field and finally evolve into self-oscillation, as shown in Figure 2c,d. In this case, the energy obtained from the temperature field is greater than the damping dissipation, so the self-oscillation is sustained. When a0 = 0.2, v10 = 0.1 and v20 = −0.5, the system settles into a static state of the anti-phase mode, as shown in Figure 2e,f. For a0 = 0.02, Figure 2g,h plot the displacement-time diagram and phase trajectory diagram in the anti-phase mode. A similar experimental phenomenon was reported by Ghislaine et al. [69], where two liquid crystal network oscillators interacted with each other driven by light and underwent synchronized in-phase and anti-phase oscillations in the steady state. Figure 2: there exist two synchronous modes of the system, namely, the in-phase mode and the anti-phase mode.
Self-Oscillation Mechanism. To further investigate the mechanism of self-oscillation of the LCE fibers within the linear temperature field, Figure 3a,b plot the time-history curves of the LCE fibers for the in-phase and anti-phase modes, indicating that the two LCE fibers oscillate periodically within the temperature field in both modes. Figure 3c,e plot the curves of the tension of the LCE fiber and the spring changing with time in the in-phase mode, indicating that the tension of the LCE fiber and the spring change periodically. Figure 3d,f plot the time-varying curves of the driving force and spring force in the anti-phase mode, which indicate that the tensions of the LCE fiber and the spring also change periodically in the anti-phase mode. Figure 3g,i show that in the in-phase mode, the LCE fiber tension and spring tension, plotted against the displacement, form hysteresis loops, and the region enclosed by the hysteresis loops represents the work done by the LCE fiber tension and the spring force. The work done by the driving force of the LCE fiber represents the energy input of the system, while the work done by the spring represents the work expended against resistance. When the energy gain is equal to the resistance dissipation, the system maintains self-oscillation. Figure 3h,j draw the hysteresis loops of the driving force of the LCE fiber and the spring force in the anti-phase mode, which reflect the same energy compensation mechanism as in the in-phase mode. Figure 3: (a,b) time-history curves for in-phase and anti-phase modes; (c,d) change curves of the driving force with time in in-phase and anti-phase modes; (e,f) spring force versus time curves for in-phase and anti-phase modes; (g,h) curves of the work done by the driving force for in-phase and anti-phase modes; (i,j) curves of the work carried out by the spring force for in-phase and anti-phase modes. The energy absorbed by the system from the external environment compensates for the damping dissipation, thus maintaining the self-oscillation of the system. Synchronization Mechanism. To better study the mechanism of synchronization between the two LCE fibers after self-oscillation sets in within the linear temperature field, we plot some key physical quantities in the process of self-oscillation. Figure 4a,b draw the time-history curves for the in-phase and anti-phase modes. Figure 4c,d, respectively, draw the change curves of the phase difference between fiber 1 and fiber 2 for the in-phase and anti-phase modes.
Figure 4c,d show that in the in-phase mode, the phase difference gradually decreases until it reaches zero, while in the anti-phase mode, the phase difference finally reaches a fixed value equal to half a cycle. Through careful calculation, it is found that when the initial velocity directions of the two self-oscillators are the same, the system always develops into the in-phase synchronous mode. However, when the initial velocity directions are opposite, there is a critical LCE elastic coefficient that triggers a transition between the in-phase and anti-phase modes. This result is similar to the existing experiment in that the elasticity coefficient can affect the synchronization mode of the system [69]. When the elastic coefficient of the LCE is K < 7100, the system can be affected by the initial velocity, and the anti-phase synchronization mode can occur. When K ≥ 7100, the system always evolves into the in-phase synchronous mode. In the case of weak interaction, i.e., K < 7100, each oscillator can be treated as being acted on by an external force, as shown in Figure 4e. The system is divided into two separate LCE self-oscillators for discussion. In the in-phase mode, each LCE oscillator is equivalent to applying an additional periodic force to the other oscillator, which can be expressed as F1 = A1 sin(ω1t + φ1) and F2 = A2 sin(ω2t + φ2). When the periodic force F is consistent with the frequency of the harmonic oscillator, the synchronization phenomenon occurs. The same is true in the anti-phase mode. Under weak interaction, Figure 4f-h plots the changes in the system synchronization mode obtained by changing the velocity of L2 while the initial velocity v10 = 0.05 of L1 is fixed. As shown in Figure 4f-h, the ring represents the phase change in the process of movement. When the velocity of L1 is unchanged, the position of L1 does not change. When the velocity of L2 is in the blue region, the system can achieve the anti-phase mode, because the phases of the two repel each other. It can be seen from Figure 4f-h that an increase in the LCE elasticity coefficient leads to an increase in the synchronization region until, finally, the synchronization region covers all regions. Parametric Analysis. In Equation (14), there are seven dimensionless parameters: K, k, α, β, a0, a1 and τ, which affect the motion of the system. This section discusses the influence of these system parameters on the amplitude and frequency of the self-oscillation in the in-phase and anti-phase modes. Effect of LCE Elasticity Coefficient. Figure 5a,b show how the amplitude and frequency of the system change with the LCE fiber elasticity coefficient K in the in-phase and anti-phase modes. Figure 5a,b show that when K ≤ 6, no matter which mode, the system always reaches a static state, because the driving force FL is less than the initial elastic force Fs0 of the spring, and the LCE oscillator cannot vibrate, i.e., the amplitude and frequency are 0. When K > 6, the amplitude and frequency of the system gradually increase with the increase of K. These results can be understood through the energy input and the damping dissipation: with the increase of K, the driving force FL gradually increases, and the energy supply from the linear temperature field gradually increases, so the amplitude and frequency of the self-oscillation increase. Figure 5c,d draw the limit cycles for different K in the two modes.
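The phase differences plotted in Figure 4c,d can be extracted from the two displacement records; one standard route (an assumption on our part, since the paper does not state its extraction method) is the analytic-signal phase obtained via the Hilbert transform:

```python
import numpy as np
from scipy.signal import hilbert

def phase_difference(w1, w2):
    """Instantaneous phase difference between two oscillation records.

    w1, w2: 1-D arrays of steady-state displacements.
    Returns the unwrapped phase difference in radians; magnitude near 0
    indicates in-phase synchronization, near pi indicates anti-phase.
    """
    phi1 = np.unwrap(np.angle(hilbert(w1 - w1.mean())))
    phi2 = np.unwrap(np.angle(hilbert(w2 - w2.mean())))
    return phi1 - phi2

# Example with synthetic anti-phase signals
t = np.linspace(0, 100, 10_000)
dphi = phase_difference(np.sin(2 * t), np.sin(2 * t + np.pi))
print(abs(np.mean(dphi[len(dphi) // 2:])))  # magnitude ~ pi (half a cycle)
```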
Figure 5c,d show that there is a limit value separating the static state and the self-excited state in the two synchronous modes, namely K = 6. Figure 6a,b, respectively, draw the curves of the amplitude and frequency changing with different spring elastic coefficients k in the in-phase and anti-phase modes. Figure 6a shows that in the in-phase mode, with the increase of the spring elastic coefficient k, the amplitude of the system gradually decreases and the frequency increases: as k increases, the damping dissipation increases, so the amplitude decreases gradually, while the increase of k raises the spring stiffness, so the frequency increases gradually. Figure 6b shows that the amplitude and frequency of the system remain basically unchanged in the anti-phase mode. This is because, in the anti-phase mode, the two LCE fibers move in opposite directions through equal distances, so the total length of the spring connected at the lower end remains the original length, and the amplitude and frequency of the system remain the same. Figure 6c,d show the limit cycles for different spring elastic coefficients k in the two modes. The results show that the system is always in a vibration state in both the in-phase and anti-phase modes, and its motion mode is independent of k. Effect of Thermal Expansion Coefficient. In Figure 7a,b, the amplitude and frequency of the system change with different thermal expansion coefficients α in the in-phase and anti-phase modes. It can be seen from Figure 7a,b that when |α| ≤ 0.2, the system is in a static state in both modes, and the amplitude and frequency are zero. When |α| > 0.2, the amplitude increases with the increase of |α|, while the frequency is unaffected, because the driving force gradually increases with the increase of |α|, so the amplitude gradually increases. Figure 7c,d draw the limit cycles for different thermal expansion coefficients α in the in-phase and anti-phase modes. Figure 7 shows that there is a critical value separating the static state and the vibration state in both modes, namely α = −0.2.
Figure 8a,b plot the variation curves of the amplitude and frequency of the system with the temperature gradient β in the in-phase and anti-phase modes. Figure 8a,b show that when β ≤ 0.04, the system is in a static state, where the amplitude and frequency are 0. On the contrary, when β > 0.04, the amplitude in the two modes increases with the increase of β, while the frequency is unchanged; this can be understood through the energy input and damping dissipation, because with the increase of β, the temperature of the temperature field gradually rises, and the driving force FL gradually increases, so the amplitude gradually increases. In Figure 8c,d, the limit cycles change with different temperature gradients β in the two modes. It can be seen that there is a critical value β = 0.04 separating the static state and the self-oscillation state in the in-phase and anti-phase modes. Figure 9a,b show the variation curves of the amplitude and frequency with the first damping coefficient a0 in the in-phase and anti-phase modes. It can be seen from Figure 9a,b that when a0 ≥ 0.05, the system is in a static state, and the amplitude and frequency are 0. When a0 < 0.05, the amplitude in both the in-phase and anti-phase modes decreases with the increase of a0, while the frequency is unaffected, because with the increase of a0, the work done by damping increases and the energy dissipation of the system increases, so the amplitude gradually decreases. In Figure 9c,d, the limit cycles change with the first damping coefficient a0 in the two modes. The results show that the same limit value separates the static state and the vibration state in the in-phase and anti-phase modes, namely a0 = 0.05. Figure 10a,b show how the amplitude and frequency of the self-oscillation change with the second damping coefficient a1 in the in-phase and anti-phase modes. It can be seen from Figure 10 that the amplitude of the system decreases gradually as the second damping coefficient a1 increases in both modes, while the frequency remains unchanged. This is because as a1 increases, the damping dissipation increases, and so the amplitude gradually decreases. Figure 10c,d draw the limit cycles for different second damping coefficients a1 in the two modes. The results indicate that the system is always in a vibration state in the in-phase and anti-phase modes, and its motion mode is independent of a1. Effect of the Characteristic Time. In Figure 11a,b, the amplitude and frequency change with different characteristic times τ in the in-phase and anti-phase modes. The results show that in both modes, when τ ≤ 0.06, the system is in a static state and the amplitude and frequency are 0. At τ > 0.06, the amplitude of the self-oscillation increases as τ increases, while the frequency stays the same, because with the increase of τ, the heat transfer rate in the LCE fiber increases, resulting in a gradual increase of the driving force FL, so the amplitude gradually increases. Figure 11c,d draw the limit cycles for different characteristic times τ in the in-phase and anti-phase modes.
It can be obtained that there is a critical value τ = 0.06 for triggering the self-oscillators in both modes. Conclusions. The prevalence of synchronization and collective behaviors among self-excited coupled oscillators in nature necessitates investigation due to their inherent benefits, such as efficient energy harvesting, autonomous operation, and enhanced equipment portability. In this paper, based on thermally responsive LCE spring self-oscillators under a linear temperature field, the synchronous behavior of two coupled self-oscillators connected by springs is theoretically investigated. The mechanisms of self-oscillation and synchronization are theoretically revealed, integrating the well-established dynamic LCE model. According to the numerical findings, the system exhibits two synchronization modes: the in-phase mode and the anti-phase mode. Self-oscillations are sustained through a dynamic balance between damped dissipation and the work carried out by the driving force. The numerical findings indicate that the synchronization mode primarily depends on the interaction between the two LCE self-oscillators. In cases of strong interaction, with a large elastic coefficient of the LCE, the system consistently develops into the in-phase synchronous mode. However, when the interaction is weak, altering the initial conditions can lead to either the in-phase or the anti-phase mode. When the initial velocity directions of the two self-oscillators are the same, or the initial velocity directions are opposite but the values are small, the system achieves the in-phase synchronous mode.
On the contrary, when the initial velocity directions of the two self-oscillators are opposite and the relative value is large, the system evolves into the anti-phase synchronization mode. In addition, the influences of the LCE elastic coefficient, spring elastic coefficient, thermal expansion coefficient and other system parameters on the synchronous mode, amplitude and frequency of the self-oscillations are systematically studied. The self-oscillation amplitude demonstrates a positive correlation with the increase of the LCE elastic coefficient, thermal expansion coefficient, temperature gradient and characteristic time, while demonstrating a negative correlation with the increase of the spring elastic coefficient and damping coefficients. Unlike existing work on self-oscillating synchronization systems based on active materials [7,70], this paper elucidates in detail the mechanism of the synchronization phenomenon. This study is expected to advance the comprehension of self-oscillator synchronization and to indicate its potential applications in diverse fields, including energy harvesting, power generation, detection, soft robotics, medical devices and micro/nanodevices. In addition, the research in this paper has the potential to be extended to large-scale synchronization systems containing a large number of coupled oscillators, which is a promising direction in this field.
7,435.4
2023-08-01T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Battery Electric Drive of Excavator Designed with Support of Computer Modeling and Simulation: The motivation for this article was to describe the creation of a battery electric drive for a smaller excavator of a well-known manufacturer. The aim of the excavator electrification research was to replace its internal combustion engine with an electric motor. The innovated excavator does not burden its surroundings with gas exhalations and excessive sound emissions, so it can work in confined spaces or protected areas. Simulation models of the electric and hydraulic parts of the drive were created to select the most suitable solutions, verified or predicted by simulations in a Matlab/Simulink environment. Tests have shown that the excavator is capable of operating for at least 7 h without recharging the battery. The other main results of the project are a functional model of a mini-excavator with zero exhaust gas emissions and significantly reduced noise, a proven control algorithm in the form of software, and its utility model according to patent application 2018-35127 adopted by the Industrial Property Office of the Czech Republic. The innovated excavator solution was awarded the Gold Medal at the Brno International Engineering Fair in 2018. Introduction. The worldwide trend in the construction of building, earthmoving, forest, and similar machines is not only to reduce their fuel consumption, but also to reduce the load of gaseous and acoustic emissions that burden their surroundings. There are numerous solutions by researchers and designers. One of the major pathways is the electrification of the drive using an electric battery (electric accumulator). Mathematical modeling and computer simulation play an important role in this research, enabling the search for optimal solutions in virtual space. The aim of our research was to create an electric battery drive for a small excavator of up to two tons by replacing the diesel engine with an electric motor, together with the creation of a new system of internal control of the excavator subsystems. The new design was made in our own way after completing a patent search to avoid a patent conflict. Within the framework of a grant project, Bosch Rexroth [1], in cooperation with Brno University of Technology, developed the drive and control of a gas emission-free excavator after previous cooperation involving the kinetic energy recovery of heavy vehicles with frequent starting and braking. As excavators are often used in the building industry, considerable attention has been paid worldwide to their construction and improvement with respect to electrification. The advantage of the described excavator is the drive without gas exhalations. Recently, new electrified excavators of various innovative designs and properties can be found on the world market. Caterpillar [2], along with Pon Equipment, produced an all-electric 26 ton excavator with a 300 kWh battery pack in an effort to electrify construction equipment. They built a prototype in Gjelleråsen, Norway, for the construction company Veidekke. Komatsu Ltd. [3] developed an electric battery-driven excavator. When fully charged, this battery enables from two to six hours of operation. The machine allows for real-time monitoring of power consumption and charging conditions on the built-in monitor panel. It also allows for the remote monitoring of that information together with the machine location and operating conditions via the KOMTRAX system.
The Liebherr [4] electric excavator R 9200 E, with a rated output of 850 kW, is the biggest excavator on Eurovia's mine sites, offering up to 25% lower maintenance cost compared to a diesel excavator. Takeuchi [5] developed the e240, a 4 t class battery-powered excavator, in 2017. The e240 is a battery version of the company's TB240 diesel model. The machine operates for nine hours at 65% of full load. It charges from a standard 220 V power outlet and takes around 10 hours to reach full charge from zero. Wacker Neuson [6] debuted its first fully electric, battery-powered EZ17e compact excavator in 2018. All hydraulic functions are as powerful as those of the conventional model. The battery is integrated into the existing engine compartment. The EZ17e weighs almost exactly the same as the diesel version and can be transported on a trailer. Bobcat [7] rolled out the E10e, its first electric mini excavator. The machine, a fully electric, one-ton mini excavator, was built alongside its diesel-powered siblings, the E08 and E10z mini excavators. Kobelco [8] introduced its excavator as an addition to the Generation 10 series. A lot of different solutions are proposed in the literature to improve machinery fuel efficiency, and many of these are based on hybrid solutions. The aim of Casoli [9] was to present a hybridization methodology that allows for the comparison of different system layouts, to determine the dimensions of the energy storage devices, and finally to determine the most effective hybrid system layout. Electrification of excavators was described in Vauhkonen [10]. For this study, a JCB Micro excavator was chosen as a building platform. The 14 kW diesel-powered engine with its required equipment was replaced with a 10 kW electric motor. Four lithium titanate batteries, with a total voltage level of 96 V and a capacity of 60 Ah, powered the electric motor. With the electric drive, while maintaining the same performance, the operating time was substantially reduced compared to the diesel-powered drive. Xu [11] studied the modeling of mechanical and hydraulic subsystems for the simulation, design, and control development of excavator systems. As a result, various approaches to hydraulic system modeling were presented. A recent trend in the development of off-highway construction equipment, such as excavators, is to use a system model for model-based system design in a virtual environment. Modeling of hydraulic system dynamics by means of differential and algebraic equations can be found in Nevrly [12] and Nepraz [13]. Casoli [14] presented the results of a numerical and experimental analysis conducted on a hydraulic hybrid medium-size excavator. Its standard version was modified using an energy recovery system; its layout was designed to recover the potential energy of the boom, using a hydraulic accumulator as a storage device. The recovered energy was utilized through the pilot pump of the machinery, which operates as a motor, thus reducing the torque required from the internal combustion engine. An experimental fuel saving of about 4% was noted over a testing working cycle. The objective of the thesis of Alvin [15] was to develop a fast simulation model to replicate the functioning of the excavator system. The paper by Nevrly et al. [16] introduced a simulation of the drive for electric-hydraulic excavators, basic model schemes of the drive system, simulation models of the electric motor system, and the pump system driven by the electric motor.
Examples of simulation results, i.e., time courses of model quantities, were obtained either by means of Matlab/Simulink or from a set of differential equations. An energy recovery system integrating a flywheel and flow regeneration for a hydraulic excavator boom system is described in Li [17]. Implementing an energy recovery system is a solution for improving the energy efficiency of hydraulic excavators. A flywheel energy recovery system is proposed based on this concept. A hydraulic pump motor is employed as the energy conversion component and a flywheel is used as the energy storage component. The implementation of flow regeneration has two benefits: downsizing the displacement of the hydraulic pump motor and a further improvement of energy efficiency. A potential energy recovery and regeneration system for hybrid hydraulic excavators based on a multi-cylinder structure working device, as a new invention, is presented in Zhang [18]. Energy balancing for zero emission excavators was described in Jurik [19]. The author shows that efficient balancing of energy flows cannot be solved only by optimizing one subsystem; rather, only the consistency of these subsystems enables the efficient balancing of energy flows for the efficient use of limited power sources, namely battery packs. Improving the energy efficiency of an electric mini excavator using a special start-and-stop logic system is described in Hassi [20]. The benefits of this system were measured using a test cycle with the old and new configurations. According to the authors, the operating time proved to be at least 50% longer than with the old configuration. An efficiency study of an electric-hydraulic excavator was presented in Salomaa [21]. A Matlab model was utilized to study the total energy consumption and power distribution of the micro-excavator. The model consists of the hydraulic and mechanical systems related to the actuation of the front hoe, i.e., the boom. In Liu [22], the achievement of fuel savings in a wheel loader by applying hydrodynamic mechanical power split transmissions was described. In this paper, the torque converter was replaced with a hydrodynamic mechanical power split transmission to improve the fuel economy of the wheel loader. Based on probability similarity theory, a typical operating mode for the vehicles was constructed and used to evaluate the energy consumption performance of the selected solutions. As part of the development of the electric excavator, we prepared a patent study to avoid a patent collision with previously patented solutions. These included, in particular, patents from Kubota, Takeuchi, Hitachi, and Terex (Demag). Overall Characteristics of the Solution. Based on the patent search, the possibilities for solving the electric drive of the excavator were analyzed, then the structure of the functions was determined, and the elements of the excavator system were selected. A risk analysis was performed, and the E19 excavator manufactured by Bobcat was chosen as a suitable starting machine for this task (Figure 1). To solve the electrification of the excavator, it was expedient to divide the whole excavator into partial parts, e.g., according to Figure 2. The diagram shows the functions that are subsequently reflected in the mathematical modeling. To monitor and analyze the internal and external relations of the physical quantities involved, a virtual solution was designed, i.e., the concept of the structure of the subsystems, including the control system.
Mathematical models of the subsystems were compiled in the Matlab/Simulink environment of MathWorks using the industry-oriented toolboxes Simscape, SimHydraulics, and SimMechanics. Subsequently, variant solutions were developed and verified by simulations and measurement. Testing of the machine functions was performed by a system of targeted measurements of excavator parameters with subsequent data processing and verification of the mathematical models. The drive of the excavator was designed, including the battery power supply system and the combination of the motor and the frequency converter. The connection of the electric motor, controller, and electric battery is evident from the model of the drive subsystem in Figure 3. The Electric Motor. The electric motor is a synchronous three-phase motor with permanent magnets powered by the electric accumulator. It drives the pump via a belt drive. It was developed for application in an emission-free excavator. Electric Battery 48 V, Li-Fe Cells. The battery consists of a pack of connected electric cells (Figures 4 and 5). This battery is the only source of energy for driving the excavator's electric motor. The battery includes a BMS (Battery Management System) module, which monitors the status of the cells. The battery was developed based on specifications from Bosch Rexroth. An important factor is the charging time to reach full capacity while complying with all safety rules, such as the temperature limits of individual cells. This is approximately 5 h; the power supply has an output of 1.5 kW at a voltage of 240 V. One of the ways to improve charging is to use three-phase charging of the battery (Figure 6). The electrical parts were laboratory measured, inspected, and, if necessary, modeled and computer simulated (Figure 7). Special care was given to the electric battery and the specially made electric motor. Excavator Hydraulics. In terms of force, the hydraulic system is of fundamental importance for the operation of the excavator. The simplified diagram of a standard hydraulic excavator can be seen in Figure 8, which shows the functional links between the basic elements of the system: the pump for the drive, the main valve, the hydraulic rotary motors, and the hydraulic cylinders. A more detailed hydraulic diagram of a substantial part of the hydraulic system of the E19 excavator is shown in Figure 9, where the details of the connection of the hydraulic cylinders, directional valves, auxiliary valves, etc. can be seen. Modeling of Processes in the Excavator. The aim was to create models of the excavator systems and subsequently use them for simulation prediction, function verification, and optimization. For reasons of clarity, greater versatility of use, and easier work with the models, a modular and multilevel model concept was chosen. The models developed in the Matlab/Simulink environment are sufficiently universal and enable simulations of various excavator processes. Separately applicable bucket, arm, boom, and boom-turn subsystems are available to facilitate simulations of the excavator's main movements. As an example of a hydraulic subsystem, Figure 10 shows the model of the pump and its connection to the hydraulic system. The control signals are optionally generated by the Signal Builder block and control the speed and the ideal torque source. This source drives the variable-displacement, pressure-compensated pump, while all significant mechanical and hydraulic quantities are automatically registered, as shown in the diagram.
When modeling hydraulic directional valves, the goal was to find the parameters of the models so that the behavior of the models corresponded as closely as possible to the characteristics of the directional valves listed in their catalog sheets. The models and their parameters were made more accurate during the work after tests on a functional sample of the excavator. The proposed directional valve models include physically existing nonlinearities (insensitivity, i.e., dead zone, and transition to saturation). 3D Modeling of Aggregate Placement. 3D models of the drive elements were created and variant designs of their installation into the excavator structure were made, after which a variant of the installation solution according to Figure 12 was selected. Control of Subsystems and the Excavator System. Following the results of the patent search, a specific control algorithm was created; this is the basic distinguishing element of the solution in comparison with other currently known solutions for controlling the excavator drive. The algorithm was created on the modular basis of the Bodas system. The hierarchy of the main parts of the control system is shown in Figure 13. The superior control system controls the battery module (with the connected charging block), electric motor control, hydraulic functions control, and diagnostics. A more detailed scheme of the excavator control is shown in Figure 14. The initiating electrical signals come from the manual control block of the excavator operation to the hydraulic directional valve block and to the drive block (frequency converter, electric motor). The electric current from the electric accumulator leads to this block and then goes on to the pump block and further to the hydraulic directional valves. The pressurized oil flows from the hydraulic distributors to the hydraulic motors, which move the working mechanisms of the excavator. From the point of view of control, the decisive block is "operation control," which implements the interaction between the operator and the excavator. It contains a microcomputer control unit that generates control electric currents for controlling the oil flows in the directional valves. Control is of an "offline" type, i.e., without feedback from the actual controlled movements. The data flow structure of the basic software modules for controlling the distributors, the electric motor controller, and the pump can be seen in Figure 15. The inputs are signals from the manual control, from the battery management, and from the pressure sensor. These signals enter a set of sub-blocks, such as a low limiter, current protection of the accumulator, ramp block, etc. The output signals go to the coils of the electromagnets of the hydraulic distributors, the motor controller, and the pump coils. Experiments. The research used a methodology of theoretical and experimental verification of drive parameters, including the measurement of electrical, mechanical, hydraulic, and functional safety parameters of the excavator. The functional test methodology consisted of methodologies for excavation, travel, and load transfer. Figure 16 shows an example of the result of measurement of the speed (actual versus target), torque, battery current, etc., when starting and stopping the electric motor.
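The named sub-blocks (low limiter, current protection of the accumulator, ramp block) are standard signal-conditioning stages. A minimal sketch of such a chain is given below; all thresholds and the single-function structure are illustrative placeholders, not the project's calibrated values or the actual Bodas module layout:

```python
def condition_command(raw_cmd, prev_out, dt,
                      low_limit=0.05, ramp_rate=2.0,
                      battery_current=0.0, current_limit=200.0):
    """One update of a command-conditioning chain for a valve/motor setpoint.

    raw_cmd, prev_out: normalized commands in [0, 1]; dt: step time in s.
    All thresholds are illustrative placeholders.
    """
    # Low limiter: suppress commands below the actuation threshold
    cmd = 0.0 if raw_cmd < low_limit else raw_cmd
    # Accumulator current protection: derate the command near the limit
    if battery_current > current_limit:
        cmd *= current_limit / battery_current
    # Ramp block: bound the rate of change of the output
    max_step = ramp_rate * dt
    return max(prev_out - max_step, min(prev_out + max_step, cmd))

out = 0.0
for _ in range(100):              # 100 steps of a 10 ms control loop
    out = condition_command(1.0, out, dt=0.01)
print(out)                        # ramps toward 1.0 at <= 2.0 per second
```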
The following values of energy consumption within one hour were reached using the electrical parameter tests:

Excavation: consumption from 1.61 to 3.85 kWh
Relocation: consumption from 0.76 to 1.39 kWh
Travel: consumption from 0.64 to 5.46 kWh

A specific factor in the assessment of actual use is the working capacity and endurance of the machine operator when working in full concentration and maintaining safety for the specified period. During tests in ideal spatial conditions, operators were changed after one hour of work; this allowed us to achieve relatively high work performance.

Example of Simulated Event

The characteristics of the electric rotary motor for the pump drive were first simulated (Figure 17) and subsequently measured. The motor accelerated from 0 to 2000 rpm without load. At 0.4 s, a torque impulse of 0-30 Nm was applied to the motor. Figure 18 shows the results of the simulation of hydraulic, mechanical, and electrical quantities during several rapid movements of the excavator bucket, while the load changes stepwise almost to the maximum (maximum piston force of 27,000 N for both directions of movement). The drive was already slightly overloaded (the actual speed was lower than required).

Results

An electric drive of the excavator (Figure 19) was created with a patented control system of its subsystems, the operation of which was free of gaseous and, to a large extent, noise emissions, unlike the original drive using an internal combustion diesel engine. Functional tests show that the machine is able to operate in standard operating mode. The measured values indicate good energy utilization of the machine. With the battery used, it is possible to achieve a real working endurance of 7 h. It can be stated that the knowledge gained within the framework of the project is reflected in the design and development of the prototype part of the excavator as well as, specifically, in the "flow sharing" block and the axial pump type A10VO, both from the company Bosch Rexroth. The developed mathematical models can be used even after the end of the project for work on other types of similar machines (see Appendix A).

Discussion

The new type of drive predetermines the use of the excavator especially for places where operation without exhaust emissions and with lowered noise (otherwise emitted by an internal combustion engine) is required, i.e., in enclosed spaces and protected areas such as hospitals, rehabilitation facilities, protected landscape areas, etc. The limiting factor for achieving higher perfection of the models (accuracy, reliability) is the limited availability of parameters of the modeled parts of the device. Some parameters must then be determined by expert judgment. Also, the possibilities of model verification are complicated by the fact that, currently, measurements on a real excavator can contain only a limited number of quantities, some of which (mainly torque) showed a certain unreliability. The verification therefore had to be performed by a detailed analysis of only some of the mutually corresponding parts of the measurement and simulation records. The standard drive system of excavators powered by hydraulic motors seems to be somewhat inefficient due to the repeated conversion of energy: electric (in the case of a battery excavator)-mechanical-hydraulic-mechanical. This causes some energy losses. A direct electric drive with one conversion, electric-mechanical, could theoretically lead to an increase in drive efficiency.
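To quantify this intuition, a back-of-the-envelope comparison of the two conversion chains can be made; the per-stage efficiencies below are generic illustrative assumptions, not measured values for the E19.

```python
from math import prod

# Assumed per-stage efficiencies (illustrative, not measured E19 data)
CHAIN_HYDRAULIC = {
    "battery->motor (electric)": 0.92,
    "motor->pump (mechanical)": 0.95,
    "pump->cylinder (hydraulic)": 0.80,
    "cylinder->tool (mechanical)": 0.95,
}
CHAIN_DIRECT = {
    "battery->motor (electric)": 0.92,
    "motor->tool (mechanical)": 0.93,
}

for name, chain in (("hydraulic drive", CHAIN_HYDRAULIC),
                    ("direct electric drive", CHAIN_DIRECT)):
    eta = prod(chain.values())  # overall efficiency is the product of stages
    print(f"{name}: overall efficiency ~ {eta:.2f}")
```

Even with optimistic hydraulic-stage numbers, the longer chain multiplies losses, which is the argument made in the discussion above.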
A change of this kind was not among the objectives of the described project, but it appears to be a potential topic for the next phase of the electrification of excavators and similar machines. A battery electric drive brings new possibilities for the efficient use of installed energy to the field of mobile working machines. The simulations and functional tests show the advantages and disadvantages of this solution. Because the machine is designated for operations requiring zero exhaust emissions, it is currently difficult to compare its economic parameters, such as machine price, total operating costs, and return on investment, to the parameters of a standard machine with an internal combustion engine while achieving the same or longer machine life. The results of the development in the form of mathematical models were usable not only during the solution of the project, but also after its end for similar development work. The knowledge from modeling is transferable to similar applications of the control of hydrostatic drives and hydraulic motors in building and earthmoving machines and the like.

Patents

The proven control algorithm, in the form of software, and its utility model are covered by patent application 2018-35127. The application was submitted to the Office for the Protection of Industrial Property.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Example of Application of Methods and Models Created during the Development of the E019 Machines

The electrification of the Dapper 5000 wheel loader [23], manufactured by VOP CZ, which is currently in progress, can serve as an example of the application of methods and models created during the development of the electric version of the E19 excavator to other machines. It is again a replacement of a diesel engine drive with an electric drive, with energy supplied from an electric accumulator. Even in this case, it is a working machine in which the drive of the end working parts is provided by hydraulic motors.
4,977
2020-09-12T00:00:00.000
[ "Engineering", "Environmental Science" ]
Novel Loss-of-Function Variant in HNF1a Induces β-Cell Dysfunction through Endoplasmic Reticulum Stress

Heterozygous variants in hepatocyte nuclear factor 1a (HNF1a) cause MODY3 (maturity-onset diabetes of the young, type 3). In this study, we clinically identified a case with a novel HNF1a p.Gln125* (HNF1a-Q125ter) variant. However, the molecular mechanism linking the new HNF1a variant to impaired islet β-cell function remained unclear. First, a similar HNF1a-Q125ter variant was generated in zebrafish (hnf1a+/−) by CRISPR/Cas9. We further crossed hnf1a+/− with several zebrafish reporter lines to investigate pancreatic β-cell function. Next, we introduced HNF1a-Q125ter and HNF1a shRNA plasmids into the Ins-1 cell line and elucidated the molecular mechanism. hnf1a+/− zebrafish showed significantly decreased β-cell number, insulin expression, and insulin secretion. Moreover, β cells in hnf1a+/− zebrafish displayed a dilated ER lumen and increased levels of ER stress markers. Similar ER-stress phenomena were observed in HNF1a-Q125ter-transfected Ins-1 cells. Follow-up investigations demonstrated that HNF1a-Q125ter induced ER stress through activation of the PERK/eIF2a/ATF4 signaling pathway. Our study thus identified a novel loss-of-function HNF1a-Q125ter variant that induces β-cell dysfunction by activating ER stress via the PERK/eIF2a/ATF4 signaling pathway.

Introduction

MODY3 (maturity-onset diabetes of the young, type 3) is the most common form of MODY and is caused by heterozygous variants in hepatocyte nuclear factor 1 alpha (HNF1a) [1]. So far, more than 1200 pathogenic and non-pathogenic HNF1a variants have been identified [2]. HNF1a plays a key role in the regulation of β-cell function: it not only controls cell lineage differentiation but also maintains β-cell identity [3,4]. Importantly, HNF1a is a master regulatory transcription factor that controls the expression of more than 106 target genes in human pancreatic islets [5], including SLC2A2 [6], PDX1 [7], and FOXA3 [8]. Importantly, the endoplasmic reticulum (ER) plays a key role in insulin secretion because it controls insulin synthesis, proper folding, and the response to glucose [16,17].

Clinical and Structural Characterization of the HNF1a-Q125ter Variant

A 15-year-old male adolescent was diagnosed with diabetes. He had elevated fasting blood glucose and HbA1c (15.4 mmol/L and 12.9%, respectively) (Figure 1A). Meanwhile, his urine glucose was 4+ and urine ketones were 2+, suggesting that the patient might have diabetic ketosis; the normal reference for both is negative (−) (Figure 1A). Furthermore, the patient had a normal level of C-peptide (1.69 ng/mL) but a high level of lactic acid (LAC) (Figure 1A). As shown in Figure 1B, the patient's mother had a family history of diabetes. According to these clinical characteristics, we speculated that the patient could have MODY. Therefore, we used Sanger sequencing to identify the type of MODY in order to better treat the patient. The exon coding regions of ABCC8, AKT2, BLK, CEL, EIF2AK3, GCK, GLIS3, GLUD1, HADH, HNF1A, HNF1B, HNF4A, INS, INSR, KCNJ11, KLF11, MAPK8IP1, NEUROD1, PAX4, PDX1, PLAG1, PTF1A, RFX6, SLC19A2, SLC2A2, UCP2, and ZFP5 were directly sequenced. The result of Sanger sequencing showed that the 125th glutamine of the patient's HNF1a had mutated into a stop codon (HNF1a p.Gln125*/HNF1a-Q125ter), a heterozygous variant (Figure 1B,C). As a consequence, the patient was diagnosed with MODY3. Meanwhile, the mother of the patient also carried the HNF1a-Q125ter variant (Figure 1B).
Interestingly, HNF1a-Q125ter is a new variant that has never been reported. The HNF1a variant gene structure, functional domains, and location are represented in Figure 1D, which indicates that HNF1a-Q125ter could impair the DNA-binding and transactivation domains of HNF1a. Additionally, HNF1a-Q125ter had an impaired protein structure, the specific results of which are shown in Figure 1E. Surprisingly, the patient did not respond to glimepiride, which is a sulfonylurea.

Similar HNF1a-Q125ter Variant Impaired Pancreatic β-Cell Function in Zebrafish

In order to investigate the function of the HNF1a-Q125ter variant, we generated a zebrafish line with HNF1a containing a mutation at a similar position using CRISPR/Cas9. Details of the mutation position of HNF1a in zebrafish are represented in Figure 2A and Supplementary Figure S1A. The mRNA level of hnf1a decreased in hnf1a+/− (Figure 2B). Furthermore, the survival rate of hnf1a+/− larvae was lower than that of WT (Figure 2C). We also evaluated the morphological changes in hnf1a+/− zebrafish during the different developmental stages. However, there were no obvious morphological differences between WT and hnf1a+/− zebrafish at the stages of 1, 2, 3, 4, 5, and 6 dpf (Figure S1B).

(From the Figure 2 legend: islet isolation from WT and hnf1a+/− larvae at 6 dpf; the dotted blue lines represent intact individual β cells; bar scale: 2 µm; n = 3 intact individual β cells for each genotype. Results are represented as means with standard errors; * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001; Student's t-test. All experiments were performed at least three times unless otherwise indicated. WT, wild type.)

Since MODY3 is caused by the dysfunction of pancreatic β cells, we then surveyed β-cell function in hnf1a+/− zebrafish larvae. Interestingly, the β-cell number was significantly decreased in hnf1a+/− (Figure 2D,E), and the total free glucose level was significantly increased (Figure 2F). Moreover, the insulin mRNA levels (both insa and insb) were significantly decreased. We also investigated the insulin-secretion ability of the hnf1a+/− zebrafish. The transcription levels of marker genes for insulin secretion, e.g., abcc8, scl2a2, gck, kcnj11, and kcnh6, were significantly decreased in the isolated hnf1a+/− zebrafish islets (Figure 2L).
Next, to examine whether hnf1a+/− affects insulin secretion in real time, we performed live imaging using the calcium influx reporter line Tg(Ins:GCaMP6s), in which the GCaMP6s calcium indicator is driven by an insulin promoter so that changes in cytosolic calcium concentration are converted into a fluorescence signal. β cells were labeled with a red nuclear marker, while the GCaMP6s fluorescence was recorded in the green channel. Remarkably, β cells of hnf1a+/− islets displayed a severely blunted calcium influx in response to glucose (Figure 2K and Supplementary Videos). We also performed transmission electron microscopy (TEM) to assess the granule population in the hnf1a+/− zebrafish islets. As shown in Figure 2N,O, the number of insulin granules was lower than in WT. Taken together, these data indicated that mutation of HNF1a at a position similar to the HNF1a-Q125ter variant in zebrafish resulted in reduced β-cell numbers, suppressed insulin synthesis, and impaired insulin secretion.

Similar HNF1a-Q125ter Variant Induced β-Cell ER Stress in Zebrafish

In the TEM images, we also carefully examined the ultrastructure of β cells. Interestingly, the ER lumen of hnf1a+/− zebrafish β cells was dilated compared with WT (Figure 3A,B). We then further analyzed the marker genes in the ER stress pathway. As shown in Figure 3C, bip and atf4 were significantly upregulated and atf6b was downregulated, while xbp1, sXbp1 (spliced xbp1), and chop were not changed. This suggested that hnf1a+/− might induce ER stress through activating Atf4 without inducing apoptosis. A further investigation showed that the Atf4 staining signal was stronger in hnf1a+/− zebrafish β cells compared to WT zebrafish (Figure 3D,E). (From the Figure 3 legend: results are represented as means with standard errors; * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001; Student's t-test; all experiments were performed in at least three biological repeats; WT, wild type.)

Additionally, chemically induced ER stress can activate Nrf2 in some cultured mammalian cells [25]. Furthermore, the antioxidant transcription factor nuclear factor erythroid 2-related factor 2 (Nrf2) is downstream of PERK, a component of the unfolded protein response (UPR) signaling pathway; PERK phosphorylates and activates Nrf2 [26]. Hence, we measured the transcription levels of nrf2a, nrf2b, gstp1, and hmox1a. As shown in Figure 3F, hnf1a+/− decreased the transcription levels of these Nrf2 marker genes. These data suggested that ER stress was induced in the β cells of hnf1a+/−.

Overexpression of HNF1a-Q125ter Variant Led to β-Cell Dysfunction In Vitro

To explore the detailed mechanism of the HNF1a-Q125ter variant in β-cell function, we generated an HNF1a-Q125ter construct and expressed it in the β-cell line Ins-1 832/13; overexpression of HNF1a-Q125ter, but not HNF1a-WT, suppressed Ins-1 cell growth (Figure 4A,B).
We also evaluated the effect of HNF1a-Q125ter overexpression on insulin synthesis in Ins-1 cells. As shown in Figure 4C,D, both Ins1 and Ins2 transcriptional levels were increased in HNF1a-WT-overexpressing cells, while they decreased in HNF1a-Q125ter-overexpressing cells. Moreover, the mRNA levels of several key transcription factors in the regulation of insulin biosynthesis were decreased (Figure 4H). Immunostaining also indicated that the insulin protein intensity was elevated in HNF1a-WT-overexpressing cells but reduced in HNF1a-Q125ter-overexpressing cells (Figure 4F,G). A similar trend was also observed for proinsulin detected by immunoblot (Supplementary Figure S2A,B). To elucidate the effect of HNF1a-Q125ter on insulin secretion, we first measured the insulin granules through immunofluorescent staining. Consistent with the result in hnf1a+/− zebrafish, HNF1a-Q125ter overexpression resulted in decreased insulin granules (Figure 4I,J). A glucose-stimulated insulin secretion assay was also applied to test the function of HNF1a-Q125ter. Ins-1 cells transfected with HNF1a-Q125ter showed blunted insulin secretion in response to high glucose (16.7 mM) (Figure 4K). Additionally, the transcription levels of markers for insulin secretion, e.g., Slc2a2, Gck, Abcc8, Kcnj11, and Kcnh6, were downregulated (Figure 4E). All of the above data suggested that overexpression of HNF1a-Q125ter led to β-cell dysfunction in Ins-1 cells, consistent with the phenomena observed in zebrafish.

The HNF1a-Q125ter Variant Induced ER Stress by Activating the PERK/eIF2a/ATF4 Signaling Pathway

To gain a deeper understanding of HNF1a-Q125ter, we introduced an HNF1a knockdown (shHNF1a) plasmid to further explore the related mechanism. We confirmed that shHNF1a efficiently knocked down HNF1a in Ins-1 cells (Figure 5A). Since a similar HNF1a-Q125ter variant induced β-cell ER stress in zebrafish, we then questioned whether ER stress also occurred in Ins-1 cells. We imaged the ER morphology in Ins-1 cells transfected with HNF1a-WT, HNF1a-Q125ter, and shHNF1a by TEM (Supplementary Figure S3A). We measured the width of the ER lumen and found significantly dilated ER in cells overexpressing HNF1a-Q125ter or shHNF1a, compared with the control or HNF1a-WT (Figure 5B and Supplementary Figure S3A).

Discussion

The HNF1a variant that causes MODY3 is the most commonly reported MODY, comprising 30% to 65% of all MODY cases. However, the molecular mechanisms that impair islet β-cell function are still unclear. In this study, we found a new HNF1a variant, HNF1a-Q125ter, in a human patient presenting with atypical clinical symptoms of MODY3 and insensitivity to sulfonylureas, and we explored its molecular mechanisms by using zebrafish and the Ins-1 cell line. We first generated a similar variant in zebrafish (hnf1a+/−), and the animal displayed hyperglycemia, a diabetic phenotype (Figure 2D). We further found that hnf1a+/− significantly decreased the zebrafish β-cell numbers and that overexpression of HNF1a-Q125ter suppressed Ins-1 cell growth (Figures 2B and 4B). Consistent with our data, several HNF1a variants, including p.D80V, p.R203C, p.P475L, and p.G554fsX556, resulted in the retardation of Ins-1 cell growth by inducing cell-cycle arrest at the transition from the G1 to S phase [27].
Additionally, Ins-1 cells overexpressing the dominant-negative variant HNF1a-P291fsinsC showed significant growth impairment, due to a delayed transition from the G1 to S phase, mainly manifested by downregulation of cyclin E and upregulation of P27 [28]. Induction of another dominant-negative HNF1a variant (SM6) suppressed cell-cycle progression by increasing the levels of mTORC1-regulated cell-cycle inhibitors [29]. However, a patient with HNF1a T260M showed impaired GSIS but no obvious decrease in β-cell mass, and neither β-cell proliferation nor apoptosis was altered [9]. Although the data on β-cell mass are inconclusive across HNF1a variants, this is likely due to differences in the HNF1a mutation sites or species. Our data further suggested that insulin expression and secretion were also affected in HNF1a-Q125ter-overexpressing Ins-1 cells and hnf1a+/− zebrafish (Figures 2 and 4 and Supplementary Figures S3 and S4). Similarly, Hnf1a KO mice (Hnf1a−/−) displayed decreased insulin levels and impaired GSIS [12]. Moreover, the human HNF1a variant T260M also showed impaired islet GSIS, which disturbed the transcriptional regulatory network of insulin secretion [9]. Studies have also revealed that HNF1a directly regulates the transcription of several genes essential for insulin secretion, e.g., Slc2a2 and HNF4a [30,31]. In the clinic, MODY3 patients with an HNF1A variant had impaired insulin secretion, including reduced fasting insulin levels and decreased OGTT insulin levels [32]. Taken together, these data suggest a consensus that disruption of HNF1a results in impaired insulin secretion across species, from zebrafish to mice to humans.

We further showed that the β-cell dysfunction induced by the HNF1a-Q125ter variant could be due to activation of ER stress. Several pieces of evidence support this observation, including upregulated ER stress markers (Atf4 and Nrf2 [33]) and dilation of the ER lumen (Figures 3 and 5). Congruent with our results, targeted expression of the dominant-negative HNF1a variant (SM6) in mouse β cells dilated the rough ER cisternae [34]. In addition, overexpression of the HNF1a variant (SM6) in Ins-1 cell lines resulted in ER stress, mainly through downregulation of XBP1 and BiP [35]. In this study, we demonstrated that the HNF1a-Q125ter variant induced ER stress by activating the PERK/eIF2a/ATF4 signaling pathway (Figure 5). Although the downstream signals differ between the ER stress induced by HNF1a-Q125ter and that induced by SM6, the differences might be caused by the variant sites. The HNF1a-Q125ter variant lost most of its DNA-binding domain and its entire transactivation domain, and the phenotype of HNF1a-Q125ter overexpression was highly similar to the knockdown (shHNF1a) phenotype. Moreover, the in vivo and in vitro functions of the HNF1a-Q125ter variant are similar to those of other dominant-negative HNF1a variants (P291fsinsC and SM6). Hence, we speculated that the HNF1a-Q125ter variant might exert its effect on β-cell dysfunction in a dominant-negative manner.

In summary, we identified a novel HNF1a variant, HNF1a-Q125ter, that induced β-cell dysfunction. Through both in vivo and in vitro approaches, our data revealed that the HNF1a-Q125ter variant decreased β-cell numbers, reduced β-cell growth, and impaired insulin synthesis and secretion. Further investigations demonstrated that HNF1a-Q125ter induced ER stress by activating the PERK/eIF2a/ATF4 signaling pathway.
Future studies are needed to evaluate how the HNF1a variant interacts with molecules in the ER stress pathways and to identify possible therapeutic targets that reduce the stress on β cells and thereby alleviate insulin-secretion defects.

Patient

Subjects were recruited from routine clinical activities. A 15-year-old male adolescent was diagnosed with MODY3 by Sanger sequencing, using exome-sequencing approaches. DNA was extracted from the peripheral blood of the patient by the column method.

Establishment of hnf1a Variant Zebrafish Using the CRISPR/Cas9 Technique

HNF1a gRNA synthesis was conducted according to the standard protocol, as in a previous publication [42]. The sgRNA target site (ACAACCTTCCCCAGAGAG) was designed using the online tool CRISPRscan, and the sgRNA was synthesized with a T7 kit (MAXIscript T7 Transcription Kit, Invitrogen, Carlsbad, CA, USA). The sgRNA and Cas9 protein (NEB, Beijing, China) were then co-injected into single-cell-stage embryos. The mutant F0 generation was raised to adulthood, and the F1 generation zebrafish were obtained by crossing with AB zebrafish. Genomic DNA prepared from adult fin clips was genotyped by PCR followed by 1% agarose gels and polyacrylamide gels, using the following primers: forward primer: ATGCTTCACAAGTACATAATACA; reverse primer: TTGAGGTGCTGCGACAGAT. A single mutant zebrafish line with a 2 bp gene deletion was obtained by sequencing.

Islet Isolation

We isolated islets from larvae by collagenase digestion, as described [43]. Wild-type (WT) and hnf1a mutant larvae were anesthetized and digested in 250 µL collagenase P solution (0.6 mg/mL, dissolved in HBSS; Roche, Basel, Switzerland) for 5 min at 37 °C. The digestion was then stopped by adding 1 mL stop solution (10% FBS in HBSS). The lysate was spun down, and the pellet was resuspended in cold HBSS plus 10% FCS. The suspension was transferred to a Petri dish, and the islets were picked under a Leica M205 FCA fluorescence stereomicroscope (Leica, Wetzlar, Germany).

RNA Extraction and Quantitative RT-qPCR

Total RNA from Ins-1 cells, islets, and larvae was extracted using the RNA Simple Total RNA Kit (Tiangen, DP419, Beijing, China). Reverse transcription was performed using the FastKing RT Kit (with gDNase) (Tiangen, KR116, Beijing, China). RT-qPCR was then carried out using 2× SYBR Green PCR Master Mix (Lifeint, Xiamen, China), and the 2^(−ΔΔCt) method was used to calculate the gene-expression fold change, normalized to the Ct values of β-actin or 18S of the control sample. Primer sequences are listed in Supplementary Table S1.
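As an aside, the 2^(−ΔΔCt) calculation mentioned above is easy to reproduce. The sketch below (with made-up Ct values, not data from this study) shows the standard computation, normalizing a target gene first to a reference gene (e.g., β-actin) and then to the control sample.

```python
def fold_change(ct_target_sample, ct_ref_sample,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method (Livak & Schmittgen)."""
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                # normalize to control sample
    return 2.0 ** (-dd_ct)

# Made-up Ct values for illustration only
fc = fold_change(ct_target_sample=26.1, ct_ref_sample=18.0,
                 ct_target_control=24.3, ct_ref_control=18.1)
print(f"fold change vs. control = {fc:.2f}")  # < 1 means downregulated
```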
Western Blot

Cells were washed in ice-cold PBS and subsequently homogenized in 1× RIPA buffer (Sigma, R0728) supplemented with protease inhibitors (MCE, HY-K0010) and phosphatase inhibitors (MCE, HY-K0021). The suspensions were centrifuged at 12,000 rpm at 4 °C for 10 min. The supernatants were collected and assayed for total protein concentration using the Pierce BCA Protein Assay Kit (Thermo Fisher Scientific, A23228, Massachusetts, USA). Samples were then electrophoresed on SDS-PAGE gels and transferred onto PVDF membranes. Membranes were incubated with the antibodies listed in Supplementary Table S2. Bands were visualized using a ChemiDoc Imaging System (BIO-RAD, 733BR2378). Densitometric analysis was performed using ImageJ software (National Institutes of Health, ImageJ 1.8.0.345, win64).

β-Cell Counting and Imaging

Fluorescence-positive cell counting was conducted with reference to a previously published paper [44]. In brief, larvae were fixed in 4% paraformaldehyde overnight at 4 °C, washed with PBST, and flat-mounted in Aqua-Mount (Richard-Allan Scientific, Massachusetts, USA) with their right sides facing the coverslip. The β-cell numbers were counted based on the nuclear mCherry signal using a Zeiss AxioImager A1 microscope (Carl Zeiss, Jena, Germany). All counting was repeated by a blinded reviewer.

Immunofluorescence

HeLa cells and Ins-1 cells were seeded onto glass coverslips for transfection, as described above. Cells were fixed in 4% paraformaldehyde for 15 min and blocked in 5% FBS and 0.1% Tween-20 in 1× PBS for 2 h at room temperature. Cells were stained with primary antibodies diluted in blocking solution overnight at 4 °C, followed by secondary antibodies for 2 h at room temperature. Finally, DAPI-Fluoromount-G (Yeasen Biotechnology, 36308ES11, Shanghai, China) was used for DAPI staining on glass slides. The 6 dpf larvae were fixed in 4% paraformaldehyde overnight at 4 °C. Larvae were washed in 1× PBS, dehydrated in methanol and stored frozen in methanol, rehydrated, permeabilized in cold acetone, and blocked in 5% FBS in PBDT for 2 h at room temperature. The larvae were stained with primary antibodies overnight at 4 °C, followed by secondary antibodies for 2 h at room temperature. Finally, larvae were laid on glass slides and compacted with glass coverslips. A Leica SP8 confocal microscope was used for imaging. Where the brightness and contrast of entire images were adjusted, the adjustments were applied equally across all samples of the same experiment. The antibodies and further details are given in Supplementary Table S3.

Glucose-Stimulated Insulin Secretion

Ins-1 cells were seeded at a density of 800,000 cells per well of a 12-well plate and transfected as described above. Cells were incubated in 2.8 mmol/L glucose in KRB buffer for 1 h, followed by 16.7 mmol/L glucose in KRB buffer for 1 h before analysis. The concentration of insulin in the supernatants was quantified using the Rat/Mouse Insulin ELISA Kit (Millipore, EZRMI-13K).

Transmission Electron Microscopy

Ins-1 cells after intervention were placed in 2.5% glutaraldehyde for 30 min at room temperature, followed by overnight fixation at 4 °C. A total of 200 µL of 20% BSA was added to the cells, which were then collected. The suspensions were centrifuged at 2000 rcf at room temperature for 5 min and mixed in phosphate buffer. WT and hnf1a+/− larvae were fixed in 2.5% glutaraldehyde overnight at 4 °C, and the islets were then isolated under the Leica M205 FCA fluorescence stereomicroscope. The islets were collected in 1% agarose gel. Images were acquired and analyzed on a Hitachi HT-7800 transmission electron microscope.

Live Imaging of Calcium Influx

Zebrafish larvae were euthanized in cold Danio buffer for 3 min, after which the larvae were transferred to extracellular solution (ECS) containing 5 mM glucose, and the islets were isolated under the Leica M205 FCA fluorescence stereomicroscope with a syringe needle. We put a drop of melted 0.5% agarose on each glass-bottomed dish beforehand; while waiting for the agarose to cool to room temperature, we transferred the individual islets to the dishes with a pipette and immersed them in the 0.5% agarose. After adding ECS, we carefully placed the dish on the plate holder of the Leica SP8 confocal microscope, using a 20× objective. Using the red fluorescence filter to view the position of the β-cell nuclei, we focused on the islet.
The green channel recorded the GCaMP fluorescence intensity. After the first 50 frames (5 mM glucose), we increased the glucose concentration of the surrounding solution to 20 mM without stopping the recording.

Statistical Analysis

Statistical analysis was performed using GraphPad Prism 8 software, employing two-tailed Student's t-tests to calculate p-values for unpaired comparisons between two groups and one-way ANOVA for comparisons between three or more groups, with p < 0.05 taken to represent significance. All data are shown as mean ± SEM. The sample sizes of independent experiments can be found in the figure legends.

Informed Consent Statement: All enrolled individuals signed an informed consent form for research use of their molecular, cellular, and clinical data.

Data Availability Statement: The data that support the findings of this study are available within the article and its Supplementary Materials.
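As a companion to the Statistical Analysis section above, here is a minimal SciPy sketch of the two tests named there (two-tailed unpaired t-test and one-way ANOVA); the arrays are placeholder random data, not measurements from this study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wt = rng.normal(30, 4, size=8)      # placeholder "WT" measurements
mut = rng.normal(22, 4, size=8)     # placeholder "hnf1a+/-" measurements
third = rng.normal(26, 4, size=8)   # placeholder third group

# Two-tailed unpaired Student's t-test (two groups)
t, p = stats.ttest_ind(wt, mut)
print(f"t-test: t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA (three or more groups)
f, p_anova = stats.f_oneway(wt, mut, third)
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4f}")
```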
5,320
2022-10-27T00:00:00.000
[ "Biology" ]
An Efficient Deterministic-Stochastic Model of the Human Body Exposed to ELF Electric Field

The paper deals with a deterministic-stochastic model of the human body represented as a cylindrical antenna illuminated by a low frequency electric field. Both analytical and numerical (Galerkin-Bubnov scheme of the Boundary Element Method) deterministic solutions of the problem are outlined. This contribution introduces a new perspective on the problem: the variability inherent to input parameters, such as the height of the body, the shape of the body, and the conductivity of body tissue, is propagated to the output of interest (induced axial current). The stochastic approach is based on the stochastic collocation (SC) method. Computational examples show the mean trend of both the analytically and numerically computed axial current with confidence margins for different sets of input random variables. The results point out the possibility of improving the efficiency of calculating basic restriction parameter values in electromagnetic dosimetry.

Introduction

The exposure of the human body to electromagnetic fields has always been a subject of controversy, particularly in the last two decades. During this period, a number of papers dealing with various models of the human body have been published. The interaction of the human body with an electromagnetic field can be analyzed separately for low and high frequencies, respectively, due to different coupling mechanisms and related biological effects. Due to the absence of resonance effects at low frequencies (very long wavelengths, e.g., 6000 km at 50 Hz), the thermal effects are negligible, while the nonthermal effects could possibly have severe effects on cell membranes [1,2]. On the other hand, in the high frequency range, where the body dimensions are comparable to the external field wavelength and resonances become significant, thermal effects are dominant [3,4]. In the last few decades, various epidemiological surveys have addressed this subject. The study of the adverse health effects of electromagnetic fields on humans is still a hot topic, as there are no clear correlations between EM field exposure and diseases such as cancer.

As has already been stressed in [1,2,5], the key to understanding the coupling of low frequency (LF) fields with the human body is the knowledge of the induced current inside the human body. When the body is exposed to a tangential electric field, the induced current consists only of the axial component, while the circulating component of the current induced by the magnetic field is small and thus negligible [1,2]. The approaches to tackling this problem can be generally classified as analytical and numerical. Once the current induced inside the human body is determined, it is straightforward to calculate the current density, electric field, and other characteristic parameters [5].

In this paper, the cylindrical antenna model of the human body exposed to an extremely low frequency (ELF) electromagnetic field is considered. King and Sandler solved the problem analytically in [6], proposing a parasitic antenna model of the human body; Gandhi and Chen developed a realistic human body model and solved it using FDTD [7], while Poljak and Rashed dealt with a boundary element model of the human body and solved it in the frequency domain by using the Galerkin-Bubnov scheme [5]. Numerical calculation of the induced current inside the human body exposed to an ELF electric field has also been reported in [8][9][10].
The nature of the parameters encountered in dosimetry studies is uncertain. The influence of their variability has to be taken into account in order to obtain full knowledge of the induced current or electric field inside the human body. Also, in order to enable good validation among different computational and experimental models, it is necessary to develop a stochastic model that provides a statistical distribution of recommended limits [11]. The present paper aims to implement stochastic modelling to account for the variability of the calculated induced current inside the human body due to the uncertain nature of the input parameters required for the current assessment, such as the height of a person, the shape of the body, and the corresponding body conductivity, which is the dominant electric parameter of the body at ELF. The aim of the paper is to combine modern stochastic techniques, namely stochastic collocation (SC), with the existing deterministic model for the induced current calculation, and thus provide a new tool for a more accurate characterization of the body model.

In this paper, the SC technique is combined with the analytical solution proposed by King [1,2] and the numerical solution proposed by Poljak and Rashed [5]. The emphasis of this work is on the novel deterministic-stochastic approach; therefore, a rather simplified model of the human body is chosen, but it is worth noting that, due to increased computational capability, human body models have become more anatomically realistic. A detailed review of realistic male and female models is given in [12].

The paper is organized as follows: in Section 2, an overview of the theoretical antenna model of the human body exposed to ELF fields is presented, together with an overview of its analytical and numerical solutions, respectively. In Section 3, the statistical background is explained, and, in Section 4, some illustrative numerical results are presented. The obtained results are compared with the basic restriction (BR) levels given by the ICNIRP guidelines [13][14][15]. Finally, in Section 5, some conclusions are drawn.

Deterministic Cylindrical Model of the Human Body

2.1. Formulation. The human is assumed to be vertically positioned on perfectly conducting ground (PEC) and exposed to an incident ELF field. As has been reported in [1,2,5], the human body exposed to an ELF field can be represented by an imperfectly conducting thick cylinder whose length corresponds to the height of the body and whose radius is calculated according to the mean width of the human body ($L$ and $a$, respectively, in Figure 1). It is common to use muscle tissue in homogeneous models of the human body, which is the reason to use the conductivity of muscle tissue as the average conductivity of the equivalent cylindrical antenna [3,4]. The man is standing on the ground with arms alongside the body and barefoot, thus neglecting the capacity between the soles of the feet and their image in the earth; that is, the body is well grounded. The mean values for the height, radius, and conductivity are 1.75 m, 0.14 m, and 0.5 S/m, respectively [1,2,5].

The total axial current induced in the body is governed by the Pocklington integrodifferential equation, which can be derived from the continuity condition for the tangential electric field components on the body surface [5],

$$E_z^{inc} + E_z^{sct} = Z(z)\, I(z),$$

where $E^{inc}$ is the incident field, $E^{sct}$ is the scattered field due to the presence of the imperfectly conducting cylinder, $I(z)$ is the induced axial current, and $Z(z)$ is the impedance per unit length of the finitely conducting cylinder.
Start from the curl Maxwell equation

$$\nabla \times \vec{E} = -j\omega\mu_0 \vec{H}$$

and take into account the definition of the magnetic vector potential,

$$\vec{B} = \nabla \times \vec{A}.$$

The scattered electric field can then be expressed as follows [5]:

$$E_z^{sct} = \frac{1}{j\omega\mu_0\varepsilon_0}\left(\frac{\partial^2}{\partial z^2} + k^2\right) A_z,$$

where $k = \omega\sqrt{\mu_0\varepsilon_0}$ is the free-space constant and $A_z$ is the axial component of the magnetic vector potential, given by [5]

$$A_z(z) = \frac{\mu_0}{4\pi}\int_0^L I(z')\, g(z, z')\, dz',$$

where $\varepsilon_0$ is the free-space permittivity, $I(z')$ is the induced axial current, and $g(z, z')$ is the so-called exact Green function for the thick cylinder [5],

$$g(z, z') = \frac{1}{2\pi}\int_0^{2\pi} \frac{e^{-jkR}}{R}\, d\phi',$$

in which $R$ is the distance between the source point $z'$ and the observation point $z$, both of them located on the thick wire surface [5],

$$R = \sqrt{(z - z')^2 + 4a^2 \sin^2\!\left(\frac{\phi'}{2}\right)}.$$

Combining (1)-(7), after some mathematical manipulation, Pocklington's integrodifferential equation for an imperfectly conducting wire is obtained [5]:

$$E_z^{inc} = -\frac{1}{j4\pi\omega\varepsilon_0}\int_0^L \left(\frac{\partial^2}{\partial z^2} + k^2\right) g(z, z')\, I(z')\, dz' + Z(z)\, I(z).$$

The conducting and dielectric properties of the human body are taken into account in terms of the impedance per unit length $Z(z)$ [1,2,5]:

$$Z(z) = \frac{1}{(\sigma + j\omega\varepsilon)\,\pi a^2}.$$

Analytical and numerical solutions of integral equation (8) are outlined in the following subsections. Once the axial current is determined, it is possible to calculate the current density, the induced electric field, and other related parameters [5]:

$$J(z) = \frac{I(z)}{\pi a^2}, \qquad E(z) = \frac{J(z)}{\sigma}.$$

2.2. Analytical Solution. Equation (8) can be solved analytically according to the procedure presented in [1,2]. The resulting closed-form current involves the free-space impedance $Z_0 = 120\pi~\Omega$, the reactance $X_0 = 1/(j\omega C_0)$ of the capacity $C_0$ between the soles of the feet and their image in the earth, and a function $\Psi = \Psi(z_{max})$ depending on the position of the current maximum [1,2]. Since this paper deals with the case in which the man is standing barefoot on PEC ground, $X_0 = 0$ (the body is well grounded) and the general solution (12) simplifies into the grounded-body form given in [1,2].

2.3. Numerical Solution. The numerical solution of (8) has been presented in [5] and is briefly outlined here for the sake of completeness. Applying the GB-BEM to (8) results in the following matrix equation:

$$[Z]\{I\} = \{V\},$$

where $[Z]$ is the mutual impedance matrix, $\{I\}$ contains the unknown axial current coefficients, and $\{V\}$ is the vector containing the excitation function [5]. In the range of extremely low frequencies, the incident electric field is assumed to be constant over the body and equal to $E_0$, so the local voltage corresponds to $E_0$ integrated against the local shape functions [5]. The vector $\{f\}$ contains the linear shape functions and $\{f'\}$ their derivatives. The shape functions are given in the form of Lagrange polynomials [5].
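As an illustration of the numerical route just described, the toy Python sketch below assembles and solves a small $[Z]\{I\} = \{V\}$ system for a lossy thin-wire model of the body. It is a didactic stand-in, not the authors' GB-BEM code: it uses pulse basis functions with point matching and a finite-difference treatment of the Pocklington operator, a reduced kernel instead of the exact cylinder Green function, and it neglects the ground image, so the absolute numbers should not be read as dosimetric results.

```python
import numpy as np

# Mean body parameters quoted in the paper; 60 Hz, 10 kV/m vertical field
L, a, sigma = 1.75, 0.14, 0.5            # height [m], radius [m], conductivity [S/m]
f, E0 = 60.0, 1.0e4
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
omega = 2.0 * np.pi * f
k = omega * np.sqrt(mu0 * eps0)

N = 41
dz = L / N
zc = (np.arange(N) + 0.5) * dz           # match points (segment midpoints)

def G(z, zn):
    """Reduced-kernel potential integral over the segment centered at zn:
    int e^{-jkR}/(4*pi*R) dz', with R = sqrt((z - z')^2 + a^2)."""
    m = 9
    zp = zn - dz / 2 + (np.arange(m) + 0.5) * dz / m
    R = np.sqrt((z - zp) ** 2 + a ** 2)
    return np.sum(np.exp(-1j * k * R) / (4.0 * np.pi * R)) * dz / m

Zi = 1.0 / (sigma * np.pi * a ** 2)      # body impedance per unit length [Ohm/m]
Z = np.zeros((N, N), dtype=complex)
for mi in range(N):
    for ni in range(N):
        # (d^2/dz^2 + k^2) applied to G via central differences at the match point
        d2 = (G(zc[mi] + dz, zc[ni]) - 2 * G(zc[mi], zc[ni])
              + G(zc[mi] - dz, zc[ni])) / dz ** 2
        Z[mi, ni] = -(d2 + k ** 2 * G(zc[mi], zc[ni])) / (1j * omega * eps0)
    Z[mi, mi] += Zi                       # lossy-conductor loading term

V = np.full(N, E0, dtype=complex)        # constant incident field at ELF
I = np.linalg.solve(Z, V)                # axial current coefficients [A]
J = I / (np.pi * a ** 2)                 # current density [A/m^2]
E_int = J / sigma                        # internal electric field [V/m]
print(f"max |I| = {abs(I).max():.3e} A")
print(f"max |E_int| = {abs(E_int).max():.3e} V/m")
```

The structure mirrors the text: assemble the mutual impedance matrix, add the per-unit-length body impedance, solve for the current, then post-process into current density and internal field.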
Stochastic Model

Integrating stochastic analysis into the presented deterministic model, statistical margins can be determined for variations of the input data. The goal is to propagate uncertainties from the input to the output and to calculate the statistical moments needed to assess confidence intervals. Various stochastic techniques have been developed in the past decade, such as the unscented transform (UT), stochastic collocation (SC), the polynomial chaos expansion (PCE) method, and the Kriging technique. These methods have been successfully applied to various electromagnetic compatibility problems [16][17][18].

Comparing stochastic collocation to the well-established and widely accepted Monte Carlo (MC) methods, the main advantage of SC over MC is its simplicity and lower computational cost. On the other hand, MC methods are more convenient in the case of a large number of input random variables [19]. Also, the SC method is nonintrusive; that is, it is not necessary to change the existing deterministic code, which is an advantage compared to the more accurate but at the same time more complicated and intrusive polynomial chaos expansion method [19]. The theoretical background of the SC method can be found elsewhere [16][17][18][19], but, for the sake of completeness, some basics are given here as well.

Fundamentals of Stochastic Collocation. The fundamental principle of the SC technique lies in the polynomial approximation of the considered output for given random parameters. A random parameter $X$ is defined as follows [16,17]:

$$X = X_0 + \hat{u},$$

where $X_0$ is the initial (mean) value and $\hat{u}$ is a random variable (RV) with an a priori chosen statistical distribution. The function $f(X_0, \hat{u})$ is expanded over the stochastic space using the Lagrangian basis functions [16,17]:

$$f(X_0, \hat{u}) \approx \sum_{i=1}^{n} f(X_0, \hat{u}_i)\, L_i(\hat{u}),$$

where $L_i$ is a Lagrange polynomial of degree $(n-1)$. An interesting property of the Lagrangian basis is

$$L_i(\hat{u}_j) = \delta_{ij}.$$

Following the definition from statistics for the mean, the expected value of the output of interest can be calculated as

$$\langle f \rangle = \int f(X_0, \hat{u})\, pdf(\hat{u})\, d\hat{u}.$$

The final form of (22) can be written as

$$\langle f \rangle \approx \sum_{i=1}^{n} w_i\, f(X_0, \hat{u}_i),$$

where the weight $w_i$ is given as

$$w_i = \int L_i(\hat{u})\, pdf(\hat{u})\, d\hat{u}.$$

Note that the function $pdf(\cdot)$ is the probability density function of the RV $\hat{u}$ from (17). The order $(n-1)$ of the approximation depends on the number of chosen points (sigma points). The computation of the integral in (24) is based on Gaussian quadrature with identical sigma points. The convergence depends on the number of chosen sigma points.

This overview is given for the case of a single RV, but it can be easily extended to the case of several RVs. For example, if the number of RVs is 2, then

$$X = X_0 + \hat{u}_x, \qquad Y = Y_0 + \hat{u}_y. \quad (25)$$

The function $f(X, Y) = f(X_0, Y_0; \hat{u}_x, \hat{u}_y)$ can be projected on a Lagrangian basis:

$$f(X, Y) \approx \sum_{i=1}^{n}\sum_{j=1}^{n} f(X_0, Y_0; \hat{u}_{x,i}, \hat{u}_{y,j})\, L_i(\hat{u}_x)\, L_j(\hat{u}_y).$$

The mean is given by

$$\langle f \rangle \approx \sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij}\, f(X_0, Y_0; \hat{u}_{x,i}, \hat{u}_{y,j}),$$

and the weights are computed from the relation

$$w_{ij} = \iint L_i(\hat{u}_x)\, L_j(\hat{u}_y)\, pdf(\hat{u}_x)\, pdf(\hat{u}_y)\, d\hat{u}_x\, d\hat{u}_y.$$

Furthermore, higher-dimensional problems can be solved using the tensor product in each direction. However, for a very large number of RVs, stochastic collocation becomes too complex for practical use. In addition to the mean, other higher statistical moments can be readily computed; the expressions are given in Table 1 for the one-dimensional RV case. An extension to multidimensional problems is straightforward.

Computational Examples

First, in Section 4.1, univariate examples are given, while Section 4.2 deals with the case of three input RVs. Note that the uncertain inputs are integrated into both the analytical and numerical deterministic solutions. The input variables chosen as RVs are the body length, radius, and conductivity, respectively. The random inputs are given in Table 2, and all RVs follow a uniform distribution. In principle, stochastic studies are carried out in two steps: first, the input data are statistically modeled, and then the stochastic numerical or analytical model is solved [16]. The mean values for each RV are given in the literature [1,2,5]. Sigma-weight pairs for each of the random variables are given in Table 3. Convergence is accomplished with three sigma points in all examples, and therefore there is no need to increase the number of points. The results for all scenarios are presented in the following way: the distribution of the mean value of the output of interest, which is either the induced current or the internal electric field, is given along the human body, starting from the feet to the top of the head. The figures also display the confidence interval around the mean value, given as the mean value ±3 or ±2 standard deviations.

4.1. Univariate Examples. The first example is the human body exposed to an incident electric field $E^{inc} = 10^4$ V/m at a frequency of 60 Hz with the body length as RV. In the second example, the radius is chosen as the input RV, and finally the conductivity (Table 2). The results are presented in Figures 2-4, respectively. The stochastic analysis is carried out for both the analytical and numerical deterministic solutions.
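To make the SC recipe above concrete, here is a minimal Python sketch for one uniform RV: the collocation points and weights come from Gauss-Legendre quadrature (which matches a uniform pdf), and the deterministic solver is stubbed out by a placeholder function, since wiring in the actual Pocklington solver is beyond this illustration. The interval bounds and the stub formula are assumptions.

```python
import numpy as np

def deterministic_solver(length_m: float) -> float:
    """Placeholder for the deterministic model: returns a scalar output of
    interest (e.g., induced current at one point) for a given body length.
    A made-up smooth function stands in for the real solver here."""
    return 1.8e-4 * (length_m / 1.75) ** 2

# Uniform RV: body length in [1.55, 1.95] m (illustrative bounds)
lo, hi = 1.55, 1.95
n_pts = 3                                        # three sigma points, as in the paper
x, w = np.polynomial.legendre.leggauss(n_pts)
points = 0.5 * (hi - lo) * x + 0.5 * (hi + lo)   # map nodes from [-1, 1]
weights = w / 2.0                                # uniform pdf: weights sum to 1

f_vals = np.array([deterministic_solver(p) for p in points])
mean = np.sum(weights * f_vals)
var = np.sum(weights * f_vals ** 2) - mean ** 2  # second moment minus mean^2
std = np.sqrt(var)
print(f"<I> = {mean:.3e} A, sigma = {std:.3e} A")
print(f"confidence margins: [{mean - 3 * std:.3e}, {mean + 3 * std:.3e}] A")
```

Only three deterministic solver calls are needed per RV, which is the nonintrusive, low-cost property emphasized in the text.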
Figures 2-4 show the highest variability of the results in the case when the radius of the body is taken as the RV input (Figure 4). The results are practically independent of the conductivity (Figure 2), which is in accordance with the analytical prediction; that is, the current does not vary with a change in conductivity [1,2]. Taking into account the influence of each RV on the output, it is possible to use SC for sensitivity analysis, in order to exclude the less significant variables from consideration when the number of random input variables grows large. An additional task is to assess the confidence margins for the output of interest. Following the usual procedure of defining the confidence interval as the mean value ±3 standard deviations, it is demonstrated that the extreme values are not sufficient to predict the dispersion of the results around the mean trend.

4.2. Multivariate Examples. In this section, the example with all three input random variables is considered. The sigma-weight pairs are given in Table 3. The human body is illuminated by an incident field $E^{inc} = 10^4$ V/m at frequency $f = 60$ Hz. The results in Figure 5 are very similar to the ones in Figure 4, where only the radius is considered as a random variable, thus verifying that the variability of the output current mostly depends on the variability of the radius associated with the particular cylindrical representation of the human body. Though this conclusion is related to the case of uniformly distributed random variables, the computation is also applicable to different distributions of random variables.

It is worth noting that the results represented in Figures 2-5 show the distribution of the induced current along the human body, starting from the feet to the top of the head. This is consistent with ICNIRP's 1998 basic restriction on the exposure of humans to ELF electric fields, which is expressed in terms of the induced current density [13]. However, international standards [14,15], also recommended by the WHO, report a change of the basic restriction parameter for ELF electric field exposure from the induced current to the induced electric field inside the body, $\vec{E}_{int}$. Therefore, in Figure 6, the mean trend of the induced electric field obtained from (11) is represented along with the confidence intervals: the mean value ±2 standard deviations for 95.45% and the mean value ±3 standard deviations for 99.7% certainty of coverage (Figures 6(a) and 6(b), respectively). The extreme values are completely included inside the confidence margins for the case of 99.7% coverage, while, for the 95.45% case, the upper extreme value is not likely to occur. Once again, it is demonstrated that extreme values are not sufficient to predict the dispersion of the results around the mean trend, since the lower extreme values are much higher than the lower interval margin in the case of 99.7% coverage. Also, the stochastically computed mean is higher than the deterministic one.
Conclusion

The paper deals with the application of the stochastic collocation (SC) method in modelling the human body exposed to LF fields, using the cylindrical antenna representation of the body. The deterministic model has already been presented, and a new perspective is given in this work. Taking into account the constant public interest in the potentially adverse health effects of ubiquitous EM radiation, this paper extends the analysis from the stochastic point of view. The presented examples provide the mean values of the induced current and the induced electric field with confidence margins, taking into account the a priori determined variability of the input parameters. It is demonstrated that the extreme values of the input RVs are not sufficient to predict the dispersion of the output around the mean trend. Taking into account the confidence intervals obtained from stochastic models, basic restrictions in EM dosimetry can be established with better precision.

Figure 1: Cylindrical model of the human body exposed to an electromagnetic field.

Figure 2: Induced axial current in the human body with conductivity as RV: ⟨I⟩ is the stochastic expected value, σ is the standard deviation, and I is the value obtained from the numerical deterministic model.

Figure 3: Induced axial current in the human body with length as RV: ⟨I⟩ is the stochastic expected value, σ is the standard deviation, and I is the value obtained from the analytical (a) and numerical deterministic model (b).

Figure 4: Induced axial current in the human body with radius as RV: ⟨I⟩ is the stochastic expected value, σ is the standard deviation, and I is the value obtained from the analytical (a) and numerical deterministic model (b).

Figure 5: Induced axial current in the human body: ⟨I⟩ is the stochastic expected value, σ is the standard deviation, and I is the value obtained from (a) the analytical deterministic model with two input RVs, length and radius, and (b) the numerical deterministic model with three input RVs, length, radius, and conductivity.

Figure 6: Induced electric field in the human body with three RVs (length, radius, and conductivity): ⟨E_int⟩ is the stochastic expected value, σ is the standard deviation, and E_int is the value obtained from the numerical deterministic model. (a) The confidence interval of 95.45% is given as ±2σ margins around the mean. (b) The confidence interval of 99.7% is given as ±3σ margins around the mean.

Table 1: SC computation of high-order statistical outputs for the one-RV case (random output f). ⋃ indicates that the random variable is uniformly distributed.

Table 3: Sigma-weight pairs for each of the random variables.
4,299.4
2016-01-01T00:00:00.000
[ "Physics" ]
The Discrete Type-II Half-Logistic Exponential Distribution with Applications to COVID-19 Data

1. College of Statistical & Actuarial Sciences, University of the Punjab, Pakistan; 2. School of Statistics, Minhaj University Lahore, Pakistan; 3. Department of Statistics, Lahore College for Women University, Lahore, Pakistan; 4. Department of Mathematics, College of Science & Arts, King Abdulaziz University, P.O. Box 344, Rabigh 21911, Saudi Arabia; 5. Department of Statistics, Mathematics and Insurance, Benha University, Benha 13511, Egypt

Introduction

In late 2019, a novel coronavirus disease (COVID-19) was first reported in China and was declared an epidemic by the World Health Organization (WHO) (Lee et al., 2020). The epidemic has mostly been controlled in China since March 2020 but continues to affect public health and socioeconomic conditions in all other countries of the world. One of the major reasons for controlling the disease was China's strategy of effective use of its health care system and publicity of awareness programs among people, which played a vital role in the control of the COVID-19 pandemic. However, the major source of its rapid spread is human-to-human contact.

It is well recognized that life duration in the real world is related to continuous non-negative lifetime distributions. However, it is sometimes difficult to obtain samples from a continuous distribution. Even when measurements are taken on a continuous (ratio or interval) scale, the observed data are effectively discrete, because they are usually measured to only a finite number of decimal places and cannot assume all points within an interval, so discrete distributions are more appropriate for such observations. Therefore, it is rational to assume that these observations come from a discretized distribution constructed from a continuous one (Chakraborty, 2015). During the last few decades, many continuous lifetime distributions have been proposed and studied; however, research work on discrete distributions has not been as widely addressed. The discretization of continuous lifetime models has been applied to derive discrete lifetime distributions. Discretization of a continuous distribution keeps a similar functional form of the survival function (SF), and many reliability properties remain the same (Nakagawa and Osaki, 1975). Recently, methods of generating discrete analogues of continuous distributions have been considered by several authors, for example, the infinite series discretization method (Good, 1953; Kulasekera and Tonkyn, 1992; Kemp, 1997; Sato et al., 1999); the survival discretization approach, which has the interesting feature of keeping the original functional form of the SF (Nakagawa and Osaki, 1975); the hazard function discretization approach (Stein, 1984); the compound two-phase method (Chakraborty, 2015); and the reversed hazard function discretization method (Ghosh et al., 2013). Some notable recently proposed discrete distributions include the discrete Weibull (Nakagawa and Osaki, 1975), the discrete skew-Laplace, and the discrete Laplace distributions.

The main objective of this article is to provide a new flexible two-parameter discrete model, called the discrete type-II half-logistic exponential (DTIIHLE) distribution, using the survival discretization approach. The DTIIHLE distribution can be utilized to model over-dispersed count data sets.
Its hazard rate function (HRF) can be decreasing or unimodal. We derive some of its properties in explicit form, such as the quantile function (QF), moments, and the probability generating function (PGF). The two parameters are estimated via maximum likelihood (ML), and a simulation study is conducted to explore the performance of the ML estimators. The importance of the new DTIIHLE distribution is illustrated by analyzing two real-life COVID-19 data sets, which represent the numbers of COVID-19 deaths in Pakistan and Saudi Arabia.

The rest of the article is structured as follows. In Section 2, the DTIIHLE distribution is defined, with some plots of its probability mass function (PMF) and HRF. Some properties of the DTIIHLE distribution are provided in Section 3. The ML approach is adopted to estimate the DTIIHLE parameters in Section 4. Simulation results are presented in Section 5 to explore the behavior of the introduced estimators. To validate the use of the DTIIHLE distribution in fitting real-life count data, two data sets from the medical field are fitted in Section 6. Finally, some conclusions are presented in Section 7.

The DTIIHLE Distribution

The SF of the type-II half-logistic exponential (TIIHLE) distribution, with scale parameter $\alpha$ and shape parameter $\lambda$, takes the form given in Elgarhy et al. (2019). A discrete analogue of any continuous random variable can be obtained using different discretization approaches; a review of such techniques can be found in Chakraborty (2015). The most common discretization method is the one preserving the functional form of the SF. Let $X$ be a continuous random variable (RV) with SF $S(x)$. The corresponding PMF of the discrete RV reduces to

$$P(X = x) = S(x) - S(x + 1), \qquad x = 0, 1, 2, \ldots$$

Applying this discretization method to the continuous TIIHLE distribution generates the corresponding DTIIHLE model, whose PMF follows from the TIIHLE SF with $p = e^{-\alpha}$, $0 < p < 1$. The corresponding cumulative distribution function (CDF), $F(x) = P(X \leq x)$, of the DTIIHLE distribution follows directly from the SF, and from the CDF one can easily derive the QF. The SF, HRF, reverse HRF, and second rate of failure of the DTIIHLE model are obtained in the same way, and a recurrence relation can be used to generate probabilities from the DTIIHLE distribution.

The PMF plots for different values of its parameters are presented in Figure 1, and the HRF plots are displayed in Figure 2. The plots reveal that the PMF can be unimodal, while the HRF can be decreasing or unimodal.

The PGF and Moments

The PGF $G_X(Z)$ of the DTIIHLE distribution follows from the PMF. Differentiating $G_X(Z)$ with respect to $Z$ and setting $Z = 1$, we obtain the mean of the DTIIHLE distribution. Differentiating $G'_X(Z)$, $G''_X(Z)$, and $G'''_X(Z)$ with respect to $Z$ in the same way and setting $Z = 1$, we obtain the higher factorial moments, from which the moments about the origin can be calculated. The dispersion index (DI) is defined by $DI = \sigma^2/\mu$. Table 1 shows descriptive measures of the DTIIHLE distribution for different parameter values. One can note that the skewness decreases as the value of the shape parameter increases. If the value of the DI is greater than 1, then the proposed distribution is applicable to over-dispersed data.
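Since the exact TIIHLE survival function display did not survive extraction, the sketch below keeps things generic: it shows how the survival discretization method turns any continuous SF into a PMF and how the dispersion index is then computed numerically. The particular SF used here is a plain exponential placeholder (whose discretization is the geometric distribution), not the TIIHLE form; swapping in the TIIHLE SF would yield the DTIIHLE PMF.

```python
import numpy as np

def discretize(sf, x_max=200):
    """Survival discretization: P(X = x) = S(x) - S(x + 1), x = 0, 1, 2, ...
    Works for any continuous survival function `sf`."""
    x = np.arange(x_max + 1)
    pmf = sf(x) - sf(x + 1)
    return x, pmf

# Placeholder SF (plain exponential, p = exp(-alpha)); replace with the
# TIIHLE survival function to obtain the DTIIHLE PMF.
alpha = 0.3
sf = lambda x: np.exp(-alpha * x)

x, pmf = discretize(sf)
mean = np.sum(x * pmf)
var = np.sum(x ** 2 * pmf) - mean ** 2
print(f"total mass = {pmf.sum():.4f}")   # ~1 if x_max is large enough
print(f"mean = {mean:.3f}, variance = {var:.3f}, DI = {var / mean:.3f}")
```

For this placeholder the DI comes out well above 1, which illustrates the over-dispersion check described in the text.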
If the value of the DI is greater than 1, then the proposed distribution is applicable for over-dispersed data.

Parameter Estimation Let x₁, x₂, …, xₙ be a random sample of size n from the DTIIHLE model. Then the log-likelihood function follows from the PMF, and the first derivatives wrt α and p do not admit closed-form roots, so the ML estimates (MLEs) of α and p are obtained using numerical methods.

Simulation Study A comprehensive simulation study has been conducted by generating 10,000 samples of various sample sizes from the DTIIHLE distribution. In particular, we generate the samples using several combinations of the parameters (α, p). The average estimates (MLEs), mean square errors (MSEs), and coverage probabilities (CPs) are listed in Table 2. The MLEs are quite stable and very close to the true values of the parameters, and Table 2 shows that they are consistent. The MLEs, standard errors (SEs), and 95% confidence intervals (C.I.) for the estimates are listed in Tables 3 and 5 for the two data sets, respectively. Some goodness-of-fit measures, including the log-likelihood (ℓ), AIC, BIC, and the KS statistic, are presented in Tables 4 and 6 for the respective data sets. From Tables 4 and 6, it is observed that the DTIIHLE distribution outperforms all other fitted models in analyzing the numbers of deaths in Pakistan and Saudi Arabia: it provides the best fit to the analyzed data among all competitive distributions. Figures 3 and 4 display the PP plots for all the competitive distributions for the two data sets, and they support the findings in Tables 4 and 6.

Conclusion In this article, a two-parameter discrete distribution, called the discrete type-II half-logistic exponential (DTIIHLE) distribution, is proposed to model COVID-19 data in Pakistan and Saudi Arabia. Several mathematical properties of the DTIIHLE model are discussed. Its parameters have been estimated using the maximum likelihood approach. A simulation study was carried out to check the performance of the estimators based on MSEs and CPs. The DTIIHLE model is utilized to model two real-life data sets on the number of COVID-19 deaths in Pakistan and Saudi Arabia. The new DTIIHLE model is an important addition to the existing discrete distributions in the literature: it has the lowest values of the goodness-of-fit measures among all discrete competing models and is hence the best among the competitive distributions.
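As a sketch of how the MLEs can be computed numerically, the following code maximizes the DTIIHLE log-likelihood with scipy's bounded L-BFGS-B optimizer. It reuses the assumed survival form from the previous sketch; the starting values, bounds, and the simulated example are illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.optimize import minimize

def tiihle_sf(x, alpha, p):          # assumed form, as in the previous sketch
    g = 1.0 - p**x
    return (1.0 - g**alpha) / (1.0 + g**alpha)

def dtiihle_pmf(x, alpha, p):
    return tiihle_sf(x, alpha, p) - tiihle_sf(x + 1.0, alpha, p)

def neg_log_likelihood(theta, data):
    alpha, p = theta
    pmf = dtiihle_pmf(np.asarray(data, dtype=float), alpha, p)
    if not np.all(np.isfinite(pmf)) or np.any(pmf <= 0):
        return 1e10                  # penalize invalid parameter values
    return -np.sum(np.log(pmf))

def fit_dtiihle(data, start=(1.0, 0.5)):
    """MLEs of (alpha, p) via bounded quasi-Newton optimization."""
    return minimize(neg_log_likelihood, x0=np.asarray(start), args=(data,),
                    method="L-BFGS-B",
                    bounds=[(1e-6, None), (1e-6, 1.0 - 1e-6)])

# Simulated example: draw a sample by inverting the CDF on a grid.
rng = np.random.default_rng(0)
grid = np.arange(5000, dtype=float)
cdf = 1.0 - tiihle_sf(grid + 1.0, 0.8, 0.4)
sample = np.searchsorted(cdf, rng.uniform(size=500))
print(fit_dtiihle(sample).x)         # estimates should be near (0.8, 0.4)
```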
2,016
2021-12-01T00:00:00.000
[ "Mathematics", "Medicine" ]
ESMSec: Prediction of Secreted Proteins in Human Body Fluids Using Protein Language Models and Attention The secreted proteins of human body fluid have the potential to be used as biomarkers for diseases. These biomarkers can be used for early diagnosis and risk prediction of diseases, so the study of secreted proteins of human body fluid has great application value. In recent years, the deep-learning-based transformer language model has transferred from the field of natural language processing (NLP) to the field of proteomics, leading to the development of protein language models (PLMs) for protein sequence representation. Here, we propose a deep learning framework called ESM Predict Secreted Proteins (ESMSec) to predict three types of proteins secreted in human body fluid. The ESMSec is based on the ESM2 model and attention architecture. Specifically, the protein sequence data are firstly put into the ESM2 model to extract the feature information from the last hidden layer, and all the input proteins are encoded into a fixed 1000 × 480 matrix. Secondly, multi-head attention with a fully connected neural network is employed as the classifier to perform binary classification according to whether they are secreted into each body fluid. Our experiment utilized three human body fluids that are important and ubiquitous markers. Experimental results show that ESMSec achieved average accuracy of 0.8486, 0.8358, and 0.8325 on the testing datasets for plasma, cerebrospinal fluid (CSF), and seminal fluid, which on average outperform the state-of-the-art (SOTA) methods. The outstanding performance results of ESMSec demonstrate that the ESM can improve the prediction performance of the model and has great potential to screen the secretion information of human body fluid proteins. Introduction The diverse array of proteins found within human body fluids serve as biomarkers for detecting and monitoring diseases, enhancing diagnostic accuracy, and assessing risk levels [1][2][3][4].Because of this, the study of proteins secreted by human body fluids will become very necessary.The first identification of proteins in human body fluids dates back to 1937 [5].Since then, with the development of proteomics technology, more proteins can be identified from human body fluids through techniques such as two-dimensional gel electrophoresis (2-DE) [6] and mass spectrometry (MS) [7].For example, M.G. et al. identified a series of differentially expressed proteins associated with pancreatic cancer through pancreatic fluid analysis [8].Similarly, D.C. et al. utilized MS methods to discover biomarkers in 1000 human blood samples [9].However, high-precision mass spectrometry detection is often limited by expensive experimental costs.Therefore, fast and cost-effective bioinformatics-based research methods offer a new perspective for predicting body fluid protein profiles. Machine-learning-based protein prediction methods have made significant strides in predicting various body fluids.Among these, the support vector machine (SVM) [10] prediction method stands out as a representative approach.This method employs binary classification to determine whether a protein is secreted into a specific human body fluid.The training process involves gathering a wide range of common protein features (sequence length, autocorrelation, hydrophobicity, charge, subcellular localization, longest disorder region, etc.) 
and then utilizing the recursive feature elimination (RFE) method based on SVM to select important protein features.Subsequently, the SVM model is employed to model proteins in body fluids.This approach has been successfully applied to studies involving saliva and urine [11,12].While the feature-based model has shown promising results, it can be influenced by manual intervention during feature selection.In response to this limitation, neural network models leveraging deep learning (DL) techniques, such as convolutional neural networks (CNNs), fully connected neural networks, gated recurrent units (GRUs), and transformers, have been adopted to predict proteins in human bodily fluids.The advent of DL, fueled by increased data availability and high-capacity computer hardware, poses a challenge to traditional machine learning methods.One of the main advantages of DL lies in its ability to better represent raw data through nonlinear transformations, enabling more effective learning of hidden patterns within the data.Studies on transformer architecture [13] have demonstrated its efficacy in tackling large-scale computing challenges posed by excessively long sequences, surpassing CNNs in various tasks.For instance, Du et al. proposed a DL model for predicting secretory proteins in plasma and saliva [14].Shao et al. learned complex features from protein sequence information through a CNN, a bidirectional gated recurrent unit (BGRU), and other networks, and completed the prediction of human body fluids.The model built was called DeepSec, which improved the prediction performance.However, the amount of protein data in body fluids is limited, so the model will be overfitted in many human fluids.Huang et al. extracted information from protein sequences through the densely connected convolutional networks (DenseNet) model and transformer architecture, etc. and proposed the DenSec model for predicting secreted proteins in cerebrospinal fluid (CSF) [15].The prediction methods of DL use complex network structures, which result in a large number of parameters in the model.He et al. propose MultiSec, which predicts body fluids through multi-task learning, using less computational complexity to improve prediction accuracy [16].The above studies are based on position-specific scoring matrix (PSSM) information to predict proteins, and it is necessary to propose a more efficient prediction method using other information to make the prediction more accurate. 
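A minimal scikit-learn sketch of the SVM-RFE pipeline described above is shown below. The synthetic feature matrix, the linear kernel, and the specific step size are illustrative assumptions; the cited studies describe the approach, not this exact code.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical precomputed feature matrix (proteins x features) and labels
# indicating whether each protein is secreted into the target body fluid.
rng = np.random.default_rng(0)
X = rng.random((200, 120))
y = rng.integers(0, 2, size=200)

# RFE needs per-feature weights, hence a linear-kernel SVM in the
# elimination step; 50 retained features mirrors the studies' narrative.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=50, step=5)
model = make_pipeline(selector, SVC(kernel="linear", max_iter=300))
print(cross_val_score(model, X, y, cv=5).mean())
```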
In recent years, deep-learning-based language models (LMs) have achieved remarkable advancements in natural language processing (NLP).These deep learning LMs excel in tasks like predicting the next word in a sentence or reconstructing corrupted text to understand language based on contextual cues.Similarly, protein language models (PLMs) based on the transformer architecture have found success in the field of proteomics.PLMs are trained on extensive datasets of protein sequences to capture underlying evolutionary patterns and extract semantic information embedded within the protein sequences [17,18].One of the basic pre-processing steps in NLP is tokenization, the splitting of the protein amino acid sequences into individual units of atomic information called tokens.Most NLP models use words as tokens, but some models use characters as tokens.Twenty basic amino acids make up human proteins, so the characters 'A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', and 'Y' are used to represent amino acids ('A' for alanine, etc.), which are modeled with a character-level PLM model.At present, the widely adopted PLMs include evolutionary scale modeling (ESM) [19] series models and ProtTrans series models.For instance, ESM-1b is a high-capacity transformer with protein sequence as input and hyperparameter optimization training.Post-training, the model's output representation contains information about the structure, function, homology, and other secondary levels of the protein, and this information can be manifested by linear projection.The ProtTrans models have been developed to predict protein secondary structures for tasks like subcellular localization and membrane relative water solubility prediction.Notably, ProtT5 has achieved breakthroughs in secondary structure prediction, surpassing state-of-the-art methods without requiring multiple sequence alignment (MSA) or evolutionary information. In this paper, we propose a model for predicting protein secretion in human body fluids, ESMSec.This model is composed of ESM2 (pre-trained esm2_t12_35M_UR50D, the embedding layer accepts a vocabulary of length 33, each word is embedded as a vector of length 480, and the fill tag index is 1 (<pad>)) [20] and attention architecture.Initially, the data are sampled in a balanced manner according to different body fluids, and the balanced protein amino acid sequence is input into the ESM2 model to extract the feature information of the sequence.Then, the extracted information is used as the input of multi-head attention architecture, and the output information is input to the feedforward neural network (FFN) and finally through the fully connected layer for binary classification.We selected plasma, CSF, and seminal fluid, which are three important and ubiquitous fluids, for the experiment.ESMSec achieved relatively accurate prediction in all human body fluids, with an average area under the receiver operating characteristic curve (AUC) of 0.9157, and it is proved that the ESM can extract protein secretion information. 
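For readers who want to reproduce the embedding step, a minimal sketch using the fair-esm package is given below. The first-500-plus-last-500 truncation and the fixed 1000 × 480 output follow the description in this paper; the zero-padding policy and the example sequence are assumptions.

```python
import torch
import esm

# Load the pre-trained 12-layer, 35M-parameter ESM2 model used by ESMSec.
model, alphabet = esm.pretrained.esm2_t12_35M_UR50D()
model.eval()
batch_converter = alphabet.get_batch_converter()

def embed(seq: str, max_len: int = 1000) -> torch.Tensor:
    """Return a (max_len, 480) embedding from the last hidden layer."""
    if len(seq) > max_len:                     # keep first and last 500 residues
        seq = seq[:max_len // 2] + seq[-(max_len // 2):]
    _, _, tokens = batch_converter([("protein", seq)])
    with torch.no_grad():
        out = model(tokens, repr_layers=[12])
    rep = out["representations"][12][0, 1:-1]  # drop BOS/EOS token positions
    pad = torch.zeros(max_len - rep.shape[0], rep.shape[1])
    return torch.cat([rep, pad], dim=0)        # zero-pad to a fixed 1000 x 480

print(embed("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ").shape)  # torch.Size([1000, 480])
```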
Performance of ESMSec in Three Human Body Fluids In our study, ESMSec was developed using Python 3.10 and implemented primarily using PyTorch 1.12 and Scikit-Learn 1.2 [21,22].The model training and testing were performed on a GeForce RTX 2080 Ti GPU.Comparison experiments were conducted on a Windows 11 platform.Firstly, to address the imbalance in positive and negative sample data across different human body fluids, a balanced sampling strategy was employed.This strategy generated three groups of data for each body fluid type, with a random selection ratio of 6:2:2 for training, validation, and testing datasets, respectively.Secondly, the pretrained ESM2 model was utilized to extract features from the processed protein amino acid sequences, with sequence length controlled at 1000 and an output shape of 1000 × 480.Subsequently, a multi-head attention architecture and feedforward neural network (FFN) with a four-layer fully connected structure were used for protein sequence classification and prediction.The classification loss for each body fluid was calculated accordingly.The Adam optimizer was utilized to optimize the loss function for secreted proteins in each body fluid, with a learning rate set at 0.00005.ESMSec underwent 20 iterations with the training datasets, and the iteration with the highest accuracy (ACC) score for each body fluid was selected based on the corresponding validation datasets.After training, the ESMSec was evaluated on a testing dataset of three human body fluids, including plasma, CSF, and seminal fluid.Table 1 presents the benchmark test results for ESMSec on these testing datasets.ESMSec achieved performance ranging from 83.25% to 84.86% in ACC, 83.00% to 84.35% in F-measure (F1), 66.53% to 69.87% in Matthews correlation coefficient (MCC), and 90.73% to 92.76% in AUC.This indicated that ESMSec obtained good performance in the three body fluids simultaneously. Evaluating the Performance of Classification We conducted a performance comparison of ESMSec with various existing methods, including SVM-based, decision tree (DT)-based, DNN-based, DeepSec-based, MultiSecbased, and ESM-1b-based [19] methods.The hyperparameters for these methods were chosen based on the MCC metric from the validation dataset, and their performance on the testing dataset is reported as the benchmark for comparison. • SVM is established based on protein features because SVM cannot directly model protein sequences, Initially, computational tools (UniProt, Profea, etc.) 
are employed to calculate features based on protein amino acid sequences, and the SVM-RFE method is applied for the iterative selection of collected features. The top 50 significant features are then chosen using the T-test and false discovery rate (FDR), and the SVM classifier is used to predict protein secretion in specific body fluids. The maximum number of iterations is 300, and the default values are used for other parameters;
• The modeling process of the DT-based method is similar to the SVM method. The depth of the DT model is 7, and the minimum number of samples required to split the internal nodes is 20;
• In the DNN model, the input feature dimension is 50, the number of neurons is 500, the number of layers is 4, the learning rate is 0.0001, and the batch size is 32;
For our method, the dropout in our FFN is set to 0.3 in plasma and seminal fluid and 0.2 in CSF. We employ the same model architecture to train three models. To ensure experimental fairness, we also compare with the pre-trained ESM-1b model, which shares the same structure as ESMSec. Table 2 presents the average benchmarks for ESMSec and other methods. As depicted in the table, our classifier outperforms other methods on average in ACC, F1, MCC, and AUC. (The methodological evaluation index scores of the three body fluids are shown in Tables A1-A4 of Appendix A.) Figure 1 illustrates the average performance of the three body fluids across the seven classifiers, with our method achieving the highest overall average score. Considering various evaluation metrics, ESMSec demonstrates superior accuracy in predicting the likelihood of identifying secreted proteins compared to other methods, further confirming the ESM's efficacy in extracting distinctive protein characteristics. The best results are in bold. To assess the effectiveness of our proposed ESMSec approach, we conducted ablation experiments, and the results are shown in Figure 2, providing a comprehensive insight into our method's performance. The figure clearly shows that our method outperforms the ESM2 method on average for the three body fluid testing datasets. This finding underscores the advantage of incorporating attention architecture in protein classification.
Prediction of Potential Secreted Proteins ESMSec was utilized to identify potential secreted proteins in three types of human body fluids. We collected 8691, 9714, and 9049 proteins from plasma, CSF, and seminal fluid, respectively, which were not experimentally verified. We retrained the ESMSec and, for the prediction, labeled proteins with a probability greater than 0.5 as potential proteins in the corresponding human body fluid; the predicted number of proteins is 5919 in plasma (as shown in Supplementary Materials Table S1), 6728 in CSF (as shown in Supplementary Materials Table S2), and 5885 in seminal fluid (as shown in Supplementary Materials Table S3). Table 3 shows the information of the five proteins with the highest prediction probability for each body fluid. In addition, through consulting the relevant literature, a total of seven of the most important proteins in the three body fluids predicted by us have been verified as corresponding body fluid proteins by experiments.

Discussion ESMSec is a computational model that leverages a PLM to predict secreted proteins across various human body fluids. It utilizes the ESM to extract embedded features, which are then processed through a multi-head attention mechanism and a fully connected neural network. Compared to methods based solely on protein features and PSSM, ESMSec demonstrates higher prediction accuracy and superior generalization performance. This highlights the capability of the ESM in extracting information related to secreted proteins in human body fluids. On average, the F1 metrics for the three human fluids show that our method outperforms the best-performing method (MultiSec) from other approaches by about 3.39% on the testing dataset. This indicates that ESMSec effectively represents proteins across the protein space. By incorporating the attention framework, our model can better capture long-distance dependencies, leading to the identification of 5919, 6728, and 5885 potential secreted proteins in the three body fluids. These findings open up new possibilities for future biological experiments.
By comparing models with different parameters in the ESM2 series, we finally selected a 12-layer model with a parameter count of 35M, which outperformed the other parameter count models on average across all body fluids.Due to limited hardware resources, only four ESM2 models could be used for experiments (ESM2_t33_650M runs on GeForce RTX 3090 GPU).The average evaluation indexes of the three body fluid testing datasets are shown in Table 4 (The index scores of the three body fluids on ESM2 models of different sizes are shown in Tables A5-A7 of Appendix A).However, it is evident from all the experimental methods that the MCC index is generally low, while the AUC index remains high.This analysis suggests that the imbalance in the classification threshold may be the cause, as the MCC value can fluctuate with changes in this threshold.Taking all this information into account, we have full confidence in the predictive capabilities of our method.Although ESMSec has achieved good prediction results, there is still room for optimization.In the future, we will improve the performance of prediction accuracy through input methods such as simultaneous input and collect more data to test different body fluids.We also need to investigate further the specificity of the protein in different body fluids and work to improve the interpretability of its entry into body fluids to make this approach more meaningful. Data Collection The data utilized in this study were sourced from the Human Body Fluid Proteome (HBFP) open database, which collected 15,480 experimentally verified proteins in body fluids from 241 articles.We specifically focused on plasma, CSF, and seminal fluid from this database for our experiments and searched proteins secreted by the three types of human body fluids and corresponding sequences from the database.Based on these data, three sub-datasets were constructed respectively.For each data subset, the positive sample was the experimentally verified in body fluid protein in the database, and the negative sample was generated by the positive sample data and the Pfam protein family information [23].Specifically, first, all human proteins are obtained from the UniProt database and mapped to the corresponding Pfam family, then all the Pfam family information is found in the positive sample dataset, all the proteins in the Pfam family are removed, and finally, for each family, if the protein belongs to the family and the family intersects with the secreted protein, it is not taken as a negative sample, and if the protein does not belong to any family that meets the conditions, it is taken as a negative sample of the current body fluid.To ensure an accurate evaluation of our protein prediction method, we filtered out redundant proteins using a sequence similarity approach.Initially, we calculated the sequence similarity of all proteins in the dataset using the PSI-CD-HIT program.Subsequently, one protein with over 90% sequence similarity was randomly retained, and the remaining proteins were removed as redundant [24].The number of positive and negative samples for each body fluid is shown in Table 5. 
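The negative-sample construction and redundancy filtering described above can be summarized in schematic Python. The pfam_of mapping and the similarity function are placeholders: in the paper the similarity computation is performed by the PSI-CD-HIT program at a 90% threshold, so this greedy loop only sketches the selection logic.

```python
def build_negative_set(all_proteins, positives, pfam_of):
    """Keep a protein as a negative sample only if none of its Pfam families
    intersects the families of the experimentally verified (positive) set."""
    positive_fams = {fam for prot in positives for fam in pfam_of.get(prot, ())}
    return [prot for prot in all_proteins
            if prot not in positives
            and not positive_fams.intersection(pfam_of.get(prot, ()))]

def drop_redundant(proteins, similarity, threshold=0.90):
    """Greedy stand-in for PSI-CD-HIT: keep one representative among proteins
    whose pairwise sequence similarity exceeds the threshold."""
    kept = []
    for prot in proteins:
        if all(similarity(prot, other) <= threshold for other in kept):
            kept.append(prot)
    return kept
```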
Considering the varying numbers of positive and negative samples, we applied balanced sampling to even out the data distribution. Each sub-dataset was then randomly divided into training, validation, and test datasets in a 60%, 20%, and 20% ratio, respectively. The training dataset was utilized for method training, the validation dataset for parameter selection, and the testing dataset for evaluating prediction performance. The distribution data of proteins in human body fluids are shown in Table 6, and the range of sequence lengths in each body fluid is shown in Table 7.

Model In this paper, ESMs and attention architecture were used to predict secreted proteins in plasma, CSF, and seminal fluid. The overall architecture is shown in Figure 3. First, the input to the model is protein sequence information rather than the traditional PSSM, and the features of the protein sequence are captured through the ESM2 model. Finally, the multi-head attention architecture with a fully connected network and FFN is utilized as the classifier of whether the protein enters the corresponding body fluid.

Feature Extraction Since the ESM has been utilized for feature extraction of protein amino acid sequences, this model was also used for feature extraction of the sequences of protein data in body fluids in this study. The collected protein amino acid sequences undergo a pre-processing step where sequences are standardized to a fixed length. If a protein sequence exceeds 1000 residues, we concatenate the first 500 residues with the last 500 residues to ensure uniformity. Subsequently, we tokenize the sequence information using the ESM. (We chose a length of 1000 for the experiment. Truncation discards information for long proteins, but only about 12% of our data are affected, so the negative impact on our method should be small.) Finally, we extract the embedded information from the last layer of the protein language model (PLM) to obtain a representation of dimension 1000 × 480.

Classification The classification module calculates the probability that the protein will be secreted into a certain body fluid based on the features extracted by the final ESM module. A batch size of 32 was utilized, resulting in a dimension of 32 × 1000 × 480. Subsequently, the relationships within the sequence are captured by a multi-head attention mechanism, then feature extraction and cross-layer information transfer are carried out by a fully connected feedforward network with a residual connection, and layer normalization is used to stabilize the training process of the model: Attention(Q, K, V) = softmax(QK^T/√d_k)V, where Q = K = V = X is the embedded feature of the ESM2 output, repeated three times as the query, key, and value, and the scaling factor is 1/√d_k. The result is output after being calculated by the attention mechanism: MultiHead(X, X, X) = Concat(head_1, . . ., head_8)W^O, (2) x = LN(X + MultiHead(X, X, X)), h = LN(x + FFN(x)). MultiHead is the multi-head attention operation, the LN layer is a normalization operation, and FFN is a feedforward neural network consisting of two linear transformations, FFN(x) = GELU(xW_1 + b_1)W_2 + b_2, where the first layer expands the dimension by a factor of four with the GELU activation in between. W and b are the weight matrices and biases, respectively, and h is the result of the second LN layer. In the pooling layer, the outputs of maximum pooling and average pooling are concatenated to obtain twice the initial dimension, and the result q is put into the final fully connected layers: f = max(0, q · µ + ν). (9) This is a fully connected block composed of four hidden layers that carries out nonlinear transformations, where µ and ν are the weight vector and the bias. For prediction, we use softmax as the activation function at the output layer and cross-entropy as the loss function for binary classification: L = −(1/n) Σ_{i=1}^{n} [y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i)], where ŷ and y, respectively, represent the predicted value and the true value, and n is the number of proteins. When predicting proteins in body fluids, the category corresponding to the larger output is selected as the prediction label.

Evaluation In the experimental comparison, we selected four evaluation indicators: ACC, F1, MCC, and AUC. Higher values indicate better classification performance for all of these measures. The metrics are defined as follows: ACC = (TP + TN)/(TP + TN + FP + FN), F1 = 2TP/(2TP + FP + FN), MCC = (TP · TN − FP · FN)/√((TP + FP)(TP + FN)(TN + FP)(TN + FN)), and AUC is the area under the receiver operating characteristic curve, where TP, TN, FP, and FN represent the numbers of protein samples corresponding to true positives, true negatives, false positives, and false negatives, respectively.

Conclusions In this work, we present the novel method ESMSec for predicting secreted proteins in plasma, CSF, and seminal fluid, which consists of an ESM2 model with 12 layers and 35M parameters and an attention architecture. The embedded PLM extracts the protein amino acid sequence information in body fluids without using standard feature extraction methods such as MSA. The method is evaluated on a dataset from the HBFP database, and the experimental results show that our method has a better predictive effect than other existing methods in terms of average evaluation indicators. In addition, we also introduced the processing of positive and negative data samples, compared against SVM, DT, DNN, DeepSec, MultiSec, and ESM-1b, and carried out an ablation experiment using only the ESM2 model. The ACC of our method reached 83.90%, and the results for F1, MCC, and AUC are better than those of the other methods. In the Discussion section, we also explained why we chose the ESM2 model with 12 layers and 35M parameters. Features extracted by PLMs carry more information than those extracted by the feature extraction methods used in existing research. From the data point of view, our method still has shortcomings: the use of PLMs requires more training data, and body fluids with less related data cannot achieve good results. We will continue to collect more data and test more proteins entering other body fluids to improve the accuracy of predicting proteins entering body fluids.
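A compact PyTorch rendering of the classification head described above is sketched below: multi-head self-attention with a residual connection and layer normalization, a four-times-expanded GELU feedforward block, concatenated max and average pooling, and a fully connected classifier. The 480-dimensional embeddings, 8 heads, and dropout values come from the text; the hidden-layer widths of the four-layer head are assumptions.

```python
import torch
import torch.nn as nn

class ESMSecHead(nn.Module):
    """Sketch of the ESMSec classification head on 1000 x 480 ESM2 features."""
    def __init__(self, d_model=480, n_heads=8, dropout=0.3):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln1 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(                  # two linear maps, 4x expansion
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model))
        self.ln2 = nn.LayerNorm(d_model)
        self.fc = nn.Sequential(                   # assumed four-layer FC head
            nn.Linear(2 * d_model, 256), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 16), nn.ReLU(),
            nn.Linear(16, 2))                      # two logits for softmax/CE loss

    def forward(self, x):                          # x: (batch, 1000, 480)
        a, _ = self.attn(x, x, x)                  # query = key = value = x
        y = self.ln1(x + a)                        # residual + layer norm
        h = self.ln2(y + self.ffn(y))              # second residual block
        pooled = torch.cat([h.max(dim=1).values, h.mean(dim=1)], dim=-1)
        return self.fc(pooled)                     # train with nn.CrossEntropyLoss

print(ESMSecHead()(torch.randn(32, 1000, 480)).shape)  # torch.Size([32, 2])
```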
Figure 1. Comparative baseline methods for test datasets corresponding to 3 human body fluids. (a) In the plasma testing dataset; (b) in the CSF testing dataset; (c) in the seminal fluid testing dataset. (ACC: Accuracy, F1: F-measure, MCC: Matthews correlation coefficient, AUC: Area under curve.)
Figure 2. Results of the ablation experiment.
Figure 3. ESMSec architecture diagram ((a) Data Collection; (b) Feature extraction; (c) Classification).
Table 1. ESMSec benchmarking on independent testing datasets of 3 human body fluids.
• DeepSec bypasses feature collection and selection, opting for end-to-end training via protein PSSM data. It addresses the imbalance issue through a bagging strategy, training multiple networks simultaneously to identify secreted proteins within a single body fluid, which demands significant computational time and resources. Fifty filters of different sizes {1, 5, 7} were utilized to extract features and combined to obtain a 1000 × 150 feature map, with a learning rate of 0.0001;
• MultiSec adopts a balanced sampling strategy to solve the imbalance problem, trains the network through the multiple gradient descent algorithm (MGDA), builds a lightweight CNN to extract feature information, and uses a multi-task method to predict protein secretion. It extracts protein features at different scales via multiple parallel convolution layers, incorporating four parallel convolution and pooling operations. The filter sizes are {3, 5, 7, 9}, with 128 filters and a learning rate of 0.0001.
Table 2. Average benchmarks for ESMSec and other methods compared on 3 independent testing datasets of human body fluids. The best results are in bold.
Table 3. Protein information with the highest prediction probability in 3 body fluids.
Table 4. The evaluation indexes of ESM2 series models compared on 3 body fluid testing datasets.
Table 5. The number of samples of 3 human body fluids.
Table 6. Partitioning data of proteins in 3 human body fluids.
Table 7. Sequence length range of 3 human body fluids.
Supplementary Materials: The following supporting information can be downloaded at: https://github.com/BBT-123/ESMSec (accessed on 20 April 2024).
Author Contributions: Conceptualization, Y.W.; methodology, Y.W. and H.S.; validation, H.S. and K.H.; formal analysis, N.S.; investigation, H.S. and W.H.; data curation, H.S. and K.H.; writing-original draft preparation, H.S.; writing-review and editing, Y.W., H.S., N.S., W.H., Z.Z. and Q.Y.; visualization, Z.Z. and Q.Y.; supervision, L.H.; project administration, Y.W. and L.H.; funding acquisition, Y.W. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the National Natural Science Foundation of China, grant number 62072212, and the Development Project of Jilin Province of China, grant numbers 20220508125RC,
Table A3. On the independent testing datasets, 7 methods for 3 kinds of human body fluid were compared on the MCC evaluation index. The best results are in bold.
Table A4. On the independent testing datasets, 7 methods for 3 kinds of human body fluid were compared on the AUC evaluation index.
Table A5. Index scores of 3 body fluid testing datasets on the ESM2_t6_8M model.
Table A6. Index scores of 3 body fluid testing datasets on the ESM2_t30_150M model.
Table A7.
Index scores of 3 body fluid testing datasets on ESM2_t33_650M model.
6,234.2
2024-06-01T00:00:00.000
[ "Computer Science", "Medicine", "Biology" ]
Very light dilaton and naturally light Higgs boson We study very light dilaton, arising from a scale-invariant ultraviolet theory of the Higgs sector in the standard model of particle physics. Imposing the scale symmetry below the ultraviolet scale of the Higgs sector, we alleviate the fine-tuning problem associated with the Higgs mass. When the electroweak symmetry is spontaneously broken radiatively à la Coleman-Weinberg, the dilaton develops a vacuum expectation value away from the origin to give an extra contribution to the Higgs potential so that the Higgs mass becomes naturally around the electroweak scale. The ultraviolet scale of the Higgs sector can be therefore much higher than the electroweak scale, as the dilaton drives the Higgs mass to the electroweak scale. We also show that the light dilaton in this scenario is a good candidate for dark matter of mass mD ∼ 1 eV − 10 keV, if the ultraviolet scale is about 10−100 TeV. Finally we propose a dilaton-assisted composite Higgs model to realize our scenario. In addition to the light dilaton the model predicts a heavy U(1) axial vector boson and two massive, oppositely charged, pseudo Nambu-Goldstone bosons, which might be accessible at LHC. Introduction The standard model (SM) of particle physics, which has been very successful in describing the interactions of elementary particles, is finally completed by the discovery of its last missing piece, the Higgs particle, at the large hadron collider (LHC) [1,2]. The properties of the Higgs particle are measured to be consistent with the standard model prediction, better than at the percent level by the subsequent experiments [3,4]. But, nonetheless, the SM is widely regarded as an effective theory below the electroweak scale ∼ 1 TeV, set by the vacuum expectation value (vev) of Higgs fields. Since the SM does not have any obvious symmetry to protect the mass of Higgs particle, which is very sensitive to short distance physics, it needs to be highly fine-tuned, if the ultraviolet (UV) scale of Higgs physics is much higher than TeV [5]. New physics at TeV is hence currently actively explored at the LHC to find a hint for physics beyond the standard model, though no clear signals have been found yet. While signals for new physics are actively being probed at LHC, the lower limit of new particle masses has been pushed up to almost 2 TeV at the Run 2 of LHC [6,7], putting most models of physics beyond the standard model (BSM) such as walking technicolor, composite Higgs or supersymmetry in great tension with LHC. We might therefore need to seek alternative solutions to the naturalness problem of the standard model, one of the JHEP02(2018)102 basic guiding principles for new physics. Recently there has been proposed an interesting mechanism to select the Higgs mass dynamically without introducing new physics at the electroweak scale [8]. The idea is to construct a model that has many (or infinite) local minima for a wide range of a field that cosmologically relaxes into a local minimum at the electroweak scale, starting from a local minimum at the ultraviolet (UV) scale of the Higgs sector, to give a small mass of the electroweak scale to Higgs fields. The QCD axion fits this criterion, if it couples to the Higgs sector, since its potential is periodic under the shift symmetry to have infinitely many local minima, and hence the field is called relaxion. 
In this paper we propose a very minimal model which assumes only very light dilaton in addition to the standard model particles up to a UV cutoff scale, M , much higher than the electroweak scale. Our model provides the naturally light Higgs boson, though its UV scale is much higher than the electroweak scale. To discuss the mechanism for our model, we first assume that our model is an effective theory below the cutoff scale, M . One possible candidate for the UV completion of our model, as discussed later, is a dilatonassisted composite Higgs model, based on Banks-Zaks gauge theories with a quasi infrared (IR) fixed point [9], where both the Higgs boson and the dilaton are (composite) Nambu-Goldstone bosons from strong dynamics in UV, corresponding to the spontaneously-broken global symmetry [10] and scale symmetry, respectively. Being a Nambu-Goldstone boson, associated with spontaneously-broken scale symmetry at the UV scale of the Higgs sector, the dilaton in our model does a similar role as relaxion that alleviates the naturalness problem of the standard model Higgs. The standard model is scale invariant classically, if one turns off the Higgs mass or the relevant operators in the Higgs potential. In a classic paper [11], however, Coleman and Weinberg (CW) showed that, even if one imposes the scale invariance at the quantum level in the Higgs sector of SM, the Higgs field could develop a vev to break the electroweak symmetry spontaneously by the radiative corrections. Since the value of Higgs vev is determined by the dimensional transmutation of the quartic coupling in the CW mechanism, it should be chosen by experiments; φ = v ew 246 GeV to account for the weak interactions. The standard model fermions and the weak gauge bosons get mass from the Higgs vev through the Yukawa and gauge couplings with the Higgs fields. The problem of CW mechanism is however that the Higgs mass turns out to be too small, compared to the experimental value, m H 125 GeV, unless one introduces extra bosons [12,13]. Furthermore, the standard model has to be fine-tuned from the intrinsic ultraviolet scales such as the Landau pole associated with the weak hypercharge to keep the scale invariance [14]. Our model relies on the electroweak symmetry breakingà la Coleman-Weinberg but evades these problems by embedding the Higgs sector into an almost stable conformal sector at the UV scale of the standard model, which leads to a very light dilaton that generates additional contributions to the Higgs mass of the order of the Higgs vev, φ = v ew . The ultraviolet theory of the Higgs sector in the standard model is assumed to be near conformal such as the gauge theories with the Banks-Zaks infrared (quasi) fixed point [9] and the scale symmetry is spontaneously broken near the IR fixed point to generate a very light dilaton as a Nambu-Goldstone boson. The dilaton of the UV sector then drives the Higgs mass to a small value, controlled by the scale anomaly or the vacuum energy of the JHEP02(2018)102 UV sector, once the Higgs field develops a vev. At low energy our model contains only the standard model with very light dilaton, which is therefore different from previous models [14][15][16][17][18][19][20][21] that attempt to solve the naturalness problem by imposing the scale invariance in the Higgs sector, not broken spontaneously. 
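As a numerical check of the statement that the CW mechanism yields too small a Higgs mass, the following sketch evaluates the one-loop gauge-boson contribution in what we take to be the standard textbook form, m_H^2 = 3(2M_W^4 + M_Z^4)/(8π^2 v_ew^2); the formula is an assumption of this sketch rather than an expression quoted from the paper.

```python
import math

v_ew, M_W, M_Z = 246.0, 80.4, 91.2   # GeV, the values quoted in the text

# Assumed one-loop CW relation (gauge bosons only, no top quark):
mH2 = 3.0 * (2.0 * M_W**4 + M_Z**4) / (8.0 * math.pi**2 * v_ew**2)
print(f"m_H ~ {math.sqrt(mH2):.1f} GeV")  # ~10 GeV, far below the observed 125 GeV
```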
We also show that the light dilaton of our model abundantly constitutes the dark matter in our universe once it is non-thermally produced at early universe by the vacuum misalignment of the dilaton field. Finally we propose a specific dilaton-assisted composite Higgs model to realize our scenario that the very light dilaton derives the Higgs mass to the electroweak scale. Very light dilaton and scale anomaly The standard model (SM) of elementary particles is scale-invariant in the classical limit, if one turns off the Higgs mass term (and also the cosmological constant term, which we neglect in our discussions), but the scale symmetry is broken radiatively by quantum effects. Since our model assumes a spontaneously-broken scale symmetry in the UV theory of the Higgs sector, one is led at low energy to an extension of the standard model that still preserves the scale symmetry at the operator level up to the scale anomaly, though spontaneously broken. Coleman-Weinberg potential We first review the (unsuccessful) scenario of Coleman and Weinberg [11] that Higgs field might be a dilaton in the standard model. CW showed that even if one imposes in the standard model the scale-invariance by taking the quadratic term in the Higgs effective potential to vanish the scale symmetry is spontaneously broken by radiative corrections. At one-loop, for instance, the Higgs field develops an effective potential to have a minimum away from the origin [11,22], where g and g are the couplings of SU (2) L × U (1) Y electroweak gauge interactions, respectively. 1 By expanding the potential around the minimum, one finds the Higgs mass to be 2 for v ew = 246 GeV and the weak gauge boson masses, M W = 80.4 GeV and M Z = 91.2 GeV. The scale-invariant Coleman-Weinberg potential leads to too small Higgs mass, compared to the measured value, 125 GeV. Furthermore, if one includes the top quark, the one-loop effective potential changes its sign and the CW mechanism does not work. As we show however in our model, where the scale symmetry is spontaneously broken at the ultraviolet scale of the Higgs sector, a quadratic JHEP02(2018)102 term in the Higgs potential is induced at the electroweak scale to generate the Higgs mass of the electroweak scale, when the dilaton develops a small vacuum expectation value, similar to the relaxion mechanism, provided that the CW mechanism works, having extra bosons [13]. A model for light dilaton Below the UV scale of the standard model, which we denote M , taken to be much larger than 1 TeV, the Higgs potential is given as, neglecting possible irrelevant operators, where the quadratic term is not protected in general and naturally of order of the UV scale, M . 2 Being parameters of the low-energy effective theory, the Higgs mass M and the quartic coupling λ include all the ultraviolet contributions from the UV theory above the cutoff scale that are relevant at low energy. Especially the mass term includes the contributions from the massive modes in the UV theory or certain intrinsic scales of the UV sector such as the scale for the conformal phase transition in the case of conformal UV theories [23]. In the case of the composite Higgs model, that we will focus on later as a possible model that realizes our mechanism, the Higgs mass is protected by the shift symmetry and generated by the Higgs interactions with the standard model particles such as top quark or EW gauge bosons, the mass term in eq. 
(2.3) then should be regarded as the counter term to the SM contributions to the Higgs mass that contains the effect of UV physics. Since the scale symmetry is assumed to be spontaneously broken near the infrared fixed point of the UV theory like the Banks-Zaks theories, the symmetry breaking scale is much higher than the dynamical or infrared scale of the UV theory, Λ SB M , known as Miransky scaling [24] or Berezinskii-Kosterlitz-Thouless (BKT) scaling [25,26]. Our UV model is therefore almost scale-invariant for the wide range of scales, M < E < Λ SB . (See figure 1.) When the scale symmetry is spontaneously broken, the dilatation current creates a Nambu-Goldstone boson, the dilaton, denoted as σ, out of vacuum: where f is the dilaton decay constant, f ∼ Λ SB and the dilatation current D µ = θ µν x ν with the energy-momentum tensor θ µν that couples to gravity. 3 In order for the dilaton to behave like the relaxion, it has to couple to the Higgs fields. One natural way to achieve this is to assume that both the dilaton and the Higgs boson come from the same UV dynamics. all the scale-symmetry violating terms in the Higgs sector are coupled to the dilaton field. The (anomalous) Ward identity of the scale symmetry fixes how the dilaton couples to the Higgs fields: Consider the following Green's function, Upon integrating over all spacetime points, after taking the total divergence, one gets If one assumes the second term in eq. (2.7) is saturated at low energy by the dilaton, known as the hypothesis of partially conserved dilatation currents (PCDC), then one gets which shows that the strength to emit the dilaton by φ † φ is 2/f as realized in the effective theory by 2 f σφ † φ, the first nontrivial term in the expansion of the nonlinear coupling of the dilaton to the quadratic Higgs fields, e 2σ/f φ † φ. The Higgs sector of the standard model now becomes at low energy (E < M ), suppressing the Higgs couplings to fermions, where φ is the Higgs field and D µ is the electroweak covariant derivative. The potential V (σ, φ) in the effective theory contains the scale anomaly term V A and the Higgs potential term V 0 with its coupling to the dilaton, JHEP02(2018)102 We note that because the dilaton transforms nonlinearly under the scale transformation in the SM sector, σ → σ + σ 0 , d 4 x M 2 e 2σ/f φ † φ is scale invariant and the scale anomaly term changes accordingly, E vac → E vac e 4σ 0 /f . 4 The scale anomaly term in the potential is determined by the anomalous Ward identity of scale symmetry as [28] where E vac ∼ M 4 is the vacuum energy density of the UV theory of the Higgs sector 5 and f is the dilaton decay constant, f M . The low energy theorem associated with the scale anomaly determines the dilaton mass, m 2 D = 16 |E vac | /f 2 . As long as the scale symmetry is broken very close to the (quasi) infrared fixed point of the UV theory, there will be a large separation of two scales f ∼ Λ SB and M , the dynamical (or infrared) scale of the (quasi) scale-invariant UV theory. We then have |E vac | ∼ M 4 f 4 and the dilaton can be very light [29,30]. Since the UV completion of the Higgs sector is assumed to be (quasi) scale-invariant, one can impose the scale invariance at the cutoff scale on the standard model in the sense of Bardeen's naturalness [14]. 6 We therefore choose the renormalization condition or the counter terms in eq. 
(2.10) such that the quadratic term of the Higgs field vanishes in the full 1PI effective potential [11]: This renormalization process is stable under any UV contributions because the very light dilaton, that coupled to Higgs fields, enjoys the shift symmetry, σ → σ + σ 0 . (See more on this in appendix A.) The effective potential then becomes where V CW (φ) is the Coleman-Weinberg potential for (massless) Higgs fields. At one-loop where a is a constant, to be chosen such that φ = v ew , and β is nothing but the one-loop beta function of the Higgs quartic coupling, λ, assumed to be positive by having extra 4 The anomalous Ward identity with θ µ µ = 4Evac(χ/f ) 4 and determines the dilaton potential VA(σ). 5 The vacuum energy Evac in eq. (2.12), that contributes to the dilaton mass, is due to the vev of the order parameter of the scale symmetry, subtracting out the usual perturbative contributions, so that it vanishes when the vev vanishes [31]. 6 Our renormalization condition at the cutoff scale is technically different from that of Bardeen's proposal. bosons [13]. As the Higgs sector flows into the infrared, the Higgs field develops a vev by the CW mechanism [11]. As soon as the Higgs field gets a vev, it drives the minimum of the dilaton potential away from the origin, σ = 0. When the Higgs field develops a vev, φ = v ew , it breaks the scale symmetry explicitly and the dilaton potential gets an additional contribution (see figure 2) where V CW (v ew ) now depends on σ from the minimization of V (σ, φ). The dilaton field therefore develops a vev away from the origin. For the one-loop CW potential one finds where we have taken the vacuum energy, |E vac | M 2 v 2 ew . The Higgs mass then becomes, neglecting small mixing with the dilaton, Since the dynamical scale or the infrared scale of the UV theory of the Higgs sector is assumed to be of order of M , its vacuum energy |E vac | ≈ cM 4 , where the constant c is given by the structure of the UV theory. In the case of Banks-Zaks gauge theories with a quasi IR fixed point, the constant depends only on the gauge group and the number of fermions [31]. Thus the Higgs mass is naturally given as the electroweak scale or v ew . In our model, therefore, having the scale-invariant UV theory of the Higgs sector, that gives the coupling between the dilaton and the Higgs boson, the dilaton dynamically relaxes the Higgs mass to the electroweak scale, giving the naturally light Higgs boson or m H M . Without severe fine-tuning we have therefore dynamically raised the ultraviolet JHEP02(2018)102 scale of the Higgs sector M to be much higher than the electroweak scale, alleviating the naturalness problem associated with the Higgs mass. The scale symmetry does a crucial role in our mechanism. Having the very light dilaton at the UV scale M , the Higgs sector is almost scale-invariant. The curvature of the Higgs potential, therefore, has to be chosen to vanish at the origin by the renormalization condition to be consistent with the scale symmetry, σ → σ + σ 0 . However, once the Higgs sector flows into IR, the Higgs field develops a vev, φ = v ew by the CW mechanism, generating the IR scale. The Higgs vev therefore sets the scale for the Higgs mass. Very light dilaton as dark matter Besides the naturalness problem that we discussed, another strong motivation for physics beyond the standard model is to account for the dark matter that constitutes about 23% of the total energy of our present universe. 
According to the current standard big-bang cosmology, cold dark matter with a cosmological constant, so-called the ΛCDM fits the current observations such as the cosmic microwave background (CMB) best [32,33]. A very light dilaton has been shown to be one of the best candidates for the cold dark matter [29,30]. Life time The dilaton couples to the standard model particles, once they get mass by the Higgs mechanism that breaks the electroweak symmetry. The light dilaton therefore decays into two photons through a loop process (and also into neutrinos and gravitons, which we neglect), as shown in figure 3. The decay rate is given at one loop for the very light dilaton as where C is approximately a constant times the electric charge squared, summed over all charged particles in the standard model. We estimate the lifetime of the dilaton τ D 10 20 sec 5 C (3.2) In order for the dilaton to be long-lived to become a dark matter candidate of mass, m D = 10 keV with decay constant f = 10 12 GeV, the UV scale has to be M ∼ 10 TeV Relic abundance of dilaton Since the dilaton is weakly coupled, it will not be in thermal equilibrium with other particles in early universe, when it is produced. However, by the vacuum misalignment the light dilaton will be non-thermally produced in early universe. If we take the degree of the misalignment to be θ os = δσ/f , the relic density of the dilaton will be at the time of oscillation from the misalignment Since the relic density at present is given as ρ σ (T 0 ) = ρ σ (T os ) · s(T 0 ) s(Tos) , where s(T ) is the entropy density at temperature T , we find the dilaton dark matter contributes to energy of our present universe as [29,30] where g * (T os ) is the effective degrees of freedom of early universe at the temperature for the coherent dilaton field starting to oscillate. Very light dilaton as dark matter has been studied in [29,30] in the context of walking technicolor. The light dilaton in our model might be detected in similar experiments such as a microwave cavity experiment under strong magnetic fields. Dilaton-assisted composite Higgs model In this section we propose a specific model to realize our scenario that the dilaton relaxes the Higgs mass to the electroweak scale. This model is based on a composite Higgs model, where the Higgs boson is a pseudo Nambu-Golstone boson, associated a global symmetry, broken spontaneously by strong dynamics at M 4πv ew [10,34,35]. The Higgs mass is protected by the (approximate) shift symmetry that is radiatively broken by the electroweak interactions, giving the loop-suppressed Higgs mass, whereg is the coupling of the electroweak interactions. On top of these features of the composite Higgs, our model needs to exhibit a (quasi) IR fixed point to have a very light dilaton at low energy that couples to the Higgs fields. Consider a composite Higgs model based on the SU(2) gauge theory with N f Dirac fermions ψ i (i = 1, 2, · · · , N f ) of the fundamental representation [36,37] and with N s Dirac JHEP02(2018)102 fermions χ i (i = 1, 2, · · · , N s ) in the symmetric second-rank ternsor representation [38]. Since the spinors are pseudo real in the SU(2) gauge theory, the global symmetry is SU(2n) for n (massless) Dirac fermions, which breaks down to Sp(2n), once the fermion bilinears form condensates [39]. 
The Higgs field is then identified as one of the Goldstone bosons living on the coset space, SU(2n)/Sp(2n), where the SM gauge group is embedded in its unbroken subgroup, SU(2) L × U(1) Y ⊂ Sp(2n) so that the Higgs fields transform correctly under the SM gauge symmetry. To see whether our composite Higgs model is near the conformal window or not, we study the two-loop beta function of the SU(N ) gauge theory with N f fundamental Dirac fermions and N s Dirac fermions in the second-rank symmetric tensor representation, that is given as with the coefficient b and c, known as The theory will be asymptotically free if b > 0 and will have a IR fixed point near at α * = −b/c, if c < 0 and the chiral symmetry is unbroken. The chiral symmetry of the Dirac fermions will break at the critical couplings, α c (f ) and α c (s) for the fermions in the fundamental representation and in the symmetric second-rank tensor, respectively, if they are smaller than the would-be IR fixed point α * . The critical couplings are given in the ladder approximation [40,41] as . For the SU(2) gauge theory with N f = 8 fundamental Dirac fermions the lattice results show that the theory is in the conformal window, flowing into a stable IR fixed point [42]. This is consistent with our two-loop beta function analysis, which shows that the critical coupling for the chiral symmetry breaking α c (f ) = 1.40 is bigger than the IR fixed point, α * ≈ 1.26. Let us consider another gauge theory in the conformal window; the SU(2) gauge theory with N f = 4 Dirac fermions in the fundamental representation and N s = 1 Dirac fermion in the symmetric second-rank tensor representation. Since the critical couplings for both representations, α c (f ) = 1.40 and α c (s) = 1.05 are larger than the IR fixed point, α * 0.84, the theory will be in the conformal window, according to the analysis based on the two-loop beta function. The theory will flow from the asymptotically free theory to the IR fixed point. The coupling never becomes strong enough to break the chiral symmetry. Now, we gauge half of the flavor of the fundamental Dirac fermions so that they become bi-fundamental under SU(2) 1 × SU(2) 2 (See Table 1.). For the bi-fundamental fermions ψ i (i = 1, 2) the attractive forces are additive and thus the critical couplings for the chiral symmetry breaking will be smaller than α c (f ) = 1.40 in JHEP02(2018)102 the ladder approximation, since the Bethe-Salpeter Kernel for the fermion-bilinear in the scalar channel is approximately in the short-distance limit [41] where α i is the coupling of SU(2) i at the symmetry breaking scale, Λ SB , and x 2 is the distance square of the four-dimensional Euclidean space. However, unlike the SU(2) 1 gauge theory, the SU(2) 2 gauge coupling runs, becoming strong at low energy (E Λ SB ). Therefore we tune α 2 to become close to the α c (f ) − α * ≈ 0.56 at E = Λ SB so that the chiral symmetry of the bi-fundamental fermions breaks dynamically very near the IR fixed point of the SU(2) 1 gauge theory. 7,8 Once the bi-fundamental fermions get dynamical mass, they will decouple at low energy and the SU(2) 1 coupling becomes stronger and stronger to break the SU(2) χ chiral symmetry of χ ab down to U(1) χ and we will have two extra Goldstone bosons, Φ χ . By identifying the unbroken U(1) χ as the U(1) em , the Goldstone bosons are oppositely charged and get mass ∼ eM χ , where e is the electric charge and M χ ∼ M is the scale for the SU(2) χ chiral symmetry breaking. 
As the SU(2) 1 gauge theory flows into the IR, the bi-fundamental fermions get condensed at Λ SB , breaking the chiral symmetry near the (quasi) IR fixed point. The coupling of SU(2) 1 will show the walking behavior, since its beta function β 1 (α) ≈ 0 for the wide range of scales, shown in figure 4, where the dynamical (IR) scale is given by the Miransky or BKT scaling, We see that the dynamical scale M can be arbitrarily small, compared the chiral symmetry breaking scale Λ SB , if α 1 is close to the IR fixed point α * . Our composite Higgs model therefore is almost scale-invariant for energy M < E < Λ SB and there should be a dilaton associated with spontaneous breaking of scale symmetry, when the bi-fundamental Dirac fermions get condensed at Λ SB to break its global symmetry SU(4) down to Sp(4). 7 Since the bi-fundamental fermions are charged under both gauge groups, the β-function will have mixings between two gauge couplings. At two-loop β(α) = −bα 2 − cα 3 +bα 2 α2 for SU(2)1, whereb = 3/(8π 2 ). The mixing will shift in perturbation the value of b to b −bα2. However, since α2 is at most 0.56 before the chiral symmetry breaking, the mixing does not change the IR fixed point much and thus negligible for our discussions. 8 We note that by gauging partially the flavor symmetry of the gauge theory, as in our case, one can move most of the gauge theories in the conformal window to the broken phase very near the conformal window. Since the vacuum manifold M = SU(4)/Sp(4) ∼ SO(6)/SO (5) is five dimensional, there will be five Goldstone bosons. If we embed the standard model gauge group into the unbroken subgroup Sp(4) ∼ SO(5) ⊃ SU(2) × U(1) , the five Goldstone bosons can be decomposed into one SU(2) L doublet, to become the SM Higgs boson, and one real CP -odd singlet scalar [37,43,44]. The broken generator associated with the singlet scalar is nothing but the axial fermion number U (1) ψ A of the bi-fundamental fermion ψ. Assuming it is non-anomalous, 9 we weakly gauge it so that the singlet is absorbed into the U(1) ψ A gauge boson. The U(1) ψ A gauge boson gets mass ∼ g ψ M 4πv ew , with g ψ being the U(1) ψ A coupling, and decouples from the SM particles at low energy. When the SU(2) × U(1) subgroup in the unbroken global symmetry is gauged, the electroweak interaction contributes to the vacuum energy, lifting the degeneracy of the vacuum manifold. The correction to the vacuum energy at the leading order in the electroweak coupling expansion is given as (see figure 5), after the renormalization, JHEP02(2018)102 where ∆ µν is the electroweak gauge boson propagator and J µ (x) are the electroweak currents, denoted as ⊗ in figure 5. The composite Higgs field φ is nonlinearly realized, with the decay constant, f φ ∼ M by the Pagels-Stokar formula [27]. In addition to the SM gauge bosons, the SM fermions will contribute to the vacuum energy through the Yukawa interactions. To calculate, for instance, the top Yukawa contributions to the vacuum energy, one needs to calculate the two-point function of the composite operators Γ(x) or Γ † (x) of the UV theory, denoted as the bullets in figure 5, that source or sink the top-quark mass term, connected by the top-quark propagators. The zero mode of JHEP02(2018)102 the composite operator Γ(x) for the top quark should be correctly normalized to give the top Yukawa coupling, y t . 
10 Expanding the vacuum energy of the composite Higgs due to the vacuum misalignment in powers of the Higgs fields, φ, one finds the Higgs effective potential at the scale M becomes for where The one-loop beta-function for the Higgs quartic coupling β is adjusted to be positive in the composite Higgs model. For instance, the U(1) ψ A gauge-boson contribution to the one-loop beta-function to the quartic coupling is given as which makes the beta-function β > 0 as long as g ψ 2y t . In the dilaton-assisted composite Higgs model the (one-loop) effective potential for the composite Higgs fields and the dilaton is given as where we have chosen the renormalization condition that is consistent with the scale symmetry [14], To find the vacuum configuration we minimize the effective potential: f 2 e 4σ/f = 0, (4.14) 10 The SM fermions are external to the composite Higgs dynamics. Unlike the gauge interactions, the Yukawa interaction of SM fermions will be absent in the composite Higgs, unless the interaction for the Yukawa couplings is incorporated in the UV theory to begin with. Here we assume that the Yukawa couplings are generated in the UV theory through the four-Fermi interactions between the SM fermions and the fermions in the UV theory of the composite Higgs, similar to the extended technicolor [45,46]. JHEP02(2018)102 which gives using the relation E vac = −cM 4 of the composite Higgs model. Neglecting the small mixing with the dilaton, the Higgs mass becomes Since in our composite Higgs model c 1.2 [31], either ξ or β has to be O(1) or the U(1) ψ A coupling g 2 ψ /4π 0.73 to give m H 125 GeV. By coupling the Higgs sector to the light dilaton, we have shown that the Higgs mass is given by the IR scale, m H ∼ v ew , not by the UV scale, M . This seems mysterious but the scale symmetry is working behind. As the Higgs sector flows into IR, M → M , the dilaton transforms σ → σ + f ln (M /M ) to keep the renormalization condition (4.13) until the Higgs field gets the vev, φ = v ew which breaks the scale symmetry. Hence the UV scale of the composite Higgs can be arbitrarily high. The cosmological or phenomenological requirements on the dilaton mass and its decay constant, however, will constrain the scale of the model. In our model with the SU(2) 1 × SU(2) 2 composite-Higgs gauge group, if we take for instance M = 10 TeV and α 1 = 0.98 α * , the dilaton decay constant f ∼ Λ SB 3 × 10 10 TeV to give the dilaton mass The dilaton of this mass range is shown to be a good candidate for the dark matter [29,30]. Discussions and conclusion In this paper we propose a mechanism that very light dilaton naturally derives the Higgs mass to the electroweak scale, if the Higgs field gets the electroweak vevà la Coleman-Weinberg mechanism and couples to the light dilaton. The scale symmetry, associated with the light dilaton, does a crucial role in our mechanism that the Higgs mass is given by the Higgs vev, v ew , the IR scale of the Higgs sector. We then show that the dilaton-assisted composite Higgs model, based on the SU(2) 1 × SU(2) 2 gauge theory with two Dirac fermions in the bi-fundamental representation and one in the symmetric tensor representation of SU(2) 1 , realizes our scenario. Both the dilaton and the composite Higgs are shown to arise as (pseudo) Nambu-Goldstone bosons, once the Dirac fermions in the bi-fundamental representation get condensed. The standard model is then coupled through the very light dilaton to the quasi-conformal composite Higgs model at M 1 TeV. 
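The dilaton-mass formula quoted near the end of this passage is likewise lost. Assuming the dilaton mass comes entirely from the anomaly-induced potential V A (σ) = |E vac | e^{4σ/f}(4σ/f − 1) of appendix A.1, with |E vac | = cM 4 , its curvature at the origin gives m D 2 = 16cM 4 /f 2 , i.e. m D = 4√c M 2 /f. With the benchmark values in the text this lands at roughly 10 keV, consistent with the keV-scale dark-matter window quoted in the conclusion:

```python
import math

def dilaton_mass_GeV(M_TeV, f_TeV, c=1.2):
    """m_D = 4*sqrt(c)*M^2/f, from m_D^2 = V_A''(0) = 16|E_vac|/f^2 with |E_vac| = c*M^4.

    Assumes the anomaly-saturating potential V_A(sigma) = |E_vac|*e^{4 sigma/f}*(4 sigma/f - 1);
    c ~ 1.2 is the value quoted for this composite Higgs model.
    """
    M, f = M_TeV * 1e3, f_TeV * 1e3  # convert to GeV
    return 4.0 * math.sqrt(c) * M**2 / f

# Benchmark in the text: M = 10 TeV and f ~ Lambda_SB ~ 3e10 TeV
m_D = dilaton_mass_GeV(10.0, 3e10)
print(f"m_D ~ {m_D:.2e} GeV (~{m_D * 1e6:.0f} keV)")  # ~1.5e-5 GeV, i.e. O(10 keV)
```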
By imposing the scale symmetry on the standard model, the naturalness problem of Higgs mass is alleviated to the UV scale, M . When the electroweak symmetry is radiatively broken by the CW mechanism, the dilaton potential gets an extra contribution from the Higgs vev, which then drives the dilaton vev away from the origin. The non-vanishing dilaton vev relaxes the Higgs mass naturally to be of the electroweak JHEP02(2018)102 scale, as the vacuum energy or the scale anomaly of the scale-invariant UV theory of the Higgs sector is of the UV scale, M . 11 At the electroweak scale, much below the UV scale, the model contains the standard model and only one extra particle, the very light dilaton, which is shown to be a good candidate for dark matter in the universe. If we take for instance the UV scale M ∼ 10 − 100 TeV and the dilaton decay constant f ∼ 10 12−16 GeV, the dilaton mass becomes m D ∼ 1 eV − 10 keV, which is then long lived enough and abundantly produced by the vacuum misalignment to constitute dark matter in our universe. Finally, the dilaton-assisted composite Higgs model predicts in addition to the very light dilaton a heavy (axial) vector boson of mass ∼ g ψ M and two, oppositely charged, pseudo Nambu-Goldstone bosons (SM singlet) of mass ∼ eM . If the UV scale of our composite Higgs model is around a few 10 TeV, their mass will be a few TeV or so, accessible at LHC. JHEP02(2018)102 |E vac | ∼ M 4 , one can write down the low-energy effective theory of dilaton that saturates the scale anomaly: where χ describes the small fluctuations around the asymetric vacuum, with χ = f at the vacuum. The dilatation current in the dilaton effective theory is given as The scale anomaly then takes [48], using the equations of motion for χ, From eqs. (A.3) and (A.5) we get We note that the anomaly equation (A.5) does not fix the constant c 0 . But, our choice of the vacuum, χ = f , fixes c 0 = 1. For the nonlinear realization of the dilatation symmetry we rewrite χ = f e σ/f to get with V A (σ) = |E vac | e 4σ/f (4σ/f − 1). A.2 Dilaton and scale invariance of the Higgs sector To solve the fine-tuning problem of Higgs mass, we embed the Higgs sector to a scaleinvariant theory in UV. The UV theory is assumed to break the scale symmetry spontaneously, generating dynamically a condensate θ µ µ ∼ M 4 . The scale M defines the intrinsic scale of the UV theory such as the dynamical mass in eq. (4.8) or the scale of phase transitions in [23]. Integrating out all the modes above the dynamical scale M in the Higgs UV sector, the low energy effective theory of the Higgs fields is given as, turning off all the SM interactions except the Higgs self interactions and the dilaton coupling, where the ellipsis denotes the higher order terms of φ † φ, suppressed by powers of M . Note that we have included in the effective theory the the dilaton coupling to the Higgs fields, JHEP02(2018)102 We see that because of the shift symmetry of the dilaton field the Higgs sector is scale-invariant up to the logarithmic violation through the constant c m and the quartic coupling λ. Hence, as long as the shift symmetry of the dilaton is good enough, the Higgs quadratic coupling M 2 φ should be unphysical. This property is not spoiled under any radiative corrections from the UV physics of the Higgs sector with spontaneously broken scale-symmetry, because one can always compensate the radiative corrections by shifting the dilaton field, as we have shown in this appendix A.2. 
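As a consistency check on the effective potential derived in appendix A.1, a short symbolic computation confirms that V A (σ) = |E vac | e^{4σ/f}(4σ/f − 1) is stationary exactly at σ = 0, where it takes the value −|E vac | and has curvature 16|E vac |/f 2 , the mass relation used in the sketch above:

```python
import sympy as sp

sigma, f, E = sp.symbols("sigma f E_vac", positive=True)
V_A = E * sp.exp(4 * sigma / f) * (4 * sigma / f - 1)

dV = sp.diff(V_A, sigma)
print(sp.simplify(dV))                        # 16*E_vac*sigma*exp(4*sigma/f)/f**2
print(sp.solve(sp.Eq(dV, 0), sigma))          # [0]: the only stationary point is sigma = 0
print(V_A.subs(sigma, 0))                     # -E_vac: vacuum energy at the minimum
print(sp.diff(V_A, sigma, 2).subs(sigma, 0))  # 16*E_vac/f**2: the dilaton mass squared
```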
The constraint on the Higgs mass, studied in [23], therefore does not apply to our model that has light dilaton from the spontaneously broken scale-symmetry, noted also in [49]. A.3 The renormalization condition m 2 φ (Λ) = 0 Now we turn on the SM interactions of the Higgs fields, which will break the scale symmetry that the Higgs-dilaton sector enjoys. From the effective potential (2.10) or the effective Lagrangian density (A.8) we calculate the one-particle irreducible (1PI) effective potential for the Higgs fields by integrating out all SM particles and possibly some new particles to get at one-loop, neglecting the higher order terms, where the loop momentum is cut off at Λ ∼ M and the effective potential is expanded in powers of Λ with their coefficients c i and β being functions of Higgs couplings to SM particles and also to new additional heavy particles that the UV sector of Higgs fields might have. 13 Though the scale symmetry is explicitly broken by SM interactions, one can still impose the renormalization condition (2.13) that the Higgs quadratic term in the 1PI effective potential at Λ vanishes by redefining the dilaton field σ → σ = σ +σ 0 with a suitable choice ofσ 0 : The choice of the renormalization condition, eq. (A.16) or eq. (2.13), is consistent with the scale symmetry that the Higgs-dilaton Lagrangian of eq. (A.8) enjoys and also with the fact that the Higgs quadratic term is protected above the intrinsic scale M of the UV sector by the symmetry. 14 We emphasize that the choice ofσ 0 in eq. (A.16) is equivalent to the choice of the counter term in the Coleman-Weinberg potential, since the quadratic term M 2 φ e −2σ 0 /f φ † φ represents the effects of the UV sector of Higgs fields. Therefore, if we 13 If one applies strictly to our discussion Bardeen's original proposal for the naturalness problem [14], the only consistent quadratic terms allowed in the radiative corrections in (A.15) are ones due to heavy particles associated with the UV sector of the Higgs fields, but not the one from the regulator. Here, for simplicity, without any confusion the correction c1Λ 2 stands collectively for all radiative corrections to the quadratic term that the effective theory receives. 14 One may argue that the renormalization condition (A.16) is not compatible with the UV theory of the Higgs sector that has some new massive particles or nonperturbative scale. But, we emphasize that what matters is whether or not one can maintain the renormalization condition at all orders in perturbation. JHEP02(2018)102 fix the UV cutoff the Higgs sector to be Λ, the intrinsic scale of the UV theory at vacuum is determined by the condition with σ = 0 c m M 2 = c 1 Λ 2 . (A. 17) We note that the renormalization condition eq. (A.17) holds for any cutoff Λ because the Higgs quadratic term in the effective potential eq. (A.15) is scale-covariant: Under the scale transformation Λ → Λ the dilaton field transforms σ → σ = σ + f ln (Λ /Λ) and the quadratic term becomes This is equivalent to saying that the Callan-Symmanzik equation for the 1PI two-point function of Higgs fields in Fourier transforms becomes Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Observations of recurrent cosmic ray decreases during solar cycles During solar cycle 22, the modulation of several hundred MeV galactic cosmic rays (GCRs) by recurrent and transient cosmic ray decreases was observed by the Ulysses spacecraft on its descent towards the solar south pole. In solar cycle 23, Ulysses repeated this trajectory segment during a similar phase of the solar cycle, but with opposite heliospheric magnetic field polarity. Since cosmic ray propagation in the heliosphere should depend on drift effects, we determine in this study the latitudinal distribution of the amplitude of recurrent cosmic ray decreases in solar cycles 22 and 23. As long as we measure the recurrent plasma structures in situ, we find that these decreases behave nearly the same in both cycles. Measurements in the fast solar wind, however, show differences: in cycle 22 ( A>0) the recurrent cosmic ray decreases show a clear maximum near 25 ◦ and are still present beyond 40, whereas we see in cycle 23 ( A<0) neither such a pronounced maximum nor significant decreases above 40 . In other words: the periodicity in the cosmic ray intensity, which can be clearly seen in the slow solar wind, appears to vanish there. Theoretical models for drift effects, however, predict quite the opposite behaviour for the two solar cycles. To closer investigate this apparent contradiction, we first put the visual inspection of the data onto a more solid basis by performing a detailed Lomb (spectral) analysis. The next step consists of an analysis of the resulting periodicities at 1 AU in order to distinguish between spatial and temporal variations, so that we can obtain statements about the question in how far there is a correlation between the in-situ data at 1 AU and those measured by Ulysses at larger latitudes. We find a good correlation being present during cycle 22, but not for cycle 23. As one potential explanation for this behaviour, we suggest the difference in the coronal hole structures between the cycles 22 and 23 due to a large, stable coronal hole structure, which is present during cycle 22, Correspondence to: P. Dunzlaff<EMAIL_ADDRESS>but not in cycle 23. We support this possibility by comparing Yohkoh SXT and SOHO EIT maps. Introduction During a magnetic storm Forbush (1937) discovered that the cosmic ray intensity measured simultaneously at two stations went down by several percent and showed a characteristic profile on a time scale of 2-3 days. These short term decreases in the GCR flux were first thought to be of Terrestrial origin, but the observation of a second type of decreases, recurring with a period of 27 days (cf. Simpson, 1954), also suggested an influence of the solar dipole field. In his detailed analysis Simpson (1954), however, could show that both types of cosmic ray decreases are caused by processes in the interplanetary medium: the first type of decreases is stronger, show a more irregular structure and occur more or less as singular events as mentioned above. These so-called transient or Forbush decreases are caused by interplanetary coronal mass ejections (ICMEs). The cosmic ray decreases of the second type have their origin in corotating interaction regions (CIRs), which are generated by a fast solar wind ramming into a slower flowing one ahead, leading to a structure being stable for several solar rotations. 
Cosmic ray decreases that are caused by CIRs can hence repeatedly be observed in space, so that they appear as groups with a periodicity of 27 days (see for example Heber et al., 1999;Richardson, 2004, and references therein). They are, thus, referred to as recurrent decreases, reflecting also their more regular, almost periodic structure In this study, we concentrate on recurrent cosmic ray decreases and their periodic nature. On the one hand, they have been observed in situ near 1 AU in the ecliptic plane by spacecraft like ACE or SOHO (cf. Kunow et al., 1995;Fig. 1. The drift motion of positively charged cosmic ray particles in the heliosphere for solar cycles with A>0 (left panel) and A<0 (right panel). The Sun (not to scale), the solar magnetic field, and its polarity are indicated in the background image. Richardson et al., 1999). On the other hand the out-of-plane orbit of Ulysses offers the opportunity to observe such structures not only farther away from the Sun, but also at higher latitudes. As the measurement of cosmic ray decreases at higher latitudes, however, can happen only remotely (e.g. Fisk and Jokipii, 1999), because the modulating structure is no longer present locally, the question of the transport of charged particles within the heliosphere becomes important. In order to address this question, we can also make use of results for Forbush decreases as far as only the propagation of these perturbations through the interplanetary medium is concerned: Le Roux and Potgieter (1991) used a timedependent particle-transport code in order to simulate Forbush decreases. Their finding that amplitude and recovery phase depend on the polarity of the heliospheric magnetic field point out the importance of drift effects. During an A>0 solar magnetic epoch, i.e. when the solar magnetic field is pointing out over the solar north pole (cf. Fig. 1), drift models predict that positively charged particles drift predominantly inward through the solar polar regions and then outward through the equatorial regions along the heliospheric current sheet (Jokipii et al., 1977). In an A<0 solar magnetic epoch, these particles drift mainly into the inner heliosphere along the heliospheric current sheet and then outward through the polar regions (Potgieter and Moraal, 1985) as sketched in Fig. 1. The first opportunity to verify the model of Le Roux and Potgieter (1991) by observing the cosmic ray modulation at higher latitudes was the initial descent of the Ulysses spacecraft in 1992 to 1994 from the equatorial plane towards the solar south pole, i.e. during the A>0 solar cycle 22. A welldefined temporal intensity variation of cosmic rays in connection with CIRs was observed . According to the drift motion shown in the left panel of Fig. 1, this variation should decease at higher latitudes. Moreover, CIRs are limited to latitudes where the slow solar wind has been observed (in the range 30 • -40 • (Paizis et al., 1999) or perhaps smaller, cf. Phillips et al., 1995). Surprisingly, however, Ulysses still observed a periodic modulation with a roughly 26-day recurrence even beyond these latitudes. In order to solve this puzzle as well as in consideration of the observed small latitudinal gradients, Jokipii and Kóta (1995) and Fisk (1996) proposed a large perpendicular particle transport by diffusion or magnetic connection to take place. Thus, longitudinal intensity variations can actually be transported to high latitudes. 
According to cosmic ray transport models Gil et al., 2005), the amplitudes of the recurrent decreases are expected to be larger for cycles with A<0 than for those with A>0. In contrast to this expectation the comparison of galactic cosmic ray data close to 1 AU for different solar cycles (Richardson et al., 1999;Alania et al., 2005;Richardson, 2004), however, showed quite the opposite: the amplitudes observed during the A<0 cycles 21 and 23 turned out to be smaller than those observed during the A>0 cycles 20 and 22. Since Ulysses is the only spacecraft, which has measured the GCR decreases both in an A>0 and an A<0 magnetic epoch in situ as well as remotely, it will add important informations on these contradictions. Furthermore, the amplitude provides also informations about the spatial distribution of the cosmic rays: Paizis et al. (1999) and Zhang (1997) found a close correlation of the amplitude with the radial and latitudinal gradients, where it is important to note that they found a maximum intensity at latitudes around 25 • -30 • . Based on the new Ulysses results for solar cycle 23, our study is arranged in the following way: in Sect. 2 we briefly describe the trajectory and the instruments on board of Ulysses, followed by a report of the data in Sect. 3. Section 4 is devoted to the analysis of the data and their possible conclusions: Sect. 4.1 presents a comparison of the amplitudes for both solar cycles and their latitudinal dependencies. A detailed mathematical analysis of the observed periodicities is given in Sect. 4.2. In order to properly classify our observations, we must separate temporal from spatial variations. This is done in Sect. 4.3 by comparing the Ulysses data with those obtained with SOHO and ACE at 1 AU, keeping in mind that CIRs (as the main cause for recurrent decreases) do not become fully developed until 1 AU, so that the Ulysses measurements between 2 and 5 AU are crucial to investigate local CIR effects, whereas the SOHO and ACE data provide the heliospheric background conditions. A possible explanation is suggested in Sect. 5, where perpendicular diffusion of the particles within coronal hole structures is taken into account. Finally, we summarise and discuss our findings in Sect. 6. Ulysses -trajectory and instrumentation Ulysses was launched on 6 October 1990 and followed an in-ecliptic path towards Jupiter in order to be deflected in February 1992 by the gravitational field of the planet to its C22 1992.60-1993.50 1993.80-1994.60 C23 2004.80-2006.00 2006.15-2007.00 final out-of-ecliptic orbit with a period of 6.2 years. The spacecraft reached its maximum latitude of −80.2 • in mid-1994 during the declining phase of solar cycle 22. The two upper panels in Fig. 2 display the radial distance and heliographic latitude of Ulysses. The shaded areas C22 and C23 indicate the two time intervals of almost identical trajectories analysed in this study, referring to the solar cycles 22 (A>0) and 23 (A<0). Both intervals are further divided into a period P1, when Ulysses was sampling both the slow and fast solar wind and a period P2, where Ulysses was exposed to only the fast solar wind 1 . In the following we refer, thus, to the four periods of time displayed in Table 1. The third panel displays the main difference between the two intervals, the oppositely directed solar polar magnetic fields. It shows the field strength in the southern as well as in the northern polar regions, taken from http://quake.stanford. edu/ ∼ wso/. 
In addition to the sign, the data also show a lower absolute value of the magnetic field strength during C23 than during C22. The fourth panel is dealing with the solar activity and shows again more similarities than differences: while the sunspot number (black line) is lower in C23 than in C22, the tilt angle of the solar magnetic field (red line) was somewhat higher in the second epoch, so that we may conclude that the two periods, C22 and C23, are characterised not only by almost identical trajectory segments, but also by nearly the same heliospheric conditions. With the polarity of the solar magnetic field being the only real differerence, the data analysis of these two intervals offer an almost ideal opportunity to study drift effects isolated from other influences. The Ulysses data used in this study were obtained with the Kiel Electron Telescope (KET), which is part of the Cosmic Ray and Solar Particle Investigation (COSPIN) (Simpson et al., 1992), the Solar Wind Observations Over the Poles of the Sun (SWOOPS) (Bame et al., 1992) and the Vector Helium Magnetometer (VHM) on board Ulysses. As mentioned in the introduction, we compare the Ulysses data with those obtained close to Earth. The data used for this analysis were obtained from the Solar Wind Electron, Proton & Alpha Monitor (SWEPAM) (McComas et al., 1998), the Magnetic Field Experiment (MAG, Smith et al., 1998) on board the ACE spacecraft (Stone et al., 1998), the Electron Proton Helium Instrument (EPHIN) (Müller-1 In order to avoid ambiguities we will call P1 and P2 in the following simply the periods of the "slow" (i.e. slow and fast) and "fast" (i.e. only fast) solar wind, respectively, keeping in mind that this is strictly speaking an incorrect simplification. (Clem and Dorman, 2000). Observations An overview of the Ulysses measurements made in C22 and C23 is displayed in the left and right panels of Fig. 3, respectively. Both plots show, from top to bottom, the solar wind speed (SWOOPS), the magnetic field strength (VHM), 250-2000 MeV protons (with the smoothed count rate plotted in red) and the detrended count rate of the cosmic ray protons, C/C (given in %). In addition, the radial distance and latitude of Ulysses are displayed on top of the plots. The dashed vertical lines mark periods of 26 days (C22, left) and 24.5 days (C23, right). From these observations, it becomes evident that the solar wind speed, the magnetic field strength and the GCRs vary on time-scales close to the solar rotation period in the time intervals C22/P1 and C23/P1, caused by CIRs. When the spacecraft enters the region of the fast solar wind (i.e. the sub-periods P2 in our nomenclature) at about < ∼ 40 • S, the variation seems to vanish both in the solar wind speed and the magnetic field strength in both solar cycles. The cosmic ray intensity, however, continues to be modulated in C22/P2 (cf. Kunow et al., 1995), whereas it almost vanishes in C23/P2. Lario and Roelof (2007) analysed Ulysses HISCALE data (Lanzerotti et al., 1992) and found recurrent energetic particle events for both cycles at all latitudes. While the observations in C22 are in good agreement with our results, they do not correspond for C23. With respect to drift-dominated transport models like Jokipii et al. 
(1977), our new observation is, on the one hand, consistent with the theory insofar that different polarities of the solar magnetic field should also lead to different features in the GCR decreases, but on the other hand, the second observation is as surprising as already the first one was, i.e. we should have seen quite the opposite in both cases. Analysis of the data The Ulysses observations both for C22/P2 and C23/P2 seem to be in apparent contradiction with drift-dominated propagation models. Before studying the periodicities by using a detailed mathematical (Lomb) analysis, we will first investigate the amplitudes of the GCR decreases. The purpose is twofold: we can investigate in how far these are consistent with the theory and we can discriminate transient decreases from recurrent ones to eliminate the transient decreases from our analysis of the periodicities. Amplitudes During C23 Ulysses observed more transient and fewer recurrent cosmic ray decreases than during C22, as the third and fourth panels of Fig. 3 show, indicating that the solar activity in 2004 was larger than that in 1992. From the sunspot number, displayed in Fig. 2, we would expect the opposite, so that the essential role is obviously played here by the tilt angle, which was larger during C23 than during C22. Four outstanding transient events are marked by (A) to (D) in the right panels of in Fig. 3. They are correlated with periods of larger solar activity in January, August, September 2005, and December 2006 (Struminsky, 2007;Malandraki et al., 2007) and several ICMEs and solar energetic particle (SEP) events Lario and Roelof (2007) that have been observed in particular up to 2005.6, i.e. during C23/P1. The prominent event occuring close to 20 • in C22 may also be related to a SEP event Lario and Roelof (2007). In order to determine the amplitudes of the recurrent GCR decreases we adapt the procedure suggested by Paizis et al. (1999) as illustrated in the small insert of Fig. 3: we first determine three values: the counting rate, c s , at the positive peak in the centre of the decrease and the two negative peaks, c 1 and c 2 , preceding and succeeding the positive peak, respectively. The amplitude, c, is then defined as the difference between c s and the mean value of c 1 and c 2 , i.e. c=c s −(c 1 +c 2 )/2. Figure 4 shows c as a function of Ulysses latitude for the complete time intervals C22 and C23, indicated by filled and open triangles, respectively. For C22, we find a good agreement with the values determined by Paizis et al. (1999). In particular, our analysis confirms their latitude dependence, with a clear maximum around 25 • . The results for C23 show smaller count rates due to the enhanced solar activity and a less pronounced maximum, if any. Both data sets appear to be somewhat distorted around the transition from the slow (P1) to the fast (P2) solar wind at about 35 • -40 • . While the absolute values, c, of the amplitudes in Fig. 4, are considerably lower in C23 than in C22, the relative values, c/c in the slow solar wind (periods P1 of each cycle), are comparable, as Fig. 5 shows. Here, the relative amplitude is plotted as a function of the reduced solar wind speed. The latter has been calculated by subtracting 380 km/s (C22) and 300 km/s (C23) from the measured values. In both cycles a negative slope can be seen with the gradient being somewhat stronger for reduced solar wind speeds below 200 km/s. 
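For concreteness, here is a minimal sketch of the amplitude extraction described above, c = c s − (c 1 + c 2 )/2, applied to a detrended count-rate series. The peak detection uses scipy's argrelextrema and is illustrative only; real data additionally require smoothing and removal of transient events such as (A) to (D).

```python
import numpy as np
from scipy.signal import argrelextrema

def recurrent_decrease_amplitudes(counts, order=5):
    """Amplitudes of recurrent decreases, following the Paizis et al. (1999) recipe:
    c = c_s - (c_1 + c_2)/2, with c_s a local maximum of the (smoothed) count rate
    and c_1, c_2 the local minima preceding and succeeding it."""
    maxima = argrelextrema(counts, np.greater, order=order)[0]
    minima = argrelextrema(counts, np.less, order=order)[0]
    amplitudes = []
    for s in maxima:
        before = minima[minima < s]
        after = minima[minima > s]
        if len(before) and len(after):
            c1, c2 = counts[before[-1]], counts[after[0]]
            amplitudes.append(counts[s] - 0.5 * (c1 + c2))
    return np.array(amplitudes)

# Synthetic example: a 26-day recurrent modulation sampled daily, with noise
t = np.arange(0, 400)
rate = 100 + 3 * np.sin(2 * np.pi * t / 26) + np.random.default_rng(0).normal(0, 0.3, t.size)
print(recurrent_decrease_amplitudes(rate).mean())  # ~6, the peak-to-trough of the modulation
```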
From this observation we may conclude the following: during C23, the GCRs were exposed to a more active Sun than during C22, causing the amplitudes of the GCR recurrent decreases to be smaller (Fig. 4). Eliminating these effects by considering the relative amplitude and the reduced solar wind speed, the data for the slow solar wind (P1) do not show significant differences between both cycles, so that the influence of the CIRs on the GCR modulation is (almost) the same at the position of Ulysses as long as the CIRs can be measured in situ. Periodicities In the fast solar wind (P2), however, we see much larger differences between C22 and C23 with the most remarkable one being the fact that the periodicity in the GCR modulation appears to vanish in C23/P2. In order not to be limited to a simple visual inspection and a manual determination of local minima and maxima and their periodicities, we set the determination of the latter onto a solid mathematical base by applying the Lomb algorithm for being able to search also for smaller amplitude variations (Lomb, 1976). Figure 6 shows the Lomb periodograms for the Ulysses measurements in the interval C22. The first panel shows the results for the solar wind velocity (green), the second one those for magnetic field (red), and the third one the cosmic ray flux (blue). In each panel the values for the periods P1 (slow solar wind) and P2 (fast solar wind) are displayed with dark and light colours, respectively. The results for C23 are shown in the same way in Fig. 7. The 26-day sidereal period of the Sun is highlighted by the vertical line, whereas the horizontal line marks a significance of ∼99%. In order to avoid a over-plotting of two close lines, this level, which is always very close to a value of 10, has been set to 10 in the plots. Note that the transient events (A)-(D) have been removed from the periodicity analysis for C23/P2. The periodograms for both cycles confirm the visual impression of the data displayed in Fig. 3: all three quantities show during cycle C22 a significant periodicity of 26 days as long as Ulysses is located in the slow solar wind (P1, dark lines in Fig. 6). In the fast solar wind (P2, light lines) a clear periodicity is still present only in the solar wind velocity, whereas a periodicity in the magnetic field and GCR data may also be present, but is by far less obvious. These periodicities have already been discussed extensively in the literature (cf. Paizis et al., 1999;Zhang, 1997) and could be used to validate our analysis. The picture for cycle 23 (Fig. 7) is quite different: for Ulysses flying through the slow solar wind, we can also see a periodicity, but only in the solar wind velocity and the magnetic field, and with a somewhat shorter time period of 24.5 days. The GCR variation does not show a clear periodicity, but this may be owed to the disturbing influence of transient decreases like the events (A) to (D). A period of 24.5 days may, thus, be present, too. In the fast solar wind, however, no periodicity can be seen at all. In both cycles, we see clear periodicities, although with slightly different periods, as long as Ulysses can measure the effects of CIRs in situ, i.e. in the slow solar wind. In the fast solar wind, where in particular the GCR modulation can only be observed remotely, we see, as already suggested by Fig. 3, a different behaviour in both cycles. The result, however, is that both data sets consequently contradict the transport models, so that further analysis is necessary. 
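A minimal version of the Lomb analysis can be run with scipy.signal.lombscargle on an irregularly sampled synthetic series carrying a 26-day modulation. Normalizing the periodogram by the sample variance gives the classical Lomb power, and a rough false-alarm estimate shows why the ~99% significance level sits close to 10; the number of independent frequencies used below is a common rule of thumb, not a value from the paper.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 300, 250))                          # irregular sampling [days]
y = np.sin(2 * np.pi * t / 26.0) + rng.normal(0, 1.0, t.size)  # 26-day signal + noise

periods = np.linspace(5, 40, 2000)
omega = 2 * np.pi / periods                     # lombscargle expects angular frequencies
power = lombscargle(t, y - y.mean(), omega) / y.var()  # classical Lomb normalisation

print(f"peak at {periods[np.argmax(power)]:.1f} days, power {power.max():.1f}")

# Rough false-alarm estimate: P(>z) ~ M * exp(-z) for M independent frequencies,
# so the 99% significance level is z ~ ln(M / 0.01), close to 10 for M of a few hundred.
M = t.size
print(f"~99% significance level: {np.log(M / 0.01):.1f}")
```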
Comparison with 1 AU data The observations discussed so far raise the question of how the transport of charged particles in the inner heliosphere could take place. In order to get an idea about the large-scale processes we try to distinguish spatial from temporal variations by investigating also data measured in situ at 1 AU, keeping, however, in mind that CIRs usually only fully develop beyond the Earth orbit. For the time interval C22 we could only make use of the Moscow Neutron monitor, because 1 AU plasma data were measured during that period only by the IMP-8 spacecraft, which, however, stayed for significant times within the Earth's magnetosphere. Thus, CIR plasma parameters are measured with a bad coverage, making a reasonable spectral analysis impossible. For C23 we can resort to data for all three quantities studied in the previous sections: the solar wind speed and the magnetic field were measured by the instruments SWEPAM and MAG on the ACE spacecraft, while intensity of >50 MeV GCRs was measured by the EPHIN instruments on-board SOHO. The latter were complemented here by the count rates of the Kiel Electron Monitor. A first impression of the data (Fig. 8) shows no significant change in the GCR modulation in cycle C22, whereas in C23 a transition to a more quiet phase appear to take place around 2005.7, i.e. towards the end of time period C23/P1. More profound statements, however, also require a Lomb analysis, the results of which are shown in Figs. 9 (GCRs only) and 10 for the time intervals C22 and C23, respectively. Again, dark and light colours refer to the periods P1 and P2, respectively. While Ulysses was located in P1 in the slow solar wind and in P2 in the fast one, the measurements discussed in the following were all obtained in situ at the same position, so that a comparison of the periodicities at 1 AU with those along the Ulysses trajectory should shed light on the question whether we see temporal or spatial variations. The data at 1 AU behave thoroughly different from the Ulysses data. For C22, where only GCR data can be used, a clear periodicity of about 29 days (with a second one of ∼34) observed during P1 becomes weaker and shifted to about 30 days in P2, the second peak vanishes. In contrast, three clear peaks arise at 13, 14, and 15 days, i.e. with about half the period as before. Such a behaviour can be expected for a northern solar coronal hole extending to the solar equator, indicating, thus, a reconfiguration of coronal structures from a one-stream to a two-stream structure. For the time interval C23, a clear periodicity has been found only in the solar wind speed during P1 with a period of 27 days, corresponding to the synodic rotation of the Sun. The magnetic field data and the solar wind speed during P2 do not show a pronounced periodicity, but we see at least a low peak in the periodogram near 27 days, which comes close to the level of significance only for the solar wind speed. In addition, we see a clear periodicity of 9 days of both quantities and in both sub-periods P1 and P2. The peaks at 9 days are plotted enlarged in Fig. 11. If comparing them we see different tendencies: while the peak at 9 days is more distinct in P1 than in P2, i.e. diminishes from P1 to P2, the periodicity of 27 days appears to evolve from P1 to P2. In contrast, the GCR modulation does not show any clear periodicity at all. 
Our results concerning the significance of periodicities close to that of the solar rotation (with the sidereal period for Ulysses, the synoptic one for ACE and SOHO) are summarised in Table 2. Comparing for both solar cycles the results of period P1 with that of P2 as well as the 1 AU data with that by Ulysses, we come to the following conclusions: -C22: At 1 AU we see a more or less stable configuration rotating with the Sun, although some reconfiguration of Table 2. Periodicities in the solar wind velocity (v sw ) and magnetic field (B) as well as for the detrended cosmic ray flux ( c/c) for the different time intervals as measured by Ulysses (upper half) and by various spacecraft (cf. text) at 1 AU in the ecliptic plane (lower half). Plus and minus signs refer to a peak clearly above and below the line of significance in the Lomb analysis, respectively. Peaks close to this line are shown by an open circle. The brackets indicate a periodicity with a slightly ( > ∼ 1 day) deviating periodicity. the corona seems to affect the results. As the analysis for the Ulysses data shows a quite similar behaviour, we can state that the variations we see in the data can be explained by spatial variations and that there seems to exist some kind of correlation between the regions of low and high latitudes. -C23: In addition to a periodicity of 27 days, we also observe one with 9 days in the 1 AU data. These two periodicities, however, evolve differently in time: while the significance of 27-day periodicity increases, the 9 day one decreases from P1 to P2. In addition, the solar wind structure changes around 2005.7, so that the variations observed at 1 AU are temporal. At Ulysses we see the opposite temporal evolution: the 29 day period is vanishing from P1 to P2 in C23. Since the temporal evolutions are not correlated with each other and the Earth is a "fixed" point with respect to latitude, we conclude Ulysses is entering a different region in space being linked at best loosely to lower latitudes. Thus, again the observations are caused by the spatial variation of the Ulysses spacecraft. The conclusion is that there is no obvious correlation between the two locations by drift, diffusion or similar effects. A possible interpretation The analysis in the previous section could reduce the contradiction between observations and drift-based transport models to the question of why there exists a correlation between processes taking place near the equatorial plane at 1 AU and those at higher latitudes in cycle C22, but not in C23. Drift effects play obviously only a minor role, so that we must look for alternative explanations. The main component of the solar magnetic field beyond the source surface is, at least up to a first approximation, the radial component and further out also a longitudinal component, perpendicular to which the GCRs have to be transported into the polar direction. As drift effects are apparently not sufficiently effective, we suggest perpendicular transport to be provided by diffusion instead (cf. Jokipii and Kóta, 1995;Fisk, 1996). The question remains, however, why this transport works in C22, but not in C23. Two facts may help to find an answer to this question: on the one hand (Jokipii and Kóta, 1995) discovered that perpendicular diffusion in the latitudinal direction is by a factor of 3-4 (Ferreira et al., 2001) more effective in the fast solar wind of a polar coronal hole than in the slow wind, provided there are stable structures much larger that the particles' gyro radius. 
The regions of fast solar wind are, as Helios measurements show (Schwenn, 1990), separated by sharp boundaries from those of the slow solar wind. Such boundaries occurring in the longitudinal direction generate strong gradients in the solar wind speed and finally lead to the formation of CIRs. As CIRs are known to represent effective "barriers" for particle propagation, the idea is now to investigate such boundaries also in the latitudinal direction. The purpose is again twofold: Are there actually large and stable regions of fast solar wind, i.e. coronal holes, where effective perpendicular diffusion can take place? And: do we possibly see boundaries, which can be regarded as "barriers" for the latitudinal particle transport? These questions are addressed by investigating the coronal hole evolution deduced from Carrington maps of the YOHKOH Soft X-ray Telescope (SXT) in C22 and the SOHO Extreme ultraviolet Imaging Telescope (EIT) in C23. The left panel of Fig. 12 shows the SXT maps for Carrington rotations 1868 (April/May 1993), 1874(September/October 1993), and 1880(March 1994, whereas the right panel shows EIT maps. Displayed are the Carrington rotations The maps for C22 (Yohkoh SXT) show an extended coronal hole structure reaching from southern polar regions to the equator. Although this coronal hole moves slowly, i.e. within one year, from about 45 • in CR 1868 to about 0 • in CR 1880, its form remains almost the same, so that we see an extended, stable structure, within which effective perpendicular diffusion can take place. For C23 (SOHO EIT) we see, in contrast, only small and variable equatorial coronal holes, which do not extend to higher latitudes. Therefore, we conclude tentatively that the modulation of GCRs is correlated with the spatial extensions of these holes. In C23 the holes are large enough to allow the acceleration of low-energy particles, but too small for an efficient acceleration of high-energy particles (Lario and Roelof, 2007) or modulation of GCRs. The Carrington maps, thus, suggest the following interpretation: during C22 a large southern coronal hole extending up to equatorial regions allows the transport of charged particles in the latitudinal direction by perpendicular diffusion, while in C23 no such structure is present. Instead, small hole structures with in part sharp boundaries do not permit effective perpendicular diffusion as was present in C22. Thus, we may conclude that drift effects are in both solar cycles of minor importance. Summary and conclusion Decreases in the intensity of galactic cosmic rays can be divided into two groups: while transient decreases are caused by more or less isolated events like interplanetary coronal mass ejections, recurrent decreases are caused by corotating interaction regions (CIRs). The periodic nature of the latter make recurrent decreases a useful tool for studying the transport of charged particles in the heliosphere. Of particular interest is the transport in the latitudinal direction, i.e. from the equatorial plane to the polar regions of the heliosphere and vice versa: while the corotating interaction regions occur usually only at low latitudes, observations at higher latitudes can reflect the CIR modulation only remotely and provide, thus, valuable informations about the latitudinal transport of galactic cosmic rays, i.e. perpendicular to the heliospheric magnetic field. 
The inclined trajectory of the Ulysses spacecraft provided for the first time the opportunity not only to measure the cosmic ray intensity, but also the solar wind speed and the magnetic field. The perpendicular transport of charged particles should essentially be provided by diffusion and, in particular, by drift effects. Analytical as well as numerical models have been developed in order to model the drift motion in the heliosphere, which depends on the sign of the solar magnetic field. The data obtained during the first descent of the Ulysses spacecraft in the A>0 solar cycle 22, however, did not show the expected result: the periodicity in the cosmic ray intensity observed in the slow solar wind (period P1) was still present at high latitudes, i.e. in the fast solar wind (period P2). The second Ulysses flyby about 12 years later offered the opportunity to repeat the measurements along the same trajectory and in similar solar wind conditions, but with reversed solar magnetic field polarity during the A<0 solar cycle 23. These second observations, however, were as surprising as the first ones were: the periodicity observed in P1 almost vanishes in P2, a result that was expected, however, for the opposite polarity of the solar magnetic field, i.e. during cycle 22. In order to get an idea how this apparently opposing behaviour in both solar cycles arises, we performed a detailed analysis of the data by investigating (1) the amplitudes and (2) the periodicities of the cosmic ray decreases and comparing (3) the Ulysses data with those obtained along the Earth orbit in order to distinguish spatial and temporal variations. The results can be summarised as follows: -During solar cycle 23 the amplitudes of the cosmic ray decreases were lower than in cycle 22, indicating an increased solar activity. The dominating criterion for the latter is obviously the tilt angle of the solar magnetic field rather than the sunspot number. The relative amplitudes in the slow solar wind (P1) are, however, quite similar in both cycles, so that we can conclude in agreement with the measurements by Lario and Roelof (2007) that the cosmic ray modulation by CIRs is more or less the same. -The Lomb analysis shows for both cycles a clear periodicity in the slow solar wind. While that of 26 days in cycle 22 clearly reflects the sidereal rotation of the Sun, that of 24.5 days in cycle 23 cannot be explained so far. The results for the fast solar wind confirm the first impression: while the modulation in cycle 22 continues to be periodic, there is almost no periodicity in cycle 23. -The comparison of the periodicity analysis for Ulysses data and those measured at 1 AU are compiled in Table 2. Our interpretation is that we see in cycle 22 spatial variations of a stable structure at both locations, so that a correlation can be established. In cycle 23, however, we see temporal variations at 1 AU, but spatial variations along the Ulysses trajectory, i.e. neither a stable configuration nor a correlation between the slow and fast solar wind regions. Drift-dominated particle-transport models obviously fail, on the one hand, to explain our measurements, but on the other hand, the large-scale stable structures seen in cycle 22 suggest latitudinal diffusion to take place instead. 
This process can work much more efficient in extended regions within the fast solar wind, so that we investigated in addition the coronal hole structures for the respective time intervals by using Carrington maps by Yohkoh SXT (cycle 22) and SOHO EIT (cycle 23). The maps actually show a large and (almost) stable coronal hole extending from the south pole into equatorial regions in cycle 22, but only small-scale structures with boundaries, which can be regarded as "barriers" for the particle transport in cycle 23. As a possible explanation, which certainly must be critically inspected by further studies, we suggest that the modulation processes are almost the same, so that different coronal hole structures leading to different CIR structures rather than drift effects are the reason for the opposing behaviour in both cycles. This conclusion is supported by Lario and Roelof (2007) showing that CIRs in cycle 23 were, in contrast to cycle 22, only able to accelerate low energy particles. While a large and stable coronal hole allows an effective latitudinal transport in cycle 22, small-scale structures and boundaries allow almost no correlation between slow and fast solar wind regions in cycle 23.
Parameter Effect on Tribology Performance of Biopolymer Composite Green Lubricant In this paper, the effects of loading capacity and sliding speed on the tribology properties of composite films formed by a high-concentration MoS2 additive and the biopolymer hydroxypropyl methylcellulose (HPMC) are demonstrated. The main mechanisms that affect the coefficient of friction are the real contact area and the formation of the transfer layer. Reducing the real contact area will decrease the overall coefficient of friction, while the formation of the transfer layer is a dominant factor for high-load and high-sliding-speed conditions, which results in a stable coefficient of friction. The self-lubrication phenomenon occurring after the formation of the transfer layer is mostly responsible for the excellent tribology properties of the MoS2/HPMC composite material. Introduction In recent years, several environmental issues have been widely discussed, including climate change anomalies, severe weather changes, lack of water resources, food shortage, and an increase in sea level. At the same time, the development of environmental technologies such as those for reduced greenhouse gas emission as well as the reduced usage and reutilization of natural resources have attracted much attention. The application of natural biopolymer materials has also recently received a lot of attention such as in electronic devices (transistors, loudspeakers, and actuators) and sensors (biosensors, gas sensors, and chromogenic sensors). The industry now requires the use of environmentally friendly or earth-friendly materials. For example, technologies substitute cutting fluid with dry plating for mechanical knives and minimum quantity lubrication (MQL) using biopolymer additives. This has been continuously developing. In addition to dry lubrication plating and MQL, green tribological material technology has also been extensively studied and developed. (1,2) This year happens to be the 50th anniversary of the renowned Jost Report. As pointed out in the report, friction/lubrication is not only a technical problem but also an economic problem. (3) As a matter of fact, friction/lubrication is even more critical today: it is about the sustainability of the earth. (4,5) Good green tribological materials have several important features in common: (1) environmentally friendly (from nature), e.g., natural oils, including soybean oil and coconut oil, and plant cellulose; (6,7) (2) biologically decomposable; (8) (3) widely applicable; (9)(10)(11)(12) (4) easy to prepare; (13,14) and (5) reliable tribological performance. (15)(16)(17) In contrast to traditional petrochemical oil products, the lubrication performance characteristics (friction and wear reduction) of green tribological materials that meet the above requirements are not as good as those of the former. Therefore, there are still many research topics worth investigating. In this study, a biologically and environmentally friendly biopolymer material, hydroxypropyl methylcellulose (HPMC), is introduced and its tribological performance with MoS 2 additives that are known for their excellent tribological properties is strengthened. By developing this environmentally friendly lubricant and using it in sustainable manufacturing applications, the ultimate goal of this work is to reduce the use of petroleum-based lubricants and minimize damage to the earth while achieving friction reduction, wear resistance, and energy conservation. 
Preparation of MoS 2 /HPMC composite films and the control of film thickness The MoS 2 /HPMC composite film was produced by adding 5 g of HPMC to 30 mL of water and 130 mL of ethanol, followed by heating to 60 °C. A MoS 2 additive (13.5 g) was added to the solution and ultrasonicated for 20 min. A solution of 150 μL was injected onto a silicon substrate by a micropipette. Films were observed after the solutions were left to stand for 1 h at 25 ± 2 °C and 60 ± 5% RH. Film analysis The surface morphology of the films was determined using a scanning electron microscope equipped with an energy-dispersive X-ray spectroscope (EDS). Tribological performance analysis of MoS 2 /HPMC A pin-on-disk system was used to study the friction behavior of the composite films. Experimental details were as follows: the radius of gyration was 2 mm, the sliding velocity was varied from 0.01 to 0.05 m/s, and a DIN 17350 chrome steel ball was used as the counter grinding ball. Results and Discussion The SEM cross-sectional image in Fig. 1(a) has thickness information, which shows a film thickness of approximately 100 µm. There are uneven portions on the surface because it is very difficult to create a clean and smooth cross section on a soft elastic film such as HPMC. In the process of splitting, the deformation and destruction of the film are likely to occur, which is indicated by the unevenness in the image. (18) Figures 1(b)-1(d) show the results of EDS mapping, wherein the Mo and S signals (from MoS 2 ) are observed from Figs. 1(b) and 1(c), whereas the C signal is observed in Fig. 1(d) (from HPMC). It can be seen that MoS 2 particles are uniformly distributed in the HPMC substrate. The nine-point Raman measurement method was adopted in the experiments, where nine regions with the same size were drawn on a 10 mm 2 Si base, followed by Raman spectral measurement at the center of each region. The results are shown in Fig. 2. (5,19) The distinct peaks E 1 2g and A 1 g are respectively located at 383 and 408 cm −1 , corresponding to the characteristic peaks of MoS 2 , which indeed confirms the existence of the MoS 2 additive. (20) Further analyses of the positions and amplitudes of the nine-point peaks show that all relative errors are within 3%, suggesting an excellent uniformity of the MoS 2 particle distribution. The tribological properties with 10%-added MoS 2 were experimentally studied under different loads, viz., 2, 5, and 8 N. When the load was 2 N, according to the SEM measurement, the real contact area with grinding chrome steel balls was 0.1 mm 2 , and the contact pressure was 20 MPa. Previous studies noted that when the load is small, even if there is an uneven deformation, in situations where the roughness of the contact surface is constant, the real contact area and the coefficient of friction are considered to be proportional to 2/3 of the load. (21,22) In this case, the coefficient of friction is related to the shearing force and hardness of the material. (23) However, when the load is large, an insufficient load capacity will result in the elastic or plastic deformation of the film, which will increase the contact area and the overall average coefficient of friction. (24) When there is a relative movement between two objects of distinct materials in contact, if the cohesion force of one material is smaller than the adhesion force in the contact interface, the material with a low cohesion force will break and adhere to the surface of the material with a high cohesion force. 
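The quoted contact figures can be checked directly: a 2 N load over the measured real contact area of 0.1 mm² gives a mean pressure of 20 MPa. The sketch below also extrapolates the area to the higher loads with the Hertz-like scaling A ∝ F^(2/3) cited above; this extrapolation is an assumption, since the text notes that film deformation at high load can enlarge the real contact area further.

```python
def mean_pressure_MPa(load_N, area_mm2):
    """Mean contact pressure p = F/A (1 N/mm^2 = 1 MPa)."""
    return load_N / area_mm2

A2 = 0.1  # mm^2, SEM estimate of the real contact area at 2 N
print(mean_pressure_MPa(2, A2))  # 20 MPa, as quoted in the text

# Assuming the light-load, Hertz-like scaling A ~ F^(2/3) held up to the higher loads:
for F in (5, 8):
    A = A2 * (F / 2) ** (2 / 3)
    print(f"F = {F} N: A ~ {A:.2f} mm^2, p ~ {mean_pressure_MPa(F, A):.0f} MPa")
```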
This phenomenon is called material transfer, which forms the transfer layer as a result. (25) The condition of the transfer layer directly affects the tribological properties of the entire system. During the run-in period, a great amount of wear will be produced, but the wear will gradually decrease when the transfer layer formed (transition period) and well-covered (steady state) the object surface. (26) After effectively adhering to or covering the objects in contact, the transfer layer material can provide a self-lubrication effect, which is one of the main mechanisms responsible for the considerable reduction in frictional resistance. (16,25,27) As shown in Fig. 3, only the tribological properties during the run-in and transition periods are discussed. At a low load (2 N) during the run-in period, the amount of deformation of the soft film is small. The coefficient of friction is related to the shearing force of the material, and therefore it is low compared with high-load situations. However, because the transfer layer is not easy to produce, the coefficient of friction is unstable. At a high load (5 or 8 N), the soft film is deformed, resulting in an increase in real contact area, which generates a higher coefficient of friction. Because the load is higher, the transfer layer is easily formed, resulting in relatively stable tribological properties. It has been proposed in the literature that a load on the system and sliding speed will affect the generation of the transfer layer and also directly affect the subsequent tribological behavior. (28) Figure 4 shows the tribological properties of the film at different sliding speeds. With an increase in sliding speed, the friction coefficient exhibits a stable state. This is because when the sliding speed is high, the transfer layer can easily adhere to and cover the objects in contact, producing a good self-lubrication effect. The relationship between the average coefficient of friction in and the sliding speed obtained from Fig. 4 is plotted in Fig. 5. Here, two mechanisms are considered to affect the behavior of the coefficient of friction: the real contact area and the length of the run-in period. At a low sliding speed, because the real contact area is small, although the unstable period is relatively large (longer run-in period), the overall average coefficient of friction is lower than that at higher sliding speeds. Sensors and Materials, Vol. 29, No. 11 (2017) 1495 While at higher sliding speeds, because the real contact area is large, the steady state will be rapidly reached (short run-in period) and the system will have a higher coefficient of friction. This once again verifies the discussion and results shown in Fig. 3. Conclusions The main purpose of this research is to study the effects of a high-concentration MoS 2 additive on the tribological properties of the biopolymer HPMC. The results demonstrate the importance of the growth time and uniformity of the transfer layer in the system to the tribological properties of MoS 2 /HPMC. The real contact area of the soft film is also a very important parameter in different applications. The conclusions that may be drawn from this study are as follows: (1) MoS 2 additives and HPMC can form a completely covered and uniformly distributed composite film. (2) An appropriate load can accelerate the formation of the transfer layer, providing a favorable coefficient of friction. (3) The sliding speed affects the run-in period and the real contact area during wear.
Optimized decision algorithm for Information Centric Networks Information Centric Networks (ICN) enable network awareness, server-context awareness and user-context awareness, to achieve an enhanced architecture for the delivery of multimedia content. The information comes from different sources and serves as input for the decision algorithms that choose the pertinent configuration, such as the best server or the suitable delivery path. Therefore, the relevance of the input information and the efficiency of the decision algorithms are both crucial for the system performance. This paper proposes exploiting multi-criteria optimization algorithms in the context of ICN. Based on the reference level decision approach, an optimized algorithm is proposed, which considers the impact of different network and server parameters, and dynamically adapts the decision to the current state of the system. An additional contribution of the paper is a comprehensive video content consumption simulation model, which represents a large-scale network. This model was designed to compare the effectiveness of decision algorithms proposed for ICN. The presented simulation results prove the effectiveness of the proposed decision algorithm and suggest its deployment on future media networks.

Introduction In recent years, research initiatives for improving multimedia streaming in the Future Internet [1] have multiplied. Since the creation of the future media networks (FMN) cluster [2], a series of research projects have developed various complex systems that optimize content transfer and dynamically adapt the streaming parameters to prevent degradation. These systems, e.g., [3][4][5][6][7][8][9][10], are generally known as Information Centric Networks (ICN). The selection of pertinent streaming parameters requires awareness about, among others, the user profile and context, terminal capabilities, network conditions and server context [11]. For example, knowledge of the location of content replicas, of server and network conditions, and of content transfer requirements makes optimisation by centralised decisions feasible [12], which improves the utilisation of network resources and leads to better quality experienced by users.

One of the challenges in designing an ICN system is the specification of the network-awareness process, which measures the network performance metrics that have a significant impact on the quality perceived by consumers, and associates them with an acceptable cost model [13,14]. Despite the variety of existing monitoring tools, characterizing network conditions is still a rough task. In addition, the metrics are correlated, and this correlation is, in general, impossible to model.

Thanks to network awareness, the selection of the transmission parameters (e.g., content source, bitrate, etc.) can be improved. Such an improvement requires an optimized decision algorithm, which considers the possible solutions (e.g., a number of content sources, the different bitrates at which to download the content, etc.) and decides the best one for the current network conditions. The decision is, in general, an NP-complete problem, since it results in a multi-criteria decision problem. In this case, heuristics are usually used to compute a sub-optimal solution.
This paper analyses the decision algorithms for adaptive streaming in the ICN, and proposes a novel algorithm that optimizes the overall performance of the system. Our algorithm exploits multi-criteria optimisation based on a decision space composed of a set of metrics. Such an approach, in contrast to previous proposals exploiting a single parameter for the decision algorithm, e.g. packet delay [15] or path length [16], can select an optimised solution, especially in the case of correlation between decision parameters. The effectiveness of the proposed algorithm has been evaluated by simulation experiments. In these experiments, we compare the proposed decision algorithm with other recently proposed multi-criteria algorithms as well as with the random selection strategy. These simulation experiments were performed assuming an Internet-scale video content consumption system, which models a large Video-on-Demand (VoD) service provider. This model ensures that the analysed decision algorithms are compared in quasi-realistic conditions. This paper is organized as follows: In Sect. 2 we present an overview of the decision strategies used in currently proposed ICN solutions. Sect. 3 presents the multi-criteria decision algorithms developed for content server selection in ICN. The details of the proposed multi-criteria decision algorithm are presented in Sect. 4. It extends the current reference level-based algorithms and has improved performance, as shown in the simulation studies presented in Sect. 5. The simulations are designed to compare the proposed algorithm with other solutions in order to quantify its efficiency. The conclusions are presented in Sect. 6. Analysis of ICN systems Recently, the ICN has gained attention in various research initiatives, e.g., ALICANTE [3], COMET [4], PSIRP/PURSUIT [5,6], 4WARD/SAIL [9], DONA [17]. Also, some ICN-based mechanisms have been proposed in other ICT fields (e.g., the Internet of Things [18]). Each of them follows new design paradigms, which treat the content as the primary citizen of the network. The investigated approaches differ in particularities, but all of them support: (1) ubiquitous and location-independent content identifiers, (2) content-aware routing of requests towards a selected content server, (3) in-network caching and content storage, (4) a flexible data plane allowing for anycast and point-to-multipoint connections, and (5) application- and location-agnostic content access. In this way, the ICN becomes a sophisticated content access and delivery system instead of a simple host-to-host communication network.
One of the research challenges in ICN is the appropriate decision process, which selects, for example, the best content source for serving incoming content requests. The investigated approaches assume that decisions could be taken by the network infrastructure, by the client applications, or by the content provider. Among solutions relying on the network infrastructure, we can distinguish the "route-by-name" [17,18] and DNS-like approaches [4,9]. The "route-by-name" approach assumes that every ICN node forwards the content request towards the destination server based on its local knowledge. In these ICN systems, the decision about server selection is taken in a distributed way as a concatenation of local optimizations. Therefore, the final solution may not be optimized in the global scope. On the other side, the "DNS-like" approaches collect information about available content replicas, content server status and network conditions, and then use it for selecting the best content server to serve the consumer request. In principle, the DNS-like approaches are centralized and could lead to the globally optimal solution. However, the challenge for the DNS-like approaches is to design an effective and scalable information system which collects information about content localisation, server load and network status with appropriate accuracy. The investigated approaches exploit distributed information systems designed on federation principles. The client-side decision strategy assumes that the application selects the best content source based on information collected by itself. The investigated approaches [3,19,20] exploit dynamic probing and statistical estimation of different information such as round-trip delay, bandwidth, and server responsiveness. The results presented in [19] confirm that even simple dynamic probing outperforms blind client-side approaches. However, the main limitation of the client-side strategy is its limited scalability in an Internet-wide ICN deployment. In order to overcome these limitations, the server-side selection strategy has been investigated [20,21]. It allows information to be aggregated at the server side and reused for redirecting the content requests coming from different specific areas. Moreover, the information at the server side can be pro-actively collected and processed before content requests arrive. These features significantly improve the scalability of server-side selection strategies. Although the investigated ICN approaches differ in input parameters, decision strategies, and system architectures, all of them require an efficient multi-criteria decision algorithm. Multi-criteria decision algorithms In this section we briefly introduce multi-criteria analysis and present the reference level decision approach [22,23], which constitutes a base for our algorithm. We believe that a brief reminder of multi-criteria decision theory allows a better understanding of the role of decision algorithms in ICN systems. Note that the main motivation for using multi-criteria decision methods in ICN systems comes from the complex set of input parameters covering content characteristics and location, server and network conditions, and content transfer requirements. The multi-criteria optimization requires the definition of the problem decision space $\mathbb{R}^m$. This space covers all candidate solutions considered by the decision process. They are denoted as decision vectors $x = (x_1, x_2, \ldots, x_m)$.
Each decision vector contains m decision variables. Any decision variable may have a bounded set of feasible values defined by some given constraints. Multi-criteria optimization focuses on optimizing a set of k objective functions $\Pi_1(x), \Pi_2(x), \ldots, \Pi_k(x)$, which can be maximized or minimized. Note that the problem does not lose generality when we consider minimization only. The aggregate objective function composes a vector of these objective functions for each decision vector: $\Pi(x) = (\Pi_1(x), \Pi_2(x), \ldots, \Pi_k(x))$. In multi-criteria optimization, a solution $x'$ is treated as dominating the solution $x''$ if and only if $\forall k^* \in \{1, \ldots, k\}: \Pi_{k^*}(x') \le \Pi_{k^*}(x'')$ and $\exists k^*: \Pi_{k^*}(x') < \Pi_{k^*}(x'')$, and a solution $x'$ is called efficient if and only if there does not exist another solution $x''$ dominating $x'$. The Pareto optimal set is composed of all efficient solutions, while the Pareto frontier covers all outcome vectors $y = \Pi(x)$ where x is an efficient solution. Whenever the Pareto optimal set contains more than one efficient solution, the decision process should choose one of them. In fact, the decision process could (1) provide a priori some knowledge about the problem in order to ensure that the efficient solution outgoing from the model is unique, or (2) consider a posteriori the whole set of efficient solutions and choose one unique solution. Applying multi-objective optimization [24] to an ICN system is a challenging task because a full description of the network behavior is unattainable. Therefore, the decision maker must select the most effective solution from a group of feasible and non-dominated solutions described by m decision variables (m criteria) [25,26]. Moreover, the effectiveness of the decision algorithm strongly depends on the proper selection of the considered decision variables (e.g., server load, routing path load, end-to-end packet transfer delay, available bandwidth at the server and user sides) as well as on the algorithm itself. The commonly recognized approach to solving the multi-criteria problem is to transform it into a single-criterion problem by applying a specific cost function (e.g., [27]), which takes the decision variables as its argument. Although any strictly monotonic and convex function could be used as a cost function, the Minkowski norm of order p, $M(p) = \left(\sum_{i=1}^{m} (w_i v_i)^p\right)^{1/p}$ (1), is widely exploited in many practical approaches, where $v_i$, $i = 1, \ldots, m$, are the decision variables, $w_i$ are the weights of each variable, and p is a shaping factor enforcing non-linear aggregation of the decision variables. The significant limitation of the above cost function is the need for "a priori" setting of the decision variable weights $w_i$ and the shape factor p. This feature limits the applicability of the Minkowski norm, since usually the ICN system has no "a priori" knowledge about how to fix the appropriate values of the weights $w_i$ and the shape factor p. Although the ICN system could estimate the values of some parameters, i.e. the server load and Round Trip Time (RTT), by active probing, there is still the problem of how to balance the importance of these two variables by fixing the weights $w_i$. Moreover, the implementers have to investigate how the decision maker should tune the shape factor p to calculate the cost of the candidate solutions.
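These notions are compact enough to state in code. The sketch below is ours, not from the paper (names and values are illustrative): it checks Pareto dominance between two outcome vectors and evaluates the weighted Minkowski cost of formula (1), showing how a large p makes the most sensitive variable dominate.

```python
# Minimal sketch of Pareto dominance and the weighted Minkowski cost M(p);
# all objectives are assumed to be minimized, as in the paper.

def dominates(y1, y2):
    """y1 dominates y2 iff it is no worse in every objective and strictly
    better in at least one."""
    return (all(a <= b for a, b in zip(y1, y2))
            and any(a < b for a, b in zip(y1, y2)))

def minkowski_cost(v, w, p):
    """Weighted Minkowski aggregation of decision variables v with weights w."""
    return sum((wi * vi) ** p for wi, vi in zip(w, v)) ** (1.0 / p)

# As p grows, M(p) approaches max_i(w_i * v_i): the most sensitive variable
# dominates and the variables are effectively treated independently.
v, w = [0.2, 0.9], [1.0, 1.0]
for p in (1, 2, 8, 64):
    print(p, minkowski_cost(v, w, p))   # tends towards max(w_i * v_i) = 0.9
```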
It is worth mentioning that decision strategies based on some "a priori" assumptions about the values of the weights are not the most effective ones. The main issue is that one can always find a specific example where the decision algorithm does not select the best feasible solution. Let us consider a linear combination of two random variables corresponding to RTT and server load (p = 1). In this case, a candidate with medium values of RTT and load will never be selected over solutions with significantly different values of the decision variables, i.e. light load and high RTT or vice versa. A similar effect can be observed for the value of p. The decision maker must know in advance the preferences about the decision variables. However, the proper setting of the value of p is not a trivial issue because the decision variables may be correlated. The commonly recognized approach to overcome this problem assumes independent evaluation of the decision variables. This heuristic is often the only possible solution in content network dimensioning (e.g., [28]). Let us remark that the independence of the decision variables is acquired by a decision algorithm which uses the limit $M(\infty) = \lim_{p \to \infty} M(p) = \max_i w_i v_i$ (2) as the cost function. This means that the limit of the Minkowski norm with p going to infinity prefers feasible solutions uniquely based on the most sensitive variable, while ignoring the other variables. In Fig. 1, we present the Pareto optimal set for different values of p. When $M(p \to \infty)$, the decision variables are treated independently. The independent treatment of the decision variables constitutes a base for the multi-criteria decision algorithm with reference levels proposed, among others, in [22] and [23]. The decision algorithms with reference levels use two reference parameters, called the reservation level and the aspiration level, in order to weight the importance of a particular decision variable. The reservation level defines the upper limit for the decision variable, which should not be exceeded by a feasible solution. On the other hand, the aspiration level constitutes the lower bound beyond which decision variables are indistinguishable because of the same preference level. The reference levels are fixed a priori by the decision maker to express his/her preferences. Formally, the cost function is defined by equation (3), where the reservation and aspiration levels for decision variable i are denoted by $r_i$ and $a_i$, respectively: $c_s = \max_{i=1,\ldots,m} \frac{v_{is} - a_i}{r_i - a_i}$. The decision algorithm with reference levels assumes that the decision variables are independent, so there is no need for the shape parameter p. However, we still need to fix appropriate weights for the decision variables. Therefore, Kreglewski et al. (see [29]) proposed to calculate the values of the reservation and aspiration levels based on the feasible solutions. Let $s = [v_{1s}, \ldots, v_{ms}]$ be a solution in the space of S feasible solutions. The reservation and aspiration levels of decision variable i are estimated from the maximum and minimum values of this variable over the space of feasible solutions, see formula (4): $r_i = \max_{s} v_{is}$, $a_i = \min_{s} v_{is}$. The cost of a considered solution is calculated using equation (3) with the reference levels determined by formula (4).
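A minimal sketch of this selection rule, assuming formulas (3)-(4) as reconstructed above (the function and variable names are ours): the reservation and aspiration levels are taken as the per-variable maximum and minimum over the feasible solutions, and the candidate with the smallest worst-case normalized term wins.

```python
# Reference level selection with levels estimated from the feasible
# solutions. `solutions` is an S x m list: one row of m decision
# variables per candidate (all variables are minimized).

def reference_level_select(solutions):
    m = len(solutions[0])
    r = [max(s[i] for s in solutions) for i in range(m)]  # reservation levels
    a = [min(s[i] for s in solutions) for i in range(m)]  # aspiration levels

    def cost(s):
        # Worst-case normalized distance to the aspiration level; a variable
        # with r_i == a_i is indistinguishable and contributes nothing.
        return max((s[i] - a[i]) / (r[i] - a[i]) if r[i] > a[i] else 0.0
                   for i in range(m))

    return min(range(len(solutions)), key=lambda k: cost(solutions[k]))

# e.g. candidate servers described by (load, RTT): the third one wins here,
# as it is reasonably good in both variables.
print(reference_level_select([[0.9, 0.2], [0.2, 0.9], [0.5, 0.4]]))  # -> 2
```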
In the proposed optimized reference level decision algorithm, described in the next section, we enhance the reference level approach by considering the impact of the current decision on the future state of the ICN system. Such a prediction allows us to prevent the ICN system from entering undesirable states, e.g., server or network overload. We believe that our approach is a step forward in decision algorithm analysis, which has the potential to improve the performance of ICN systems. Optimized reference level decision algorithm As stated above, the authors of [29] proposed an algorithm that uses the maximum and minimum values of the vector as the reservation and aspiration level, respectively. So, the authors considered that the comparison terms $c_{is}$, which should be minimized (over the space of m variables) and maximized (over the space of S feasible solutions), are as indicated in (5): $c_{is} = \frac{v_{is} - \min[V_i]}{\max[V_i] - \min[V_i]}$. Formula (6) presents the decision algorithm: $s^* = \arg\min_{s} \max_{i} c_{is}$. The comparison terms $c_{is}$ depend on the value $(\max[V_i] - \min[V_i])$ but do not consider how the values of the vector $V_i$ are distributed between $\max[V_i]$ and $\min[V_i]$, i.e., the comparison terms do not consider the variance between the elements of the vector $V_i$. The algorithm presented below aims to reduce the decision importance (by reducing its weight) of the variables whose space of feasible solutions has a low value of variance. This is achieved by modifying the comparison terms $c_{is}$. The motivation is the following: when the feasible solutions have similar values for one specific variable (called variable i), then the selection of any solution does not change the state of the whole system (as far as variable i is concerned). So, we consider that such a variable should not be taken into account during the selection, which means in practice that such a variable should have a lower weight within the decision algorithm. (Fig. 2: Pareto optimal set for the case of M(p) = 1 (w1 = w2 = 1), with the solutions of the decision algorithms $c_{is}$ and $c'_{is}$.) The proposed comparison terms $c'_{is}$ are presented in (8): they scale $c_{is}$ by a factor that grows with the standard deviation $\sigma_i$ of $V_i$, and the decision algorithm is the one presented in (9): $s^* = \arg\min_{s} \max_{i} c'_{is}$. The values $c'_{is}$ decrease for lower values of $\sigma_i$, and lower values of $c'_{is}$ are preferred in formula (9). In conclusion, decision variables with higher variance get a higher weight in the decision algorithm. Consider a system with 4 feasible solutions (S = 4) and two variables (m = 2) with the values presented in Fig. 2A. The values of $c_{is}$ and $c'_{is}$ lead to the selection of different feasible solutions: s = 2 for $c_{is}$ and s = 3 for $c'_{is}$, as we can see in Fig. 2B. In the first case, the selection of s = 2 is based on a better value of variable i = 1. Due to the small difference of this variable between the two solutions (s = 2 and s = 3), the selection of either of them will not change the system much (for variable i = 1). Therefore, the selection should be based on the values of variable i = 2, which is achieved by the decision algorithm $c'_{is}$. Note that the ratio between $c'_{is}$ and the $c_{is}$ described in formula (5) is smaller than or equal to 1, see (10): $c'_{is} / c_{is} \le 1$. Therefore, $c_{is} \ge c'_{is}$. In the paper [30] we proposed a comparison term with similar characteristics to the present one, i.e., its value depended on the variance of $V_i$. The major difference is that, in [30], the comparison term (named $c''_{is}$) was defined such that the algorithm could prefer a solution with a value equal to the reference level of one or more variables. Even though this does not disqualify the comparison term $c''_{is}$, we think that the current solution $c'_{is}$ offers better results, and we will demonstrate this in the simulation studies presented in the next section. In the case $\sigma_i = 0$, the decision variable i is not considered in the decision algorithm, as also occurred in the earlier solutions based on reference level algorithms [29,30]. The proposed algorithm reassesses the importance of variables with lower values of variance. This way, the system is more efficient since, indirectly, the decisions take into account the state after the selection. This means that the system reaches the saturation point more slowly than with the basic reference level decision algorithm. The simulations will show this point. Let us remark that the proposed algorithm does not require more information than the basic reference level algorithm, and therefore no other mechanisms are necessary. The unique requirement is a few more lines of code, which means low capital and operational expenditure in deployment.
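Since the exact scaling in (8) did not survive extraction, the sketch below assumes one plausible multiplicative form, $c'_{is} = c_{is} \cdot (\sigma_i / \max_j \sigma_j)$, which satisfies relation (10) and damps low-variance variables as described; treat it as an illustration of the idea, not the paper's formula.

```python
import statistics

# Hedged sketch of the optimized reference level algorithm: comparison
# terms are scaled by the (normalized) standard deviation of each variable,
# so variables whose feasible values barely differ lose influence.

def optimized_select(solutions):
    m = len(solutions[0])
    lo = [min(s[i] for s in solutions) for i in range(m)]
    hi = [max(s[i] for s in solutions) for i in range(m)]
    sigma = [statistics.pstdev([s[i] for s in solutions]) for i in range(m)]
    smax = max(sigma) or 1.0  # guard against all-constant variables

    def cost(s):
        terms = []
        for i in range(m):
            c = (s[i] - lo[i]) / (hi[i] - lo[i]) if hi[i] > lo[i] else 0.0
            terms.append(c * sigma[i] / smax)  # low-variance variables damped
        return max(terms)

    return min(range(len(solutions)), key=lambda k: cost(solutions[k]))
```

Note that the factor is at most 1, so $c'_{is} \le c_{is}$ and both terms vanish at $v_{is} = \min[V_i]$, matching the properties stated above.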
On the other hand, the zeros of c is and In the paper [30] we proposed a comparison term with similar characteristics as the present one, i.e., the value depended on the variance of V i .The major difference is that, in [30], the comparison term (named c is ) and then, the algorithm could prefer a solution with value equal to the reference level of one or more variables.Even when this does not disqualify the comparison term c is , we think that the current solution c is offers better results and we will demonstrate this in the simulation studies presented in the next section. The case In this case, the decision variable i is not considered in the decision algorithm, as it occurred for earlier solutions based on reference level algorithms [29,30]. The proposed algorithm reassesses the importance of variables with lower values of variance.This way, the system is more efficient since, indirectly, the decisions take into account the state of the selection.This means that the system reaches the saturation point more slowly than in the basic Reference level decision algorithm.The simulations will show this point. Let us remark that the proposed algorithm does not require more information than the basic reference level algorithm and, therefore, other mechanisms are not necessary.The unique requirement is some more lines of code, which means low capital and operational expenditures in deployment. Simulation environment and results We evaluate the proposed solution by performing simulations on an extensive model of network dedicated to video on demand (VoD) streaming.Such a model takes the parameters from the largest content and service providers, and includes network topology, server characteristics as their locations, service details as content duration and popularity.Moreover, the users are also added in this model following the current arrangement in the Internet. The model of the network topology is taken from the Internet topology that CAIDA [31] publishes every year.The topology only considers Autonomous Systems (36,000 domains) and inter-domain links (103,000 links).We classified the Autonomous Systems into tier-1, 2 or 3 by considering the peering, providing or consuming relations with the neighboring domains.The capacity of inter-domain link was assumed to be a value from a uniform distribution U[0.5,1,5] Gbps in tier-3 inter-domain links, following the guidelines in [32].We assumed a value 10 times higher in the case of capacity for inter-tier 2 links (U[5.0,15.0] Gbps) and 100 times higher in the case of inter-domain links with tier-1 (U[50.0,150.0]Gbps).In this topology, we placed content servers in the domains following the ideas proposed in [33].Specifically, for the top 50 largest content providers, network providers and CDNs (e.g., Level3, Global Crossing, Akamai, LimeLight, AT&T, Comcast, Google), the number of servers corresponds to the information from white papers (e.g., [34]) and illustrative information in the homepages.In other domains, we assigned a random number of servers between 50 and 150, which approximates the situation of Akamai: Akamai counts with 84,000 servers in 1000 domains.The total number of servers in the model is more than 200,000.Moreover, each server has up to 100 film tittles and may serve up to 200 streams in parallel.These data agree with current servers in the market [35]. 
The servers contain different content files, whose parameters were acquired from the 5000 most popular titles on filmweb [36] on December 1st, 2010. The duration of these films is, on average, 4100 s. To each title, we allotted a streaming bandwidth value between 2.6 and 3.4 Mbps. Videos in the Netflix Canadian network [37] are streamed in this range of bandwidth. Content replication was allocated by using Zipf's law, which models the video distribution in large networks [32,38,39] (more popular contents were copied more times in the network). We assumed a value of the skew parameter (Zipf's law) equal to 0.2, following the guidelines in [32]. The copies were randomly located within the servers, but no server had two or more copies of the same content. The user population was also based on CAIDA data [31]. We used values of the user population proportional to the number of advertised prefixes in the given domains. Since this number suffers slight variation, we took the minimum value during a period of 5 days. When the topology is prepared together with the servers and content, we start the simulations by generating user requests for content. Each user request carries the desired content and the domain from which the request arrives. Then, the system receives information on server and path loads (considering the shortest path) and triggers the selection algorithm. As a result, the algorithm selects the best server to serve the arrived request. Let us remark that, for simplicity purposes, the algorithm selects the server from among 500 feasible servers previously chosen at random (S = 500). Once the server is selected, one connection is added to the number of connections served by the server at that moment, and the streaming bandwidth of the content is added to the current load of each link of the end-to-end path (between the server and the user). When the server or any link in the path crosses a certain threshold (200 connections for servers and the assigned bandwidth for the links), then all the connections using that specific server/path are considered unsuccessful. We used three different decision algorithms. The first decision algorithm is the random selection of the content server. The second one is the basic reference level algorithm following formulas (6) and (7), and the third decision algorithm is the optimized reference level algorithm following formulas (9) and (10).
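The replication and over-load bookkeeping described above can be sketched as follows. This is our simplification: the real model marks all connections on an over-loaded server or path as unsuccessful, while this sketch simply rejects a request that would cross a threshold; the names, and the placeholder server-selection policy, are illustrative.

```python
import random

def zipf_replicas(n_titles, total_copies, skew=0.2):
    """Copies per title proportional to rank^-skew: more popular titles
    are replicated more often across the network."""
    weights = [(rank + 1) ** -skew for rank in range(n_titles)]
    total_w = sum(weights)
    return [max(1, round(total_copies * w / total_w)) for w in weights]

def serve(request, servers, path_links, max_conns=200):
    """Try to place one streaming request; return False on any over-load.
    Selecting the least-loaded server is a stand-in for the decision
    algorithms compared in the paper."""
    server = min(servers, key=lambda s: s["conns"])
    if server["conns"] + 1 > max_conns:
        return False                                   # server over-load
    if any(l["load"] + request["bw"] > l["cap"] for l in path_links):
        return False                                   # link over-load
    server["conns"] += 1
    for l in path_links:
        l["load"] += request["bw"]
    return True
```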
As stated above, there are two reasons for considering the delivery of the content as unsuccessful: server over-load or link over-load. Figures 3 and 4 present the relation between successful and unsuccessful connections (called the success ratio) for an increasing rate (λ) of arrival requests (whole system). The unsuccessful connections in Fig. 3 are provoked by server over-load, whereas, in Fig. 4, the unsuccessful connections are provoked by link over-load. Let us remark that the results are very dependent on the parameters of the model (e.g., the maximum capacity of the servers). A threshold of 200 simultaneous connections per server ensures that the overload provoked by servers and by links appears in a similar range of λ. The results show that the basic reference level algorithm is more effective (i.e., it serves more successful connections) than the random one. For example, the basic reference level algorithm sustains a request rate roughly three times higher than the random algorithm at a success ratio equal to 0.9. A success ratio equal to 0.9 is considered a satisfying value. For a success ratio equal to 0.9, the optimized reference level algorithm reaches a request rate of 4100 requests/s (see Fig. 4), which is almost 800 requests/s more than in the case of the basic reference level algorithm. The comparison of the three decision algorithms shows that the gain of the reference level algorithms (both) over the random algorithm is definitely larger than the gain of the optimized reference level algorithm over the basic one. On the other hand, the cost of introducing a reference level algorithm is high, since the system should incorporate network awareness into the decision process. This means that a monitoring system should be deployed on the server side and in the network. In contrast, the cost of introducing the optimized algorithm is negligible in comparison to the basic reference level one. In fact, the cost is only programming a new algorithm; the necessary information from the network and server sides is the same. The last tests that we performed were intended to compare two optimized algorithms based on the reference level technique. The first one is the algorithm presented in [30] and the second one is the algorithm presented in this paper. (Fig. 5: success ratio for the two optimized reference level algorithms.) As we can see in the results presented in Fig. 5, the success ratio due to server overload of the presented algorithm is slightly higher than that of the algorithm presented in [30], for all values of λ. As pointed out in the previous section, the reason can be found in the fact that the algorithm presented in [30] selects solutions near to the reference level with higher probability and, thus, the system saturates slightly faster than with the present algorithm. Because of this, the present algorithm achieves a better success ratio. In order to ensure that the results are trustworthy, we performed several very long simulations. Each simulation counted 10^12 content requests. During the simulations, we checked the state of a number (100) of servers and links selected at random. The goal was to understand whether the servers or links entered any state loop, e.g., a permanently over-loaded state. The monitoring of the servers and links showed that all of them changed state (light or heavy load) many times without any remarkable pattern. This shows the trustworthiness of the results. Finally, the stability of the results was checked by counting the success ratio at different moments of the simulations, i.e., when the simulations counted 0.5 × 10^12, 0.75 × 10^12 and 1.0 × 10^12 content requests. In all cases, the results were identical.
Conclusions In this paper we discussed the multi-criteria decision problem applied to the selection of transmission parameters in Information Centric Networks. We presented the general problem and the different approaches proposed in the literature. We argued that, for the case that concerns us, the reference level techniques seem to be appropriate. The paper presents a new algorithm that optimizes the basic reference level algorithm. The optimization is based on introducing into the system awareness about the state of the system after the selection. Concretely, the optimized algorithm takes the decision on the basis of preferred variables which are crucial for the future state of the system. In contrast, the basic reference level algorithm does not consider the future state of the system and takes decisions searching for the best quality of the current content transmission, regardless of the future state of the system. The optimized reference level decision algorithm prefers the variables with higher variance, since a selection based on these variables may have a significant impact on the system, while the variables with small variance do not induce a big change in the system after the selection. We performed simulations on an extended model of an Information Centric Network in order to understand the gain of the proposed algorithms in comparison with currently used decision algorithms. The results showed that the optimized algorithm is slightly better than the basic algorithm. Even if the gain is not so high, we advise using the optimized one, since there is no supplementary cost to its use and in no situation does the optimized algorithm behave worse than the basic reference level algorithm. The results show a significant improvement of the algorithms based on reference level techniques compared to random selection. This point proves the efficiency of the ICN architectures that introduce information on the state of the system into the decision process for the parameters of the transmission. The simulations presented in this paper were also directed at comparing two similar optimized reference level-based algorithms in the considered model. The results indicated similar behavior of the system for both algorithms (a slight gain for the algorithm proposed in this paper). Further research in this area will be directed at understanding the influence of the different parameters of the simulations (e.g., content distribution within the servers) on the final results.
The Inflammatory Cytokine Profile of Patients with Malignant Pleural Effusion Treated with Pleurodesis Patients with malignant pleural effusion (MPE) who undergo successful pleurodesis survive longer than those for whom it fails. We hypothesize that the therapy-induced inflammatory responses inhibit cancer progression and thereby lead to longer survival. Thirty-three consecutive patients with MPE who were eligible for bleomycin pleurodesis between September 2015 and December 2017 were recruited prospectively. Nineteen patients (57.6%) achieved fully or partially successful pleurodesis, while 14 patients either failed pleurodesis or survived less than 30 days after it. Two patients without successful pleurodesis were excluded because of missing data. Interleukin (IL)-1 beta, IL-6, IL-10, transforming growth factor beta, tumor necrosis factor alpha (TNF-α), and vascular endothelial growth factor in the pleural fluid were measured before pleurodesis, and 3 and 24 h after it. The pleurodesis outcome and survival were monitored and analyzed. Patients who underwent successful pleurodesis had longer survival. Patients without successful pleurodesis had significantly higher TNF-α and IL-10 levels in their pleural fluid than the successful patients before pleurodesis. Following pleurodesis, there was a significant increment of IL-10 in the first three hours in the successful patients. In contrast, significant increments of TNF-α and IL-10 were found in the unsuccessful patients between 3 and 24 h after pleurodesis. The ability to produce specific cytokines in the pleural space following pleurodesis may be decisive for the patient's outcome and survival. Serial measurement of cytokines can help allocate the patients to adequate treatment strategies. Further study of the underlying mechanism may shed light on cytokine therapies as novel approaches. Introduction Pleurodesis is regarded as a symptomatic treatment to prevent fluid re-accumulation in patients with malignant pleural effusion (MPE) [1,2]. Talc pleurodesis has been demonstrated to have a favorable impact on the survival of patients with MPE [3,4]. Tremblay et al. showed that patients who spontaneously pleurodesed after indwelling pleural catheter placement survived longer [5]. Ren et al. reported that the intrapleural staphylococcal superantigen has a survival benefit, in addition to the resolution of MPE, in patients with non-small cell lung cancer [6]. Recently, we showed that patients who underwent successful minocycline pleurodesis had a longer cancer-specific survival than those for whom it failed [7,8]. The successfully induced inflammatory response is proposed to inhibit tumor invasion and metastasis, rather than acting simply as a physical barrier through fibrin formation [8]. There are continued improvements in our understanding of the molecular connections between inflammation and cancer [9,10]. While chronic inflammation might promote tumor formation, acute inflammation may well hamper the process, and is indeed used therapeutically to inhibit tumors [10,11]. Cytokines are signaling molecules that are key mediators of inflammation. They can be generally classified as pro-inflammatory or anti-inflammatory, and as tumorigenic or tumor-suppressive [12]. Interleukin-1 beta (IL-1β), IL-6, IL-10, transforming growth factor beta (TGF-β), and tumor necrosis factor alpha (TNF-α) are representative cytokines that are important modulators of inflammation and cancer progression [9,10].
Interleukin-6 and TNF-α are usually reported as pro-inflammatory cytokines. Interleukin-10 is a cytokine with anti-inflammatory properties. Interleukin-1 beta and TGF-β have a dual function and a pleiotropic nature. Vascular endothelial growth factor (VEGF) is a potent stimulator of angiogenesis and a mediator of pleural fluid formation [13]. In this study, serial measurements of IL-1β, IL-6, IL-10, TGF-β, TNF-α, and VEGF in the pleural fluid before and after chemical pleurodesis were performed prospectively in patients with MPE. They were correlated with the pleurodesis outcome and survival. We tried to find the differences in the cytokine profile between patients for whom pleurodesis succeeded and those for whom it did not, and to identify the cytokines decisive for the prognosis. Patients and Pleurodesis Consecutive patients with symptomatic MPE who were eligible for chemical pleurodesis at the Sun Yat-Sen Cancer Center, a 200-bed hospital, were prospectively recruited between September 2015 and December 2017. To reduce confounding factors, patients who underwent intrapleural urokinase therapy for loculated MPE or a trapped lung before pleurodesis were excluded [8,14]. Loculated MPE was defined as fluid collections with septa seen on chest computed tomography and/or ultrasonography, or air-fluid levels in the pleural space on the chest radiograph. A trapped lung was suggested by mechanical restriction of the visceral pleura preventing lung expansion. All the patients received treatment for the underlying primary tumors according to the current guidelines and were followed by the medical oncologists. All the recruited subjects signed an informed consent for the procedures and the laboratory study. The institutional review board of the Sun Yat-Sen Cancer Center approved this study (No. 20160223A and No. 20170220A). The study was also approved by the ethics committee of the Sun Yat-Sen Cancer Center, and it was conducted in accordance with the ethical principles stated in the Declaration of Helsinki and the guidelines on good clinical practice. Because the commercial production of minocycline in Taiwan was discontinued during the study period, bleomycin was adopted as the sclerosing agent [15]. In contrast to talc poudrage, which cannot be blown through a catheter, and talc slurry, which accumulates in the dependent areas, bleomycin allows a more even distribution in the pleural space. An 8 to 14 Fr self-retaining intrapleural catheter (SKATER™ Single Step Drainage Set; Argon Medical Devices, Athens, TX, USA) was inserted. Pleurodesis was indicated upon near-complete ipsilateral lung re-expansion, when the daily drainage had decreased to less than 150 mL for two consecutive days. Eligible patients were infused with 60 IU bleomycin (Nippon Kayaku Co. Ltd., Tokyo, Japan) diluted in 100 mL sterile saline via the intrapleural catheter. The catheter was clamped for three hours after the instillation of bleomycin, and then reopened for suction. Patients were encouraged to change positions during the treatment to facilitate the mixing of the bleomycin with the pleural fluid. The LENT prognostic score, comprising the lactate dehydrogenase level in the pleural fluid, the Eastern Cooperative Oncology Group performance score, the serum neutrophil-to-lymphocyte ratio, and the tumor type, was evaluated for each patient [16].
The driver oncogene status of the lung adenocarcinomas, such as epidermal growth factor receptor (EGFR) mutations and the echinoderm microtubule-associated protein-like 4-anaplastic lymphoma kinase (EML4-ALK) translocation, and the estrogen receptor/progesterone receptor/human EGFR 2 status of the breast cancers were recorded. Pleural Fluid Sample Collection In concordance with daily practice, a pleural fluid sample was obtained through the intrapleural catheter immediately before the pleurodesis, at three hours upon the reopening of the catheter, and at 24 h, prior to the removal of the catheter. The pleural fluid sample (up to 15 mL) was immediately centrifuged at 10,000× g for 10 min to remove cell debris and aggregates, and then stored at −80 °C until measurement. Measurement of Pleural Fluid Inflammatory Cytokines and VEGF The levels of IL-1β, IL-6, IL-10, TGF-β, TNF-α and VEGF in the supernatants were measured by Bio-Plex® Pro Human Cytokine Multiplex assays (Bio-Rad Laboratories, Hercules, CA, USA) with MagPlex beads in a flat-bottom microtiter plate according to the manufacturer's instructions. Antibody-coupled capture beads were prepared and plated. The plate was incubated on a shaker and streptavidin-phycoerythrin solution was added to the wells. After a last incubation step, the beads were resuspended in assay buffer and the absorbance was measured with a MagPix instrument using the xPONENT software (Luminex Corporation, Austin, TX, USA). Assessment of the Pleurodesis Outcomes and Analysis of the Survival Follow-up chest radiographs were obtained at one, three, and six months after pleurodesis and repeated as and when required. The success or failure of pleurodesis was determined according to the relevant definitions proposed by the American Thoracic Society and the European Respiratory Society Consensus Statement [1]. Complete success was defined as the long-term relief of symptoms related to the effusion, with the absence of fluid re-accumulation on the chest radiograph until death. Partial success was defined as the diminution of the dyspnea related to the effusion, with only partial re-accumulation of fluid (less than 50% of the initial level), with no further therapeutic thoracenteses required for the remainder of the patient's life. Lack of success, as defined above, was recorded as failed pleurodesis. Fair-to-moderate inter-observer agreement on the definition of the non-expandable lung has been reported [17]. Independent interpretation of the pleurodesis outcome by two assessors (T.C.S. and L.-H.H.), followed by a consensus judgement, was performed in the study to reduce observer bias. The survival time was calculated from the date of diagnosis of MPE and censored at the date of death or the last follow-up. The overall survival was compared between the patients in whom pleurodesis succeeded and those in whom it failed. The baseline values and the changes in IL-1β, IL-6, IL-10, TGF-β, TNF-α, and VEGF levels in the pleural fluids were compared between the patients with successful pleurodesis and those without. Because the pleural fluid concentrations of IL-1β, IL-6, IL-10, TGF-β, TNF-α, and VEGF were highly variable in patients with MPE [18], we also used the fold change within individual patients as a comparison [19]. In addition to the comparison between groups, comparisons were also made at different time points within each group.
Linear regression analyses were performed to measure the correlations between different cytokines before and 24 h after pleurodesis, and are presented with Pearson's correlation coefficients and the significance level. Statistical Analysis Descriptive statistics of mean, median, standard deviation, and frequency were used to process the demographic and laboratory data. Continuous variables were compared using the one-way ANOVA on ranks with SigmaPlot 14.0 (Systat Software, Inc.; San Jose, CA, USA), whereas categorical variables were compared using the chi-square test or Fisher's exact test. A p value of less than 0.05 was considered to represent statistical significance. Survival estimates were derived from Kaplan-Meier plots, while log-rank tests were used to assess the differences in survival among the groups, using the statistical software package SAS, version 9.4 (SAS Institute; Cary, NC, USA). Patients' Characteristics and Pleurodesis Outcomes There were 84 patients diagnosed with MPE in the study period. Thirty-three patients underwent simple bleomycin pleurodesis, and 19 patients (57.6%) achieved successful or partially successful pleurodesis (10 with breast cancer and nine with lung adenocarcinoma). Nine patients failed the pleurodesis (three with lung adenocarcinoma, three with breast cancer, one with small cell lung cancer, one with ovarian cancer, and one with bladder urothelial carcinoma), with pleural fluid re-accumulation before death. Five patients survived less than 30 days after pleurodesis, with too short a follow-up to evaluate the pleurodesis outcome (two with breast cancer, one with small cell lung cancer, one with ovarian cancer, and one with gastric cancer). The age, gender, and smoking history appeared comparable among the patients who succeeded pleurodesis, failed it, or survived less than 30 days (Table 1). Patients who underwent successful pleurodesis, failed the pleurodesis, or survived less than 30 days had mean LENT scores of 2.84, 3.67, and 4.40, respectively (p = 0.020). Survival Differences On follow-up, patients who underwent successful pleurodesis had a significantly longer overall survival than those for whom it failed (median, 367 vs. 81 days; p < 0.001) (Figure 1). Pleural Fluid Inflammatory Cytokines and VEGF between Groups Two patients with breast cancer had dry drainage at the 24 h collection, which led to missing data at that time point, and were excluded from the subsequent analysis; one had failed pleurodesis and the other survived less than 30 days. We combined the remaining patients with failed pleurodesis and the patients who survived less than 30 days into one group (n = 12) in the subsequent cytokine analysis, so as to compare them with the successful pleurodesis group (n = 19), considering the obvious prognostic difference and the aim of investigating the survival benefit of the patients who succeeded pleurodesis.
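As an aside, the Kaplan-Meier and log-rank comparison described in the statistical analysis can be sketched in a few lines of Python with the lifelines package — our substitution for the SAS 9.4 workflow actually used; the variable names are illustrative.

```python
# Durations are days from MPE diagnosis; `event` is 1 for death, 0 for
# censoring at the last follow-up.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_survival(days_success, event_success, days_failed, event_failed):
    kmf = KaplanMeierFitter()
    kmf.fit(days_success, event_observed=event_success, label="successful")
    print("median survival (successful):", kmf.median_survival_time_)
    kmf.fit(days_failed, event_observed=event_failed, label="failed")
    print("median survival (failed):", kmf.median_survival_time_)
    result = logrank_test(days_success, days_failed,
                          event_observed_A=event_success,
                          event_observed_B=event_failed)
    print("log-rank p-value:", result.p_value)
```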
For the baseline level, patients without successful pleurodesis had significantly higher pleural fluid TNF-α and IL-10 levels before pleurodesis than those who succeeded (Table 2, T1 values, and Figure 2). Following the instillation of bleomycin, there was a significant increment of IL-10 in the first three hours in patients who underwent successful pleurodesis. By contrast, there was a significant increment of TNF-α and IL-10 between 3 and 24 h in patients without successful pleurodesis (Table 3 and Figure 3). (Table footnote: data are expressed as median and range. IL, interleukin; TGF-β, transforming growth factor beta; TNF-α, tumor necrosis factor alpha; VEGF, vascular endothelial growth factor.) (Figure 2: measured cytokines in the pleural fluid at different time points. IL-1β, IL-6, IL-10, TGF-β, TNF-α, and VEGF levels were compared between patients who succeeded pleurodesis (n = 19) and those who did not (n = 12), and also between time points within each group. Box plots show the concentrations: the dots represent the 5th and 95th percentiles, the error bars the 10th to 90th percentiles, the box the 25th to 75th percentiles, and the line within the box the median. T1, before pleurodesis; T2, three hours after pleurodesis; T3, 24 h after pleurodesis; * p < 0.05; ** p < 0.01; *** p < 0.001.) (Figure 3: fold changes of the cytokines. IL-1β, IL-6, IL-10, TGF-β, TNF-α, and VEGF levels measured at the later time points were divided by those measured at the earlier time points and compared between the two groups; box plot conventions as in Figure 2; * p < 0.05; ** p < 0.01.) There was no significant difference in the baseline levels, or in the changes following pleurodesis, of IL-1β, IL-6, TGF-β, and VEGF between the patients who underwent successful pleurodesis and those who did not (Tables 2 and 3; Figures 2 and 3). Comparison of Pleural Fluid Inflammatory Cytokines and VEGF at Different Time Points within Each Group Compared with the relatively lower baseline level, there was a significant increase in IL-10 at 24 h in patients who underwent successful pleurodesis (Figure 2).
An initial suppression of TGF-β, followed by a subsequent elevation, was noted in both groups, with significant differences in patients who underwent successful pleurodesis. Discussion To the best of our knowledge, this is the first study to measure the inflammatory cytokine and VEGF changes following pleurodesis and correlate these factors with the outcome and survival. The possible confounders were taken into consideration (Table 1). The pleural fluid TNF-α levels were significantly higher in the malignant pleural effusion, which may be attributed to increased local production in the pleural cavity by macrophages, T-lymphocytes, or mesothelial cells upon exposure to the inflammatory process [20]. Patients without successful pleurodesis had a higher pleural fluid TNF-α level before pleurodesis. Following pleurodesis, they had an increment of TNF-α in the late stage (between 3 and 24 h). Tumor necrosis factor alpha is usually regarded as pro-inflammatory and tumorigenic. Deregulation of the TNF-α signaling pathway is associated with many inflammatory disorders, including rheumatoid arthritis and inflammatory bowel disease, and monoclonal antibodies against TNF-α are a standard treatment for these diseases. TNF-α has divergent effects on regulatory T cells, contributing to their development and accumulation, although it can downregulate their suppressive capacity in some instances [21,22]. Patients with successful pleurodesis had a significant increment of IL-10 in the first three hours following the instillation of bleomycin. The early surge of IL-10 following pleurodesis, accompanied by a longer survival, suggests an anti-tumor effect of IL-10. On the contrary, the higher pleural fluid IL-10 levels before pleurodesis and the late increment of IL-10 between 3 and 24 h following pleurodesis in patients without successful pleurodesis suggest the crucial role of IL-10 as a feedback regulator of the increased pro-inflammatory cytokine, TNF-α [12,23,24]. In fact, we also observed feedback regulation in patients who underwent successful pleurodesis when performing the intragroup comparison (Figure 2). IL-10 is usually regarded as an anti-inflammatory cytokine and has been reported to exert anti-tumor effects through increasing tumor antigen-specific CD8+ T cell infiltration and IFN-γ-mediated induction of antigen presentation [12,[25][26][27]. Pegylated IL-10 has been developed and shown to mount an effective anti-tumor immune response with long-lasting immunologic memory [28]. Clinical efficacy has been seen as monotherapy or in combination with anti-PD-1 antibodies [29]. However, IL-10 has paradoxical effects on different types of immune response and is considered a potential switcher of immunity [26,27]. The lack of difference in the baseline levels, or in the changes following pleurodesis, of IL-1β, IL-6, and TGF-β between patients with or without successful pleurodesis was consistent with their reported dual tumor-promoting or tumor-inhibitory function. Interleukin-6 is regarded as a pro-inflammatory cytokine, although certain anti-inflammatory activities have also been attributed to IL-6 [30,31]. The IL-6/JAK (Janus tyrosine kinase)/STAT (signal transducers and activators of transcription) signaling pathway is aberrantly hyperactivated in many types of cancer. Interleukin-6 also exerts immunosuppression in the tumor environment by stimulating the infiltration of myeloid-derived suppressor cells, tumor-associated neutrophils, and cancer stem-like cells.
Interleukin-1β activates innate immune cells, including antigen-presenting cells, and drives the polarization of CD4+ T cells towards T helper type (Th) 1 and Th17 cells, to exert anti-tumor effects [32]. Activation of the NLRP3 inflammasome in dendritic cells induces IL-1β-dependent adaptive immunity against tumors [33]. Contrarily, IL-1β within the tumor microenvironment has been reported to promote carcinogenesis, tumor growth, and metastasis through driving chronic non-resolved inflammation, endothelial cell activation, tumor angiogenesis, and the induction of immune-suppressive cells. In the intragroup comparison, both groups had an initial suppression of TGF-β, followed by a subsequent elevation, especially in patients who underwent successful pleurodesis (Figure 2). In the early stages of cancer, TGF-β functions as a tumor suppressor, while in the later stages, TGF-β exerts tumor-promoting effects [34][35][36]. Its effects also depend on the cellular context. TGF-β from the inflammatory tumor microenvironment may cause cancer cell apoptosis and tumor suppression. In contrast, it may also induce an epithelial-mesenchymal transition that promotes cancer cell invasion and metastasis, cancer stem cell heterogeneity, and drug resistance [34,35]. TGF-β has been adopted as a sclerosing agent for pleurodesis [37,38], and an anti-TGF-β antibody could inhibit pleural fibrosis in a rabbit empyema model [39]. Conversely, TGF-β inhibitors have been proposed as a new line of defense against cancers. Pleural fluid IL-1β and TNF-α were positively correlated, and pleural fluid IL-6 and TGF-β were negatively correlated, both before and 24 h after pleurodesis. The pleiotropic nature of IL-1β and TGF-β makes them challenging targets requiring further study [36]. Similar to the study of Hooper et al. [40] and our earlier study [7], no association existed between the baseline pleural fluid VEGF levels and pleurodesis failure. There was also no association between the changes in pleural fluid VEGF following pleurodesis and the outcome. The mesothelium itself may regulate the first steps of the pleural fibrosis following the instillation of a sclerosing agent, through the inflammatory response, which is decisive for the pleurodesis outcome and survival [41,42]. Further studies to observe the release of inflammatory cytokines from the mesothelial cells or other cells within the pleural space, such as macrophages, after the addition of the sclerosing agent, and the effects of the conditioned media, cytokines and their antagonists on cancer cell viability, apoptosis, pyroptosis, proliferation, migration, and invasion, are warranted to clarify the mechanism [43][44][45]. The choice of MPE management, i.e., chemical pleurodesis, indwelling pleural catheter drainage or repeat thoracenteses, depends on the expected survival, lung expandability, and cost-effectiveness, and remains a challenge for clinicians [1,2]. The LENT scoring system appeared to be a valuable prognostic score in patients with MPE [16], as it predicted the shorter survival of the patients who failed pleurodesis, and it may aid clinical decision making in this diverse patient population. The ability to produce specific cytokines in the pleural space after the instillation of the sclerosing agent may be decisive for the pleurodesis outcome and survival.
The time points of pleural fluid collection, chosen in concordance with the daily practice of pleurodesis, allow us to measure the cytokine changes and, in the future, to allocate patients to adequate treatment strategies. The pleural fluid cytokine concentration 3 h after pleurodesis might be diluted by the amount of saline instilled with the bleomycin. This is a drawback and may need to be corrected for using the protein concentrations in the pleural fluid. Manipulation of the co-stimulatory or co-inhibitory checkpoint proteins, such as PD-1 and PD-L1, allows for the reversal of tumor-induced T-cell anergy. Cytokines or their specific inhibitors involved in the signaling between the tumor cells and the microenvironment have not, as yet, been systematically studied. In addition to the immune checkpoint inhibitors, recombinant cytokines can potentially increase the number of patients who will benefit from immunotherapy [46]. Strategies to target the tumor immunosuppressive network, rather than targeting a single molecule, should be established in the future. In addition, intrapleural cytokine therapy may have the benefit of focused treatment without a systemic effect for patients with MPE [36]. In this study, more patients with symptomatic MPE underwent chemical pleurodesis (33/84, 39.3%) than in our earlier series (33.2%) and the historical control (24%) [8,47]. However, there was still attrition in the subsequent analysis, such as dry drainage leading to missing data at some time points. The tumor heterogeneity of the study group is another concern. To confirm the study findings, such limitations need to be addressed, and they could be overcome in the future with the recruitment of more, adequately selected patients. Conclusions The ability to produce specific cytokines in the pleural space following pleurodesis may be decisive for the patient's outcome and survival. Serial measurement of cytokines can help allocate the patients to adequate treatment strategies. Further study of the underlying mechanism may shed light on cytokine therapies as novel approaches.
The JHU Parallel Corpus Filtering Systems for WMT 2018 This work describes our submission to the WMT18 Parallel Corpus Filtering shared task. We use a slightly modified version of the Zipporah Corpus Filtering toolkit (Xu and Koehn, 2017), which computes an adequacy score and a fluency score on a sentence pair, and use a weighted sum of the scores as the selection criterion. This work differs from Zipporah in that we experiment with using the noisy corpus to be filtered to compute the combination weights, and thus avoid generating synthetic data as in standard Zipporah. Introduction Today's machine translation systems require large amounts of training data in the form of sentences paired with their translations, which are often compiled from online sources. This has not changed fundamentally with the move from statistical machine translation to neural machine translation; we have also observed that neural models require more training data (Koehn and Knowles, 2017) and are more sensitive to noise (Khayrallah and Koehn, 2018). Thus both the acquisition of more training data, such as by indiscriminate web crawling, and corpus filtering have a large impact on the quality of state-of-the-art machine translation systems. The JHU submission to the WMT18 Parallel Corpus Filtering shared task uses a modified version of the Zipporah Corpus Filtering toolkit (Xu and Koehn, 2017). For a sentence pair, Zipporah uses a bag-of-words model to generate an adequacy score, and an n-gram language model to generate a fluency score. The two scores are combined based on weights trained to separate clean data from noisy data. The original version of Zipporah generates artificial noisy training data to train such a classifier; in this submission we also treat the Paracrawl corpus as the negative examples. Related Work Zipporah builds upon prior work in data cleaning and data selection. For data selection, work has focused on selecting a subset of data based on domain matching. Moore and Lewis (2010) computed the cross-entropy between in-domain and out-of-domain language models to select data for training domain-relevant language models. XenC (Rousseau, 2013), an open-source tool, also selects data based on cross-entropy scores from language models. Axelrod et al. (2015) utilized part-of-speech tags and used a class-based n-gram language model for selecting in-domain data, and Duh et al. (2013) used a neural network based language model trained on a small in-domain corpus to select from a larger mixed-domain data pool. Lü et al. (2007) redistributed weights over sentence pairs/predefined sub-models. Shah and Specia (2014) described experiments on quality estimation which, given a source sentence, select the best translation among several options. For data cleaning, work has focused on removing noisy data. Taghipour et al. (2011) proposed an outlier detection algorithm which leads to improved translation quality when trimming a small portion of the data. Cui et al. (2013) used a graph-based random walk algorithm for bilingual data cleaning. BiTextor (Esplá-Gomis and Forcada, 2009) utilizes sentence alignment scores and source URL information to filter out bad URL pairs and select good sentence pairs. Similar to this work, the qe-clean system (Denkowski et al., 2012; Dyer et al., 2010; Heafield, 2011) uses word alignments and language models to select sentence pairs that are likely to be good translations of one another.
We focus on data cleaning for all purposes, as opposed to data selection for a given domain. We aim to create a corpus of generally valid translations, which could then be filtered to adapt to a particular domain.

Zipporah
We use a slightly modified version of the Zipporah Corpus Filtering toolkit (Xu and Koehn, 2017). Zipporah works as follows: it first maps all sentence pairs into the proposed feature space, and then trains a simple logistic regression model to separate known good data from bad data. Once the model is trained, it is used to score sentence pairs in the noisy data pool.

Zipporah uses two features inspired by adequacy and fluency. The adequacy feature uses bag-of-words translation scores, and the fluency feature uses n-gram language model scores.

Adequacy Score
Zipporah generates probabilistic dictionaries from an aligned corpus, and uses them to generate bag-of-words translation scores for each sentence. This is done in both directions. Given a sentence pair (s_f, s_e) in the noisy data pool, we represent the two sentences as two sparse word-frequency vectors v_f and v_e. For example, for any French word w_f we have v_f(w_f) = c(w_f, s_f) / l(s_f), where c(w_f, s_f) is the number of occurrences of w_f in s_f and l(s_f) is the length of s_f. We do the same for v_e. Then we "translate" v_f into v'_e based on the probabilistic f2e dictionary, where v'_e(w_e) = Σ_{w_f} v_f(w_f) p(w_e | w_f). For a French word w that does not appear in the dictionary, we keep it as it is in the translated vector, i.e. we assume there is an entry of (w, w, 1.0) in the dictionary. We compute the smoothed cross-entropy between v_e and v'_e, xent(v_e, v'_e) = Σ_w v_e(w) log[v_e(w) / (v'_e(w) + c)], where c is a smoothing constant to prevent the denominator from being zero, which we set to c = 0.0001 for all experiments. We perform the similar procedure for English-to-French, and compute xent(v_f, v'_f). We define the adequacy score as the sum of the two: score_adequacy = xent(v_e, v'_e) + xent(v_f, v'_f).

Fluency Score
Zipporah trains two 5-gram language models on clean French and English corpora, and then for each sentence pair (s_f, s_e) scores each sentence with the corresponding model, F_ngram(s_f) and F_ngram(s_e), each computed as the ratio between the sentence negative log-likelihood and the sentence length. We define the fluency score as the sum of the two: score_fluency = F_ngram(s_f) + F_ngram(s_e).

Classifier
We train a binary classifier to separate a clean corpus from noisy corpora, based on the two features proposed. Higher orders of the features are used in order to achieve a non-linear decision boundary. We implement this using the logistic regression model from scikit-learn (Pedregosa et al., 2011), and use the features in the form of (x^8, y^8).

Training Data
We use clean WMT training data as the examples of clean text. The original version of Zipporah creates synthetic negative training examples by shuffling the clean data set, both at the corpus and sentence levels, in order to generate inadequate and non-fluent text. Since much of the raw Paracrawl data is noisy (Khayrallah and Koehn, 2018), we also train a version where we simply use the portion of Paracrawl released for the shared task as the negative examples to train our classifier, without generating synthetic noisy data. We experiment with using both the full portion of Paracrawl and a 10,000-line subset.

Results
We include the results of running the three versions of Zipporah in Table 1. The final column is the average score across the 6 test sets.
• Zipporah-synthetic denotes the system with synthetic negative examples, as in the original version of Zipporah.
• Zipporah-paracrawl denotes the system trained with Paracrawl as the negative examples.
• Zipporah-paracrawl-10000 denotes the system trained with a 10,000-sentence subset of Paracrawl.

In general, our systems lag behind the top performing systems by about 3 BLEU on the average of the six test sets. The different Zipporah systems perform similarly, with a slight edge to the original version with synthetic parallel data. This indicates that a subset can be used for faster training of Zipporah.

Zipporah does not require building an initial NMT system to score the data, as required by some of the top performing systems. Zipporah also has a very fast run time, the most expensive part being the language model scoring.

Our submissions are more competitive in the SMT experiments, and lag behind the top performing system by less than a BLEU point (averaged across the test sets) for SMT systems trained on 100 million sentences. This may be due to the fact that Zipporah's adequacy and fluency scores directly track the translation and language model components of SMT.

Conclusion
Our submission to the WMT 2018 shared task on parallel corpus filtering was based on our Zipporah toolkit. We varied the methods used to generate negative samples for the classifier that detects noisy sentence pairs, with similar results for synthetic noise, the full raw corpus to be filtered, and a subset of it. We note that our method is quite simple and fast, using only n-gram language model and bag-of-words translation model features.

Table 1: Results of our Zipporah variants, compared to the submission with the best average test score.
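As a concrete illustration of the adequacy feature described above, the following minimal Python sketch computes the bag-of-words cross-entropy score for a sentence pair. The dictionary format {src_word: [(tgt_word, prob), ...]} and all helper names are illustrative assumptions, not Zipporah's actual implementation; the smoothing constant c = 0.0001 and the pass-through handling of out-of-dictionary words follow the description in the text.

```python
import math
from collections import Counter

def word_freq_vector(sentence):
    """Sparse word-frequency vector: counts normalized by sentence length."""
    words = sentence.split()
    n = len(words)
    return {w: c / n for w, c in Counter(words).items()}

def translate_vector(v_src, dictionary):
    """Translate a source-side frequency vector through a probabilistic
    dictionary. Unknown words pass through unchanged, i.e. an implicit
    (w, w, 1.0) entry is assumed."""
    v_out = {}
    for w_src, freq in v_src.items():
        for w_tgt, p in dictionary.get(w_src, [(w_src, 1.0)]):
            v_out[w_tgt] = v_out.get(w_tgt, 0.0) + freq * p
    return v_out

def xent(v_ref, v_hyp, c=0.0001):
    """Smoothed cross-entropy between a reference vector and a translated
    vector; c keeps the denominator inside the log away from zero."""
    return sum(f * math.log(f / (v_hyp.get(w, 0.0) + c))
               for w, f in v_ref.items())

def adequacy_score(s_f, s_e, f2e, e2f):
    """Sum of the two directional cross-entropies, as defined in the text."""
    v_f, v_e = word_freq_vector(s_f), word_freq_vector(s_e)
    return (xent(v_e, translate_vector(v_f, f2e))
            + xent(v_f, translate_vector(v_e, e2f)))
```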
1,941.8
2018-01-01T00:00:00.000
[ "Computer Science" ]
Gradient descent localization in wireless sensor networks
Meaningful information sharing between the sensors of a wireless sensor network (WSN) necessitates node localization, especially if the information to be shared is the location itself, such as in warehousing and information logistics. Trilateration and multilateration positioning methods can be employed in two-dimensional and three-dimensional space, respectively. These methods use distance measurements and analytically estimate the target location; they suffer from decreased accuracy and computational complexity, especially in the three-dimensional case. Iterative optimization methods, such as gradient descent (GD), offer an attractive alternative and enable moving-target tracking as well. This chapter focuses on positioning in three dimensions using time-of-arrival (TOA) distance measurements between the target and a number of anchor nodes. For centralized localization, a GD-based algorithm is presented for localization of moving sensors in a WSN. Our proposed algorithm is based on systematically replacing anchor nodes to avoid the local minima positions which result from the moving target deviating from the convex hull of the anchors. We also propose a GD-based distributed algorithm to localize a fixed target by allowing gossip between anchor nodes. Promising results are obtained in the presence of noise and link failures compared to centralized localization. Convergence factor issues are discussed, and future work is outlined.

Introduction
Wireless sensor networks are used in a wide range of monitoring and control applications such as traffic monitoring, environmental monitoring of air, water, soil quality or temperature, smart factory instrumentation, and intelligent transportation. The nodes are usually small radio-equipped low-power sensors scattered over an area or volume of a few tens of square or cubic meters, respectively. There is information sharing between sensors and, for this information to be meaningful, the nodes or sensors have to be located. Although global positioning systems (GPS) achieve powerful localization, it is costly and impractical to equip each sensor in a WSN with a GPS device. Besides, in many environments such as indoors and forested zones, the GPS signal may be weak or even unavailable. This explains the vast ongoing research devoted to efficient localization for WSNs.

Node information may be processed either centrally or in a distributed manner. In centralized localization, distance measurements are collected by a central processor prior to calculation. In distributed algorithms, the sensors share their information only with neighbors, but possibly iteratively. Both methods face the high cost of communication but, in general, centralized localization produces more accurate location information, whereas distributed localization offers more scalability and robustness to link failures.
Node localization relies on measurements of the distances between the nodes to be localized and a number of reference or anchor nodes. The distance measurements can be via radio frequency (RF), acoustic, or ultra-wideband (UWB) signals. Measurements that indicate distance can be time of arrival (TOA), angle of arrival (AOA), or received signal strength (RSS). TOA measurements seem to be the most useful, especially in low-density networks, since they are not as sensitive to inter-device distances as AOA or RSS. The TOA distance measurements usually correspond to line-of-sight (LOS) arrivals that are hampered by additive noise. The consequent measurement errors can be adequately modeled by zero-mean Gaussian noise with variance σ². The inclusion of a mean μ in this Gaussian model may be necessary to account for possible non-line-of-sight (NLOS) arrivals.

Accurate location information is important in almost all real-world applications of WSNs. In particular, localization in a three-dimensional (3D) space is necessary as it yields more accurate results. Trilateration and multilateration positioning methods [1] are analytical methods employed in two-dimensional (2D) and three-dimensional (3D) spaces, respectively. These methods use distance measurements to estimate the target location analytically, and suffer from poor performance, decreased accuracy, and computational complexity, especially in the 3D case. More specifically, trilateration is the estimation of node location through distance measurements from three reference nodes, such that the intersection of three circles is computed, thereby locating the node as shown in Figure 1. Multilateration is concerned with localization in a 3D space in which more than three reference nodes are used [2].

Practically, when distance measurements are noisy and fluctuating, localization becomes difficult. The intersection point in Figure 1 becomes an overlapped region. With this uncertainty, analytical methods become almost useless and we resort to optimization methods. Iterative optimization methods offer an attractive alternative solution to this problem. The Kalman filter, which is an iterative state estimator, can be used for node localization in the case of noisy measurements. However, its computational and memory requirements may not be met adequately by the limited resources of a sensor system, subsequently resulting in poor performance [3]. Thus, the most common iterative optimization method is the computationally efficient gradient descent algorithm, which has been widely dealt with in the literature for the 2D case [4,5]. This chapter addresses localization in a three-dimensional space of stationary and moving wireless sensor network nodes by gradient descent methods. First, it is assumed that a central processor collects the data from the nodes. TOA measurements will be assumed throughout. An evaluation analysis of the performance of the localization algorithm considered is performed. The effect of measurement noise has also been studied. The work also investigates tracking of moving sensors and proposes a method to counteract some associated problems such as falling into local minima [6]. Second, distributed GD localization will be handled using a proposed gossip-based technique in which anchor nodes exchange data to iteratively compute the positions and gradients locally in each anchor [7]. This distributed method serves to mitigate the effects of noise and link failures.

Centralized gradient descent (GD) localization in 3D wireless sensor networks
2.1. Stationary node localization
Localization in 3D space is particularly important in practical applications of WSNs, but many of its aspects remain unexplored, as the typical scenario for WSN localization is set up in a 2D plane [8]. In a 3D space, at least four anchor nodes are needed whose locations are known. An estimate of the ith distance d_i, i = 1, 2, 3, 4, between the ith anchor node (x_i, y_i, z_i) and the node to be localized (x, y, z) is needed.

The TOA distance measurement technique is assumed. TOA is the time delay between transmission at the node to be localized and reception at an anchor node. This is equal to the distance d_i divided by the speed of light if either RF or UWB signals are used. The backbone of the TOA distance measurement technique is the accuracy of the arrival time estimates. This accuracy is hampered by additive noise and NLOS arrivals. The measurement errors are modeled as additive zero-mean Gaussian noise. The total additive Gaussian measurement noise will be modeled as N(μ, σ²_NLOS), where the letter N denotes the normal or Gaussian distribution, μ is the mean, and σ²_NLOS is the variance, taking into account NLOS as well as LOS arrivals. The occasional inclusion of a mean accounts for the biased location estimate resulting from NLOS errors [9,10].

To determine the TOA in asynchronous WSNs, two-way TOA measurements are used. In this method, one sensor sends a signal to another that immediately replies. The first sensor will then determine the TOA as the delay between its transmission and reception divided by two [10].

Gradient descent iterative optimization in three dimensions results in slower convergence when compared to the 2D case, due to tracking along an extra dimension. This is true for all iterative optimization methods. Due to the limited exploration of 3D scenarios in the literature, the present work presents practical results relating to the GD localization problem in three-dimensional WSNs. The definition of an objective or error function is normally required for optimization methods, whose purpose is to minimize this function to produce the optimal solution. In GD localization, the objective error function is usually defined as the sum of squared distance errors from all anchor nodes. As such, we may write the objective error function as

f(p) = Σ_{i=1}^{N} (d̂_i − d_i)²,  (1)

where the estimated and measured distances are

d̂_i = √((x − x_i)² + (y − y_i)² + (z − z_i)²),  d_i = c(t_i − t_o),  (2)

and where p = [x, y, z]^T is the vector of unknown position coordinates (x, y, z), t_i is the receive time of the ith anchor node, t_o is the transmit time of the node to be localized, c is the speed of light (= 3 × 10⁸ m/s), and N is the number of anchor nodes. The difference (t_i − t_o) is the TOA that can be measured (with measurement noise) in asynchronous WSNs, as explained.

Minimization of the objective function produces the optimal solution, which is the position estimate of the node to be localized. This problem is solved iteratively using GD as follows:

p_{k+1} = p_k − α g_k,  (3)

where p_k is the vector of the estimated position coordinates, α is the step size, and g_k is the gradient of the objective function, given by

g_k = ∇f(p_k) = [∂f/∂x, ∂f/∂y, ∂f/∂z]^T evaluated at p_k.  (4)

If we define the term B_{k,i} as

B_{k,i} = 2 (d̂_{k,i} − d_i) / d̂_{k,i},  (5)

then the three components of the gradient vector at the kth iteration will be

∂f/∂x |_k = Σ_{i=1}^{N} B_{k,i} (x_k − x_i),  (6)
∂f/∂y |_k = Σ_{i=1}^{N} B_{k,i} (y_k − y_i),  (7)
∂f/∂z |_k = Σ_{i=1}^{N} B_{k,i} (z_k − z_i).  (8)

The initial position coordinates may be chosen to be the mean position of all anchor nodes. The required number of iterations for convergence is a tradeoff between energy consumption, which is critical in WSNs, and the degree of accuracy.
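A minimal NumPy sketch of Eqs. (1)-(8) is given below; it reproduces the five-anchor scenario quoted later in the text. The function and variable names are illustrative, and the small guard on d̂ is an added safety assumption.

```python
import numpy as np

def gd_localize(anchors, d_meas, alpha=0.25, iters=100, p0=None):
    """Gradient descent localization from TOA-derived distances.

    anchors: (N, 3) array of anchor positions
    d_meas:  (N,)  measured distances d_i = c (t_i - t_o)
    Minimizes f(p) = sum_i (d_hat_i - d_i)^2 via Eqs. (3)-(8).
    """
    anchors = np.asarray(anchors, dtype=float)
    d_meas = np.asarray(d_meas, dtype=float)
    # Initial guess: the mean anchor position, as suggested in the text.
    p = anchors.mean(axis=0) if p0 is None else np.asarray(p0, dtype=float)
    for _ in range(iters):
        diff = p - anchors                                  # rows: p - a_i
        d_hat = np.maximum(np.linalg.norm(diff, axis=1), 1e-9)
        B = 2.0 * (d_hat - d_meas) / d_hat                  # Eq. (5)
        g = (B[:, None] * diff).sum(axis=0)                 # Eqs. (6)-(8)
        p = p - alpha * g                                   # Eq. (3)
    return p

# Five-anchor scenario from the text, with noise-free distances:
anchors = np.array([[10, 100, 10], [100, 90, 10], [10, 70, 100],
                    [100, 80, 100], [90, 90, 150]], dtype=float)
target = np.array([60.0, 90.0, 60.0])
d = np.linalg.norm(anchors - target, axis=1)
print(gd_localize(anchors, d))   # converges near (60, 90, 60)
```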
A minimum of four anchor nodes is needed to estimate position in a 3D space. The estimation accuracy increases as a function of the number of anchor nodes. Since the objective function is the sum of the squares of the differences between estimated distances and measured distances, distance measurement errors are squared too. This problem is countered by weighting the distance measurements according to their confidence, to limit the effect of measurement errors on the localization results [11]. So the objective function accommodating different weights may be expressed as

f(p) = Σ_{i=1}^{N} w_i (d̂_i − d_i)²,

where w_i is the weight given to the ith measurement. Weighting, however, may result in sub-optimal solutions if only four anchor nodes are used. Since usually there are only a few anchors in a real WSN [12], the use of five anchor nodes is a good choice to achieve better accuracy without undue deviation from realistic settings.

In a 3D WSN, the error function of Eq. (1) is a 4D performance surface with a global minimum and several local minima. To avoid local minima, the gradient descent must run several times with different starting points, which is expensive computationally. To better visualize the local minima problem, localization in a 2D space is considered, to enable performance surface plotting in a 3D space. Three anchors (30, 45), (80, 65), and (10, 80) are chosen, with d_i = 32.0156, 83.2166, and 60.0000 corresponding to the point p = (10, 20). Then, plotting the objective function

f(x, y) = Σ_{i=1}^{3} (√((x − x_i)² + (y − y_i)²) − d_i)²

results in Figure 2, viewed with azimuth = 90° and elevation = 0°. The presence of a global minimum at p and a neighboring local minimum can be discerned from Figure 2. Therefore, a GD search of the minimum along the performance surface often gets trapped in a local minimum, especially when tracking a moving node. In the following section, a solution will be presented to solve the local minima problem in a moving sensor localization setting.

Simulation scenario
GD localization in a 3D WSN is simulated in MATLAB. The anchor node locations are chosen at random in a volume of 200 × 200 × 200 m³. It is assumed that the target node to be localized (whether stationary or moving) has all anchor nodes within its radio range, and that the target node lies within the convex hull of the anchors. The LOS and NLOS measurement noise is assumed to obey a normal distribution N(μ, σ²). In the subsequent simulations, noisy TOA measurements are simulated by adding a random component to the exact value of the time measurement. The latter is readily computed for simulation purposes from knowledge of the exact position of the node to be localized, the anchor positions, and the speed of light c.

Simulation results
We first consider four anchor nodes to localize a node at position (60, 90, 60) in the 3D space, assuming that the standard deviation (SD) of the zero-mean Gaussian TOA measurement noise, the convergence factor or step size, and the number of iterations are SD = 0.001 µs, α = 0.25, and j = 100, respectively. The anchor positions are (10, 100, 10), (100, 90, 10), (10, 70, 100), and (100, 80, 100). Simulation results localized the target node at (60.28, 84.02, 58.65). When five anchor nodes are used, they provide an almost ideal target localization of (60.16, 89.64, 60.09). The fifth anchor position is (90, 90, 150). The issue of energy consumption may appear to disfavor iterative methods compared to analytical methods. This is not the case, however, when the target is moving, since updating would then be a must whether iterative or other methods are employed.
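The 2D example above is easy to reproduce. The short sketch below evaluates f(x, y) on a grid so that the global minimum near (10, 20), together with the competing local minimum, can be located numerically; the grid extent and resolution are arbitrary choices.

```python
import numpy as np

# The 2D example from the text: three anchors and the distances measured
# from p = (10, 20). Evaluating f(x, y) on a grid exposes both the global
# minimum and the neighboring local minimum of the performance surface.
anchors = np.array([[30.0, 45.0], [80.0, 65.0], [10.0, 80.0]])
d = np.array([32.0156, 83.2166, 60.0000])

xs, ys = np.meshgrid(np.linspace(0, 100, 201), np.linspace(0, 100, 201))
f = sum((np.sqrt((xs - ax)**2 + (ys - ay)**2) - di)**2
        for (ax, ay), di in zip(anchors, d))

i, j = np.unravel_index(np.argmin(f), f.shape)
print("grid minimum near:", xs[i, j], ys[i, j])   # close to (10, 20)
```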
Moving node localization and tracking
GD can be used to track a moving target in real time. The measurement sample interval determines the measurement update rate. A bit of care is required in adjusting the sample interval to avoid conflict with the moving sensor velocity and motion models, which may be completely unknown [9]. The moving node must provide multiple measurements to the anchors as it moves across space. It has the opportunity to reduce environment-dependent errors as it averages over space. Many computational aspects of this problem remain to be explored [10].

In Refs. [13,14], the problem of avoiding local minima for moving sensor localization is handled by smart use of available anchors and good initialization. Although these works are also based on minimizing cost functions, they are not general GD algorithms. Moreover, these works require a good initial estimate of the target location. It is therefore worthwhile to attempt achieving moving sensor localization without the need to estimate the initial moving target location. As a solution to this problem, we introduce the concept of diversity into the iterative GD localization problem.

The algorithm below is proposed in Ref. [6] to localize a moving sensor in a 3D space with the provision of local minima avoidance. The foreseen success of the proposed method is based on the idea that, as the updated position begins to wander away from the global minimum in the direction of a local minimum, it is highly probable that it will return to the right track if some anchor nodes are replaced. Anchor node replacement results in a consequent change in the performance surface shape, and hence in the local minima positions.

Algorithm 1: Proposed GD localization of a moving sensor [6]
1. Estimate a suitable measurement sample interval or update rate.
2. Cluster the available anchor nodes into sets of five nodes each. The number of resulting sets P will be P = C(N, 5) = N! / (5! (N − 5)!), where N is the total number of heard anchor nodes.
3. Randomly draw M sets from the P sets, obeying a uniform distribution.
4. Perform M independent gradient descent localization procedures on the moving sensor using these M sets.
5. Iterate the gradient descent algorithm up to the L-th update, and calculate the final f(p) for each of the M sets. Discard the sets that produce f(p) greater than a certain threshold γ. Find the point p with the minimum f(p).
6. Stop the algorithm if the moving sensor tracking halts.
7. Complete the M sets by randomly choosing other sets from the P sets, and repeat steps 4-6 starting with the final position of p that corresponds to the minimum f(p).
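Assuming the gd_localize sketch shown earlier is available, the core of one tracking segment (steps 3-5 and 7) might look as follows; the measurement callback and the parameter defaults (M = 10, L = 150, γ = 7) mirror the values quoted later in the text, but all names are illustrative.

```python
import itertools
import random
import numpy as np

def algorithm1_segment(anchors, measure, p_start, M=10, L=150, gamma=7.0,
                       alpha=0.1):
    """One segment of Algorithm 1 with anchor-set diversity.

    anchors: (N, 3) array of heard anchor positions
    measure: callable taking a list of anchor indices and returning the
             current measured distances to those anchors
    Returns the best position estimate and its final error f(p).
    """
    N = len(anchors)
    all_sets = list(itertools.combinations(range(N), 5))  # the P = C(N, 5) sets
    drawn = random.sample(all_sets, M)                    # step 3

    best_p, best_f = None, float("inf")
    for idx in drawn:                                     # step 4: M GD runs
        A = anchors[list(idx)]
        d = measure(list(idx))
        p = gd_localize(A, d, alpha=alpha, iters=L, p0=p_start)
        f = float(np.sum((np.linalg.norm(A - p, axis=1) - d) ** 2))
        if f <= gamma and f < best_f:                     # step 5: threshold
            best_p, best_f = p, f
    return best_p, best_f               # step 7 restarts the next segment here
```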
The different parameters appearing in Algorithm 1 should be properly chosen. These are M, N, L, and the threshold γ. As discussed in the problem description, N should not be unduly large in practical settings. Assuming that five anchors per set are involved in localization, N must not be much greater, especially when the WSN area or volume is limited. As for M, it naturally determines the computational overhead; GD localization must run M times in each round of position estimation. To reduce the amount of computation to a minimum, the choice of M must achieve a tradeoff between computational complexity and sufficient diversity of anchor sets, in order to cancel unsuitable candidates and retain functional ones. The threshold γ depends on the specific application and how tolerant the latter is to the final value of the error function f(p). In the simulations, the moderate value of 7 m² is used as a default setting. This means that the estimated squared distance error associated with each anchor is (7/5) m² on average, according to Eq. (1).

As for L, it has been assigned the value of 150 iterations in the present simulation settings, which is, however, an ad hoc value that worked for the particular settings under consideration. To ensure accurate tracking, a check on the error function of all running estimations can be performed after each certain interval (e.g., 30 iterations), and then the decision is made whether to proceed or to replace the diverging sets.

Applying an iterative optimization algorithm for M subsets of heard anchors, where M can be as large as 20, has been implemented in Ref. [12], albeit without diversity, in the context of least median square (LMS) secure node localization in WSNs to combat localization attacks. Algorithm 1 has been inspired by Ref. [12] by adapting it to:
1. Suit the simpler iterative GD localization algorithm, with the aim of local minima avoidance rather than secure localization.
2. Repeat itself with diversity to avoid divergence due to local minima as the target moves along its path.

A final remark concerns the communication overhead; the proposed algorithm does not add to the communication complexity. With each iteration, and after the sensing has been achieved, only one broadcast (communication) of the distance measurement is needed from each of the N anchors. It is in the fusion center that the various combinations of the P sets are sorted out and their associated computations performed.

Simulation results
In the following scenarios, a moving node is tracked and localized. We assume five anchor nodes, since this offers the best estimation accuracy. We first illustrate GD tracking of a node moving along a helical path (Figure 5). The three dimensions representing the moving target location are given by

x = r cos θ,  y = r sin θ,  z = kθ.

The angle θ is continuously increasing, and r and k are constants. Figure 5 shows the moving node's helical path and its GD track for values of θ varying from zero to 2π, together with the anchor positions shown as small circles. The anchors are assumed to be in the radio range of the helical trajectory. The constant values of r and k are 40 and 20, respectively. Noise-free distance measurements are assumed throughout. Next, and to better illustrate the proposed algorithm in Ref.
[6] for moving target tracking, and the effect of the various inherent parameter values, a straight-line path segment is considered. The details are outlined in the following steps:
a. A target node moves 0.5 m in each of the three x, y, and z axes in each of 200 steps, which gives a true track distance of 100 m per dimension. The true track is illustrated by the straight line in Figure 6. The estimated track begins with an initial point of (50, 50, 50) and converges to the true track for a while, but then deviates from it due to the local minima associated with this problem. This deviation is shown clearly in Figure 6.
b. The same scenario is repeated, except that the track is divided into two segments. The first segment uses the same previous anchor nodes. In the second segment, the anchor nodes have been changed in an attempt to avoid the local minimum and resume tracking the true path. Figure 7 shows the corrected tracking behavior and the new set of anchor nodes.
c. The proposed method of Algorithm 1 is applied with N = 7, resulting in P = 21; that is, seven anchor nodes are clustered in 21 sets of five anchor nodes each. M is chosen equal to 10 and L equal to 150. The threshold is chosen as γ = 7. At the 150th update, the final f(p) is calculated for each of the 10 sets. The sets that produce an error function greater than 7 are discarded, and other sets from the remaining 11 sets are chosen to complete the 10 sets, starting with the final position of p that corresponds to the minimum f(p). Iterative computations are continued for another 150 updates and the optimum set is retained.

It is worth noting that in the second segment, the unsuccessful sets of the first segment can be replaced in a deterministic manner rather than randomly, since one would by then have an idea of the location of the moving target. This is especially convenient for WSNs with widely scattered sensors, where sets with nodes that are distant from the moving target, and that are likely to contribute to poor localization, can be discarded.

Future work may consider introducing distance-measurement noise and studying its effect on the performance of the proposed algorithm. In that case, the final f(p) may not be enough indication of the validity of any certain set of anchors, due to the noisy measurements. So averaging f(p) over the last 10 iterations of each segment of the estimated path, and for all M running sets, may be considered to obtain a more accurate comparison and a judicious subsequent selection of sets.

Distributed gradient descent (GD) localization in 3D wireless sensor networks
In Ref. [7], the authors propose a distributed GD localization method that is robust against node and link failures. The computation of sums is inherent in the GD localization problem and can therefore be made distributed by applying gossip-based distributed summing or averaging algorithms. It can be seen from Eqs.
(1) and (6)-(8) that there are four N-term sums that have to be computed in each iteration of the GD localization algorithm. For each of the four sums, the set of variables that constitutes each of the N terms is resident in one of the N anchor nodes. This set of variables includes the current tracked or estimated position, the corresponding distance measurement, and the location of the anchor node itself. This readily implies the possibility of computing each of the four sums in a distributed manner by sharing information (gossiping) among the anchor nodes. Upon completion of the distributed averaging or summing task, each anchor will possess an estimated value of all four sums, and then Eq. (3) can be computed in each anchor to obtain the estimated position of the node to be localized. This whole process is repeated in each iteration of the GD localization algorithm.

The averaging or summing problem is the building block for solving many complex problems in signal processing. Gossip algorithms [15] are a class of randomized algorithms that solve the averaging problem through a sequence of pairwise averages. In our case, the communicating or gossiping nodes are the anchors, and we assume they are within transmission range of each other. Therefore, a simple gossip-based synchronous averaging protocol, called the push-sum (PS) distributed algorithm [15,16], is used for this application.

The push-sum gossip-based distributed averaging algorithm
The PS algorithm is iterative and not exact. Therefore, every anchor node will obtain an estimate of the sums that differs slightly from that of the other anchors. The gossiping anchor nodes are assumed to work synchronously. The term "iteration" will be reserved for the GD time step, whereas the term "round" or "PS round" will be used to indicate the PS time step. The total number of rounds will be designated as T. With every round t, a weight ω(i) is assigned to each node i and initialized to ω(i) = 1/N, where N is the number of anchors. Likewise, a sum s(i) is initialized to s(i) = x(i), where x(i) is the resident summation element in node i. For round t = 0, each node i sends the pair [s(i), ω(i)] to itself, and in each of the remaining rounds t = 1, …, T, node i follows the protocol of Algorithm 2:

Algorithm 2: The push-sum algorithm {Pushsum(x_i)} [15,16]
Input: N and T.
1. Initialization: t = 0, s(i) = x(i) and ω(i) = 1/N for i = 1, …, N.
2. Repeat for t = 1, …, T:
3. Designate {ŝ(r), ω̂(r)} as the set of all pairs sent to node i at round t − 1.
4. Update s(i) = Σ_r ŝ(r) and ω(i) = Σ_r ω̂(r).
5. At each node i, a target node f(i) is chosen uniformly at random.
6. The pair [0.5 s(i), 0.5 ω(i)] is sent to the target node f(i) and to node i (the sending node itself).
7. s(i)/ω(i) is the estimate of the sum at round t and node i.
Output: s(i)/ω(i), the estimate of the sum at round T and node i. Note that Σ_{i=1}^{N} s(i) equals the true sum at all rounds t (mass conservation).

The number of rounds T needed such that the relative error in Algorithm 2 is less than ε with probability at least (1 − δ) is of order

T = O(log N + log(1/ε) + log(1/δ)),  (12)

where T is also referred to as the diffusion speed of the uniform gossip algorithm [15].

Distributed GD localization in WSNs
The PS distributed averaging method of Algorithm 2, as given, is a scalar version. It can be extended to a vector version [17] in which nodes (anchors) exchange vector messages that are summed up element-wise. This concept readily conforms to our proposed distributed GD localization method, in which we have to compute four sums in each iteration, as in Eqs. (1) and (6)-(8).
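A compact simulation of Algorithm 2 is given below as a sketch; node communication is emulated with arrays, and the seed and test values are arbitrary.

```python
import numpy as np

def push_sum(x, T, seed=0):
    """Synchronous push-sum gossip (Algorithm 2). Every round, each node
    keeps half of its (s, w) pair and pushes the other half to a uniformly
    random target node; s/w at every node converges to sum(x) because the
    weights are initialized to 1/N."""
    rng = np.random.default_rng(seed)
    N = len(x)
    s = np.asarray(x, dtype=float).copy()
    w = np.full(N, 1.0 / N)
    for _ in range(T):
        s_next = 0.5 * s                           # halves kept by senders
        w_next = 0.5 * w
        targets = rng.integers(0, N, size=N)       # f(i), one per node
        for i, t in enumerate(targets):
            s_next[t] += 0.5 * s[i]                # halves pushed to f(i)
            w_next[t] += 0.5 * w[i]
        s, w = s_next, w_next
    return s / w                                   # per-node sum estimates

print(push_sum([1.0, 2.0, 3.0, 4.0], T=14))        # all entries close to 10
```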
At the kth iteration and in the ith anchor node there reside f(p_k)|_i, ∂f/∂x|_{k,i}, ∂f/∂y|_{k,i}, and ∂f/∂z|_{k,i}, which can be considered the four elements of the exchanged vector. The core idea of our distributed GD localization algorithm is that, for each outer gradient iteration, a series of inner rounds reaches consensus on each of the four N-term sums.

Simulation results
The GD localization problem in a 3D space is simulated in MATLAB as in Ref. [7]. Four anchor node locations are chosen in a volume of 100 × 100 × 100 m³. It is assumed that the target node to be localized has all anchors within its radio range. The same four anchors given in the simulation results of Section 2 are used. The targeted node is (60, 90, 60). Error-free TOA measurements are assumed, and centralized localization is first performed with N = 4, α = 0.25, and p_o = (50, 50, 50). After 100 iterations, it is found that the error function is 0.748 and the localized point is (60.1, 84.1, 58.8), which is very close to the targeted node.

Treating the order in Eq. (12) as an exact value, we set the number of rounds of the PS algorithm (Algorithm 2), T, for a number of nodes N, equal to

T = log₂ N + log₂(1/δ) + log₂(1/ε) = log₂(N/(δε)).  (13)

Note that δε = 2⁻¹² is obtained when we set δ = ε = 2⁻⁶ ≈ 0.0157. Substituting these values in Eq. (13), we find that T = 14 PS rounds when N = 4. Clearly, this implies that we may expect a relative error ε ≤ 0.0157 with probability higher than 0.9843 in the PS algorithm. The final accuracy of the estimated localization corresponds to the accuracy level ε set in the PS algorithm [16]. Thus, from such estimated values of ε and (1 − δ), it can be deduced that the accuracy of our distributed localization algorithm is almost equivalent to that of centralized GD localization in the absence of noise and link failures.

Distributed algorithms are robust against network failures or, typically, link failures. The latter arise for many reasons, such as channel congestion, message collisions, moving nodes, or dynamic topology [18]. Link failures can be modeled by the absence of a bidirectional connection between two nodes. All nodes operate in synchronism. At each time step, some percentage of the links between anchor nodes is randomly removed. The missing links may differ at every time step, since they are programmed to be randomly chosen, but their number remains fixed for each run of the code, and ensemble averaging over 100 trials is performed in each run. Figure 9 demonstrates the robustness of the proposed distributed algorithm. Even if we lose up to 50% of the links in every time step, the algorithm is still comparatively accurate. This is illustrated by Figure 9a and b, which are plots of the error function versus iteration number in the presence of link failures. For N = 4, the number of available links is 6, and losing three (50%) of them results in a final localized target point of (59.8, 82.8, 58.4) with an error function of 0.9 when α = 0.25 [7].
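Treating Eq. (13) as exact, the round count is a one-liner; in this sketch the ceiling is an added assumption for non-integer results.

```python
import math

def ps_rounds(N, delta=2**-6, eps=2**-6):
    """Push-sum round count from Eq. (13): T = log2(N / (delta * eps))."""
    return math.ceil(math.log2(N / (delta * eps)))

print(ps_rounds(4))   # -> 14, the value used in the simulations above
```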
For the purpose of comparison with centralized GD localization, we find that one link failure (25% of the available links) isolates the corresponding node from the fusion center, and we have only three anchors with which to compute the target position, though randomly chosen at every time step. After ensemble averaging, the localized point is (60.1, 82.4, 58.6) and the error function is 1.0, again with α = 0.25. This accuracy, and in fact even slightly better, is achieved with the distributed scenario of four anchors and three link failures (50% of the available links), which clearly shows the advantage of our proposed distributed localization algorithm over its centralized counterpart. The only disadvantage is that in every iteration we must allow for a delay of T = 14 PS rounds.

The simulations are repeated for noisy TOA measurements, as shown in Figure 10. Gaussian measurement noise with zero mean, accounting for LOS arrivals only, is assumed, and the SD is chosen to be 0.5 ns. This results in a distance error of 15 cm when UWB signals are used for sensing. The resulting plots are noticeably noisier than those of Figure 9, but are obviously interpreted in the same way as the noise-free cases. That is, the proposed distributed algorithm with three link failures (50% of links) performs better than the centralized algorithm with one link failure (25% of links) [7].

Step size considerations
The fixed step size in this work should be chosen carefully; a too-large step size would affect the performance advantage of the proposed distributed localization algorithm as well as the centralized one, whereas a too-small step size would increase the error function. It is worth mentioning that there are instances in the literature on distributed GD localization algorithms where only the optimal step size is computed in a distributed manner [19,20], rather than the GD sums as in the present work. In Refs. [19,20], the optimization of the step size in each iteration depends on the node positions and gradients. The optimization method is called the Barzilai-Borwein, or simply BB, method [21], in which the step size is updated at each iteration using the estimated target position and gradient vectors of the current and past iterations.

The BB method cannot be applied successfully to our distributed GD localization under consideration [7], that is, by updating α at each iteration and in each anchor. Applying the BB method yields favorable results that are superior to those with a fixed step size only in the cases of centralized localization, and of distributed localization in the absence of link failures, which is an ideal situation not found in practice. The reason is obvious: in our work, the gradient components are found through gossiping among anchors and are therefore greatly affected in the case of link failures, causing the BB method to produce pronounced sub-optimality in the computation of α at each iteration and in each anchor. This conclusion was arrived at in Ref. [22], where the above situation was simulated and the BB method tested when applied to GD localization in WSNs. Linearly-varying step sizes are shown in Ref. [22] to have the best performance, as they do not involve gradient computations.
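For reference, a sketch of the BB update described above is given below. This is the BB1 variant; Refs. [19-21] may use a different variant, and the fallback value is an arbitrary assumption.

```python
import numpy as np

def bb_step(p_curr, p_prev, g_curr, g_prev, fallback=0.25):
    """Barzilai-Borwein step size, BB1 variant: alpha = (dp.dp)/(dp.dg),
    computed from the current and previous position and gradient vectors.
    Falls back to a fixed value when the denominator is ill-conditioned."""
    dp = np.asarray(p_curr, dtype=float) - np.asarray(p_prev, dtype=float)
    dg = np.asarray(g_curr, dtype=float) - np.asarray(g_prev, dtype=float)
    denom = float(dp @ dg)
    return float(dp @ dp) / denom if abs(denom) > 1e-12 else fallback
```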
Recapitulation and future trends
The problem of sensor localization in a 3D space by the method of gradient descent has been investigated, and solutions are presented to some impediments that are associated with the moving sensor case, namely the local minima problem [6]. The proposed method considers all possible combinations of a certain chosen number of anchor nodes from a larger set of available anchors. The foreseen success of the proposed method stems from the fact that a deviating estimated path heading toward a local minimum is almost certain to return to the right track if some anchor nodes are replaced. This is true since anchor node replacement entails a change of the shape of the performance surface, along with different local minima positions. The anchor node replacement is made uniformly random, as the true track of the moving sensor to be localized is unpredictable, and it is performed periodically. The simulation results demonstrate the success of this method. The advantage gained is at the expense of increased computational requirements, and the proposed method also necessitates faster data processing in order to perform accurate moving sensor localization in real time.

In Ref. [7], the GD localization algorithm in WSNs in a 3D space was combined with PS gossip-based algorithms to implement a distributed GD localization algorithm. The main idea is to compute the necessary sums by inter-anchor gossip. The method compared favorably with the centralized version as regards convergence, accuracy, and resilience against noise and link failures. Our simulation results demonstrate that centralized processing with four anchors and one link failure (25% of the links) introduces a localization error comparable to (and even slightly greater than) that introduced by the proposed distributed processing method with three link failures (50% of the links). This is achieved when the number of PS rounds is suitably selected.

Despite the inevitable degradation of performance in the case of noisy TOA measurements, the proposed distributed method retains its advantages over centralized processing with proper selection of the GD step size and the number of PS rounds. It is therefore evident that resorting to distributed techniques such as the proposed distributed GD localization algorithm [7] ensures robustness against link failures even in the presence of noisy TOA measurements, eliminates the need for a computationally-demanding central processor, and avoids a possible communication bottleneck at or near the fusion center [10].

As a future trend, compressive sensing (CS), or random sampling, can be implemented to track a moving node in a centralized WSN using the iterative GD algorithm, resulting in remarkable energy efficiency with tolerable error [23]. Moreover, an efficient approach to (pseudo-)random sampling via chaotic sequences that first appeared in Ref. [24] could initiate further investigation of CS concepts via chaos theory and the possibility of their application to WSN moving node tracking.
Figure 3 is a plot of the error function versus the number of iterations for this last case of five anchor nodes. Retaining this scenario, another node (70, 45, 60) is localized as (70.03, 45.16, 59.85). Obviously, any node within the convex hull of the anchor nodes will be almost exactly localized with five anchors. The results of Figure 3 are repeated in Figure 4, taking into account the presence of NLOS arrivals and a greater noise standard deviation. In Figure 4, SD = 0.002 µs and µ_NLOS = 0.006 µs. A reduction in the accuracy of the localization process is readily noticed: the point (60, 90, 60) results in a localization of (60.35, 88.97, 59.40). It is also clear from the figure that the solution is biased due to the NLOS arrivals.

Figure 3. Error function versus the number of iterations when GD localization of a stationary target in 3D space is performed using five anchor nodes. Convergence factor = 0.25, TOA measurement noise SD = 0.001 µs.
Figure 4. Error function versus the number of iterations when GD localization of a stationary target in 3D space is performed using five anchor nodes. Convergence factor = 0.25, TOA measurement noise SD = 0.002 µs and µ_NLOS = 0.006 µs.
Figure 5. Target node tracking along a helical path.
Figure 6. Tracking of a moving sensor in 3D space using iterative GD with initial point (50, 50, 50) and a fixed set of anchor nodes. Convergence factor = 0.1.
Figure 9. (a) Error function versus iteration number for centralized and proposed distributed GD localization for different cases of link failure conditions, α = 0.25. (b) A close view of Figure 9a demonstrating the comparative performance of the different centralized and proposed distributed localization algorithms.
8,230.6
2017-10-04T00:00:00.000
[ "Computer Science", "Engineering" ]
The response of superpressure balloons to gravity wave motions
Superpressure balloons (SPB), which float on constant density (isopycnic) surfaces, provide a unique way of measuring the properties of atmospheric gravity waves (GW) as a function of wave intrinsic frequency. Here we devise a quasi-analytic method of investigating the SPB response to GW motions. It is shown that the results agree well with more rigorous numerical simulations of balloon motions and provide a better understanding of the response of SPB to GW, especially at high frequencies. The methodology is applied to ascertain the accuracy of GW studies using 12 m diameter SPB deployed in the 2010 Concordiasi campaign in the Antarctic. In comparison with the situation in earlier campaigns, the vertical displacements of the SPB were measured directly using GPS. It is shown, using a large number of Monte Carlo-type simulations with realistic instrumental noise, that important wave parameters, such as momentum flux, phase speed and wavelengths, can be retrieved with good accuracy from SPB observations for intrinsic wave periods greater than ca. 10 min. The noise floor for momentum flux is estimated to be ca. 10⁻⁴ mPa.

Introduction
Superpressure balloons (SPB) have been used in both the troposphere and lower stratosphere since the early 1960s (TWERLE Team, 1977). The balloons use closed, inextensible, spherical envelopes filled with a fixed amount of gas. After launch, balloons ascend until they reach a float level where the atmospheric density matches the balloon density. On this isopycnic or equilibrium density surface (EDS) a balloon is free to float horizontally with the motion of the wind. Hence, SPB behave as quasi-Lagrangian tracers in the atmosphere. Tracking the horizontal position of SPB using global positioning satellite (GPS) techniques means that SPB are well suited to study horizontal motions in the atmosphere. Measurement of vertical air motions is, however, more difficult because of the small vertical displacements that SPB generally undergo. A balloon displaced from its EDS experiences buoyancy forces that act to restore it, so it undergoes neutral buoyancy oscillations (NBO) around its EDS. Furthermore, the EDS itself will oscillate in the presence of gravity (buoyancy) waves (GW). By analysing the governing equation of motion through numerical integration, Massman (1978) explored the nature of both these factors, including the amplitude and phase response of an SPB to GW-induced sinusoidal variations of the EDS. Nastrom (1980) extended this work by considering the simultaneous wave-induced variations of density and vertical wind. He developed an analytical relationship between the amplitude and phase of a SPB in the presence of a sinusoidal gravity wave. Massman (1981) demonstrated how SPB can be used to study gravity wave activity in the Southern Hemisphere upper troposphere and lower stratosphere.

An advantage of using SPB to study gravity waves is that, because the balloons drift with the background wind, they measure the intrinsic frequency (the frequency relative to a moving reference frame). It is the intrinsic frequency that appears naturally in the Navier-Stokes equations that determine important wave properties. In contrast to either ground- or space-based sensors, SPB observations have the ability both to fully characterize wave packets and to provide such information over wide geographic regions (Alexander et al., 2010).
The French Space Agency, CNES, developed and applied 8.5 m and 10 m diameter SPB and, more recently, developed 12 m diameter balloons that can carry payloads of up to 40 kg. These balloons are significantly larger than those used in previous studies and have long flight times, on the order of months. A mixture of 8.5 and 10 m diameter SPB was used to study motions and transport in the Antarctic stratosphere during the Stratéole/Vorcore campaign in 2005. The long duration of SPB flights during Vorcore proved invaluable in studies of atmospheric gravity waves and the geographical variation of wave sources (Vincent et al., 2007; Boccara et al., 2008; Hertzog et al., 2008; Walterscheid et al., 2012). In the subsequent Concordiasi campaign in 2010, held during the Antarctic late winter and spring, 12 m diameter SPB were used exclusively (Rabier et al., 2010).

A limitation of the Vorcore observations of gravity waves by SPB was the effective 15 min sampling interval imposed by the data transmission rate. The corresponding Nyquist period of about 30 min was considerably longer than the approximately 5 min short-period cutoff of the gravity wave spectrum due to the Brunt-Väisälä frequency in the lower stratosphere. In subsequent SPB campaigns this limitation was overcome by the implementation of a new communications system which allows a time resolution of about 30 s. Improved time resolution is particularly important for SPB studies in the tropics, where convection is predicted to generate waves over a wide range of scales and periods, with wavelengths between 5 and 50 km and periods between 10 and 60 min being especially prominent (Piani et al., 2000; Beres, 2004; Lane and Moncrieff, 2008; Jewtoukoff et al., 2013). Hence, SPB observations now cover the full range of the GW spectrum, a unique characteristic of this technique (Preusse et al., 2008; Alexander et al., 2010).

This paper consists of two parts. In the first part we investigate the response of SPB to gravity wave motions by extending the analysis of Nastrom (1980) of the balloon equation of motion. We introduce a quasi-analytic method for analyzing the SPB response to an atmospheric wave. When an SPB responds to a gravity-wave-induced displacement of the EDS, the equation of motion is such that there is a phase shift between the balloon and the EDS displacement. This phase shift is a factor in the retrieval of important GW parameters, including the intrinsic phase speed (i.e., the speed relative to the background wind). Amplitudes and phases derived from the simplified technique are compared with the numerical calculations of the equation of motion of the SPB, and it is shown that they agree well. There is a specific emphasis on the response of the newer 12 m SPB, although the results are quite applicable to the smaller diameter balloons.

In the second part of the paper, we test how well the improved instrumentation on the 12 m SPB is able to detect GW motions and retrieve wave parameters. For this aspect we carried out a large number of statistical realizations that covered the full spectrum of GW frequencies. The computational efficiency of the analytic technique means that it is very suitable for this analysis. This second part extends the work of Boccara et al. (2008), who dealt with Vorcore observations and in particular only considered the case of hydrostatic waves.
Theory
Following Nastrom (1980), the governing equation of motion in the vertical direction for a balloon floating in the atmosphere is

(M_B + ½M_a) (d²z_b/dt²) = g(M_a − M_B) − ½ρ C_d πr² |ż_b − w| (ż_b − w) + M_a (dw/dt),  (1)

where the symbols are defined in Table 1. Physically, the terms on the right-hand side of Eq. (1) can be attributed to the three non-negligible forces acting on the balloon. The first term is the buoyancy force, which acts whenever the balloon is displaced vertically to restore it to its EDS. The second term is the drag force, which acts to resist the motion of the balloon. The third term comes from a dynamic force supplied to the balloon by the surrounding atmosphere when it is in motion. Any other forces acting on the balloon, such as skin friction drag, aerodynamic lift and small-scale turbulence, are assumed to be small in comparison (Nastrom, 1980). The left-hand side of the equation is then the net force acting on the balloon.

Assuming small vertical displacements and considering spherical SPBs, Eq. (1) can be simplified to

d²ζ_b/dt² = −ω_B² ζ_b + (2g/3) R − A |ζ̇_b − w| (ζ̇_b − w) + (2/3) dw/dt,  (2)

where R is the wave-induced relative density perturbation and A is a constant dependent on the balloon parameters (Nastrom, 1980). The neutral buoyancy oscillation (NBO) frequency ω_B is the frequency with which a constant-volume balloon will oscillate around its EDS, and is given by

ω_B² = (2g/3T) (∂T/∂z + g/R_a),  (3)

with temperature T, vertical temperature gradient ∂T/∂z, and atmospheric gas constant R_a. A balloon of radius r and drag coefficient C_d gives A as

A = C_d / (4r).  (4)

The first two terms of Eq. (2) originate from the buoyancy term, and the third and fourth terms come from the drag and dynamic terms, respectively. This simplification assumes that the balloon is always near its EDS, so that M_a ≈ M_B at all times. It is also assumed that the balloon is perfectly spherical. See Nastrom (1980) for further details.

If the EDS is disturbed by a GW of intrinsic frequency ω̂ and vertical velocity amplitude w_o, so that the instantaneous vertical velocity is w = w_o e^{−iω̂t}, then the wave-induced fractional density perturbation is given by the polarization relation (e.g., Hines, 1960) as

R = ρ′/ρ = i (N²/(g ω̂)) w,  (5)

where ρ is the ambient density, ρ′ is the wave density perturbation, and N is the Brunt-Väisälä frequency, defined as

N² = (g/T) (∂T/∂z + g/c_p),  (6)

where c_p is the specific heat capacity. High vertical resolution temperature soundings show that on global and seasonal scales N² ranges from ∼4 × 10⁻⁴ to ∼8 × 10⁻⁴ rad² s⁻² at heights near 20 km (Grise et al., 2010). These values correspond to temperature gradients ranging between ∼0 and ∼7 K km⁻¹, with the largest values associated with the region just above the tropical tropopause. This means that, for realistic temperature gradients, ω_B is always greater than N; a gradient greater than 40 K km⁻¹ would be required for ω_B/N < 1. Hence, at lower stratosphere heights, neutral buoyancy oscillations are always higher in frequency than the highest frequency gravity waves.

Numerical model
For a GW of given intrinsic frequency and amplitude, Eq. (2) can be solved numerically to derive ζ_b as a function of time. As an example, consider a case study where the balloon parameters are typical of a 12 m diameter SPB used during the Concordiasi campaign. It is assumed that the atmospheric conditions are similar to those experienced in the Antarctic lower stratosphere in early spring. Table 2 gives the basic atmospheric and balloon parameters. For the purposes of illustration, a gravity wave was used with a vertical wind perturbation amplitude of w_o = 1 m s⁻¹ and intrinsic period τ̂ = 15 min, or angular frequency ω̂ = 6.98 × 10⁻³ rad s⁻¹. This produces a fractional density perturbation of 5.85 × 10⁻³.
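As a sanity check on Eqs. (2)-(6), the following Python sketch integrates the simplified balloon equation for this case study. Since Table 2 is not reproduced here, the temperature, lapse rate, drag coefficient and balloon radius are assumed round values, so the numbers it prints only roughly match those quoted in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed lower-stratosphere and 12 m balloon parameters (not Table 2):
g, Ra, cp = 9.81, 287.0, 1004.0
T, dTdz, Cd, r = 210.0, 0.0, 0.5, 6.0
wB2 = (2 * g) / (3 * T) * (dTdz + g / Ra)   # NBO frequency squared, Eq. (3)
N2 = (g / T) * (dTdz + g / cp)              # Brunt-Vaisala squared, Eq. (6)
A = Cd / (4 * r)                            # drag constant, Eq. (4)

w_o = 1.0                                   # wave amplitude, m/s
om = 2 * np.pi / (15 * 60)                  # intrinsic frequency, rad/s

def rhs(t, y):
    """Right-hand side of Eq. (2) with a real sinusoidal forcing."""
    zb, vb = y
    w = w_o * np.cos(om * t)                        # wave vertical wind
    R = (N2 * w_o / (g * om)) * np.sin(om * t)      # density perturbation
    wdot = -w_o * om * np.sin(om * t)
    acc = (-wB2 * zb + (2 * g / 3) * R
           - A * abs(vb - w) * (vb - w) + (2 / 3) * wdot)
    return [vb, acc]

sol = solve_ivp(rhs, (0.0, 6 * 3600.0), [0.0, 0.0], max_step=1.0)
zb = sol.y[0][sol.t > 3600.0]               # drop the start-up transient
print("balloon displacement amplitude ~", zb.max(), "m")
print("|Z| ~", zb.max() / (w_o / om))       # compare with ~0.25 in the text
```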
Figure 1a shows the result of numerically solving Eq. (2) using a fourth-order Runge-Kutta method (Press et al., 1992). For this example the total duration of the time series was 12.5 h (i.e., 50 oscillations) and a time step of 1 s (0.1% of the period) was used, although the results are not particularly sensitive to the time step. Transients due to the initial conditions persisted for less than a cycle, so the results shown in Fig. 1a are the steady-state response. The red line represents the vertical position of the balloon plotted against time. The blue line represents the balloon displacement derived using the analytic method described below. While the numerical solution is almost sinusoidal, it is noticeable that higher frequency components are also present. The power spectral analysis of the whole 12.5 h period, shown in Fig. 1b, illustrates the absence of even harmonics and the dominance of the first harmonic over the other odd harmonics. The third harmonic is approximately ten percent of the magnitude of the first harmonic, and the fifth harmonic less than five percent. Higher harmonics are less than one percent of the first harmonic. This result supports the analysis of Nastrom (1980), which shows that only odd harmonics are present in the vertical displacement, with the first harmonic dominating. The harmonic content shown in Fig. 1b is typical of the response to short-period waves. However, the amplitude of the harmonics decreases as the wave period increases. At a period of 30 min, for example, the third harmonic has an amplitude of less than 3% of the fundamental.

Analytic model
The dominance of the first harmonic in the balloon response shown in Fig. 1b suggests that a linear relationship between ζ_b and ζ is a reasonable approximation to the balloon's response to a gravity wave. Hence we now consider the balloon and its environment as a quasi-linear system, treating the gravity wave as the input and the balloon response as the output signal of this system. Using linear system theory, there will exist a transfer function (complex frequency response) relating the output to the input. The function is of the form

ζ_b = Z ζ,  with Z = |Z| e^{iφ},  (7)

where |Z| and φ are, respectively, the absolute value and phase of the transfer function Z, and ζ_b is the vertical variation of the balloon around its EDS due to the gravity wave. Here the phase φ is relative to the time of maximum wave displacement. Now consider a sinusoidal GW for which the complex amplitude is ζ = ζ_o e^{−iω̂t}, where ζ_o is the wave vertical displacement amplitude. The vertical wind and density perturbation in terms of ζ are, respectively,

w = dζ/dt = −iω̂ ζ,  (8)
R = (N²/g) ζ.  (9)

It should be noted that Eq. (9) is an approximation that needs to be modified for large vertical wavelength GWs, such as those that might be generated by deep convection (Eckermann et al., 1998). Eckermann et al. (1998) show that Eq. (9) is accurate to within a few percent in amplitude and phase for vertical wavelengths less than 20 km, and so we use the approximation in the following analysis. Substituting Eq. (8) and Eq. (9) into Eq. (2) and evaluating the derivatives of ζ_b and ζ gives Eq. (10). From Eq. (7), substituting for ζ_b while retaining only the first harmonic in the non-linear drag term then leads to an implicit expression, Eq. (11), for the transfer function in terms of Y ≡ 1 − Z. A value for Z can be calculated iteratively from Eq. (11) using an initial value of Y = 1; |Z| converges to a fractional difference of less than 10⁻⁴ within two or three iterations for ω̂² ≪ N² (i.e., periods greater than about 10 min), and within six steps for ω̂ ∼ N.
Hence, the SPB response to any gravity wave can be obtained using Eq. (7), as illustrated in Fig. 1a, where the blue line is the analytic solution. It is evident that the analytic solution slightly overestimates the numerical solution, but the difference is no more than a few meters.

Analysis
Further insights into the response of large diameter SPB to wave-induced motions are gained by considering both the numerical and analytic approaches. Here we use the same balloon parameters and atmospheric conditions as given in Sect. 3 and derive the response as a function of a number of gravity wave parameters. Firstly, the value of w_o was varied over a range from 0.1 to 2.0 m s⁻¹ for the three different GW intrinsic periods of τ̂ = 15, 30 and 60 min. Figure 2 shows the amplitude ratio |Z| and the phase computed using both the numerical and analytic methods. For all three periods the two methods give amplitude ratios that do not differ by more than 5% and phases that differ by no more than a few degrees. Similar results are displayed in Fig. 3. Here the amplitude and phase response derived from the numerical and analytic methods are plotted as a function of wave period for values of w_o fixed at 0.5, 1.0 and 1.5 m s⁻¹. The amplitude and phase start to vary markedly as the wave period approaches the buoyancy period of about 5 min. Nevertheless, the relative amplitudes derived by the two methods agree well. Similarly, the phases agree to within at least 5°. We note that the expression that relates the density to the vertical velocity perturbations (Eq. 9) does not apply where ω̂ > N, which is outside the internal gravity wave range. However, we retain it for the purposes of illustration of Z. For example, arbitrarily setting R = 0 when ω̂ > N gives almost identical curves to those shown in Fig. 3, with discontinuities at ω̂ = N. In all subsequent analysis and discussion we are concerned only with internal waves in the range N > ω̂ > |f|.

The analytic model also works for wave packets. Figure 4 shows the numerical and analytic solutions for a wave packet of a wave with frequency ω̂ and a Gaussian envelope, defined as

ζ(t) = ζ_o e^{−iω̂t} exp(−t²/2t_g²).  (12)

Here, the wave period is τ̂ = 15 min and the "width" parameter is t_g = τ̂, so the packet has about five oscillations. Again, the numerical response shows some influence of the odd harmonics, as demonstrated in the lower panel of Fig. 4, but otherwise there is good agreement between both solutions.

One benefit of the analytic approach is that it gives insight into the SPB response as a function of wave frequency. For example, when ω̂² ≪ N² it is evident from Eq. (11) that |Z| → 2N²/(3ω_B²) ≡ |Z|_EDS. This limiting value, attained when the balloon is on its EDS, has a numerical value of |Z|_EDS ∼ 0.25 with the temperature gradient used here. Similarly, the phase limit is φ → 0. These are the limiting values evident in Figs. 2 and 3, and they correspond to the behavior of a perfect isopycnic balloon. The actual value of |Z|_EDS will depend on the ambient conditions, especially the temperature gradient, as this determines ω_B² and N². |Z|_EDS is always less than 0.5 using realistic gradients in the lower stratosphere, as discussed at the end of Sect. 2. Manipulating Eq. (11) yields an expression for tan φ, Eq. (13). Since (2/3)N² < ω_B², the numerator in Eq. (13) is negative. The denominator is always positive, which means that φ is always negative and the balloon displacement lags the wave displacement.
These results show that an SPB starts to depart substantially from its EDS for wave periods less than about 10 min (i.e., for ω̂ ≳ N/2). Two approximations give further insight into balloon behavior. First, in the low frequency limit, when ω̂² ≪ N², Eq. (14) shows that the phase is proportional to ω̂²ζ_o = ω̂w_o, all other parameters remaining constant. Hence the phase departures become greater for larger wave amplitudes and shorter periods, as seen in Fig. 2. Second, near the buoyancy frequency, when ω̂ ∼ N, Eq. (15) shows that the phase departure is greater for smaller amplitude waves, as observed in Fig. 3. Equations (14) and (15) also show that tan φ ∝ A ∝ r⁻¹ when ω̂² ≪ N² and tan φ ∝ A⁻¹ ∝ r when ω̂ ∼ N. So at lower frequencies the phase shifts will be greater for smaller balloons for a given wave amplitude, while the opposite is true when the wave frequency is near N. Finally, without going into details, it is straightforward to show that |Z| ∝ r for ω̂ ∼ N.

Figure 4. As in Fig. 1, but for the SPB response to a gravity wave packet defined by Eq. (12).

The findings discussed above have ramifications for SPB measurements of gravity waves and the retrieval of important wave parameters, such as momentum flux. This issue is discussed further in the next section.

Simulations and retrieval of gravity wave parameters

Boccara et al. (2008) described a methodology by which SPB observations made during the Vorcore campaign could be analyzed to obtain gravity wave characteristics. To test the methodology, a series of Monte Carlo-type simulations were made that mimicked the SPB observations of GW-induced perturbations in pressure and horizontal balloon displacement. It was assumed that waves occurred in packets, and a wavelet analysis technique was used to detect the packets in space and time and so to estimate the wave parameters. In the Boccara et al. (2008) simulations, waves were allowed to propagate in random directions in the horizontal, but it was assumed that all waves propagated energy and momentum upward. Using the associated errors in the measured meteorological parameters and by repeating the simulations many times, they were able to estimate the uncertainties and biases in the retrieved GW parameters, such as momentum flux. Briefly, it was found that the horizontal direction of wave propagation was accurately retrieved but that momentum fluxes were somewhat underestimated. Here we make use of the techniques described in Sect. 3 above to accurately model the SPB displacements and repeat the Boccara et al. (2008) simulations, but with the measurement parameters and uncertainties appropriate to the Concordiasi campaign SPB observations. There were important differences between the Vorcore and Concordiasi observations which make the latter measurements of wave fluxes more accurate:

- Observations were made at 30 s intervals in Concordiasi, but only at 15 min intervals in Vorcore. Hence, in Concordiasi the full spectrum of GW motions between the Brunt-Väisälä (∼ 5 min) and inertial (∼ 13 h) periods could be studied, whereas in Vorcore the measurements were restricted to periods greater than 1 h.

- More sensitive GPS measurements were available on the SPB during Concordiasi than in Vorcore, with Table 3 summarizing the instrumental uncertainties. Most importantly, it was possible to measure directly the vertical displacement of the balloons with an accuracy improved by a factor of 10 compared to the previous campaign.
Having direct measurements of vertical displacement means that momentum fluxes can be derived without using the indirect and less accurate method used in Boccara et al. (2008), as discussed below.

Simulations

To test our retrievals of gravity wave parameters, a large number of simulated SPB observations was made and then analyzed, and the results were compared with the original input parameters. Each simulation produced a notional 10-day time series with a basic 30 s time sample period. The balloons were assumed to drift eastward with a constant zonal wind speed of 10 m s⁻¹ at a latitude of 60° S, so that, without any wave perturbations, there was a steady change with time in the longitudinal position, but not in the latitudinal one. Time series of the SPB observables, namely pressure (p̄ + p_T), temperature (T̄ + T_T), position in terms of longitude and latitude (x̄ + x′, ȳ + y′) and vertical balloon displacement (ζ_b), were then synthesized. Here, an overbar indicates the ambient value while a prime indicates the wave-induced perturbation. It should be noted that the pressure and temperature perturbations are a combination of the relevant wave perturbation and of the pressure and temperature changes due to the vertical displacement of the balloon in the presence of background gradients. Wave packets for a general wave parameter ψ were derived in the form ψ′ = Re{ψ̃ exp[i(kx + ly + mz − ω̂t)]}, modulated by a packet envelope, where Re means the real part, ψ̃ is the complex wave perturbation amplitude derived from the gravity wave polarization relationships, and k, l and m are the zonal, meridional and vertical wavenumbers, respectively. The basic methodology for each simulation is as follows:

1. First, choose ω̂ from a uniform random distribution in the range |f| < ω̂ < N.

2. Then choose the intrinsic phase speed, ĉ, and direction of propagation, θ (counterclockwise from east), from uniform random distributions in the ranges 0 ≤ ĉ ≤ 100 m s⁻¹ and 0 ≤ θ < 360°. The zonal and meridional wavenumbers are then derived from k = k_h cos θ and l = k_h sin θ, where k_h = ω̂/ĉ.

3. The vertical wavenumber is derived from the dispersion equation (Eq. 19),

m² = k_h²(N² − ω̂²)/(ω̂² − f²) − 1/(4H²),

where H is the density scale height. In contrast to Boccara et al. (2008), the sign of m is set randomly, so that −|m| (+|m|) means a wave with an upward (downward) group velocity.

4. The complex wave amplitudes are then computed. In order to make the simulations as realistic as possible, the horizontal perturbation velocity aligned along the direction of propagation, u_||, was first derived at the appropriate ω̂ based on the mean horizontal wind spectrum derived from the actual SPB observations. The other wave parameters, u′, υ′, w′, p′ and T′, are then derived from the GW polarization relations (Fritts and Alexander, 2003, 2012).

5. The vertical displacement of the SPB, ζ_b, is then computed from the wave vertical displacement, ζ = iw′/ω̂, using either of the methods discussed in Sect. 3.

6. Finally, the total pressure and temperature values were computed from Eqs. (16) and (17), and time series of all observables were computed and saved for later analysis.

The above procedure was repeated 1000 times so that the retrievals of wave parameters could be tested over the complete spectrum of wave frequencies and propagation directions.
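A minimal Python sketch of steps 1-3 of the simulation procedure above is given below, assuming the reconstructed form of the dispersion relation; the ambient values are illustrative placeholders, not the ones used in the actual simulations.

```python
import numpy as np

# Illustrative ambient values for the Antarctic lower stratosphere;
# the values actually used in the simulations are given in the text.
N = 2.1e-2          # buoyancy frequency (rad/s), ~5 min period
f = -1.26e-4        # Coriolis parameter at 60 deg S (rad/s)
H = 6.0e3           # density scale height (m), assumed value

rng = np.random.default_rng(42)

def sample_wave(rng):
    """Steps 1-3: draw intrinsic frequency, phase speed and direction,
    then derive the wavenumbers from the dispersion relation."""
    omega_i = rng.uniform(abs(f), N)        # step 1: |f| < omega_hat < N
    c_hat = rng.uniform(0.1, 100.0)         # step 2: intrinsic phase speed
                                            # (small lower bound avoids k_h
                                            # blowing up as c -> 0)
    theta = rng.uniform(0.0, 2.0 * np.pi)   # direction of propagation
    k_h = omega_i / c_hat
    k, l = k_h * np.cos(theta), k_h * np.sin(theta)
    # Step 3: vertical wavenumber from the (reconstructed) dispersion relation
    m2 = k_h**2 * (N**2 - omega_i**2) / (omega_i**2 - f**2) - 1.0 / (4.0 * H**2)
    m = np.sqrt(max(m2, 0.0))               # guard against evanescent cases
    m *= rng.choice([-1.0, 1.0])            # random sign: upward or downward
    return omega_i, c_hat, theta, k, l, m
```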
Retrievals

The formulae used to retrieve the wave characteristics from the balloon observations are based on those of Boccara et al. (2008). However, their work only dealt with hydrostatic waves and used only pressure measurements to infer the balloon vertical displacements. The two improvements achieved during the recent Concordiasi campaign (i.e., the higher sampling rate and the better precision of GPS vertical positions) enable us to relax these constraints and extend the previous formulae. In the following description of the wave characteristics retrieval algorithm, we focus on its novel features and only briefly mention those that have not changed, for which Boccara et al. (2008) should be consulted.

As stated previously, the balloon observables are the 3-D position, pressure (p), and temperature (T). At first, the zonal and meridional velocities (u and v, respectively) are computed by centered finite differences from the horizontal positions. The density (ρ) is obtained using the perfect gas law, ρ = p/(RT), where R is the gas constant for dry air. A flight-mean density (ρ̄) and pressure (p̄) are computed, and the total pressure perturbation is obtained from the latter as p_T = p − p̄. Similarly, the perturbations in zonal and meridional velocities (u′ and v′, respectively) are obtained as departures from the flight-mean values. The Eulerian pressure perturbation (p′) is then estimated from the total pressure perturbation as p′ = p_T + ρ̄gζ_b, which is the reciprocal of Eq. (16), assuming hydrostatic equilibrium for the background atmosphere. Note here that the balloon vertical displacement (ζ_b) is simply the departure from the flight-mean altitude. In particular, no assumption is made at this stage about the balloon flying at constant density.

A complex Morlet wavelet transform (Torrence and Compo, 1998) is then applied to all time series (u′, v′, ζ_b, p_T, p′). From now on, all the equations in this section refer to the complex amplitudes of the wavelet coefficients, which are denoted with a tilde over the perturbations (e.g., ũ). These coefficients correspond to the decomposition of the wave signals into small ω̂-t blocks in the intrinsic frequency-time domain. The wavelet set of frequencies is chosen to match the range of gravity-wave intrinsic frequencies (i.e., from |f| to N). As in Boccara et al. (2008), θ is determined as the angle for which the modulus of the horizontal wind perturbation projected on that direction is maximized. θ is thus found with a 180° ambiguity, which is resolved later on. The intrinsic phase speed in the wave direction of propagation is readily inferred from the polarization relation (e.g., Fritts and Alexander, 2012) given in Eq. (22), where δ₋ = 1 − f²/ω̂². Hence, ĉ is estimated from Eq. (23), where ũ* denotes the complex conjugate of ũ.

To compute the wave momentum flux, we assume that the balloon vertical displacement is that of a perfect isopycnic tracer. As previously discussed, this will be a source of error when the balloon departs from this ideal behavior (i.e., when ω̂ → N). Yet this assumption enables us to relate the balloon vertical displacements to those of air parcels. In particular, the Lagrangian component of the pressure disturbance ((dp̄/dz)ζ_b) can then be related to the Eulerian value through Eq. (24). This equation is obtained in the same manner as, and is equivalent to, Eq. (9) in Boccara et al. (2008), but includes in the second bracket an additional term associated with non-hydrostatic waves. Similarly, we use the full non-hydrostatic polarization relation between the horizontal and vertical velocity disturbances (Eq. 25), which, with the help of Eq. (24), enables us to derive the wave momentum flux from the balloon observables (Eq. 26), where Im(z) stands for the imaginary part of z. Equation (26) turns out to be the same equation as the hydrostatic version of Boccara et al. (2008).
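For reference, the observable preprocessing described at the start of this subsection (centered differences, perfect gas law, flight-mean removal, and the hydrostatic correction for p′) reduces to a few lines of array code. The sketch below assumes regularly sampled series at the 30 s Concordiasi interval; variable names are illustrative.

```python
import numpy as np

R_DRY = 287.05   # gas constant for dry air (J kg^-1 K^-1)
G = 9.81         # gravitational acceleration (m s^-2)
DT = 30.0        # Concordiasi sampling interval (s)

def preprocess(x, y, z, p, T):
    """Derive the perturbation time series used by the wavelet retrieval.

    x, y : horizontal positions (m); z : altitude (m);
    p : pressure (Pa); T : temperature (K). All 1-D arrays.
    """
    # Horizontal velocities by centered finite differences
    u = np.gradient(x, DT)
    v = np.gradient(y, DT)

    rho = p / (R_DRY * T)        # perfect gas law
    rho_bar = rho.mean()         # flight-mean density
    p_bar = p.mean()             # flight-mean pressure

    zeta_b = z - z.mean()        # departure from flight-mean altitude
    p_T = p - p_bar              # total pressure perturbation
    # Eulerian pressure perturbation (reciprocal of Eq. 16, hydrostatic
    # background): p' = p_T + rho_bar * g * zeta_b
    p_prime = p_T + rho_bar * G * zeta_b

    u_prime = u - u.mean()
    v_prime = v - v.mean()
    return u_prime, v_prime, zeta_b, p_T, p_prime
```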
We demand here that the momentum flux be positive, which may require a sign switch of ũ (i.e., a rotation of θ by 180°). In other words, at this stage of the analysis all the wave packets are assumed to propagate upward in the atmosphere. The vertical wavenumber of the wave packets can be inferred from a combination of Eqs. (22) and (25), giving Eq. (27). Note that, in agreement with the previous assumption on the wave vertical direction of propagation, m < 0 here. The actual sign of m is now determined as follows. First, expressing p̃_T as a function of w̃ with the help of Eqs. (22), (24) and (25), one obtains Eq. (28); the sign of Re(w̃p̃_T*) is thus the opposite of that of m. Because ζ_b for a perfect isopycnic balloon is tied to the density perturbation through the scale height, one obtains with the help of Eq. (5) the corresponding expression in terms of the balloon observables (Eq. 29). Hence, the sign of m can be inferred from the balloon observables. If m > 0, θ is rotated by 180° and the sign of Re(ũ*w̃) is reversed. This process fully resolves the initial 180° ambiguity in θ. The horizontal wavenumber k_h is then derived from Eq. (19), the gravity wave dispersion relation. Finally, the ground-based angular frequency (ω) is obtained from the Doppler-shift equation:

ω = ω̂ + ū k_h cos θ + v̄ k_h sin θ.  (30)
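The sign disambiguation and Doppler correction just described can be collected into a short routine. In this sketch, the quantity standing in for Re(w̃p̃_T*), built from the balloon observables via Eq. (29), is passed in precomputed, since Eqs. (28)-(29) are not reproduced here; names are illustrative.

```python
import numpy as np

def resolve_direction_and_doppler(theta, m, uw_cospec, re_w_pT,
                                  omega_i, k_h, u_bar, v_bar):
    """Resolve the 180-degree ambiguity in theta and Doppler-shift the
    intrinsic frequency to the ground-based frame.

    re_w_pT : proxy for Re(w~ p~_T*) built from the balloon observables
              via Eq. (29) (not reproduced here).
    """
    # The sign of Re(w~ p~_T*) is opposite to that of m (Eq. 28).
    m_sign = -np.sign(re_w_pT)
    if m_sign > 0:
        # Wave actually propagates downward: rotate theta by 180 deg and
        # reverse the sign of the momentum-flux cospectrum Re(u~* w~).
        theta = (theta + np.pi) % (2.0 * np.pi)
        uw_cospec = -uw_cospec
        m = abs(m)
    else:
        m = -abs(m)
    # Ground-based frequency from the Doppler-shift equation (Eq. 30)
    omega_ground = omega_i + u_bar * k_h * np.cos(theta) \
                           + v_bar * k_h * np.sin(theta)
    return theta, m, uw_cospec, omega_ground
```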
Results

Instrumental and wave propagation factors always impose limits on the extraction of GW parameters from observations (Alexander, 1998; Alexander and Barnet, 2007; Alexander et al., 2010). In principle, there are no limits on the range of GW frequencies or wavelengths that can be determined using SPBs of the type described here. However, there are likely to be difficulties in determining momentum fluxes for short period waves, where a balloon departs from its EDS. Furthermore, the uncertainties that are inherent in the instruments carried on the SPB will set a noise floor, below which fluxes cannot be reliably determined. Similarly, the wavelet analysis itself will start to break down when packet amplitudes fall below some critical value. The procedures described above allow the limitations of the SPB momentum flux measurements and the uncertainties of the other wave parameters to be explored. In order to test the various factors that influence the accuracy of the SPB flux measurements, a series of preliminary investigations was conducted. Outcomes of trials that do not contain instrumental noise indicate the influence of the wavelet analysis technique and of the retrieval algorithm. Repeating the analysis of the same data set, but with noise now included, then shows the effects of instrumental noise. Results for one such comparison are illustrated in Fig. 5. Here a wave packet of the form given in Eq. (31) was used. Other wave parameters were derived via the GW polarization relations as the packet amplitude, u_o, was changed systematically from 0.001 to 10 m s⁻¹. In this example values of τ̂ = 60 min, ĉ = 40 m s⁻¹ and θ = 300° were used, but the conclusions are quite general. Figure 5a shows that u′w′ is determined well for values of u_o ≳ 0.05 m s⁻¹ for trials both with and without noise. In the case of ĉ and of θ the same situation applies for the no-noise case, but the effects of instrumental noise become noticeable for values of u_o ≲ 0.2 m s⁻¹. Similar outcomes were found for other wave parameters, such as wavelength, which indicates that all wave parameters can be successfully retrieved if the velocity amplitudes are above a threshold of u_o ∼ 0.2 m s⁻¹, although u′w′ can be reliably determined down to lower values. In the results discussed below, 1000 simulations were used.

Deriving and then retrieving data from this number of simulations is quite time consuming, so the analytic method was used to determine ζ_b, since the results are similar to those of the more time-consuming numerical technique. Figures 6 and 7 show plots of retrievals of GW parameters from datasets that either include instrumental noise (lower panels) or no noise (upper panels). Results color coded in red and blue denote waves with m < 0 and m > 0, respectively. Simulations of momentum flux show very good comparisons between the input and output values at all periods greater than ∼ 10-20 min (Fig. 6a, d), and the effects of instrumental noise are minimal. However, it is clear that there are systematic differences between the input and retrieved values at short periods. To understand why, consider Eq. (26), which can be re-expressed using Eq. (15) from Boccara et al. (2008) in terms of |Z|_EDS, as defined in Sect. 4. The systematic deviations in retrieved flux at short periods thus mark the departure of the balloon from its EDS. The retrievals of phase speed and direction are also excellent, especially in the non-noisy situations (Fig. 6b, c), but they show some systematic differences when wave frequencies are near f and N, especially when instrumental noise is included (Fig. 6e, f). For θ, when ω̂ ∼ f the wind perturbation hodograph is almost circular, which makes the precise determination of the direction of propagation more difficult. This accounts for the small spread in values of θ near f. While the changes in θ are small (no more than a few degrees), the variations in ĉ are proportionately larger at both ends of the spectrum. Figure 7 shows that similar systematic deviations from the input values are evident at short and long periods in other important wave parameters. There are a number of reasons why the retrieved values may show a bias at both short and inertial periods. Firstly, the retrieval analysis assumes that the SPB is moving on an isopycnic surface, but the SPB departs significantly from its EDS at short periods, as illustrated in Fig. 3. In particular, it is the phase variations in Z that vary most rapidly with frequency for N > ω̂ > N/2, and these produce the systematic bias. A second, more subtle, effect is caused by the use of wave packets in the simulations. Packets described by Eq. (31) have a width in frequency space of Δω̂ ∼ ω̂. When either ω̂ ∼ N or ω̂ ∼ |f| the wave packets will project onto some wavelet coefficients associated with frequencies greater than N or less than |f|. Furthermore, in this situation factors such as (N² − ω̂²), (ω̂² − f²) or δ₋, which appear in almost all expressions used to retrieve the wave parameters, reverse sign, thereby accentuating the effect. Nevertheless, these "non-gravity wave" coefficients are retained in the retrieval process provided that the central frequency of the wave packet is located between N and |f|; if they are discarded, a significant fraction of the wave momentum flux is lost. Another factor in the degradation of performance near N is the effect of instrumental noise (e.g., Fig. 6e) acting in concert with the change in wave amplitude with frequency in the simulations. As noted in item 4 in Sect. 5.1 above, the starting value of u_|| was derived from the observed spectrum of horizontal kinetic energy, which scales as ∼ ω̂⁻². Hence, u_|| is smaller at higher frequencies (shorter periods).
Furthermore, the KE spectrum itself was derived from the average over all flights, which means that wave amplitudes for specific wave packets at a given frequency are probably underestimated, and are therefore more likely to be noisier than they would be in practice. A simple test in which the wave amplitudes input into the retrieval process were increased by a factor of 3 confirmed the latter hypothesis: it showed that the random variations at short periods evident in, say, Fig. 6e had almost disappeared. Finally, it is stressed that the important momentum flux parameter is the one least influenced by noise. This supports the simulations shown in Fig. 5a, where values of u′w′ are recovered well down to small values of u_o. Momentum flux and wave propagation direction are also the two parameters that do not contain frequency-dependent terms such as δ₋, which explains the retrieval of these parameters over a wider frequency range. Table 4 summarizes the statistics of the retrievals of important wave parameters. Except for the intrinsic and ground-based period ratios, the results for both the whole wave spectrum and the more restricted frequency range N/2 ≳ ω̂ ≳ 1.5f are included. For the reasons discussed above, it is the latter frequency range that provides the more realistic results. For the wave periods, the median values of the ratios of retrieved to input values are included as well as the mean values. For the intrinsic period the median and mean are identical and show that the recovered values slightly underestimate the true values. The mean values of the ground-based periods are biased by some outliers, and the median values give a more accurate indication of the accuracy of the retrieved values. Overall, the wave parameters are well recovered. All the results just discussed have been obtained with time series containing a single gravity-wave packet. In the atmosphere, however, multiple sources acting at different times may simultaneously produce a number of wave packets in the volume sampled by the SPB.

Figure 7. As in Fig. 6, but for (a) and (d) the ratio of simulated to input horizontal wavelength ((λ_h)_sim/(λ_h)_in); (b) and (e) the vertical wavelength ratio ((λ_z)_sim/(λ_z)_in); and (c) and (f) the difference between simulated and input ground-based horizontal phase speed (cg_sim − cg_in).

Table 4. Mean values of simulated parameters and their standard deviations. Here, Δĉ, Δθ, Δ(ρ_o u′w′) and Δc_g are the differences between the respective simulated and input values. The other quantities are the ratios of the simulated to input values. The τ̂ ratio denotes the ratio of the retrieved intrinsic wave period to the input value and the τ_g ratio is the ratio of the retrieved to input ground-based period.

Boccara et al. (2008) studied how the superposition of wave packets could change the performance of their retrieval algorithm. They noted first that the wavelet analysis used to retrieve wave parameters is well suited to separating wave packets that occur at the same time, provided their respective central frequencies are sufficiently distinct. However, when superposition in the time-frequency space does occur, Boccara et al. (2008) noted a slight degradation of their retrieval; for example, gravity-wave momentum fluxes could be underestimated by ∼ 20 % when 10-day time series include 10 randomly chosen wave packets. Still, it is difficult to know precisely how many wave packets do occur within any given time interval in the real atmosphere.
The number will vary due to many factors, including the distance from the source(s) and the dispersive characteristics of gravity waves, which separate wave packets according to their frequency (e.g., Prusa et al., 1996). Therefore, the multiple wave-packet experiments were not repeated, and we assume that the associated uncertainty in the retrieved wave parameters is negligible compared to uncertainties in current gravity-wave drag parameterization schemes.

Conclusions

Superpressure balloons provide the only direct way to measure, over wide geographic regions, momentum fluxes and other important wave parameters in terms of intrinsic frequency and phase speed. These measurements help constrain gravity-wave drag parameterization schemes, notably the distribution of momentum flux as a function of the 2-D horizontal phase speed. Building on the work of Nastrom (1980) and others, we analyze the response of an SPB to vertical displacements induced by gravity waves. Using the known uncertainties of the various instruments carried on the latest versions of SPB developed by CNES, we estimate the accuracy to which fluxes and other important wave parameters can be measured as a function of wave amplitude. The analysis is particularly focussed on SPB operating in the stratosphere. Both numerical and quasi-analytic techniques are used, with the analytic technique giving particular insight into the SPB response as a function of wave frequency. It is shown that the response is well behaved for intrinsic wave frequencies lower than about N/2. At low frequencies the ratio of the balloon vertical displacement to the wave displacement has a limiting value determined solely by atmospheric temperature and its gradient. Numerically the value is about 0.25 for conditions in the Antarctic springtime stratosphere. At frequencies higher than ∼ N/2, the balloon starts to depart significantly from its isopycnic surface or EDS. Following Boccara et al. (2008), a statistical analysis of the simulated response of 12 m diameter SPB to gravity wave packets propagating in the Antarctic stratosphere is used to show that momentum flux is measured with high accuracy for ω̂ ≲ N/2, as is the direction of wave propagation. Momentum fluxes can be accurately measured down to values of about 10⁻⁴ mPa (Fig. 5a). As newer instruments are installed, including more accurate GPS measurements of displacement, reductions in this noise floor are possible. Other wave parameters such as intrinsic phase speed and horizontal and vertical wavelengths are also recovered with good accuracy, although the optimum frequency range is N/2 ≳ ω̂ ≳ 1.5f due to factors that complicate the retrieval process when ω̂ ∼ f. An important outcome is that the retrieval process is independent of the vertical direction of wave propagation. This means that it will be possible to derive the net momentum flux when the analysis is applied to real data, such as that acquired during the 2010 Concordiasi campaign.
A Decision Analytic Approach to Exposure-Based Chemical Prioritization

The manufacture of novel synthetic chemicals has increased in volume and variety, but often the environmental and health risks are not fully understood in terms of toxicity and, in particular, exposure. While efforts to assess risks have generally been effective when sufficient data are available, the hazard and exposure data necessary to assess risks adequately are unavailable for the vast majority of chemicals in commerce. The US Environmental Protection Agency has initiated the ExpoCast Program to develop tools for rapid chemical evaluation based on potential for exposure. In this context, a model is presented in which chemicals are evaluated based on inherent chemical properties and behaviorally based usage characteristics over the chemical's life cycle. These criteria are assessed and integrated within a decision analytic framework, facilitating rapid assessment and prioritization for future targeted testing and systems modeling. A case study outlines the prioritization process using 51 chemicals. The results show a preliminary relative ranking of chemicals based on exposure potential. The strength of this approach is the ability to integrate relevant statistical and mechanistic data with expert judgment, allowing for an initial tier assessment that can further inform targeted testing and risk management strategies.

Introduction

Manufactured chemicals are widely used in products such as cosmetics, plastics, and electronics, and have applications in almost all industrial processes in sectors including energy, agriculture, and pharmaceuticals [1]. Increasing dependence on manufactured chemicals has not, however, been matched by an adequate increase in our understanding of the risks these may pose to the environment and human health [2]. Many chemicals in U.S. commerce today have unknown environmental fates and poorly understood potential for human exposure, including some of the most ubiquitous commercial chemicals, such as surfactants, fragrances, cleaning agents and pesticides [3,4]. In this context, exposure is the contact of a stressor (i.e., a chemical agent) with a receptor (i.e., a human or a human population) for a specific duration of time [5]. Because of the lack of resources and of sufficient scientific information on toxicity [6] and exposure [3] for the assessment of all chemicals, efforts are typically, and rationally, devoted to assessing those chemicals believed to pose the greatest potential risks based on production volume and chemical properties. Within the domain of human health risk assessment, toxicity is an indication and measurement of the severity of adverse health effects a chemical causes in relation to an exposure level (dose). The stressors of interest here are chemical agents that can potentially lead to an adverse impact, and the receptors of interest are individuals or populations of individuals. Exposure is complex and dynamic in nature due to its spatial and temporal characteristics. For this reason, exposure-based prioritization efforts focus on relative exposure potential as a means to evaluate and rank chemicals. While prioritization is in and of itself a risk management strategy, other risk management decisions may follow, including the allocation of scarce resources to complete future risk assessments, the collection of additional data or testing, and/or (bio)monitoring.
Therefore, the resolution and precision of the data incorporated in these efforts may vary according to the overall objective of the prioritization. The U.S. EPA Office of Chemical Safety and Pollution Prevention recently performed a chemical prioritization exercise to identify 83 "TSCA Work Plan Chemicals" [7] as candidates for risk assessment during the next few years. Broad stakeholder input was used to identify prioritization and screening criteria and data sources. Chemicals were evaluated based on their combined hazard, exposure potential, and persistence and bioaccumulation characteristics using a two-step process. In the first step, a set of data sources was used to identify 1,235 chemicals meeting one or more criteria suggesting concern, namely: known reproductive or developmental effects; persistent, bioaccumulative, and toxic (PBT) properties; known carcinogenicity; and presence in children's products. Excluding those chemicals not regulated under TSCA and those with physical and chemical characteristics that do not generally present significant health hazards narrowed the number of chemicals down to 345 candidates. In the second step, a numerical algorithm was used to score each chemical based on three characteristics: hazard, exposure, and potential for persistence or bioaccumulation. Candidate chemicals that ranked highest on the basis of their total score were identified as work plan chemicals; those that could not be scored because of an absence of exposure or hazard data were identified as candidates for information gathering. Using the methodology described above, EPA has been able to identify a priority set of chemicals for near-term assessment based on criteria widely accepted as warranting concern. The scoring algorithm is transparent and the data sources are well documented. Focusing on chemicals with documented evidence of concern (i.e., "data-rich" chemicals) is reasonable in light of limited prototypes for post hoc screening and the paucity of available resources. However, this approach may not adequately address the need to make decisions about the thousands of chemicals in commerce and the hundreds of new chemicals introduced each year for which there is little or no information [1,3]. To support the development of novel rapid approaches for evaluating potential exposure of both existing and emerging chemicals, the EPA has initiated the ExpoCast research program [8]. This program is keenly interested in characterizing exposures across the chemical life cycle: manufacturing, transportation, product formulation, consumer product usage and, finally, disposal. EPA seeks to build on current chemical exposure models and knowledge to generate robust new protocols that better support chemical evaluation, risk assessment and risk management. Recent activities under this program have evaluated the utility of available approaches for the purpose of rapidly prioritizing large numbers of chemicals on the basis of exposure [9,10]. A number of exposure models were recently comparatively evaluated through the EPA ExpoCast model challenge, in which a set of approximately 50 data-rich chemicals of different classes was ranked by several different approaches [10]. The chemicals were chosen to include high interest chemicals with a range of properties. Each modeling approach was capable of analyzing a different number of chemicals from the full set because of varying input requirements.
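As an illustration of the two-step logic described above (criteria-based filtering followed by numerical scoring), the following Python sketch uses hypothetical field names and concern flags; it is a schematic reading of the process, not EPA's actual algorithm, which is documented in [7].

```python
from dataclasses import dataclass, field

@dataclass
class Chemical:
    name: str
    tsca_regulated: bool
    flags: set = field(default_factory=set)     # e.g. {"PBT", "carcinogen"}
    scores: dict = field(default_factory=dict)  # e.g. {"hazard": 2.0}

CONCERN_FLAGS = {"repro_dev_effects", "PBT", "carcinogen", "childrens_products"}
SCORED = ("hazard", "exposure", "persistence_bioaccumulation")

def step1_filter(chemicals):
    """Step 1: keep TSCA-regulated chemicals meeting one or more
    concern criteria."""
    return [c for c in chemicals
            if c.tsca_regulated and c.flags & CONCERN_FLAGS]

def step2_rank(candidates):
    """Step 2: rank candidates by total score across the three
    characteristics; chemicals that cannot be scored become candidates
    for information gathering."""
    scorable = [c for c in candidates if all(k in c.scores for k in SCORED)]
    data_gaps = [c for c in candidates
                 if not all(k in c.scores for k in SCORED)]
    ranked = sorted(scorable,
                    key=lambda c: sum(c.scores[k] for k in SCORED),
                    reverse=True)
    return ranked, data_gaps
```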
Key findings of the comparative analysis among the prioritization schemes indicated significant differences in chemical ranking as a result of several factors: (1) which processes the model described across the source-to-effects continuum [11]; (2) the exposure metric or surrogate metric used for prioritization and which statistic was used (i.e., median, upper bound or lower bound estimate); (3) whether the model inputs included actual, modeled or unit emissions; (4) which exposure pathways were considered (i.e., from aggregated sources or through a dominant pathway); and (5) which types of exposure scenarios were considered (i.e., direct or indirect, diffuse source or concentrated source, etc.) [10]. Only mechanistic models characterizing exposure associated with environmental sources could rapidly evaluate and rank potential exposure for the majority of chemicals. To a great extent, this was due to both the minimum data requirements and the availability of predictive tools (i.e., QSARs) to generate model inputs that could be used to describe fate and transport under steady-state and equilibrium conditions. Of the other models evaluated in the EPA ExpoCast model challenge, those designed for evaluation of chemicals in specific exposure scenarios lacked data for chemical- and scenario-specific input parameters and were thereby inhibited in their ability to produce ordinal rankings for the 55 chemicals. Arguably, one of the major limitations of the models evaluated, and perhaps one of the larger knowledge gaps in exposure-based chemical prioritization itself, involves the complex social behaviors that determine how humans come in contact with manufactured chemicals, particularly those emanating from near-field sources (e.g., residential and consumer products). Thus there is a pressing need to enhance current approaches with tools and techniques developed for understanding human behaviors, such as human factors engineering and marketing research, to better define scenarios describing how products are used. Accurate use scenarios among population groups of interest are necessary to properly characterize the consumer use component of a chemical's life cycle. Decision support tools borne out of the social sciences may also have a place in chemical prioritization. Multi-criteria Decision Analysis (MCDA), a rule-based method of classification for priority setting, is both a set of techniques and an approach for ranking alternatives [12,13]. MCDA is a promising approach for exposure-based prioritization because it is transparent and understandable, yet complex and rigorous enough to include scenario-based reasoning, stochastic processes and value-of-information analysis. Moreover, it is amenable to sparse data [14,15,16,17]. These characteristics complement some of the limitations of currently available statistical, mechanistic, or logic models, which provide useful frameworks for gathering relevant data but lack the social and policy context for risk-informed decision making. MCDA can merge a variety of types of exposure metrics, from descriptions of physical chemical properties to the socioeconomic measures that characterize human activity, chemical use and contact, to ultimately inform screening-level risk estimates. Permitting structured integration of different types of information, MCDA methods provide a means for combining quantitative chemical property, production and use data with expert judgments and stakeholder preferences.
MCDA assessment criteria can be adaptively weighted and modified in real time to evaluate both data-rich and data-limited chemicals. The use of MCDA methods to support prioritization decision making under high uncertainty has been demonstrated many times, including in hazard identification and assessment. Paralikas and Lygeros [18] used an MCDA approach to relatively rank risk management alternatives for industrial hazards and their consequences. As an example, the method recognizes that a single factor cannot be used to define flammability and that different methods, tools, codes and legislation use varying sets of fire hazard properties. Within the MCDA framework, the different decision criteria were successfully integrated using fuzzy logic to deal with linguistic variables and uncertainties, allowing broad application to chemical hazard ranking decisions. In another example, life cycle assessment (LCA) was incorporated within a decision framework to prioritize future research and evaluate sensitivities to missing information in an assessment of processes for synthesizing single-walled carbon nanotubes [14]. Engineered nanomaterials present uncertainties similar to chemicals in consumer products in terms of unknown environmental and human health impacts across all life stages from formulation to disposal. This paper demonstrates how analytical tools, such as LCA and MCDA, can offer a versatile and transparent approach to exposure-based prioritization, utilizing results from several approaches evaluated in the EPA ExpoCast model challenge. The purpose of prioritization within this context is to focus resources on further evaluation of safety for chemicals with high potential for exposure and risk. A combination of exposure assessment model output with qualitative exposure criteria within such a decision framework has been recommended in the exposure-based waiving protocol within Europe's REACH Regulation [19], which shares some similar goals for human and environmental health protection.

Materials and Methods

We propose a decision analytic approach for exposure-based chemical prioritization to address the need for novel, rapid exposure potential screening protocols. In this approach, we build on current research and existing models by evaluating relevant chemical exposure criteria within a larger MCDA framework. We employ a two-part prioritization model that incorporates both properties of the chemical itself and properties of the chemical's life cycle (Figure 1). The chemical property and life cycle property assessments are structured to analyze exposure-related information associated with specific chemical properties and distinct life cycle phases, respectively. Relevant chemical and life cycle properties are grouped into several criteria based upon the means by which each property contributes to the chemical's overall exposure potential (e.g., properties associated with a chemical's ability to bioaccumulate vs. those associated with its ability to be metabolized by the human body). Chemical and life cycle properties in each criterion are then further divided into various sub-criteria. The numerical values associated with these properties for a given chemical serve as inputs to the model. Input data can be obtained from a number of different sources, including existing databases, current literature and expert judgment.
The criteria within this decision model were selected by reviewing those used in the models submitted to the ExpoCast model challenge [10], and were then structured into a hierarchical framework based on discussions with exposure science experts. Within each sub-criterion, the constituent chemical or life cycle property is evaluated to determine its contribution to overall exposure potential. Input values for individual properties are compared against established numerical thresholds, which define distinct levels of risk that span the range of possible values for the given sub-criterion. Thresholds are used to score property values based on the indicated level of risk (e.g., a compound with a longer half-life may have higher potential for exposure than a compound with a shorter half-life, all other things being equal). Following an MCDA approach, sub-criterion scores are then combined according to explicit decision rules to derive scores for their higher-level criterion. Chemical property and life cycle phase criterion scores are then combined to produce a Chemical Properties Exposure Score (CPES) and a Life Cycle Exposure Score (LCES) for each chemical. These scores reflect relative estimates of chemical exposure potential as indicated by available chemical property and life cycle property data, respectively. Exposure scores may then be integrated to derive aggregate measures of exposure potential, which can be used to compare and prioritize chemicals on a relative basis, or can remain separate and be plotted on a risk matrix for a more qualitative assessment. Chemical property and life cycle phase criteria can be weighted within each assessment to reflect their relevance to the user's management objectives. Weights may indicate a specific focus of the assessment or reflect expert judgment of a criterion's predictive reliability or relative importance. Criterion weights can be adjusted to refine the scope of a particular assessment to a particular class of chemicals (e.g., pesticides), a particular exposure scenario (e.g., occupational exposure), or a particular exposure target (e.g., environmental contamination). When eliciting subjective weights, it is important to utilize best practices to avoid potential biases and inconsistencies [20,21]. Numerous elicitation techniques exist, including rank-based methods and swing-weight methods [13,21,22].

Chemical Properties Assessment

As seen in Figure 1, the Chemical Properties Assessment considers four main criteria to estimate potential risk for human exposure: bioaccumulation potential, persistence, ADME (Absorption, Distribution, Metabolism, and Elimination), and physical hazard potential. Each criterion comprises a unique set of sub-criteria, which define the distinct chemical property data points that serve as inputs to the assessment. Observed chemical properties used to estimate exposure potential are defined by the specific sub-criteria under each of the four main criteria. Using thresholds established for each sub-criterion, individual data points are evaluated and assigned scores representing the potential for exposure indicated by the observed chemical property. Once these initial scores have been calculated, the highest within each set of sub-criteria is assigned as that criterion's exposure score. When certain chemical-specific data are unavailable, as is often the case in this context, it may not be possible to assign scores to each sub-criterion. By defining each criterion's exposure score as the highest of its associated sub-criteria scores, we account for this possibility, and criterion scores can be assigned even in the presence of sparse data. Each chemical's bioaccumulation, persistence, ADME, and physical hazard scores are combined with their associated weights. The weighted criterion exposure scores are then summed to produce an initial chemical property exposure score for each chemical. Once this has been done for the set of chemicals being assessed, the initial chemical property exposure scores are normalized from 0 to 1 to produce relative rankings.
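A minimal sketch of this scoring logic follows. The threshold values are hypothetical placeholders for those in Table 1 (not reproduced here); the max-over-sub-criteria rule, the weighted sum, and the 0-1 normalization follow the text.

```python
import numpy as np

def threshold_score(value, thresholds):
    """Score a property value 1-4 against ascending thresholds.
    Example: thresholds (100, 1000, 5000) map BCF < 100 -> 1, ... >= 5000 -> 4.
    The threshold values here are hypothetical placeholders for Table 1."""
    return 1 + sum(value >= t for t in thresholds)

def criterion_score(subcriteria_scores):
    """A criterion's exposure score is the highest available sub-criterion
    score; missing data (None) are simply skipped."""
    available = [s for s in subcriteria_scores if s is not None]
    return max(available) if available else None

def cpes(criterion_scores, weights):
    """Initial chemical property exposure score: weighted sum of the four
    criterion scores (bioaccumulation, persistence, ADME, physical hazard)."""
    return sum(w * s for w, s in zip(weights, criterion_scores))

def normalize(scores):
    """Normalize initial scores across the chemical set to the 0-1 range
    (assumes the scores are not all identical)."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)
```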
Bioaccumulation. Bioaccumulation is a process in which a chemical substance is absorbed by an organism via all routes of exposure in the natural environment, for example through dietary and ambient environmental sources, and increases in concentration over time [23]. Using three bioaccumulation-related sub-criteria, we evaluate surrogate chemical properties in order to predict the compound's ability to bioaccumulate.

Bioconcentration factor (BCF). A compound's BCF is a dimensionless number representing the relative concentration of the compound in organic tissues. In general, chemicals with relatively higher BCFs have greater potential for exposure, and thus are more likely to adversely impact human health and the environment. In this model, four distinct numerical thresholds were used to evaluate chemical BCF data. These thresholds are shown in Table 1, and were used to assign each chemical a BCF sub-criterion score from 1-4 based on the indicated level of bioaccumulation potential. Thresholds are based on previously published values employed by existing exposure assessment models: the EPA Design for the Environment Program [24], and the Clean Production Action's Green Screen for Safer Chemicals Initiative [25]. To address minor numerical discrepancies, the more conservative thresholds were chosen when values differed between models.

Log K_ow. A compound's K_ow, or octanol-water partition coefficient, describes its ability to transition between water and carbon-based media. Chemical compounds with relatively higher log K_ow are capable of greater movement within the environment; they are thus more mobile and have higher potential for human exposure and absorption. In this model, four distinct numerical thresholds were used to evaluate chemical K_ow data. These thresholds are shown in Table 1, and were used to assign each chemical a log K_ow sub-criterion score from 1-4 based on the indicated level of bioaccumulation potential. Thresholds are based on previously published values employed by existing exposure assessment models: the EPA Design for the Environment Program [24], and the Clean Production Action's Green Screen for Safer Chemicals Initiative [25], with the more conservative threshold chosen when values differed between models.

Molecular weight. Previous studies have identified a significant correlation between a compound's molecular weight and its ability to bioaccumulate [26,27]. Results from these studies support the general conclusion that heavy molecules do not easily bioaccumulate, as their size hinders passage through lipid membranes. Lower weight chemicals thus possess a relatively greater potential for human exposure.
These and similar findings have been used to inform chemical testing policy and legislation, such as the Chemical Substance Control Law (CSCL) in Japan [28] and the EPA Toxic Substances Control Act (TSCA) in the United States [29]. A single cut-off threshold is employed by our model to evaluate molecular weight data. Molecules of 1000 amu or greater are given a bioaccumulation criterion score of 1, regardless of their other sub-criteria scores within the bioaccumulation category (BCF and log K_ow). The 1000 amu cut-off follows TSCA premanufacture notification policy [29], and is based on the current understanding that molecular weights in this range are generally better indicators of chemical bioaccumulation potential than other surrogate properties [26].

Persistence. Persistence corresponds to the length of time a chemical can exist in the environment before degrading or being transformed by natural processes [23]. Persistent chemicals are more likely to come into contact with humans compared to chemicals that degrade quickly in the environment. We consider the half-life in water, soil, sediment, and air for each chemical as surrogate indicators of persistence for the purpose of evaluating exposure potential. The numerical thresholds used for evaluating chemical half-life data are shown in Table 1. Thresholds were used to assign each chemical four distinct half-life sub-criterion scores from 1-4 based on the level of persistence indicated by each of the four half-lives (in water, soil, sediment, and air). Threshold values for water, soil, and sediment are based on previously published values employed by existing exposure assessment models: the EPA Design for the Environment Program [24], and the Clean Production Action's Green Screen for Safer Chemicals Initiative [25], using the more conservative thresholds. The threshold value for air follows science-based guidance for evaluating chemical long-range transport potential and overall persistence [30]. Chemicals with half-lives in air of less than two days are assigned an associated sub-criterion score of 1 ("Low"), while those with half-lives in air greater than or equal to two days are assigned a score of 3 ("High").

ADME. Properties that describe a chemical's ability for absorption, distribution, metabolism, and excretion (ADME) are indicators of the potential for biologically relevant human exposure. Chemicals that can be easily absorbed by the body and that are resistant to metabolism or excretion pose a greater threat of extended exposure; therefore it is useful to focus on the entrance and exit of chemicals within the context of the body. Though recent and current ADME-related research efforts have focused on establishing appropriate surrogate properties and developing predictive models, general consensus has not been reached regarding an accepted approach to ADME assessment for environmental chemicals [10]. Building on current research and existing models, a new ADME assessment protocol intended for screening-level exposure-based chemical prioritization was incorporated into the framework [10]. This method utilizes QikProp software Version 3.0 [31], a QSAR-based model, to obtain surrogate chemical property values, which were then integrated to evaluate ADME properties along the various sub-criteria briefly discussed below. All QikProp values are based on a 24-hour exposure period.
Incidentally, QikProp is a three-dimensional structure-based method, so the SARs depend on the solvent-accessible surface area. The properties calculated are dependent on the conformer adopted at the time of calculation and could be sensitive to molecular orientation. In addition, QikProp was designed exclusively for the development of organic pharmaceutical compounds, so it cannot be used for metals and inorganic compounds. Thus, if the analytics discussed herein are to be applied to metals and inorganic compounds, another QSAR system is needed.

Absorption. The chemical absorption assessment is based on two QikProp predictors which describe oral availability. The first descriptor represents a qualitative measure of oral absorption potential, and takes values of 1, 2, or 3 for low, medium, or high, respectively. The second descriptor represents a numerical probability of oral absorption on a 0 to 100 % scale, with <25 % and >80 % designating low and high probability, respectively. These values were combined to derive an absorption score (1-3) for each chemical.

Distribution/excretion. Distribution- and excretion-related properties were combined into a single assessment. QikProp-predicted octanol/water partition coefficients, serving as surrogates for half-life within the human body, were categorized into bins using subjective thresholds to derive a distribution/excretion score (1-4) for each chemical.

Metabolism. The assessment of metabolism was derived from the QikProp descriptor representing the number of expected possible metabolites for each chemical over a 24-hour period in the human body. These values were categorized based on the predicted half-life of each chemical in order to represent metabolism via natural degradation in the body. These values were combined to generate average metabolism scores (1-4) for each chemical.

Physical hazard potential. Highly flammable and reactive chemicals pose human and environmental threats that may not be considered in standard exposure or toxicity-based assessments. Though the properties that determine a given chemical's flammability and reactivity may be distinct from those that determine its environmental fate and transport, the threat of physical hazard is nonetheless directly related to the likelihood of exposure. The risk of physical hazards (e.g., combustion) is thus an exposure-related risk, and we assess each chemical's hazard-related properties in order to anticipate threats that may not be considered in other exposure or toxicity-based screenings. In accordance with existing National Fire Protection Association (NFPA) standards and classifications [32], flammability and reactivity were assigned scores of 1-4 using established NFPA thresholds.

Chemical Life Cycle Properties Assessment

Similar to the assessment of chemical properties, we estimate potential for human exposure by assessing three main life cycle phases of manufactured chemicals: production, consumer use, and disposal. Each phase comprises a unique subset of exposure-related criteria, which define the distinct life cycle characteristics that serve as inputs to the assessment. The different criteria associated with each of the three life cycle phases designate the individual life cycle properties that will serve as indicators of a chemical's exposure potential during the relevant phase. All life cycle criteria are evaluated quantitatively, with higher values indicating higher potential for exposure.
Instead of establishing thresholds for each sub-criterion as in the assessment of chemical properties, raw values are used but are then normalized across the set of chemicals for each individual sub-criterion. This provides bounds for the range of values and assists in making comparative assessments. Criterion scores are then calculated by summing the sub-criterion scores. Again, these scores are normalized across the set of chemicals to account for criteria containing more sub-criteria than others, and then multiplied by their weights to produce an initial Life Cycle Properties Exposure Score (LCES). Once initial LCESs have been calculated for all chemicals, we derive final LCESs by normalizing initial scores to the highest and lowest observed scores across all chemicals.

Production

Number of potential exposure sources. Each chemical is evaluated to determine the possibility for human exposure during processes associated with production of the chemical. We consider one potential source (occupational microenvironments), defined as any workplace environment in which a release might occur during chemical manufacture and/or processing. Each chemical is assigned a score of either 0 or 1 based on whether the compound presents risk of exposure during production.

Projected average annual number of production sites. A chemical's exposure risk is increased if it is produced in many locations. Ubiquity classifications for each chemical were used to estimate the number of chemical production sites [10]. Higher scores indicate increased potential for human exposure during chemical production: very widespread (5), widespread (4), moderate (3), localized (2), low (1).

Regional geometric mean production quantity (MQ_R). In addition to how widespread production is, estimates are made of the quantity produced. This is estimated using the regional geometric mean production quantity (MQ_R), measured in units of kilotons per year. This is an estimated quantity, but production quantities could also be provided by industry.

Consumer Use

The assessment evaluates several sub-criteria relevant to the consumer use phase in the life cycle of manufactured chemicals. Based on the intended uses of each chemical, the primary consumer class is defined as either strictly industrial, or industrial and individual. Chemicals used during industrial processes (e.g., monomers, solvents) and chemicals otherwise noted to have primarily industrial consumers were defined to have a strictly industrial consumer class. Chemicals used in agriculture (e.g., pesticides, insecticides, herbicides) or as food/cosmetic additives (e.g., preservatives, anti-microbials) were defined to have both industrial and individual consumers. Chemicals directly incorporated into consumer products during their production (e.g., plastics, coatings, fabrics, flame retardants) are also defined to have both industrial and individual consumers.

Number of potential exposure sources. Each chemical was evaluated to determine the possibility for human exposure during processes associated with both industrial and individual consumer uses of the chemical. Ten distinct potential sources associated with consumer exposure were considered (i.e., outdoor air, water, soil, biota, indoor air/dust, in-vehicle air, object contact, tap water, other water, food/beverages), with each chemical assigned a score from 0-10 based on the possibility for exposure via each unique source during consumer use of the compound.

Projected average annual number of individual consumers.
Chemicals defined as having industrial and individual consumer classes were assessed to determine their potential for exposure to individual consumers in non-industrial settings. Chemical ubiquity classifications were used to represent the relative size of each chemical's average annual individual consumer base. Chemicals defined as having strictly industrial consumer classes were assigned individual consumer scores of 0. The remaining chemicals were assigned scores from 1-5 based on their ubiquity, with higher scores indicating increased potential for individual consumer exposure during non-industrial use: very widespread (5), widespread (4), moderate (3), localized (2), low (1).

Projected average annual number of industrial consumers. To assess chemicals' potential for exposure to industrial consumers, we employ the ubiquity classification to estimate the average annual size of each chemical's industrial consumer base. As none of the chemicals assessed were defined as having a strictly individual (non-industrial) consumer base, all chemicals were assigned scores from 1-5 based on their ubiquity classification, with higher scores indicating increased potential for industrial consumer exposure during use of the chemical: very widespread (5), widespread (4), moderate (3), localized (2), low (1).

Projected average annual quantity consumed per individual/industrial consumer. The average annual quantity of each chemical consumed per consumer was predicted using the relative size of the chemical's total consumer base (including both individual and industrial consumers) and its MQ_R. Relative measures of consumption quantity per consumer (Q) were calculated by dividing each chemical's projected mean production volume by its total number of consumers, assuming chemicals with higher consumption quantities to have increased potential for consumer exposure. Projected annual quantities consumed per individual consumer were calculated using the same equation as that for industrial consumers:

Q_i = MQ_R,i / (n_i,Individual + n_i,Industrial),

where (n_i,Individual + n_i,Industrial) represents the chemical's total consumer base, or the number of individual consumers plus the number of industrial consumers.

Susceptible populations. To determine if there was a heightened exposure risk to susceptible populations (in this case, children), particular processes associated with individual consumer use of the chemical were evaluated. Nine distinct potential sources associated with exposure to children were considered (outdoor air, water, soil, indoor air/dust, in-vehicle air, object contact, tap water, other water, and food/beverages), and each chemical was assigned a score from 0-9 based on the possibility for exposure via each unique source.
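The per-consumer quantity defined above reduces to a one-line computation; the sketch below assumes the reconstructed equation for Q, with illustrative variable names.

```python
def quantity_per_consumer(mq_r, n_individual, n_industrial):
    """Relative consumption quantity per consumer,
    Q_i = MQ_R,i / (n_i,Individual + n_i,Industrial),
    where mq_r is the regional geometric mean production quantity
    (kilotons per year) and the consumer counts are the ubiquity-based
    estimates described in the text."""
    return mq_r / (n_individual + n_industrial)

# Example: 12 kilotons/yr spread over 4 individual-consumer units and
# 2 industrial-consumer units gives Q = 2.0 kilotons/yr per consumer unit.
print(quantity_per_consumer(12.0, 4, 2))
```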
Assuming that each chemical's industrial and individual consumers dispose of equal amounts of the compound, we define the projected number of disposal events as each chemical's total number of consumers, and assign scores of 1-10, with higher scores representing greater potential for disposal-related human exposure. Projected average annual quantity disposed. To account for assumed variations in the actual quantities disposed during industrial and individual consumer disposal events, we assume that 0.1% of the net production volume of each chemical is disposed of in order to evaluate disposal-related exposure potential. Note that the use of this unit value assumes that no chemical- or product-specific data were available. With larger disposal quantities indicating higher potential for post-disposal chemical exposure, we calculate the relative disposal quantity of each chemical (Q_DISP) as: Q_DISP = 0.001 × (net production volume). Integrating Chemical Properties and Life Cycle Exposure Scores. Once assessments of chemical properties and life cycles have been performed on all chemicals, those chemicals lacking sufficient data to calculate either a chemical properties exposure score or a life cycle exposure score are removed from the remainder of the prioritization. Though these chemicals' available scores may indicate a significant threat of exposure, they are excluded from the integration process, as their scores can skew final exposure potential relationships. The remaining chemicals' scores are renormalized as xES_final = (xES − xES_min) / (xES_max − xES_min), where xES denotes the relevant exposure score (either chemical or life cycle) and the minimum and maximum are taken over the remaining chemicals. Next, the remaining chemicals' exposure scores (chemical property and life cycle property) are summed to produce aggregate exposure scores. These scores represent cumulative measures of exposure potential based on each chemical's distinct properties and the characteristics of its projected life cycle. Aggregated exposure scores, which all lie in the range of 0-2, are used to numerically rank chemicals based on their potential for human exposure. In addition to this quantitative integration, chemical property and life cycle scores can be visualized using a risk-reporting matrix (Figure 2) for a more qualitative assessment of aggregate chemical exposure potential. In this method of integration, chemical property and life cycle exposure scores are converted from a scale of 0-1 to a scale of 0-5 by multiplying the initial score by a factor of five to place them within the 5×5 risk matrix, with each chemical's position representing a qualitative, cumulative measure of exposure potential based on both chemical and life cycle properties. Qualitative exposure potential thresholds (red, yellow, or green) can be defined within the matrix to designate high, moderate, and low risk regions. Data Set. For the case study, a set of 51 chemicals was selected from those presented and evaluated in the model challenge (Table 2), representing a wide variety of chemical classifications (e.g., organics, metals, etc.). Sub-criterion scores for these chemicals were collected from numerous reports and online databases, and the sources for each sub-criterion are listed in Table 3. Case study data can be found in the online File S1. Prioritization. First, the data for each chemical were compiled. It was found that some chemicals were difficult to assess due to a lack of readily available data. If a chemical did not have any sub-criterion scores for at least one of its criteria, that chemical was removed from the analysis process as having too little data for analysis.
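Before turning to the case-study results, a minimal sketch may make the normalization, weighting, and aggregation arithmetic described above concrete. The function and variable names and the illustrative numbers are our own assumptions, not part of the published method.

```python
import numpy as np

def minmax(values):
    """Normalize a 1-D array to [0, 1] against its observed min and max."""
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

# Hypothetical sub-criterion scores for four chemicals (rows) under one
# life-cycle criterion with three sub-criteria (columns).
sub_scores = np.array([
    [1.0, 5.0, 2.0],
    [0.0, 3.0, 0.4],
    [1.0, 1.0, 1.1],
    [1.0, 4.0, 3.2],
])

# 1. Normalize each sub-criterion across the chemical set (column-wise).
norm_sub = np.apply_along_axis(minmax, 0, sub_scores)

# 2. Criterion score: sum the sub-criterion scores, then renormalize so
#    that criteria with more sub-criteria do not dominate the total.
criterion = minmax(norm_sub.sum(axis=1))

# 3. Weight the criterion (1/3 each for production, consumer use, and
#    disposal in the case study) and sum across criteria; a final min-max
#    pass over the summed scores yields the final LCES per chemical.
initial_lces = criterion / 3.0    # only one criterion shown here
final_lces = minmax(initial_lces)
print(final_lces)
```

A chemical-property score computed the same way would then be added to yield the 0-2 aggregate exposure score, or multiplied by five for placement in the 5×5 risk matrix.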
Nine of the 51 chemicals (largely metals) were removed for this reason. Following the MCDA approach outlined above, each of the remaining test chemicals was assessed. Scores for each criterion were weighted by allocating equal weights (i.e., bioaccumulation, persistence, ADME, and physical hazards each weighted 25%; production, consumer use, and disposal each weighted 33.33%). The final prioritization under this weighting distribution is shown in Table 4. The risk matrix comparison under this weighting distribution is shown in Figure 3. Discussion. As stated above, one of the major limitations of currently available exposure models involves the inability to fully characterize the influence of complex social behaviors on resulting exposures or contact between humans and manufactured chemicals across all life stages of the chemical. This is especially true for chemicals used in residential and consumer products, those arising from near-field sources. A multi-criteria decision model was developed to combine typical physicochemical screening-level data with measures to characterize human activities. As a proof of concept to show the utility of this approach, a case study was conducted on a small set of chemicals that were also analyzed using higher-tiered statistical and mechanistic exposure models in a model challenge [10]. The models used in the model challenge considered different types of exposure scenarios, including indirect exposures from diffuse environmental sources and direct, concentrated exposures from micro-environmental sources (e.g., from a personal care product or within a residence), though the latter had significant limitations in terms of the data necessary to produce exposure estimates. Ranking results were obtained by three models, and the comparative analysis is reported elsewhere [10]. Some agreement between ranking results was observed, but in general these models produced widely incongruous results across a number of different domains of information. Interestingly, some of the results using the MCDA model developed herein coincide with results from these more complex models. The majority of the chemicals (13 of 14) ranked in the top one-third of the list in Table 4 (Ranks 1-14) are also ranked in the top one-third of one of the models evaluated in the challenge. In general, this agreement is with a "far-field" indirect diffuse-source model which does not incorporate human activity at the micro-environmental level. Nonylphenol was the exception, as it was ranked low by all other mechanistic models. Similarly, the bottom third of the ranked list in Table 4 (Ranks 28-42) shows high agreement with results from a model from the challenge. One model characterized both far-field and near-field exposures, and the other two were far-field models. Because this analysis was conducted as a proof of concept, an exhaustive search for quality data and subsequent data validation was not conducted independently of the model challenge. However, the absence of the mechanistic relationships involved in the exposure models, as well as the equal weighting scheme used in our example, would lead to the assumption that the input drivers of the challenge models would be different from the input drivers of the MCDA model. To fully explore this assumption and the utility of this methodology for larger-scale research prioritization or policy guidance, quality data inputs are needed; the results of the case study underscore this point. Only nine of the chemicals had to be excluded.
These chemicals have properties that exclude them from the domain of applicability of the analytical tools employed, e.g., QSAR-type models and similar approaches. As mentioned, metals and inorganic compounds are not characterized by the ADME models used in this study. For the majority of compounds that fall within the domain of applicability, the MCDA approach is useful. As shown in Table 4, the majority of the chemicals used in plastics appear in the top half of the ranked list, denoting the highest exposure potential by the highest aggregated exposure score. Plastics are broadly related to exposures that occur in all locations across the life cycle of the chemicals. The chemicals in the bottom half of the ranked list (lower exposure potential) fit into a number of other categories, but 11 of 21 are or were used as pesticides/herbicides, agriculturally, in homes, or in public and commercial areas. The two pesticides/herbicides, parathion and methoxychlor, are ranked relatively low on the list in Table 4. Both chemicals were used exclusively in agriculture, but have been previously banned or restricted by the EPA and do not have other uses, unlike 1,2,3-trichlorobenzene, ethylene thiourea, and hexachlorobenzene, which were also used exclusively in agriculture but are now used as nonfood commercial additives. The remaining chemical in the agricultural-only category is aldicarb. Aldicarb was restricted more recently, in 2010, and will not be completely phased out until 2018, so its exposure potential may be higher than that of the others in this category. It should be noted that the nature of this analysis is to score chemicals in a comparative and relative manner, as opposed to assigning an absolute measure of exposure risk, which would not be practical or appropriate for a screening tool such as this. The relative assessment of chemical exposure potential is therefore dependent upon the set or sub-set of chemicals under consideration, and this must be considered when designing the analysis and interpreting the results. If a risk matrix is used for interpretation or communication of exposure potential results, it is important to note that a chemical with a high chemical property score and low life cycle property score (or vice versa) may be displayed as having a low exposure risk. When the risk matrix is used for score integration, however, these chemicals will appear on the boundaries of the matrix and can easily be identified as outliers that may warrant further assessment. Figure 3 shows the results of the case study on such a risk matrix. The risk matrix approach can be used to graphically visualize qualitative risk categories such as high, medium, and low risk. The case study chemicals mostly fall within the same middle risk range of the matrix. Six chemicals fall into the higher exposure risk potential category and seven chemicals fall into the low exposure risk potential category based on the delineations shown in Figure 2. As a high-tier screening approach, this type of representation may be useful for rapid visualization and categorization of large numbers of chemicals; however, risk matrices should be used with caution when guiding risk management decisions [35]. Both the ranking and risk matrix approaches highlight the potential promise of multi-criteria decision analytic models for exposure-based prioritization, but further development beyond this effort is warranted.
Given that the baseline weighting scenario (equal weights distributed among the chemical property and life cycle criteria) is likely an unrealistic one, a sensitivity analysis should be conducted to explore the effects of uncertainty in both the scoring of chemical parameters and the weighting schemes on the final chemical prioritization; a minimal sketch of such an analysis is given below. This will help identify chemicals which are targets for further exposure assessment and data collection, ideally including better release characterization, proximal exposure assessment, and biomonitoring. Finally, it is important to recognize that these results are strictly a measure of exposure potential and do not consider toxicological properties. Risk is a function of both hazard and exposure. The means by which organisms are exposed to stressors are complex, with many feedback loops (e.g., an outcome may itself become a stressor or modify other stressors). Risks related to chemical ingredients in products depend not only on the inherent properties of that chemical, but also on the manner in which the chemical is formulated and used. Exposure potential might therefore be integrated with computational toxicology to paint a more complete picture of risk and to effectively prioritize the numerous chemicals in commerce. Conclusions. In this paper, we have presented a decision analytic approach to exposure-based prioritization of manufactured chemicals. The proposed methodology allows for structured and transparent analysis of chemical exposure potential through integration of heterogeneous metrics used to evaluate exposure risk-related information associated with both chemical properties and life cycle phases. The model is scalable to assess as many chemicals as is necessary for the project scope, and the MCDA framework is able to accommodate varied inputs and exposure potential indicators, providing an adaptive and easy-to-use screening tool for rapid prioritization in the face of sparse data. In addition, the use of weighting in the model allows for specific user objectives, expert judgment, and data availability considerations to be explicitly implemented within the assessment. The proposed approach builds on earlier models and current research relating to rapid evaluation of exposure potential. Specifically, it integrates the results of mechanistic and statistical approaches with semi-quantitative categorical data to describe exposure potential. In this paper, we attempt to address the need for high-level screening tools that (1) are capable of more detailed assessments than those provided by simpler predictive models (i.e., limited to persistence and bioaccumulation as indicators of exposure), and (2) have less intensive data requirements than more complex models, so as to remain efficient at the screening level. It is important to note that work on this model is ongoing, and that the initial framework presented in this paper is primarily intended to illustrate the application of decision analytic methods to supplement existing exposure potential estimation techniques. Currently, our developmental efforts are focused on: (1) refining ADME assessment criteria and calculations; (2) identifying optimal surrogates for bioaccumulation potential; (3) implementing value of information (VOI) techniques to quantify data gaps and prioritize further research efforts; (4) improving normalization algorithms; and (5) developing a supplemental logic model for more specific exposure scenario evaluation.
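As one possible shape for that sensitivity analysis, the sketch below perturbs the criterion weights around the equal-weight baseline and measures how stable the resulting ranking is. The scores, the weight distribution, and the function names are illustrative assumptions, not part of the published method.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((5, 4))            # 5 chemicals x 4 criteria (stand-ins)
baseline_weights = np.full(4, 0.25)    # the equal-weight baseline

def ranking(weights):
    """Order chemicals from highest to lowest weighted exposure score."""
    return np.argsort(-(scores @ weights))

baseline = ranking(baseline_weights)
agreement = np.zeros(5)
for _ in range(1000):
    # Random weights that sum to 1, concentrated around the baseline.
    w = rng.dirichlet(np.full(4, 10.0))
    agreement += (ranking(w) == baseline)

# Fraction of perturbed weightings in which each rank position is held by
# the same chemical as under the baseline weighting.
print(agreement / 1000)
```

Rank positions that survive most perturbations would be robust prioritization targets; unstable positions flag chemicals whose ranking is an artifact of the weighting choice.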
Additionally, we are working to develop formal means of considering expert judgment and empirical chemical exposure data within our assessments. In the future, we anticipate that the decision analytic approach will be able to provide decision makers with important and reliable information to support efficient, exposure-based prioritization of manufactured chemicals. Supporting Information. File S1: Case Study Data.
Generalized Mutual Synchronization between Two Controlled Interdependent Networks

Introduction. In recent years, extensive efforts have been devoted to understanding the properties of complex networks [1][2][3][4][5]. Particularly, as one of the most interesting and significant collective behaviors in the real world, synchronization in complex dynamical networks has received increasing interest owing to its many potential applications in nature, socioeconomic systems, and engineering [6]. In the existing literature, it has been recognized that the network topology plays a significant role in the synchronizability of diffusively coupled complex networks [7,8]. Also, by using some effective control schemes, a variety of synchronization phenomena have been discovered in various complex networks (see [9][10][11][12][13][14][15][16][17][18] and relevant references therein). However, the studies mentioned above focused almost exclusively on the inner synchronization inside a single, noninteracting network. Li et al. [19] studied outer synchronization (in this paper, we call it mutual synchronization, to be defined in Section 2), referring to the synchronization between two or more networks. However, to the best of our knowledge, it can be realized mainly by the open-plus-closed-loop method [19,20] or based on the drive-response concept [21][22][23][24][25][26][27], considering only the intranetwork coupling of the network itself. Zheng et al. [28] and Wu et al. [29] further studied the outer synchronization between two complex networks considering two kinds of internetwork coupling; nevertheless, they both still derived the synchronization criteria based on the drive-response concept and did not place the outer synchronization in the context of interdependent networks.
It is well known that many real-world network systems do interact with and depend on each other; for instance, various infrastructures such as transportation, water supply, fuel, and power stations are coupled together; realistic neuronal networks have a clustered structure and can be viewed as interdependent networks; an epidemic can spread between the coupled networks of the infection layer and the prevention layer; and, dealing with secure information and cryptography, one can couple two systems to achieve mutual synchronization, and so forth. Recently, Buldyrev et al. [30] studied interdependent networks by presenting the future smart grid as a real-life example, where the electrical power grid depends on the information network for control and the information network depends on the electrical power grid for its electricity supply. Then, Mei et al. [31] emphasized that it was urgent to research interdependent-network theory for the smart grid. Also, Brummitt et al. [32] demonstrated how interdependence affected cascades of load using a multiple branching process approximation. In short, efforts have been directed to the cascading failures and robustness of interdependent networks [33][34][35][36][37]. In general, it has been recognized that interdependent topologies, especially the interlinking strategy and the internetwork coupling strength, play a vital role in the cascading behaviors and robustness of interdependent networks. Analogously, this motivates us to attempt to explore the effects of interdependent topologies on the mutual synchronization between two interdependent networks. Quite recently, Um et al. [38] placed synchronization behavior in the context of interdependent networks, where a one-dimensional regular network is mutually coupled to a WS small-world network. Based on a mean-field analytic approach, it has been revealed that the internetwork coupling and the intranetwork coupling play different roles in the synchronizability of the WS network. However, that study is still limited to inner synchronization in one of the two interdependent networks, and hence it is necessary and significant to study the mutual synchronization between two controlled interdependent networks. The major contributions of our work are as follows. First, we propose a general model of two controlled interdependent networks A and B, which takes into account not only the intranetwork coupling but also the time-varying internetwork delays coupling. Second, we place synchronization in the context of two controlled interdependent networks and study the generalized mutual synchronization of the proposed model. Third, in the numerical examples, to explore the potential application in the smart grid, we couple an NW small-world network described by chaotic power system nodes and a scale-free network described by Lorenz chaotic systems, following two interdependent interlinking strategies, respectively. Finally, we verify the influences of the intranetwork and internetwork coupling and the internetwork delays on the controlled mutual synchronizability, which can help to design optimal interdependent networks. The remaining part of this paper is organized as follows. Section 2 introduces some useful mathematical preliminaries and proposes the general model of two controlled interdependent networks. The generalized mutual synchronization is investigated and the main theoretical results of this paper are given in Section 3.
In Section 4, two numerical examples are provided to explore the potential application in the smart grid and to illustrate the correctness and effectiveness of the theoretical results. Finally, some conclusions and further work are given in Section 5. Preliminaries and Model Presentation. 2.1. Notations. The standard mathematical notations will be utilized throughout this paper. Let R = (−∞, +∞), let R^n be the n-dimensional Euclidean space, and let R^{n×n} be the space of n × n real matrices; I_n ∈ R^{n×n} denotes the n-dimensional identity matrix; we use A^T or x^T to denote the transpose of the matrix A or the vector x, respectively; λ_max(·) is the maximum eigenvalue of the corresponding real symmetric matrix; ‖x‖ = √(x^T x) stands for the 2-norm of the vector x; and ⊗ presents the Kronecker product of two matrices. 2.2. Model of Two Controlled Interdependent Networks. For simplicity and without loss of generality, we consider the following model of two controlled interdependent networks (1) and (2) (we call them networks A and B, respectively, in this paper), each consisting of N identical nodes with time-varying internetwork delays coupling. The dynamical equations for the model of controlled interdependent networks A and B are given by Equations (1) and (2), where x_i(t) = (x_{i1}(t), x_{i2}(t), ..., x_{in}(t))^T ∈ R^n (y_i(t) = (y_{i1}(t), y_{i2}(t), ..., y_{in}(t))^T ∈ R^n) is the state variable of the i-th node in network A (B) at time t; f: R^+ × R^n → R^n (g: R^+ × R^n → R^n) is a smooth vector function; A = (a_{ij})_{N×N} (B = (b_{ij})_{N×N}) stands for the intranetwork coupling matrix describing the topological structure of network A (B); namely, if there is a connection from node j to node i in network A (B), then a_{ij} (b_{ij}) = 1; otherwise, a_{ij} (b_{ij}) = 0. However, C = (c_{ij})_{N×N} (or D = (d_{ij})_{N×N}) is the internetwork coupling matrix representing the direct interaction from node j in network B to node i in network A (or from node j in network A to node i in network B); that is, if there exists such a connection, then c_{ij} (d_{ij}) = 1; otherwise, c_{ij} (d_{ij}) = 0. Each node i also carries an intranetwork and an internetwork coupling strength; Γ is an inner coupling matrix describing the interactions between the coupled variables; τ_1(t), τ_2(t) are the time-varying internetwork coupling delays between networks A and B, respectively; and u_i(t) ∈ R^n are the nonlinear controllers to be designed later for the mutual synchronization. Assumption 2. Suppose that the vector function g(⋅) is Lipschitz continuous; namely, for any x, y ∈ R^n and a constant μ > 0, the following inequality holds: ‖g(y) − g(x)‖ ≤ μ‖y − x‖. (5) Assumption 3. Suppose that the time-varying delays τ_1(t), τ_2(t) are continuously differentiable functions with 0 ≤ τ_1(t), τ_2(t) ≤ h < ∞ and 0 ≤ τ̇_1(t) ≤ ε_1 < 1. Clearly, this assumption holds for constant τ_1(t), τ_2(t). Remark 4. Assumptions 2 and 3 are both general assumptions, which hold for a broad class of real-world chaotic systems, such as the Lorenz system, Chua's oscillator, the Chen system, and the Lü system [28]. Hence, in the following sections, we always assume that both assumptions hold. Lemma 5 (see [26]). If x, y ∈ R^n are any vectors, then the following inequality is true: x^T y ≤ (1/2) x^T x + (1/2) y^T y. (6) 3. Generalized Mutual Synchronization Criteria. In this section, by designing appropriate adaptive controllers, we establish some sufficient conditions to ensure the generalized mutual synchronization of the proposed general model in Section 2. Obviously, similar criteria can be deduced for any simple or typical examples derived from this general model.
Combining (1), (2), and (3), we can express the error system of the controlled interdependent networks A and B as ė_i(t) = ẏ_i(t) − J ẋ_i(t) = g(y_i(t)) − J f(x_i(t)) + ..., where J = ∂f(x_i)/∂x_i is the Jacobian matrix of the function f(x_i). Remark 6. From (7), one can see that adding appropriate controllers to the nodes is an alternative method to obtain mutual synchronization between two networks. In this paper, we thus mainly focus on the controlled mutual synchronization between two networks in the general context of two interdependent networks. Therefore, the intranetwork coupling matrices A and B and the internetwork coupling matrices C and D can be chosen arbitrarily, meaning that it is not necessary to assume diffusivity, symmetry, or irreducibility of the matrices A, B, C, and D. In addition, the topological structure, node dynamics, and dimension of the state vector of one network can be different from those of the other. Remark 7. It is well known that time delays commonly exist in node dynamics, intranetwork coupling, and internetwork coupling. However, we consider only the time-varying internetwork coupling delays, disregarding the others, in order to explore the effects of the internetwork coupling behavior on the mutual synchronization. It is noted that many networks of interest, like the Kuramoto model, have nonlinear coupling functions. Similarly, for simplicity, we consider only linear intranetwork and internetwork coupling. Theorem 8. Suppose that Assumptions 2 and 3 hold and that the adaptive controllers (8) and the corresponding update laws (9) are added to the error system (7). Then generalized mutual synchronization between the controlled interdependent networks A and B with time-varying internetwork delays coupling can be asymptotically realized. Here the controllers (8) involve time-varying feedback gains together with arbitrary positive constants. Remark 9. From the proof of Theorem 8, we know that the Lyapunov function V(t) is positive definite, its derivative V̇(t) is negative definite, and lim_{t→∞} e_i(t) = 0. According to Lyapunov stability theory, we can also conclude that the synchronization state e_i(t) = 0 is asymptotically stable. Remark 10. It is noted that (17) is just a sufficient condition, but not a necessary one, for the mutual synchronization between the controlled interdependent networks A and B. Based on Theorem 8, we can further obtain some similar synchronization criteria in the following two corollaries. Combining (23) and (24), we find that, under the action of the proposed adaptive controllers (8) and (9), the feedback gains do not depend on τ_2(t) or the associated internetwork parameters. Thus, in the following sections, it is reasonable not to consider the effects of τ_2(t) and these parameters on the mutual synchronization between the controlled interdependent networks A and B. Numerical Simulations and Results. In this section, two numerical examples and their simulations are given to illustrate the correctness and effectiveness of the theoretical results obtained in the previous sections and to identify the factors that influence the mutual synchronizability. To measure the speed and performance of the mutual synchronization process, we consider ‖e(t)‖, the 2-norm of the synchronization error e(t), 0 < t < +∞. Thus, the values of ‖e(t)‖ in the initial stage and at the end of the simulations indicate the mutual synchronization speed and performance, respectively. It should be particularly noted that, in all of the following simulations, the main figures and insets describe the values of ‖e(t)‖ during 0 ≤ t < 5 and at the end of the simulations (t = 5), respectively.
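As a small illustration of this diagnostic, the sketch below computes ‖e(t)‖ from sampled trajectories. The function name and array layout are our own choices, and we take the simplest case in which the generalized-synchronization map is the identity.

```python
import numpy as np

def sync_error_norm(X, Y):
    """2-norm of the mutual synchronization error e(t) = y(t) - x(t).

    X, Y: arrays of shape (T, N, n) holding the sampled states of the
    N nodes (each of dimension n) of networks A and B at T time points.
    """
    e = (Y - X).reshape(X.shape[0], -1)   # stack all node errors per time
    return np.linalg.norm(e, axis=1)

# Toy trajectories whose error decays: ||e(t)|| is large near t = 0 and
# small at t = 5, mimicking successful mutual synchronization.
t = np.linspace(0.0, 5.0, 501)
X = np.ones((501, 10, 3))
Y = X + np.exp(-t)[:, None, None]
norms = sync_error_norm(X, Y)
print(norms[0], norms[-1])
```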
For simplicity and for comparison, we further assume that the internetwork coupling links are bidirectional and that the coupling strength is the same for every node; that is, all intranetwork coupling strengths are equal and all internetwork coupling strengths are equal. From Remark 14, we know that the time evolutions of e_i(t) do not depend on τ_2(t) or the associated parameters; thus, it is also reasonable to assume equal coupling strengths in both networks and τ_1(t) = τ_2(t) = τ(t) when simulating the influences of the internetwork coupling strength and delays on the mutual synchronizability. Here, we employ the following two interlinking strategies to produce the interdependency matrices C and D in the two examples, respectively (a generation sketch is given below). (i) One-to-one support dependence interlinking strategy [30] (strategy I for short): node i in network A depends only on node i in network B, and vice versa. (ii) Multiple support dependence interlinking strategy [37] (strategy II for short): node i in network A may randomly depend on more than one node in network B, and vice versa. Example 15. In this example, we generate the interdependency matrices C and D following strategy I and design the adaptive controllers according to Theorem 8. When all the coupling strengths are set to 1 and τ(t) = 0.5, the mutual synchronization errors e_i(t) are depicted in Figure 1, which shows that the controlled interdependent networks A and B can easily achieve generalized mutual synchronization using the designed controllers. Next, we further simulate the influences of the internetwork delays and the intranetwork and internetwork coupling strengths on the mutual synchronizability between networks A and B. Example 16. In this example, we produce the interdependency matrices C and D following strategy II. To measure the effect of the number of interlinking edges on the mutual synchronizability, we define ⟨k⟩ as the average number of interlinking edges for each node in network A, and likewise for network B. We conduct simulations similar to those in Example 15. First, we set all the coupling strengths to 1 and ⟨k⟩ = 3 with a time-varying delay τ(t); the time evolutions of the synchronization errors e_i(t) are depicted in Figure 6, which shows that the interdependent networks A and B can successfully achieve generalized mutual synchronization. Then, Figures 7 and 8 present the corresponding results. In addition, Figure 11 implies that, to some extent, an increase of ⟨k⟩ is equivalent to an increase of the internetwork coupling strength.
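The two interlinking strategies can be generated mechanically. The sketch below is one plausible reading of them; in particular, the random-graph construction for strategy II and the function names are our assumptions, not the paper's specification.

```python
import numpy as np

def strategy_one(N):
    """Strategy I: node i in network A depends on node i in B, and vice versa."""
    C = np.eye(N)          # interdependency matrix C (B -> A)
    return C, C.copy()     # D = C for bidirectional one-to-one support

def strategy_two(N, k_avg, seed=1):
    """Strategy II: each node randomly depends on ~k_avg nodes of the other network."""
    rng = np.random.default_rng(seed)
    C = (rng.random((N, N)) < k_avg / N).astype(float)
    return C, C.T.copy()   # bidirectional links, as assumed in the simulations

C1, D1 = strategy_one(100)
C2, D2 = strategy_two(100, k_avg=3)
print(C2.sum(axis=1).mean())   # average number of interlinking edges, ~3
```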
Conclusions and Future Work. In this paper, we extend previous research on the outer synchronization between two complex networks to generalized mutual synchronization between two controlled interdependent networks by considering time-varying internetwork delays coupling. Our model and the relevant results are general and can easily be extended to other interdependent networks, because no constraints are imposed on the intranetwork and internetwork coupling configuration matrices. Based on Lyapunov theory and corresponding mathematical techniques, some sufficient criteria have been derived to guarantee that the proposed interdependent networks model is asymptotically synchronized. Two numerical examples have been provided to illustrate the feasibility and effectiveness of the theoretical results and to further simulate the effects of the internetwork delays and the intranetwork and internetwork coupling strengths on the controlled mutual synchronizability. In comparison, we find that, under the proposed adaptive controllers, the intranetwork coupling strength enhances the mutual synchronization, while the internetwork coupling delays and coupling strength suppress it. This indicates that the synchronization phenomenon in interdependent networks is different from that in a single network, which highlights the necessity and significance of considering mutual synchronization in the context of interdependent networks. Thus, with the help of our findings, one can further understand the mutual synchronization phenomenon in two interdependent networks and design interdependent networks with optimal mutual synchronizability for many potential practical applications. However, the mutual synchronization between two interdependent networks is extremely complex, and we cannot consider all the factors that influence the synchronizability altogether. Also, our theoretical and numerical results are still conservative, and the proposed control schemes are still somewhat complicated because of the generality of the model. Therefore, how to simplify the control laws and reduce the number of controlled nodes is another important topic and remains to be researched in the future. Utilizing the designed controller, one can derive the synchronization conditions based on the Lyapunov function approach, which is widely used in dynamic system analysis and design, as in some recent articles [40][41][42][43][44]. Figure 2: The curves of ‖e(t)‖ for networks A and B interlinked following strategy I, with all coupling strengths equal to 1 and different internetwork delays τ(t).
Relativistic wave equations with fractional derivatives and pseudo-differential operators

The class of free relativistic covariant equations generated by the fractional powers of the d'Alembertian operator $(\square^{1/n})$ is studied. While the equations corresponding to n=1 and 2 (the Klein-Gordon and Dirac equations) are local in nature, the multicomponent equations for arbitrary n>2 are non-local. It is shown what the representation of the generalized algebra of Pauli and Dirac matrices looks like and how these matrices are related to the algebra of the SU(n) group. The corresponding representations of the Poincaré group and further symmetry transformations on the obtained equations are discussed. The construction of the related Green functions is suggested. Introduction. The relativistic covariant wave equations represent an intersection of the ideas of the theory of relativity and quantum mechanics. The first and best known relativistic equations, the Klein-Gordon and particularly the Dirac equation, belong to the essentials on which our present understanding of the microworld is based. In this sense it is quite natural that the search for and study of further types of such equations represent a field of stable interest. For a review see, e.g., [1] and the citations therein. In fact, attention has been paid first of all to the study of equations corresponding to higher spins (s ≥ 1) and to attempts to solve the problems which have been revealed in connection with these equations, e.g., the acausality due to external fields introduced in the minimal way. In this paper we study the class of equations obtained by the 'factorization' of the d'Alembertian operator, i.e., by a generalization of the procedure by which the Dirac equation is obtained. As a result, from each degree of extraction n we get a multi-component equation, where the case n = 2 corresponds to the Dirac equation. However, the equations for n > 2 differ substantially from the cases n = 1, 2, since they contain fractional derivatives (or pseudo-differential operators), so in effect their nature is non-local. In the first part (Sec. 2), the generalized algebras of the Pauli and Dirac matrices are considered and their properties are discussed, in particular their relation to the algebra of the SU(n) group. The second, main part (Sec. 3) deals with the covariant wave equations generated by the roots of the d'Alembertian operator; these roots are defined with the use of the generalized Dirac matrices. In this section we show the explicit form of the equations, their symmetries, and the corresponding transformation laws. We also define the scalar product and construct the corresponding Green functions. The last section (Sec. 4) is devoted to the summary and concluding remarks. Let us remark that the application of pseudo-differential operators in relativistic equations is nothing new. Very interesting aspects of the scalar relativistic equations based on the square root of the Klein-Gordon equation are pointed out, e.g., in the papers [2]-[4]. Recently, an interesting approach to scalar relativistic equations based on pseudo-differential operators of the type f(□) has been proposed in the paper [5]. One can also mention the papers [6], [7], in which the square and cubic roots of the Dirac equation were studied in the context of supersymmetry. The cubic roots of the Klein-Gordon equation were discussed in the recent papers [8], [9].
It should be observed that our considerations concerning the generalized Pauli and Dirac matrices (Sec. 2) have much in common with the earlier studies related to the generalized Clifford algebras (see, e.g., [10]-[12] and the citations therein) and with the paper [13], even if our starting motivation is rather different. Generalized algebras of Pauli and Dirac matrices. Everywhere in what follows, by the term matrix we mean a square n × n matrix, if not stated otherwise. The considerations of this section are based on the matrix pair introduced as follows. Definition 1. For any n ≥ 2 we define the matrices S and T, where α = exp(2πi/n) and the remaining empty positions are zeros. These matrices satisfy elementary relations, in particular S^n = T^n = I, where I denotes the unit matrix. Proof: All the relations easily follow from Definition 1. Definition 3. Let A be some algebra over the field of complex numbers, let (p, m) be a pair of natural numbers, X_1, X_2, ..., X_m ∈ A and a_1, a_2, ..., a_m ∈ C. The p-th power of the linear combination can be expanded as (a_1 X_1 + ⋯ + a_m X_m)^p = Σ_{p_1+⋯+p_m=p} a_1^{p_1} ⋯ a_m^{p_m} {X_1^{p_1}, X_2^{p_2}, ..., X_m^{p_m}}, where the symbol {X_1^{p_1}, X_2^{p_2}, ..., X_m^{p_m}} represents the sum of all possible products created from the elements X_k in such a way that each product contains the element X_k exactly p_k times. This symbol we shall call a combinator. (3) The relation (2.18) can be proved by induction. First let us assume p = 1; then its l.h.s. reads Σ_{k_1=0}^{k_2} z^{k_1} = (1 − z^{k_2+1})/(1 − z), and the r.h.s. gives the same, so for p = 1 the relation is valid. Now let us suppose the relation holds for p and calculate the next case; since z^{r·k} = z^{−p·k}, the sum can be rewritten so that the last sum coincides with G_p(z), which is zero according to the already proven identity (2.16). Let us remark that the last lemma also implies a known formula. The product can be expanded in the terms c_j x^{n−j}(−y)^j, and one can easily check the extreme coefficients. For the remaining j, 0 < j < n, we get a multiple sum which is a special case of the formula (2.12), and since α^n = 1, the identity (2.19) is satisfied. Therefore for 0 < j < n we get c_j = 0, and the formula (2.27) is proved. Definition 6. Let us have a matrix product created from some string of matrices X, Y in such a way that the matrix X is in total involved p times and Y r times. By the symbol P_j^+ (P_j^−) we denote the permutation which shifts the leftmost (rightmost) matrix to the right (left), to the position in which the shifted matrix has j matrices of the other kind to its left (right). (The range of j is restricted by p or r according to whether the shifted matrix is Y or X.) After an illustrative Example 7, we can prove the following theorem. Obviously this equation is valid irrespective of the assumption α^{p+r} = 1; i.e., it holds for any n and α = exp(2πi/n). It follows that Eq. (2.33) is satisfied for any α. Lemma 10. Q_{rs} Q_{pq} = α^{s·p} Q_{kl}, with k = mod(r + p − 1, n) + 1, l = mod(s + q − 1, n) + 1. (2.35) Theorem 11. The matrices Q_{pr} are linearly independent, and any matrix A (of the same dimension) can be expressed as their linear combination. In the proof, taking the trace of a vanishing combination gives Σ_{kl} Tr(a_{kl} Q†_{rs} Q_{kl}) = a_{rs}·n = 0. This equation contradicts the assumption; therefore the matrices are independent and obviously represent a basis in the linear space of n × n matrices, which, with the use of the previous lemma, implies the relations (2.42). Theorem 12. For any n ≥ 2, among the n² matrices (2.34) there exists a triad satisfying (2.43), and moreover, if n ≥ 3, then also (2.44). Proof: We shall show that the relations hold, e.g., for the indices λ = 1n, μ = 11, ν = n1; let us denote the corresponding matrices X, Y, Z. The relation {X^p, Z^r} = 0 is already proven in Theorem 8; obviously the remaining relations (2.43) can be proved exactly in the same way.
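As a brief numerical aside before completing the proof: since the explicit forms in Definition 1 were lost in extraction, the following sketch adopts the standard "clock and shift" realization as an assumption and checks the relations the text relies on; the trace-orthogonality test mirrors the independence argument of Theorem 11.

```python
import numpy as np
from itertools import product

n = 5
alpha = np.exp(2j * np.pi / n)

# Assumed concrete forms (clock and shift matrices); the paper's
# Definition 1 fixes S and T only up to such a realization.
S = np.diag(alpha ** np.arange(n))
T = np.roll(np.eye(n), 1, axis=0)
I = np.eye(n)

assert np.allclose(np.linalg.matrix_power(S, n), I)   # S^n = I
assert np.allclose(np.linalg.matrix_power(T, n), I)   # T^n = I
assert np.allclose(S @ T, alpha * T @ S)              # S T = alpha T S

# The n^2 products Q_pr = S^p T^r are pairwise orthogonal in the trace
# inner product, hence linearly independent (cf. Theorem 11).
Q = {(p, r): np.linalg.matrix_power(S, p) @ np.linalg.matrix_power(T, r)
     for p, r in product(range(1, n + 1), repeat=2)}
G = np.array([[np.trace(A.conj().T @ B) for B in Q.values()]
              for A in Q.values()])
assert np.allclose(G, n * np.eye(n * n))
print("clock-shift relations and trace orthogonality verified for n =", n)
```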
Returning to the proof of Theorem 12: the combinator (2.44) can be expressed similarly as in the proof of Theorem 8, which for matrices obeying the relations (2.46) gives an expansion whose first multiple sum (with indices j) coincides with Eq. (2.12) and satisfies the condition for Eq. (2.19); hence the r.h.s. is zero and the theorem is proved. Now let us make a few remarks to illuminate the content of the last theorem and the meaning of the matrices Q_λ. Obviously, the relations (2.43), (2.44) are equivalent to the statement: any three complex numbers a, b, c satisfy (aQ_λ + bQ_μ + cQ_ν)^n = (a^n + b^n + c^n) I. (2.48) Further, the theorem speaks about the existence of a triad, but not about the number of such triads. Generally for n > 2 there is more than one triad defined by the theorem, but on the other hand not any three distinct matrices from the set Q_{rs} comply with the theorem. A simple example is some X, Y, Z where, e.g., XY = YX, which happens for Y ∼ X^p, 2 ≤ p < n. Obviously in this case at least the relation (2.43) is surely not satisfied. A computer check of the relation (2.47), which has been done with all possible triads from Q_{rs} for 2 ≤ n ≤ 20, suggests that a triad X, Y, Z for which there exist numbers p, r, s ≥ 1 with p + r + s ≤ n such that X^p Y^r Z^s ∼ I also does not comply with the theorem. Further, the result on the r.h.s. of Eq. (2.47) generally depends on the factors β_k in the relations, and the computer check suggests that the sets in which β_k^p = 1 for some β_k and p < n also contradict the theorem. In this way the number of different triads obeying the relations (2.43), (2.44) is a rather complicated function of n, as shown in the table:

n:  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20
#3: 1  1  1  4  1  9  4  9  4   25  4   36  9   16  16  64  9   81  16

Here the statement that a triad X, Y, Z is different from X′, Y′, Z′ means that, after any rearrangement of the symbols X, Y, Z used for marking the matrices in the given set, there is always at least one pair with β_k ≠ β′_k. Naturally, one can ask if there also exists a set of four, or generally N, matrices which satisfy a relation similar to Eq. (2.48). For 2 ≤ n ≤ 10 and N = 4 the computer suggests a negative answer, at least in the case of matrices generated according to Definition 9. However, one can verify: if U_l, l = 1, 2, 3, is a triad complying with the theorem (or equivalently with the relation (2.48)), then the corresponding n² × n² matrices satisfy an analogous relation, since the last multiple sum equals zero according to the relations (2.12) and (2.19). Obviously, for n = 2 the matrices (2.45) and the matrices (2.51), (2.52) created from them correspond, up to some phase factors, to the Pauli matrices σ_j and the Dirac matrices γ^μ. Obviously, from the set of matrices Q_{rs} (with the exception of Q_{nn} = I) one can easily make the n² − 1 generators of the fundamental representation of the SU(n) group, where a_{rs} are suitable factors; a particular choice gives the standard commutation relations. These matrices allow us to write down a set of algebraic equations in which the variables μ, π_λ represent the fractional powers of the mass and momentum components, obtained after an (n − 1)-times repeated application of the operator Γ on the equation. For n > 2 the Eq. (3.2) is new, more complicated, and immediately invokes some questions. In the present paper we shall attempt to answer at least some of them. One can check that the solution of the set (3.2) reads as stated, where U_l is the triad from which the matrices Q_l are constructed in accordance with Eqs. (2.51), (2.52), and h_1, h_2, ..., h_n are arbitrary functions of p. At the same time, the π_λ satisfy a constraint. First of all, one can notice that in Eq.
(3.2) the fractional powers of the momentum components appear, which means that the equation in the x-representation will contain fractional derivatives. Our primary considerations will concern the p-representation, but afterwards we shall show how the transition to the x-representation can be realized by means of the Fourier transformation, in accordance with the approach suggested in [14]. A further question concerns the relativistic covariance of Eq. (3.2): how to transform simultaneously the operator Γ(p) → Γ(p′) = ΛΓ(p)Λ^{−1} (3.10) and the solution, so as to preserve the equal form of the operator Γ for the initial variables p_λ and the boosted ones p′_λ? Infinitesimal transformations. First let us consider the infinitesimal transformations, where dω represents the infinitesimal values of the six parameters of the Lorentz group corresponding to the space rotations and the Lorentz transformations, and where tanh ψ_i = v_i/c ≡ β_i is the corresponding velocity. Here, and anywhere in what follows, we use the convention that, in expressions involving the antisymmetric tensor ε_{ijk}, the summation over indices appearing twice is understood. From the infinitesimal transformations (3.13), (3.14) one can obtain the finite ones: for the three space rotations we get the relations (3.15), (3.16), and for the Lorentz transformations similarly (3.17), (3.18). The definition of the six parameters implies that the corresponding infinitesimal transformation of the reference frame p → p′ changes a function f(p), where d/dω stands for the derivative with respect to the corresponding parameter. Obviously, this equation combined with Eq. (3.21) is identical to Eqs. (3.13), (3.14). Further, with the use of the formulas (3.12) and (3.21), the relations (3.10), (3.11) can be rewritten in infinitesimal form. If we define the corresponding operators L_ω, these six operators are the generators of the corresponding representation of the Lorentz group, so they have to satisfy the commutation relations (3.29), (3.30). Definition 13. Let Γ_1(p), Γ_2(p) and X be square matrices of the same dimension; then for any matrix X we define the form (3.31). One can easily check that the matrix Z satisfies, e.g., relations involving the matrix Q_0 of (2.51); i.e., there exists a set of transformations Y such that (3.37) holds, and a particular form reads (3.38). The last sum can be rearranged: instead of the summation index j we use the new one, k = i − j for i ≥ j and k = i − j + n for i < j, with k = 0, ..., n − 1; then Eq. (3.40) can be simplified if we take into account that Γ_0^n = Γ^n = p². For the term k = 0 we get a direct contribution, and for k > 0, for p < n − k ≡ l, the last sum can be modified with the use of the relation (2.3); therefore only the term p = n − k contributes, i.e., the sequence of non-zero components can be only in one block, whose location depends on the choice of the phase of the power (p²)^{1/n}. The g_j are arbitrary functions of p, and simultaneously the constraint p² = m² is required. Now we shall try to find the generators satisfying the covariance condition for Eq. (3.36), where the bold 0, 1 stand for the zero and unit 2 × 2 matrices. The Dirac equation is covariant under the transformations generated by the standard matrices with j, k, l = 1, 2, 3. Obviously, to preserve covariance, the generators (3.58), where κ is any complex constant, have to satisfy the commutation relations (3.29), (3.30). Proof: After insertion of the generators (3.58) into the relations (3.29), (3.30) one can check that the commutation relations are satisfied. In fact, it is sufficient to verify, e.g., the commutators [L_{φ1}, L_{ψ2}], [L_{φ1}, L_{ψ1}] and [L_{ψ1}, L_{ψ3}]; the remaining ones follow from the cyclic symmetry.
Let us note that the formula (3.58) also covers the limit case |κ| → ∞. Further, the generators M_ω can be rewritten in covariant notation, and then the Pauli-Lubanski vector can be constructed, which has to satisfy the relation involving the corresponding spin number s. One can check that after inserting the generators (3.63) into the relations (3.64), (3.65), the result does not depend on κ. So the generators of the Lorentz group which satisfy Eq. (3.49) can have the form (3.66), where the M_ω are the n × n matrices defined in accordance with Lemma 15. There are n such matrices on the diagonal, and apparently these matrices may not be identical. Finally, it is obvious that Eq. (3.36) is covariant also under any infinitesimal transform in which the generators K_ξ have a form similar to the generators (3.66). Do they satisfy the commutation relations required of the generators of the Lorentz transformations? In this paper we shall not discuss this more general task; for our present purpose it is sufficient that we have proved the existence of generators of the infinitesimal Lorentz transformations under which Eq. (3.36) is covariant. Finite transformations. Now, having the infinitesimal transformations, one can proceed to the finite ones, corresponding to the parameters ω and ξ, where p → p′ is one of the transformations (3.15)-(3.18). The matrices Λ satisfy differential equations which, for the parameters φ (space rotations only) and ξ, imply that, assuming constant elements of the matrices R_{φj} and K_ξ, the solutions of the last equations can be written in the usual exponential form. The space rotation by an angle φ about the axis with the direction u, |u| = 1, is represented by (3.74). For the Lorentz transformations we get, instead of Eq. (3.72), the equation (3.75), whose solution reads (3.79): the Lorentz boost in a general direction u with the velocity β. The corresponding integrals can be found, e.g., in the handbook [16]. Let us note, from the technical point of view, that the solution of the equation in question, where Λ, Ω are some square matrices, can be written in the exponential form only if the matrix Ω satisfies the condition (3.85), which is necessary for the differentiation. Obviously, the condition (3.85) is satisfied for the generators of all the considered transformations, including the Lorentz ones in Eq. (3.79), since the matrix N does not depend on ψ. (N depends only on the momentum components perpendicular to the direction of the Lorentz boost.) Equivalent transformations. Now, from the symmetry of the equation it follows that the conjugated generators, where R_ω(Γ_0) are the generators (3.66) and Y(p) is the transformation (3.38), will satisfy the same conditions, but with the relation (3.26) instead of the relation (3.49). Similarly, the generators K_ξ(Γ_0) in the relation (3.68) will, for Eq. (3.2), be replaced by their conjugated counterparts. The finite transformations of Eq. (3.2) and its solutions can be obtained as follows. First let us consider the transformations Λ(Γ_0, ω, u) given by Eqs. (3.74) and (3.79). In accordance with Eq. (3.37), we obtain the corresponding transformations, and correspondingly for the solutions of Eqs. (3.2), (3.36). In the same way, the sets of equivalent generators and transformations can be obtained for the diagonalized equation (3.36). Let us remark that, according to Lemma 14, there exists the set of transformations Γ(p) ↔ Γ_0(p) given by the relation (3.37). We used its particular form (3.38), but how will the generators differ for two different matrices X_1 and X_2? The last relation implies, according to the relation (3.32), that there must exist a matrix X_3 [e.g.,
according to the implication (3.35) one can put X_3 suitably], such that the relation (3.97) can be rewritten; i.e., the generators R_ω(Γ, X_1), R_ω(Γ, X_2) are equivalent in the sense of the relation (3.94). Scalar product and unitary representations. Definition 16. The scalar product of two functions satisfying Eq. (3.2) or (3.36) is defined with the metric W, a matrix which satisfies the conditions (3.104), (3.105). These conditions in the above definition imply that the scalar product is invariant under the corresponding infinitesimal transformations. For example, for the Lorentz group the transformed scalar product can be written out, and with the use of the condition (3.104) one obtains its invariance. According to a general definition, the transformations conserving the scalar product are unitary. In this way, also for the Lorentz transformations, unitarity is obtained provided that the constant κ in Eq. (3.58) is real and |κ| ≤ m. Also the generators K_ξ can be chosen in the same way. The structure of the generators R_ω(Γ_0), K_ξ(Γ_0) given by Eqs. (3.66), (3.68) suggests that the metric W satisfying the condition (3.111) can have a similar structure, but in which the corresponding blocks on the diagonal are occupied by unit matrices multiplied by some constants. Nevertheless, let us note that the condition (3.111) can in general be satisfied also for some other structures of W(Γ_0). From W(Γ_0) we can obtain the matrix W(Γ), the metric for the scalar product of two solutions of Eq. (3.2). One can check that after the transformations the unitarity in the sense of the conditions (3.104), (3.105) is conserved, in spite of the fact that the equalities (3.108)-(3.110) may not hold for R_ω(Γ, X), K_ξ(Γ, X). Space-time representation and Green functions. We take the solutions of the wave equation (3.2) or (3.36) in the form of functions Ψ(p) for which there exists the Fourier picture Ψ̃(x); in this way we get the x-representation of our operators. Apparently, similar relations are valid also for the remaining operators K_ξ, W, Z, Z^{−1} and the finite transformations Λ in the x-representation. Concerning the translations, the usual correspondence is valid: p_α → i∂_α. Further, the solutions of the inhomogeneous version of Eqs. (3.36), (3.2) can be obtained with the use of the formula (2.27); the resulting equations contain the fractional derivatives defined in [14]. Obviously, the functions G̃_0, G̃ can be identified with the Green functions related to the x-representation of Eqs. (3.36), (3.2). With the exception of the operators R̃_{φj}(Γ_0), W̃(Γ_0) and i∂_α, all the remaining operators considered above are pseudo-differential ones, which are in general non-local. Ways to deal with such operators are suggested in [3], [5], [14]. A more general treatise of pseudo-differential operators can be found, e.g., in [17]-[19]. In our case it is significant that the corresponding integrals will depend on the choice of passing around the singularities and on the choice of the cuts of the power functions p^{2j/n}. This choice should reflect the physics involved; however, the corresponding discussion would exceed the scope of this paper. Summary and concluding remarks. In this paper we have first studied the algebra of the matrices Q_{pr} = S^p T^r generated by the pair of matrices S, T with the structure given by Definition 1. We have proved that, for a given n ≥ 2, one can always find in the corresponding set {Q_{pr}} a triad for which Eq. (2.48) is satisfied; the Pauli matrices represent its particular case n = 2.
On this basis we have obtained the rule for constructing the generalized Dirac matrices [Eqs. (2.51), (2.52)]. In the further part, using the generalized Dirac matrices, we have demonstrated how one can generate from the roots of the d'Alembertian operator a class of relativistic equations containing the Dirac equation as a particular case. In this context we have shown how the corresponding representations of the Lorentz group, which guarantee the covariance of these equations, can be found. At the same time we have found additional symmetry transformations on these equations. Further, we have suggested how one can define the scalar product in the space of the corresponding wave functions and construct the unitary representation of the whole group of symmetry. Finally, we have suggested how to construct the corresponding Green functions. In the x-representation the equations themselves and all the mentioned transformations are in general non-local, being represented by fractional derivatives and pseudo-differential operators in the four space-time dimensions. In line with the choice of the representation of the rotation group used for the construction of the unitary representation of the Lorentz group, according to which the equations transform, one can ascribe to the related wave functions the corresponding spin and further quantum numbers connected with the additional symmetries. Nevertheless, it is obvious that before more serious physical speculations one should answer some more questions requiring further study. Perhaps the first could be the problem of how to introduce the interaction. The usual direct replacement ∂_λ → ∂_λ + igA_λ(x) would lead to difficulties, first of all with the rigorous definition of terms like (∂_λ + igA_λ(x))^{2/n}. In the end one should answer the more general question: is it possible, on the basis of the discussed wave equations, to build up a meaningful quantum field theory?
AICOMP - FUTURE SKILLS IN A WORLD INCREASINGLY SHAPED BY AI

Globalisation and modernisation are creating an increasingly diverse and interconnected world. To make sense of and function well in this world, individuals need, for example, to master changing technologies and to make sense of large amounts of available information. They also face collective challenges as societies, such as balancing economic growth with environmental sustainability, and prosperity with social equity. In these contexts, the competences that individuals need to meet their goals have become more complex, requiring more than the mastery of certain narrowly defined skills. Introduction. The development of artificial intelligence (AI) is already having a massive impact on the world of work and daily life. The automation of processes and the optimisation of systems through the use of AI technologies are leading to constant change and new demands on people. Already today, certain competences are particularly in demand in order to be able to act successfully in a world shaped by artificial intelligence. These include, for example, the ability to collaborate with AI systems (distributed cognition) in creative problem solving, or the ability to analyse and interpret large amounts of data. The question of which competences individuals need in order to act in both private and professional contexts in a living and working world influenced by artificial intelligence is one of the most important questions in various fields of science. In order to answer this question, we have conducted several research steps and have now constructed an initial competence framework which we call "AIComp" (acronym: Artificial Intelligence Competences). The aim of this study is to process the currently available scientific work on AI-related competences, create a synopsis of competence requirements, and bundle them in the form of AI-related "Future Skills Profiles", which serve as larger and overarching competence fields. They each contain knowledge-, skills-, and value-related requirements that are important for successful action in a world permeated by AI. This paper follows a four-step flow: in the first part (Section 2), the state-of-the-art research literature on AI literacy and competence is analysed and classified. In the next part (Section 3), a methodology for qualitative meta-studies is described: a six-step research, analysis, and design process is described step by step, leading into the design of the AIComp model in Section 4, which is presented there in overview and in full in the online appendix. Which competences for AI?
We base our work on the consideration that AI needs to serve individuals and society so that they can develop freely and actively in a changing world. Our ambition is to cover competences for both economic and social purposes. This is rooted in an understanding of human capital in a wide sense, including social, educational and economic capital (Bourdieu 1983). We strive to identify competences of a behavioural nature that follow the underlying concept of "action competence" (Ehlers 2020), which we call "Future Skills" and which support individuals in acting successfully in AI-related contexts in their professional and private lives. Thus, we strive to identify AI-related Future Skills that are important for a broad range of individuals, instead of focusing on those competences that are of use only in a specific trade, occupation or walk of life. In the international literature, the so-called "KSAVE model" has become established for the operationalisation of action competences (Binkley et al. 2012). It provides that action competences are constituted by the three dimensions already mentioned above: Knowledge - Skills - Attitudes/Values/Ethics (see fig. 1). For the competence model "AIComp", we also chose this three-part competence structure for each AI-related Future Skills profile. In addition, we formulated larger clusters, called "areas of action", in which we group the Future Skills profiles. Overall, this results in the following structure:
• Areas of action, which contain
• AI-related Future Skills profiles, which include
• Knowledge (K) + Skills (S) + Attitudes (A)

Interim conclusion: Requirements for an AI skills framework

In conclusion, it appears that AI-related competence as an action competence model is not yet sufficiently elaborated and represents a research gap. With the present research overview, we intend to take a first step, based on the existing literature, towards establishing an action-competence-oriented framework. As a first definition of AI-related competences we propose: Future Skills for a lifeworld that is increasingly shaped by AI are the ability to act successfully in emergent and complex situations. The framework concept should fulfil the following functions in particular: 1. systematise already empirically based and/or analytically derived competences and competence requirements and relate them in a model based on KSAVE components and action competence; 2. make visible the competence requirements that are placed on certain groups.

AI-Related Competences: A Qualitative Meta-analysis

In this chapter we give an overview of the step-by-step process we used to analyse existing AI-related competence frameworks. We then present our aggregated new AI-related competence framework.

On the Methodology of Qualitative Meta-analysis

A qualitative meta-analysis is a systematic summary of empirical studies using the instrument of qualitative content analysis (Timulak 2009). It serves to find (meaning) structures, concepts and constructs in the present studies on the topic of AI competences. We proceed in the following steps:

Research and analysis stage
1. Research: Keyword-based search in search engines on the topic of AI-related competence approaches as well as lists of skills and descriptions.
2. AIComp I: Create a unified list of skills and competences and their descriptions.
3. AIComp II: Clean the data by expanding multidimensional formulations into one-dimensional ones, as well as paraphrasing and deleting duplicate mentions.
4. AIComp III: Synthesis and grouping based on four content dimensions: knowledge assets; application skills; creative skills and innovation; and critical analysis, reflection and ethics. Further paraphrasing to increase concept clarity. Finally, mapping of K-S-A where possible.

Design stage
5. AIComp IV: Mapping of the individual skills to the Future Skills profiles. The result is competence descriptions for individual competence profiles, which are operationalised by K-S-A in each case.
6. AIComp qualitative final: Recontextualisation of the competence descriptions into a final qualitative model of AIComp.

Research and Analysis Stage

In the following section, we describe the meta-analysis and the research steps carried out for this purpose.

Step 1: Research
The research phase pursued the goal of collecting international research papers in German or English on the topic of AI-related competences from the period 2019 to 2023. The following keywords were used for the search:
• Artificial intelligence, or AI for short
The focus of the search was on elaborated competence frameworks and lists of competence elements for non-technical learners. Furthermore, publications that explicitly deal with the concepts of AI literacy or AI competence but do not themselves contain lists or frameworks were classified as relevant (see Table 1).

Step 2: Creating a unified list of skills and competences and their descriptions
In an inventory, all formulations of competence inventory items related to AI were listed. In this way, a total of 167 competence items of different types, lengths and complexity were collected.

Step 3: Data cleaning, expanding multidimensional formulations, paraphrasing and deleting duplicate items
A further version was then created using qualitative content analysis procedures. For this purpose, all formulations of competence inventory items that were included twice were removed (115 mentions). Then the formulations were checked for their dimensionality. In this process, formulations that contained several aspects or dimensions in one item were broken down into their parts and checked to see whether these were already contained in other formulations. New aspects were included as paraphrased formulations. In a further step, all formulations were paraphrased and adapted to a common linguistic style, while retaining the content-related aspects. The result was 34 individual formulations of competence inventory items.

Step 4: Synthesis and grouping based on four content dimensions
The competence items were then analysed and allocated to a literacy model for media literacy in order to check to what extent they contained balanced dimensions according to the model (see Baacke 1997). These dimensions included: knowledge assets (13), application skills (5), creative skills and innovation (6) and the ability for critical analysis, reflection and ethics (10). Marginal wording changes were made to sharpen conceptual clarity. In addition, all collected competence inventory items were roughly assigned to the competence dimensions: Knowledge (19) - Skills (18) - Attitudes (26).
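For readers who wish to emulate this kind of inventory cleaning computationally, here is a minimal Python sketch of steps 3-4; the item texts, the normalisation rule and the keyword-based K-S-A tagger are purely hypothetical illustrations, not the instruments used in the study:

```python
# Minimal sketch of the inventory-cleaning workflow (steps 3-4), with toy data.
items = [
    "Can evaluate the impact of AI systems on society.",
    "Can evaluate the impact of AI systems on society.",   # duplicate mention
    "Knows how AI models process data and can apply AI tools productively.",  # multidimensional
]

def normalise(text: str) -> str:
    """Crude normalisation used only for duplicate detection."""
    return " ".join(text.lower().replace(".", "").split())

# Step 3a: remove duplicate mentions.
seen, unique_items = set(), []
for item in items:
    key = normalise(item)
    if key not in seen:
        seen.add(key)
        unique_items.append(item)

# Step 3b: expand multidimensional formulations into one-dimensional ones.
# (Here a naive split on "and can"; in the study this was done manually.)
one_dimensional = []
for item in unique_items:
    parts = item.split(" and can ")
    one_dimensional.extend(p.strip().rstrip(".") + "." for p in parts)

# Step 4: rough K-S-A assignment via keyword cues (purely illustrative).
def ksa_tag(item: str) -> str:
    lowered = item.lower()
    if lowered.startswith("knows"):
        return "K"  # knowledge
    if "evaluate" in lowered or "reflect" in lowered:
        return "A"  # attitudes/values/ethics
    return "S"      # skills

for item in one_dimensional:
    print(ksa_tag(item), "|", item)
```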
Design Stage: Design of the AIComp Model

The design of the initial AIComp model into a Future Skills structure for AI took two further steps. A three-level structure was used as the guiding concept. Level one consists of items from the competence inventory that describe knowledge, skills or attitudes. At level two, these are summarised into AI-related Future Skills profiles. At level three, the Future Skills profiles are divided into three areas of action:
• those that relate more to personal competences,
• those that relate to competences enabling creative use of AI technologies, applications or concepts,
• those that contain the competences necessary to master the changes in organisations, communication and cooperation structures resulting from AI.

Step 5: AIComp - Mapping the individual skills to the Future Skills profiles
The result is competence descriptions for individual competence profiles, each of which is operationalised by K-S-A. First of all, 13 AI-related Future Skills profiles were developed from the Future Skills profiles of the NextSkills study and on the basis of the inventory of 167 AI-related skills determined qualitatively from the state of research. These include the competences that are required when the corresponding Future Skills profile is related to AI-infused living and working environments. The qualitative inventory items assigned to each of these competences answer the question of which knowledge, skills and attitudes are considered necessary in order to act successfully within this framework.

Step 6: AIComp qualitative final - Recontextualisation of the competence descriptions into a final qualitative model of AIComp
In the last step of the process, the formulations of the Future Skills profiles were made more precise on the basis of the existing inventory items. As a result, the qualitative inventory currently contains 67 AI-related items that are grouped into 13 Future Skills profiles.

I. Digital competence
Being able to utilise AI tools and apps, to develop them in a productive way, to leverage them for one's own purposes, and to comprehend reflectively, critically and analytically their technical modes of action in relation to the individual and society as a whole, including knowledge of the potentials and limits of AI and its modes of action.
I.1 Being able to thoroughly evaluate and analyse the implications of the technological inner logic of AI systems based on their impact on organisations or society.
I.2 Being able to evaluate, analyse and make sense of AI technologies and applications regarding their functions and the benefits of their application.
I.3 Being able to assess and analyse the influence of AI technologies on the handling of data.

II. Design thinking competence
Being able to employ concrete methods and techniques for realising creative development processes dealing with problems and issues related to AI in a way that is open-ended and involves all stakeholders in a collaborative process of problem-solving and solution design.

III. Innovation competence
Being willing to advance AI innovation within organisations as a subject and as a theme as well as in processes, and to incorporate AI into organisational innovation ecosystems.
IV. System competence
Being able to comprehend AI tools and AI concepts as embedded within complex personal-psychological, social and technical systems, to identify and grasp their mutual effects, and to design and/or accompany coordinated planning and implementation processes for new projects within a given system.
IV.1 Being able to perceive and analyse AI systems from the perspective of integrating them into a larger social system (of an organisation, of society).
IV.2 Being able to consider AI technologies and applications from the perspective of system architecture and system design, and to derive appropriate actions from this.

Area 2: Learning to act autonomously with and for AI
Competences necessary to act as an individual in a sovereign and responsible manner in an AI-permeated world and to use AI concepts and tools for one's own objectives in a responsible, productive and reflective way.
I. Decision-making competence
Being able to recognise the need for decisions in AI-related situations and to evaluate alternative choices in order to make a decision and take responsibility for it.
II. Ethical competence
Being able to perceive AI-related issues as ethically relevant (perception), to formulate and test premises in relation to AI-relevant issues (evaluation), and to test the application conditions of AI-related conceptions and their alternatives (judgement).
III. Learning competence
Being open, able and willing to learn about AI issues and through the use of AI tools and applications.
IV. Reflection competence
Being able and willing to recognise the underlying behaviours, thought systems and value systems related to topics connected with AI, and to holistically assess how they inform actions and decisions.
V. Self-determination
Being able to act productively in the tension between external determination and self-determination brought about by data and AI algorithms, and to create self-determined spaces for one's own needs-oriented development.
VI. Self-competence
Being able to use AI tools to support one's own personal and professional development, and to deal with self-organisation, time management and cognitive load management with a high degree of personal responsibility when using AI tools.

Area 3: Co-creation with and through AI
Competences that support the ability to act in relation to AI issues affecting the social, organisational and institutional environment. These include, for example, the ability to design alternative "AI futures", to help shape the social impact of AI in a critical and reflective manner, to work and cooperate with others, and to communicate, criticise and reach consensus in ways appropriate to a specific situation, including in intercultural contexts.
I. Future and design competence
Being open, courageous and creative in embracing the new; being willing to change and to look forward in order to further develop and transform existing AI-related concepts in the direction of new, unprecedented visions of the future.
II. Cooperation competence
Being able to work in interdisciplinary and interorganisational teams on projects and plans relating to AI, also across cultures, to overcome existing differences and find common ground.
III. Communication competence
Having discourse, dialogue and strategic communication skills in order to be able to communicate successfully on AI subjects in different contexts, in a situationally appropriate manner.
Conclusions

This paper summarises the interim results of a work in progress that aims at systematically developing a Future Skills framework suited to helping individuals, organisations and educational institutions build the action-oriented competences needed for a future world that will be permeated by AI on all levels, in all fields of private and working life. Six different lists of altogether 160 granular competence inventory items, representing elements of holistic competences, were identified, analysed, evaluated and set in relation. From this pool, relevant elements were selected and included in a set of Future Skills profiles, thus creating a context in which the separate, more granular concepts of "skills", "competences" and "literacies" of workers, students, citizens and consumers become recognisable as part of one big picture.

Table 1: Competence/literacy approaches for AI-related competences
3,455.2
2023-10-27T00:00:00.000
[ "Computer Science" ]
Cold atom guidance in a capillary using blue-detuned, hollow optical modes

We demonstrate guiding of cold 85Rb atoms through a 100-micron-diameter hollow core dielectric waveguide using cylindrical hollow modes. We have transported atoms using blue-detuned light in the 1st order, azimuthally-polarized TE01 hollow mode, and the 2nd order hollow modes (HE31, EH11, and HE12), and compared these results with guidance in the red-detuned, fundamental HE11 mode. The blue-detuned hollow modes confine atoms to low intensity along the capillary axis, far from the walls. We determine scattering rates in the guides by directly measuring the effect of recoil on the atoms. We observe higher atom numbers guided using red-detuned light in the HE11 mode, but a 10-fold reduction in scattering rate using the 2nd order modes, which have an r^4 radial intensity profile to lowest order. We show that the red-detuned guides can be used to load atoms into the blue-detuned modes when both high atom number and low perturbation are desired.

Introduction

Atom guides using hollow optical fibers (HOFs) have continued to be of interest for potential use in nonlinear optics and optical switches [1,2], atom transport [3], and atom interferometry [4]. Guiding is enabled through the optical dipole potential: the force on an atom exposed to an off-resonant, spatially-varying intensity distribution is attractive (repulsive) when the laser is tuned below (above) the atomic resonance. This has led to numerous optical guiding schemes tailored for particular applications [1,4-10]. Broadly speaking, red-detuned guides are simpler to create, but the high field confinement leads to higher photon scattering rates and level shifts; blue-detuned guides require beam shaping, but confine atoms to the low intensity regions of the beam and can significantly reduce photon scattering and the other perturbations [11-15] relevant for sensitive measurements [16,17]. Atom guidance in HOFs has been demonstrated using both red- and blue-detuning, and each technique has benefits and disadvantages. Red-detuned guidance in a capillary has been done with both hot [4,5] and cold atoms [1,2,8], and is relatively easy to align. Recently, atoms have been guided in photonic crystal fibers (PCF) [1,2], in which the small mode-field area leads to strong atom-photon coupling and large optical depths. However, the high-intensity guides must be extinguished during the experiments to avoid energy level shifts, and during this time atoms can escape. Blue-detuned, evanescent field guiding has been demonstrated in capillaries [6,7], but is inefficient since most of the guide laser power remains in the glass. Furthermore, because the evanescent field is at a submicron distance from the fiber wall, the field must be strong enough to overcome the attractive van der Waals force [6]. In this paper, we demonstrate atom guidance using our recent proposal [18] to use a higher order, blue-detuned hollow beam in a hollow waveguide, which both efficiently uses the guide light and provides a perturbation-reduced environment for the atoms. We compare guidance using the first three optical modes - the fundamental HE11 mode, the azimuthally-polarized TE01 mode, and the second order family of hollow modes.
While we observe the highest atom number guidance using red-detuned light, we show that the blue-detuned beams guide atoms with a 10-fold reduction in the recoil scattering rate, and that the blue-detuned guides can be loaded from a red-detuned beam inside the capillary for dark confinement with high atom flux, which may be useful for extending measurement time windows for tightly confined atoms [1].

[Fig. 1 caption fragment: plots also show the optical potential of the n = 0 (red), n = 1 (black) and n = 2 (blue) modes in units of the Doppler temperature T_d = ħΓ/(2k_B), for Δ = 1 nm and 100 mW of input power. The gray shaded area in (c) represents the glass region of the capillary (core diameter = 100 μm).]

Experimental setup

The experimental layout is shown in Fig. 1(a). A source magneto-optical trap (MOT) is situated 1.5 cm above the tip of a 3-cm-long, 100-micron-diameter hollow rod. Transported atoms are captured in a detection MOT 11 cm below the source MOT. The two MOTs are independently controlled, having separate anti-Helmholtz coils and laser beams, though because of their close proximity to one another, we use both continuous and pulsed bias coils as needed. The guide laser beam passes upward through the hollow rod. Atoms from the source MOT are loaded directly into this beam during the molasses stage, which cools the atom sample to ~10 μK. In this work, we guide atoms through a hollow optical waveguide using the fundamental and first two higher order optical waveguide modes. The solutions for these modes are well known [19] and are only briefly described. The guide beams are derived from diode lasers operating within a few nanometers of the 85Rb D2 line at 780.24 nm. The intensity distributions of the three modes considered in this paper are, up to normalization [19],

I_n(r) ∝ (P_0/a²) J_n²(u_n r/a),   n = 0, 1, 2,   (1)

where J_n(x) is the n-th order Bessel function, a is the radius of the capillary core, r is the radial coordinate, and P_0 is the input power. For n > 0, these are hollow intensity distributions. The u_n are the arguments producing the first finite zeros of the n-th Bessel function. Throughout the paper, unless otherwise specified, we refer to the beams that produce these profiles by the value, n, of the subscript in Eq. 1. Experimental images at the tip of the guide output are shown in Fig. 1(b), and their cross sections in Fig. 1(c). To lowest order, the radial intensity profile is quadratic for n = 0 and n = 1, and quartic for n = 2, scaling as r^{2n} for n > 0. From the Virial Theorem, the time-averaged potential energy is U_avg = K_avg/n, where K_avg is the time-averaged kinetic energy. For a given ensemble temperature, therefore, anharmonic profiles provide a reduced-perturbation environment with low scattering rates [12,15-17]. For a laser detuning larger than the hyperfine splitting, the optical potential can be described by [20]

U(r) = (ħΓ²/24)(I(r)/I_S) [2/Δ + 1/(Δ + Δ_LS)],   (2)

where Δ is the detuning from the D2 transition and Δ_LS is the fine structure splitting. Γ = 2π × 6.07 MHz is the natural linewidth, and I_S = 2.5 mW/cm² is the saturation intensity for off-resonant, polarized light [21]. With 100 mW of power inside the guide, the peak intensities of I_0(r), I_1(r), and I_2(r) are 4.7 kW/cm², 2.66 kW/cm², and 2.61 kW/cm². However, because the divergence increases for higher order modes, the beam diameters increase with n at the MOT [Fig. 1(d)], resulting in lower peak intensities and trap depths (0.52 kW/cm², 0.14 kW/cm², and 0.10 kW/cm² for n = 0, 1, 2, respectively).
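As a quick numerical cross-check of the mode shapes discussed above, a minimal Python sketch follows (the overall normalization of I_n(r) is left open, since only shapes and ratios are used here):

```python
import numpy as np
from scipy.special import jv, jn_zeros  # Bessel J_n and its zeros

a = 50e-6                    # capillary core radius: 100 um diameter (from the text)
r = np.linspace(0, a, 500)

def intensity(n: int, r: np.ndarray) -> np.ndarray:
    """Radial intensity profile I_n(r) ~ J_n^2(u_n r / a), Eq. (1),
    normalized to unit peak; u_n is the first finite zero of J_n."""
    u_n = jn_zeros(n, 1)[0]
    profile = jv(n, u_n * r / a) ** 2
    return profile / profile.max()

for n in (0, 1, 2):
    I = intensity(n, r)
    # n = 0 peaks on axis; n = 1, 2 vanish on axis (hollow modes).
    print(f"n={n}: on-axis intensity I(0)/I_max = {I[0]:.3f}")

# Small-r scaling check: J_n(x) ~ x^n, so I_n(r) ~ r^(2n) near the axis,
# i.e. quadratic for n = 1 and quartic for n = 2, as stated in the text.
```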
For Δ = 1.0 nm and P_0 = 100 mW, the potential is high enough (0.08 T_d for n = 2) to capture a significant thermal fraction of atoms from the MOT, which was cooled to ~10 μK (≈ 0.07 T_d) during the molasses stage. However, the MOT size is significantly larger than the beam diameters, so only a small fraction of the MOT atoms are loaded into the beams. For near-resonant light, spontaneous Raman scattering is significant. When Δ << Δ_LS, an approximate form for this scattering rate is [20]

Γ_sc ≈ (Γ³/8Δ²)(I_AVG/I_S),   (3)

where I_AVG is the time-averaged intensity sampled by the atoms. Blue-detuned traps can make I_AVG small compared to the peak intensity, depending on the form of the trap potential. In Ref. [14], the harmonic blue-detuned trap had a scattering rate reduced by a factor of 50 compared to a comparable red-detuned trap, and Ref. [15] used a box-like potential to achieve a reduction of 700. The fundamental HE11 mode in Eq. 1, n = 0, is formed simply by spatially filtering a laser beam with single mode optical fiber, which is a close approximation to the HE11 hollow fiber mode, but the two higher order modes require further beam shaping. To produce I_1(r), we use the TE01 cylindrical waveguide mode, which is azimuthally polarized and is generated as described in Refs. [22,23]. Briefly, because this mode is closely matched to the first excited mode of a solid core optical fiber, we can use the output of a few-mode optical fiber in which the TE01 mode has been preferentially excited. This mode selection is done by passing a Gaussian beam through a vortex phase plate with a 2π azimuthal phase winding. When this modified beam is coupled into Corning HI-1060 fiber with a cutoff wavelength of 980 nm, the fundamental HE11 mode is eliminated by the phase winding of the input beam, and the correct cylindrical vector beam can be selected by adjusting the polarization with an inline polarization controller [22]. The next higher order family of modes, with n = 2 (HE31, EH11, and HE12), has an intensity profile proportional to I_2(r) in Eq. 1. The exact profiles could, in principle, be generated in a manner similar to the TE01 mode using a different solid core optical fiber with a larger cutoff wavelength, but mode selection from the solid fiber becomes more difficult as the core size grows. Alternatively, one could use subwavelength grating structures [24] if exact polarization profiles are needed. In this work, we generate a beam with the approximate intensity profile using a phase plate with a 4π phase winding. While this has a spatially uniform polarization profile, the overlap integral with this family of modes is calculated to be 68%. The remaining 32% overlaps with other higher order hollow modes of the guide. For coaxial alignment into the rod, modes that are not hollow are only excited through aberrations of the input beam. The large core size of the hollow waveguide demands proper mode matching to eliminate speckle and excitation of other modes, so the incident beam size of the three input beams is carefully adjusted to have the correct size at the hollow rod tip. We use a Pentax C60812 8-48 mm zoom lens to collimate the output of the light delivery fibers. The beams are focused by a 200 mm achromat, mounted outside the chamber, into the bottom of the hollow rod, and to account for slight variations in focal position for the three beams, the axial lens position is adjusted by a micrometer. The fundamental beam has the highest optical transmission of ~80%, while for n = 2 we obtain ~45% throughput.
The attenuation lengths for I_n(r) are Λ_0 = 48.2 cm, Λ_1 = 30.9 cm, and Λ_2 = 10.6 cm, giving calculated transmissions T_n = exp(−L/Λ_n) over the L = 3 cm rod of 0.94, 0.91, and 0.75, respectively. Our transmission of I_2(r) is significantly less than 0.75, most likely due to the overlap with higher order modes, which have even shorter attenuation lengths, and to larger coupling losses at the input, which are expected because the beam diameter is larger. Significantly longer attenuation lengths can be achieved by metal-coating the interior wall of the optical guide [18]. We note that the ends of the rod are polished and coated with aluminum so that light that has escaped from the core cannot interfere with the loading process.

Experiments

The guiding beams are kept on during the MOT loading and molasses stages. We consider time t = 0 to be the end of the molasses stage, when the atoms first begin freefall into the guide. Our signal is the fluorescence from atoms captured in a second MOT below the capillary, and is proportional to the guided atom number.

Guiding as a function of detuning

In this section, we compare atom guiding through the hollow rod with red- and blue-detuned light using the three intensity profiles of Eq. 1. For n = 1 and n = 2, we have used blue-detuned guiding, which should show greatly reduced scattering compared to the red-detuned guiding of n = 0 when the trap depth is high. The qualitative differences between the guides and the effects of scattering are shown in Fig. 2, which plots the guided atom number as a function of Δ, with two different guide powers for each of the three modes.

[Fig. 2 caption fragment: (b) blue-detuned TE01 mode; (c) blue-detuned n = 2 mode; two different guide powers are used for each case, as indicated. Arrows indicate Δ_g, the detuning at which the peak scattering force equals the force of gravity. Signals are normalized for each beam type independently; relative atom numbers are discussed in the text. Note that the detuning values for n = 0 (red-detuning) are negative.]

The figure shows the normalized atom flux for the n = 0, n = 1, and n = 2 beam profiles. For each case, the effects of photon scattering are apparent, but the magnitude of the effect varies. In this study, the relevant parameter is the intensity of the beam inside the capillary core, so we have kept this parameter approximately the same for the three beam shapes. This does, however, lead to significantly different beam intensities at the MOT [see Fig. 1(d)], and thus to variations in atom flux between the beam types, so they are plotted separately. Our guide laser beam propagates upward through the detection chamber into the source chamber. Thus, for the sufficiently high scattering rates that occur at very small Δ, the atoms cannot propagate through the capillary, either because they scatter enough photons to boil out of the guide beam potential, or because photon pressure from off-resonant scattering overcomes gravity. This pressure can be useful for manipulating atom velocities inside optical guides: atom levitation due to guide radiation pressure was observed at very small detunings in Ref. [13], and additional near-resonant beams have been suggested for controlling atom motion inside PCF [2]. For a particular mode, the atom flux is determined by the depth of the optical potential at the source MOT and by photon scattering effects. Without photon scattering, one would see increased atom flux for smaller Δ because the trap depth would increase; however, as shown, the atom flux is instead reduced due to increased photon scattering.
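To make Δ_g quantitative, one can balance the peak scattering force against gravity. A minimal LaTeX sketch of this condition, using our reconstructed form of Eq. 3 (so the prefactor inherits that assumption):

```latex
\[
  \hbar k \,\Gamma_{sc}(I_{\max}, \Delta_g) = m g,
  \qquad
  \Gamma_{sc} \approx \frac{\Gamma^3}{8\Delta^2}\frac{I}{I_S}
  \;\;\Longrightarrow\;\;
  \Delta_g \approx \sqrt{\frac{\hbar k\, \Gamma^3 I_{\max}}{8\, m g\, I_S}} .
\]
% Note the scaling \Delta_g \propto \sqrt{I_{\max}} \propto \sqrt{P_0},
% consistent with the observed shift from 0.6 nm at 56 mW to 0.2 nm at 5.6 mW
% reported in the next paragraph.
```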
The effects of the scattering force are clearly observed using red-detuning, shown in Fig. 2(a). At high power (56 mW), no atoms are guided through the capillary until |Δ| ≈ 0.6 nm. We define Δ_g as the detuning at which the peak scattering force equals gravity, indicated by the arrows in Fig. 2. For red-detuned guiding, atom transport is observed when |Δ| = Δ_g. When 5.6 mW of guiding power is used (black curve), the atoms are again guided when |Δ| = Δ_g, which is reduced to 0.2 nm at this power. Since atom transport begins at |Δ| = Δ_g for red-detuning, it is clear that the atoms are primarily in the regions of peak intensity. For blue-detuned guiding in Figs. 2(b) and 2(c), however, atom transport occurs for |Δ| < Δ_g, with the effect for n = 2 being more pronounced. This shows that the time-averaged intensity sampled by the atoms is lower. Quantitative measurements of the scattering rates are presented in Section 3.2. The combined effects of trap depth and scattering lead to an intensity-dependent maximum of the atomic flux. We note that because the guiding beam is directed upward, the detuning at which atoms begin to be guided is larger than if it were directed downward, because in the latter case the scattering force would be in the same direction as gravity. Reduced atomic flux can also occur due to heating of atoms over the potential barrier, but as discussed in Section 3.2, the heating for Δ > Δ_g is insignificant over our 3 cm guide length. For similar intensities at the MOT, the red-detuned n = 0 mode guides ~10⁶ atoms through the capillary, approximately 5x more atoms than the blue-detuned n = 2 mode, and ~10-15x more atoms than the n = 1 mode. There are two main reasons for this difference. First, for a blue-detuned guide, the relevant parameter is the minimum peak intensity of the guide boundary - aberrations in the beam may lead to "leaky" pathways for the atoms to exit. For a red-detuned beam, however, the atoms remain bound in the high intensity portions of the beam. Second, loading into optical traps is generally more favorable for red-detuned guides, and experiments with optical traps have observed that atoms are loaded with higher density if the trap is red-detuned [11,25]; these density enhancements are not observed with blue-detuned light. As we suggested in our original proposal [18], if guided atom flux is most important, it is best to use red-detuned guidance, but as we show more quantitatively in the next section, blue-detuned guidance offers a much better reduction of photon scattering. The blue-detuned guides can also be loaded from the red-detuned HE11 mode, as shown in Sec. 3.3, so that increased atom flux and low scattering are achieved simultaneously.

Photon scattering rates

Photon scattering rates in optical potentials are often determined experimentally through state-selective detection: atoms are first optically pumped into the lower hyperfine manifold, and their relaxation rate into the upper hyperfine manifold is measured [14,15,26]. Although we could perform a similar spectroscopic measurement within the waveguides, the unidirectional optical guide makes it straightforward to determine the effective force on the atoms by simply measuring the time dependence of the atom flux into the detection chamber, because the upward radiation pressure slows the atoms.
This technique has previously been used for small detunings with high scattering rates [13,27] and gives an accurate measurement of the recoil photon scattering rate, whereas the spectroscopic measurement only measures the spontaneous Raman scattering rate. The recoil scattering rate scales as 1/Δ²; for Δ << Δ_LS, the spontaneous Raman scattering rate also scales as 1/Δ², but for Δ >> Δ_LS it scales as 1/Δ⁴ [26]. Typical time-domain curves of the guided atom signal through the capillary are shown in Fig. 3. Here, we extinguish the laser guide with a variable delay from t = 0 to t = 200 ms; atoms that exit the hollow rod prior to the shutoff are captured in the collection MOT, while those remaining in the guide are lost by hitting the glass walls. Since the end of our guide is 46 mm below the source MOT, atoms with zero downward velocity at t = 0 will exit the capillary at t ≈ 97 ms; near this shutoff time, the integrated atom flux increases most quickly. The curves in Fig. 3 depend strongly on Δ. In particular, as Δ decreases, the increased scattering force causes atoms to take longer to fall through the guide into the detection chamber. The shape of this integrated atom signal depends on the initial MOT distribution and the temperature, but to calculate the scattering force, we have assumed a point source of atoms with a Maxwell-Boltzmann velocity distribution along the capillary axis,

f(v) dv ∝ exp(−mv²/2k_B T) dv.

Because the position at a later time is simply y = y_0 + vt − 0.5gt², we can solve for v to find the atom flux through the end of the capillary as a function of t and fit the results to our data. We note, however, that any approximate functional form, applied consistently to these data, results in similar arrival times. The integrated flux of the velocity distribution fits well to an error-function model,

N(t) = A erf[(t − t_0)/τ] + B,

where A and B are constants, t_0 is the travel time, and τ is the characteristic width of the integrated flux curve. Extracting the arrival times t_0 for each of the beam types as a function of Δ, we can determine the average acceleration of the atoms. The difference between this acceleration and gravity, g, is the deceleration caused by photon scattering, Γ_sc v_r, where v_r = 5.88 mm/s is the recoil velocity of 85Rb and Γ_sc is the scattering rate. In Fig. 4, we have plotted Γ_sc for the different beam types. The curves follow the expected 1/Δ² proportionality of Eq. 3, as indicated by the dotted line fits. If we write I_AVG from Eq. 3 as I_AVG = η I_max, where I_max is the peak intensity of the beam, we find the relative reduction of the average intensity, η, for the red-detuned n = 0 and blue-detuned n = 1 and n = 2 modes to be 0.44, 0.064, and 0.041, respectively. Thus, the blue-detuned n = 2 mode has a >10x lower scattering rate than the red-detuned n = 0 mode for our guide parameters. We also show the curve for η = 1.0, which is the scattering rate at peak intensity (black solid line). Measuring scattering rates at small detunings through the recoil force is not new [13,27], but we note that this technique appears to be quite effective at detecting low scattering rates at much larger detunings as well - the atoms are in the capillary for less than 100 ms, so at large detunings with scattering rates near 100 s⁻¹, only ≈10 scattering events occur. We note that this measurement assumes a constant scattering force throughout the capillary, which of course is not valid for very low scattering rates, when only a few photons are scattered during transit.
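A minimal Python sketch of this arrival-time analysis (the drop distance and recoil velocity are taken from the text; the example arrival time is invented for illustration):

```python
import numpy as np

g = 9.81          # m/s^2
v_r = 5.88e-3     # m/s, 85Rb recoil velocity (from the text)
y = 46e-3         # m, drop from source MOT to end of guide (from the text)

def scattering_rate_from_arrival(t0: float) -> float:
    """Infer the scattering rate from the arrival time t0 of atoms released
    at rest, assuming a constant net acceleration a = g - Gamma_sc * v_r,
    so that y = 0.5 * a * t0**2."""
    a = 2.0 * y / t0**2            # measured average acceleration
    return (g - a) / v_r           # Gamma_sc, in photons per second

# Without scattering, t0 = sqrt(2y/g) ~ 97 ms, as quoted in the text.
t0_free = np.sqrt(2 * y / g)
print(f"free-fall arrival: {1e3 * t0_free:.1f} ms")

# A hypothetical measured arrival time of 105 ms would then imply:
print(f"Gamma_sc ~ {scattering_rate_from_arrival(105e-3):.0f} photons/s")
```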
While the atoms are in the guide, they are heated at a rate of approximately T_r Γ_sc, where T_r = 350 nK is the recoil temperature increase on each scattering event. Because the time inside the guide is only ~0.1 s, the atoms are heated by only a small fraction of the potential depth (several hundred microkelvin) even at small detunings. It should be possible to use higher n values to guide atoms with lower scattering rates [12,20], although the attenuation length may become too short unless the capillary is coated [18]. In our case with a vertical capillary, the optical potential does not support the atoms against gravity, so we would expect higher scattering rates if the capillary were oriented horizontally. A horizontal orientation might be desirable, however, for increased interrogation time.

[Fig. 4 caption: scattering rate versus detuning for the fundamental red-detuned beam (red circles), TE01 blue-detuned mode (blue squares), and second excited mode (black diamonds). Fits are shown as dashed lines. The solid black curve is the calculated scattering rate at the peak intensity.]

Red-detuned loading of the blue-detuned guide

For the highest atom flux, red-detuned guiding is most efficient. Unfortunately, to perform experiments on atoms in a perturbation-reduced environment, one must extinguish the guide light during the measurement time. This allows only a brief window during which the measurement can be performed before atoms are lost [1] and will also lead to measurement-time broadening of spectral features. To increase the measurement time, we consider transferring atoms from the red-detuned n = 0 guide into a copropagating n = 2 blue-detuned guide to provide increased optical densities, a reduced-perturbation environment, and longer measurement times.

[Fig. 5 caption fragment: when the blue-detuned beam is not present (red points), the atoms quickly escape to the capillary walls. When the blue-detuned beam is present to support the atoms (blue points), the atoms can later be recaptured by the red-detuned guide even for long extinction times.]

In Fig. 5, we measure the confinement time with and without the blue-detuned beam by a release-and-recapture method: at the time when the atoms enter the capillary (t ≈ 60 ms), the red-detuned light is extinguished for a brief period, T_off, and then turned back on; recaptured atoms make it through the capillary and are detected as before. Without any blue-detuned guide present, this signal decays with a 1/e time constant of 1.5 ms due to atoms striking the capillary wall. If we turn the blue-detuned beam on when the red guide is shut off, the atoms cannot escape and are recaptured by the red-detuned beam. For T_off < 1 ms, there is an initial large loss of signal due to the size of the blue-detuned beam: any atoms outside the peak-to-peak diameter of ~60 μm are lost (see Fig. 1c). After this initial loss, the remaining atoms are confined in the blue-detuned guide, with gradual loss until they exit the capillary. We note that for these experiments the relative guided atom number for the n = 2 beam in the absence of the red-detuned n = 0 beam was near 0.1, as indicated by the dashed black line in Fig. 5, so the short-time enhancement (T_off < 5 ms) due to the blue-detuned beam is about 5x. We note that because hollow beams have been successfully propagated through PCF [28,29], it might be possible to use them to extend measurement times for atoms confined in PCF. During T_off, the atoms continue falling in the blue-detuned hollow mode and sample more of the hollow beam.
If the mode quality had deteriorated and developed potential minima along the capillary length, we would have expected the guided atom number to drop at the T_off values corresponding to these locations. We did not observe this, and because the output mode quality was also good, the mode quality was likely good throughout the capillary.

Conclusions

We have guided cold atoms using the first three optical modes of a 100-micron-diameter capillary over a distance of 3 cm. Specifically, using time-of-flight measurements, we have observed a 10x reduction in photon scattering using the second excited, blue-detuned hollow mode compared with red-detuned guiding in the fundamental mode. We have also shown that red-detuned loading of a blue-detuned hollow mode can be useful for improving measurement time in perturbation-reduced environments with increased atom flux. These results should be of interest for low power nonlinear optics, especially when extended to PCF confinement. This work was supported by the Office of Naval Research and the Defense Advanced Research Projects Agency. We gratefully acknowledge helpful discussions with Guy Beadie and technical support from Barb Wright and Gary Kushto.
6,076
2012-05-17T00:00:00.000
[ "Physics" ]
Qubit transformations on Rashba ring with periodic potential

A spin-qubit transformation protocol is proposed for an electron in a mesoscopic quantum ring with tunable Rashba interaction controlled by an external electric field. The dynamics of an electron driven around the ring by a series of Landau-Zener-like transitions between a finite number of local voltage gates is determined analytically. General single-qubit transformations are demonstrated to be feasible in a dynamical basis of localized pseudo-spin states. It is also demonstrated that, by the use of suitable protocols based on changes of the Rashba interaction, the full Bloch sphere can be covered. The challenges of a possible realization of the proposed system in semiconductor heterostructures are discussed.

Introduction

Spintronics, a promising new branch of electronics based on the electron's spin as the information carrier instead of its charge, has emerged in the last few decades. The use of spin promises several important advantages in information processing, most notably longer coherence times and lower power consumption compared to classical electronic devices [1-3]. What is even more important is that spintronic devices are among the most promising candidates for the realization of quantum computers, with spin states being used as qubits [4]. To avoid the use of a magnetic field for spin manipulation, the spin-orbit interaction (SOI) [5,6] can be used to control the electron's spin. Rashba-type SOI [7], emerging as a consequence of the structural inversion asymmetry of the effective potential in a semiconductor heterostructure, seems especially promising for this task, since its magnitude can be artificially controlled by applying an external electric field perpendicular to the plane of the heterostructure [8,9]. The potential use of this phenomenon was first demonstrated by the SOI field effect transistor proposed by Datta in 1990 [10], followed by several other proposals for two-dimensional spintronic devices [1,2,11-14]. For use in quantum computation, the spin transformation would ideally be applied to a single-electron qubit, trapped in a quantum dot, with its position determined by an external electric potential [15]. The spin transformation for an arbitrary motion of an electron in a one-dimensional system can be expressed analytically [16,17], which also allows for an exact analysis of errors in qubit transformations due to noise in the driving fields [18] and the effects of finite temperature [19]. Note, however, that since the Rashba spin rotation axis in this system is perpendicular to the direction of the electron's motion, one-dimensional motion provides only a limited range of possible spin transformations [15]. This limitation is removed by allowing the electron to move in two dimensions [20,21]. The system of an electron on a quantum ring with Rashba coupling is particularly convenient in this regard, since it allows for the study of spin transformations in a two-dimensional system using an effectively one-dimensional Hamiltonian [22]. As shown in Ref. [23], the motion of the electron around a ring with the Rashba coupling tuned using an external gate voltage can be used to realize an arbitrary single-qubit transformation in the qubit basis of Kramers states.
However, the authors assumed that the position of the external potential can be shifted by an arbitrary azimuthal angle, which is usually not the case in realistic spintronic devices, where the potential is typically defined using fixed external voltage gates applied to the surface of the semiconductor, as shown in figure 1. The minima of the potential can therefore occur only at specific positions. To describe more realistic devices, this limiting factor should be taken into account. The goal of this paper is to analyze the transformation of the electron's spin state when it is transferred from the site of one voltage gate to the site of its neighboring gate. In the case of equidistant gates, forming a periodic potential, this can be done analytically. As we show in this paper, the spin rotation is directly related to the spin-dependent part of the hopping parameter coupling the neighboring Wannier states in the corresponding tight-binding model of the periodic gate potential. To find an explicit analytic form of the hopping terms, we first calculate the Bloch functions on the ring, characterized by a specific site-dependent Rashba-induced spin orientation, and their energies. The corresponding Wannier states and their nearest-neighbor hopping Hamiltonian, obtained by Fourier transformation of the Bloch states and energies, are further transformed by local spin rotations to obtain a basis of localized states resembling the pure spin states of an electron trapped at the site of each voltage gate. The hopping terms between the states of this so-called spin Wannier basis are then expressed analytically by spin-rotation matrices, allowing a simple analysis of the spin transformations accompanying the electron's transition. The results are verified by numerical calculation of the spin rotation during a slow transition of the electron between gates, showing that the use of the Wannier hopping terms indeed results in correct spin transformations. The analytic expression for the hopping term is then used to determine the parameters of the system that allow an arbitrary single-qubit transformation of the electron as a result of its transition around the ring. The paper is organized as follows: the model describing the electron on the ring is introduced in Section 2, and the Bloch states on the ring are derived by analytically solving the Schrödinger equation in Section 3. In Section 4 the Wannier states on the ring are introduced, and in Section 5 they are transformed into the spin Wannier basis. These finally enable the analysis of qubit transformations, which is done in Section 6; Section 7 is devoted to conclusions.

Model

The Hamiltonian governing the electron on the ring in the presence of Rashba coupling and an external potential is given in Ref. [22] (see the sketch below), where the periodic angular coordinate ϕ ∈ [0, 2π] describes the position of the electron. R denotes the ring radius, m the electron effective mass in the semiconductor, α_R the Rashba coupling, φ the magnetic flux through the ring, and φ_0 the magnetic flux quantum. Pauli operators in the rotated spin frame are defined as

σ_r = σ_x cos ϕ + σ_y sin ϕ,   σ_ϕ = −σ_x sin ϕ + σ_y cos ϕ,

where σ_x,y are the ordinary Pauli matrices. In our model, V(ϕ) is a periodic potential with period ϕ_a = 2π/N, described as a sum of N potential wells W(ϕ) shifted to have minima at ϕ = nϕ_a,

V(ϕ) = Σ_n a_n W(ϕ − nϕ_a).

The coefficients a_n describe the depth of the potential at each site and can be varied externally by the voltage applied to each gate. These allow the transfer of the electron around the ring.
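The display form of the Hamiltonian was lost in extraction. For orientation, a LaTeX sketch of the standard dimensionless Rashba-ring Hamiltonian of the type introduced in Ref. [22] follows (the signs and the choice of energy unit ε₀ are our assumptions; they are consistent with the effective field α = (−α, 0, 1) and the spin-orbit energy E_SO = −α²/4 quoted in the next paragraphs):

```latex
\[
  H = \varepsilon_0\!\left[\Bigl(-i\,\partial_\varphi - \varphi_m
        + \tfrac{\alpha}{2}\,\sigma_r\Bigr)^{\!2} - \tfrac{\alpha^2}{4}\right]
      + V(\varphi),
\]
\[
  \varepsilon_0 = \frac{\hbar^2}{2 m R^2},
  \qquad
  \varphi_m = \frac{\phi}{\phi_0},
  \qquad
  \alpha = \frac{2 m R\,\alpha_R}{\hbar^2},
\]
% Here \varphi_m is the dimensionless magnetic flux and \alpha the
% dimensionless Rashba coupling; \alpha is dimensionless because
% \alpha_R carries units of energy times length.
```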
To keep the electron located at site n, the depth of the potential well on this site, a_n, should be set to a sufficiently large value, while all other coefficients should be set to 0. To transfer the electron to the neighboring site, n ± 1, the coefficient a_{n±1} should be increased while a_n is simultaneously set to 0.

Schrödinger equation

The main goal of this paper is to calculate analytically how the spin orientation of the electron changes during this process. As we show later, this information is encoded in the hopping terms for an electron between gate positions, which can be extracted from the Bloch states ψ_js(ϕ) and their energies E_js, obtained for the case of equal binding potentials on all gate sites on the ring, a_n = 1. The Schrödinger equation for the Bloch states is H ψ_js(ϕ) = E_js ψ_js(ϕ), where the half-integer index j denotes the rotation symmetry of the wavefunction and s = ±1/2 is a pseudo-spin index. The symmetry properties of the ring Hamiltonian, equation (1), lead to an ansatz for the Bloch function, derived in Appendix A, with u_j(ϕ) a periodic function of ϕ, u_j(ϕ + ϕ_a) = u_j(ϕ). To find the exact form of the periodic function u_j(ϕ) and the spinor χ*_s for the case of the Rashba Hamiltonian, equation (1), we transform it with a set of unitary transformations given in Ref. [25], where α = (−α, 0, 1) is the effective Rashba field and σ = (σ_x, σ_y, σ_z) is the vector of standard Pauli operators. The transformation does not affect the periodic potential V(ϕ), and the resulting Hamiltonian is independent of spin, with spin-orbit energy E_SO = −α²/4. As explained in Appendix A, this spin-independent form allows one to seek the Bloch states in a manner very similar to the case of an electron on a one-dimensional straight wire with a periodic potential, i.e. using the ansatz ψ̃_ks(ϕ) = e^{ikϕ} u_k(ϕ) χ*_s. The Bloch states of the original Hamiltonian are then obtained by the inverse transformation ψ_ks(ϕ) = U† ψ̃_ks(ϕ). The values of k and the eigenspinors χ*_s are determined by applying the non-trivial periodic boundary conditions ψ_ks(ϕ) = ψ_ks(ϕ + 2π), resulting in the eigenproblem of Ref. [25]. The eigenproblem has two solutions, one for each pseudo-spin state s. Both can be compactly written as a spin transformation of the standard basis spinors quantized along the z-axis, denoted χ_s, using the operator of spin rotation around the y-axis, U_y(ϑ_α) = exp(−i ϑ_α σ_y / 2). Applied to the boundary conditions, equation (10), the spinors of equation (11) determine the allowed values of k, which also depend on the pseudo-spin s. When applied in the ansatz, equation (9), these results lead to the Bloch functions of the Rashba ring Hamiltonian, equation (1), being expressed analytically. What is important is that the periodic part of the Bloch function u_js(ϕ) can be directly related to the function u_k(ϕ) of the one-dimensional system by substituting k → j − φ_m − sφ_α, given that the periodic part of the Hamiltonian, V(ϕ), is the same in both cases. Note that when the exponent e^{ijϕ} is combined with the spin rotation U†_z(ϕ), the result is indeed compatible with the ansatz of equation (5), derived in Appendix A. The energy of a one-dimensional Bloch state in the limit of a strong periodic potential (tight-binding limit) is parametrised as E_k = E_0 − 2t_0 cos(kϕ_a), with the mean band energy E_0 and the bandwidth 4t_0 determined by the detailed shape of the potential [24].
The transformation between the one-dimensional and the ring Hamiltonian allows the energy of the electron on the ring to be obtained by the simple substitution introduced above, k → j − φ_m − sφ_α, in the expression for E_k, resulting in an energy depending on both the angular momentum j and the pseudo-spin s,

E_js = E_0 − 2t_0 cos[(j − φ_m − sφ_α) ϕ_a].

Since both the Bloch states, equation (13), and the energies, equation (15), on the ring closely resemble their one-dimensional counterparts, their transformation to Wannier states and the corresponding Hamiltonian is obtained by a simple transformation, presented in the next section.

Wannier states

As explained in the Introduction, the spin transformations accompanying the electron's transition between sites on the ring will be expressed in terms of nearest-neighbor hopping terms. These are obtained by the Fourier transformation of the Bloch states into the basis of localized Wannier functions [24]. Note that since the summation is taken over half-integer j values, the phase coefficients e^{−in(j−1/2)ϕ_a} are such that j − 1/2 is an integer, as is usual for a Fourier transformation. We used the fact that the transformations U†_z and U†_y do not depend on s, so the envelope function w_ns(ϕ), describing the charge density of the wavefunction, is a Fourier transform of u_js(ϕ). The expectation value of the spin of a Wannier function is mostly determined by the spin rotations U_z(ϕ) and U_y(ϑ_α). If the periodic potential is strong, the functions w_ns(ϕ) are strongly localized around the positions ϕ = nϕ_a, and the expectation values of spin can reliably be approximated by their values at these positions. This leads to a very intuitive interpretation of the Wannier states and their spin properties: the electron in the Wannier state |φ_ns⟩ is localized around the position nϕ_a, with spin tilted from the z direction towards the centre of the ring for s = 1/2, and from the −z direction away from the centre for s = −1/2, as shown in figure 2. The matrix elements of the Hamiltonian H in the Wannier basis, H_{mnss'} = ⟨φ_{ms'}| H |φ_{ns}⟩, are obtained as the Fourier transformation of the energies E_js. Since j appears only in the cosine terms of E_js, the transformed Hamiltonian can be evaluated exactly in the basis {|φ_{n,↑}⟩, |φ_{n,↓}⟩}, with a pseudo-spin dependent hopping term t_s. The Hamiltonian H in the basis of the Wannier states therefore corresponds to a tight-binding model with a spin-dependent hopping term t_s.

Spin Wannier basis

Applying the hopping terms t_s of equation (21), although simple, is not the best way to study spin transformations. Since t_s couples states |φ_ns⟩ with non-trivial spin properties, equation (17), interpreting the effect of hopping on the electron's spin orientation is more complicated. This issue is tackled here by introducing a basis of localized states with uniform spin orientation, as follows. Since the spin properties of the Wannier functions depend on the strength of the Rashba coupling, the states |φ_ns⟩ are not the best choice for the analysis of spin transformations of the electron. It is more convenient to construct new basis states as a local superposition of the Wannier states at the same site n, the so-called spin Wannier basis, with spin properties independent of the spin-orbit coupling, resembling pure spin states. We construct these states in such a way that their expectation values of spin are as close as possible to those of pure spin states, as explained in Appendix B. To emphasize that this basis resembles pure spin states, we sometimes use the arrows ↑ and ↓ as the pseudo-spin index s instead of ±1/2, respectively.
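The display equations for the Wannier transformation, equation (16), and the hopping term, equation (20), were likewise lost; a plausible LaTeX sketch consistent with the dispersion E_js above (the sign and phase conventions are our assumptions):

```latex
\[
  \left|\phi_{ns}\right\rangle
    = \frac{1}{\sqrt{N}} \sum_{j} e^{-i n (j - \frac{1}{2})\varphi_a}
      \left|\psi_{js}\right\rangle,
  \qquad
  t_s = t_0\, e^{\,i(\varphi_m + s\,\varphi_\alpha)\varphi_a},
\]
% A nearest-neighbor Hamiltonian
%   H = E_0 \sum_{ns} |\phi_{ns}\rangle\langle\phi_{ns}|
%       - \sum_{ns} ( t_s |\phi_{n+1,s}\rangle\langle\phi_{ns}| + h.c. )
% then reproduces E_{js} = E_0 - 2 t_0 \cos[(j - \varphi_m - s\varphi_\alpha)\varphi_a].
```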
The coefficients of a linear superposition of such states can then be directly related to the direction of the vector of spin expectation values on the Bloch sphere, given by the angles θ and χ,

⟨ψ| s |ψ⟩ ≈ (1/2) (sin θ cos χ, sin θ sin χ, cos θ),

which significantly simplifies the analysis of spin transformations and makes the states |φ̃_ns⟩ a suitable qubit basis. The coefficients c_{nss'} are determined by projecting the original Wannier states onto the basis of pure spin states, as shown in Appendix B. In the limit of strongly localized states |φ_ns⟩, the coefficients simplify so that the matrix U can be expressed with the spin rotations U_z(ϕ) and U_y(ϑ_α) introduced in the Hamiltonian transformation, equation (6). Even though this result is not exact, these coefficients represent a good approximation of pure spin states even for the case of shallow potential wells, as is demonstrated numerically in figure B2 in Appendix B. Since the spin Wannier state |φ̃_ns⟩ is a local superposition of the original Wannier states |φ_ns⟩ with the same n, the Hamiltonian in this basis still has the form of nearest-neighbor hopping, but with coupling terms t̃_{nss'} that are position-dependent and also mix the pseudo-spin states. The hopping terms t̃±_{nss'} are calculated by transforming t_s, equation (20), with the matrix U_n, equation (27). Although not obvious at first glance, the Hamiltonian, equation (28), is Hermitian when applied to the basis of states |φ̃_ns⟩ with the appropriate periodic boundary conditions on the ring. The hopping terms t̃±_{nss'} are quite complex, but are still expressed in analytical form, comprising three spin-rotation matrices. In contrast to t_s, which describes the transformation of the pseudo-spin states |φ_ns⟩ with relatively complex spin properties (see Fig. 2), the interpretation of the terms t̃±_{nss'} is much more direct: they describe real spin rotations, expressed in the spin Wannier basis |φ̃_ns⟩. Consequently, this allows a much simpler analysis of the spin rotations accompanying the electron's movement between voltage gate sites, and also the construction of general single-qubit transformations. This will be further explored in the next section by introducing a suitable qubit basis and demonstrating the system's capability to perform controlled qubit transformations.

Qubit transformations

We define the qubit basis as the spin Wannier pseudo-spin pair on the site n = 0, |0⟩ = |φ̃_{0,↑}⟩ and |1⟩ = |φ̃_{0,↓}⟩. We also define the Bloch sphere corresponding to this basis, with polar and azimuthal angles Θ and Φ, which correspond to the qubit state

|ψ_Q⟩ = cos(Θ/2) |0⟩ + e^{iΦ} sin(Θ/2) |1⟩.

A single-qubit transformation is achieved by transferring the electron around the ring by controlled changes of the gate potentials at different sites. To transfer the electron from one site to its neighboring site, we slowly decrease the depth of the potential well on the first site and increase the depth of the potential on the site onto which we want to transfer the electron. Such charge transfer has already been demonstrated experimentally for N = 4 sites [26]. From a mathematical perspective, this results in a Landau-Zener-like transition of the electron from a superposition of spin Wannier states |φ̃_ns⟩ on the initial site to a superposition of spin Wannier states |φ̃_{n+1,s}⟩ on the final site, as analysed in Appendix C. As in the case of the Landau-Zener transition, the probability of finding the electron on the initial site drops to zero only in the case of a slow change of the local potential. Even in this limit, however, the resulting transition is not trivial, since the coefficients of the spin superposition change during the transition.
The change is described by the hopping term for the spin Wannier basis, equation (29). If the electron is initially in a state on site n, $|\psi_{\rm init}\rangle = \sum_s c_s |\tilde\phi_{ns}\rangle$, the Landau-Zener transition from site n to n+1, denoted $T_{n\to n+1}$, will result in the final state (see Appendix C) with new coefficients $d_s$ calculated from the hopping term $\tilde t^+_{nss'}$. Note that the 2 × 2 matrix $\tilde U$ is unitary, as is seen from equation (29), which means that each transition can be seen as a rotation on the Bloch sphere. Note also that the transformation of the coefficients depends on the strength of the Rashba coupling α, which determines the axis of the spin rotation $U_\alpha^\dagger$ in the hopping term of equation (29). A sequence of Landau-Zener transitions, equation (33), between neighboring sites can bring the electron around the entire ring, resulting in a final state that is a superposition of the same spin Wannier states, $|\tilde\phi_{N+n,s}\rangle = |\tilde\phi_{ns}\rangle$, as the initial state. The coefficients describing the final state are calculated with the transformation $\tilde U_{\rm full}$, a product of the spin transformations for each transition between neighboring sites. Since the Rashba coupling $\alpha_i$ can be adjusted between two consecutive Landau-Zener transitions, this gives a wide range of parameters that can be tuned to achieve a desired qubit transformation. Using the definition of $\tilde t^+_{nss'}$ in equation (29) and allowing m revolutions of the electron around a ring with N sites, the qubit transformation $\tilde U_{\rm full}$ can be written in a simplified manner (the rotations $U_z^\dagger$ cancel out) using only the spin transformations $U_\alpha^\dagger$, where each factor corresponds to an electron's transition between sites at the Rashba coupling strength $\alpha_i$, with $i = 0, 1, \ldots, m\times N - 1$. Note that the phase factor $(-1)^m$ (arising from $U_z^\dagger(2\pi) = -1$) depends on the number of the electron's revolutions around the ring, but does not physically affect the spin transformation. By using the qubit states $|\psi_Q\rangle$ of equation (31) as the initial state $|\psi_{\rm init}\rangle$ of equation (32), the final state $|\psi_{\rm fin}\rangle$ of equation (36) is also a qubit, and the transformation of equation (39) therefore represents a controlled qubit transformation. It is instructive to see it as a combination of rotations on the Bloch sphere spanned by the qubit basis, where each transition of the electron, described by the transformation $U_{\alpha_n}^\dagger(\varphi_a) = \exp\!\left(\tfrac{1}{2}i\varphi_a\,\boldsymbol{\alpha}_n\cdot\boldsymbol{\sigma}\right)$, causes a rotation around the axis $\boldsymbol{\alpha}_n = (-\alpha_n, 0, 1)$ by the angle $\chi_n = \varphi_a\sqrt{1+\alpha_n^2}$. The result is very similar to the one found in Ref. [23], but in the present case the shifts in the electron's position $\varphi_a$ are fixed and the strength of the Rashba coupling during each transition can be tuned. Note that the present case is much closer to the model of a possible realistic device, where the electron would be transferred between the potential minima defined by potential gates at fixed positions. To verify that the described procedure can really be used to realize a qubit gate, we performed comprehensive numerical calculations, similar to those in Ref. [23]. We describe the total qubit transformation with the angles Θ and Φ on the Bloch sphere, corresponding to the final qubit state obtained from the initial state $|0\rangle$ by applying the transformation $\tilde U_{\rm full}$. The transformation is determined by a set of Rashba parameter values $\alpha_i$, which can take values between the intrinsic, non-amplified value $\alpha_{\rm in}$ and the amplified value $\alpha_{\max} = K_\alpha\,\alpha_{\rm in}$, with $K_\alpha$ depending on the material used. As in Ref.
[23] we choose the ring size R in such a way that $\alpha_{\rm in} = 1/\sqrt{K_\alpha}$ and $\alpha_{\max} = \sqrt{K_\alpha}$ (see equation (1)), providing the maximal angle between the rotation axes corresponding to these two values of α. For each number of sites on a ring N, number of revolutions m and maximal amplification factor $K_\alpha$ (parameters that are determined by the device architecture and material), a set of numbers $[\alpha_0, \ldots, \alpha_{N\times m-1}]$ determines the qubit transformation, parametrized by Θ and Φ. If for each pair of Θ and Φ we can find such a set, this means that any qubit transformation can be achieved. As an example of a spin rotation, we performed the Z-gate qubit transformation, corresponding to Θ = π and arbitrary Φ. This transformation can be realized on a ring with N = 6 sites, with m = 1 revolution of the electron around the ring and Rashba amplification factor $K_\alpha = 5$. The transformation is schematically presented in figure 3. Figure 4(a) shows the coverage of the Bloch sphere for m = 1 electron revolution, with the black part showing the surface available at the Rashba amplification factor $K_\alpha = 2$, dark blue at $K_\alpha = 3$, medium blue at $K_\alpha = 4$ and light blue at $K_\alpha = 5$. The qubit transformations corresponding to the white part of the Bloch sphere in figure 4(a) can only be achieved at amplification factors $K_\alpha > 5$, which is difficult to obtain in realistic devices. The same diagram for m = 2 revolutions is presented in figure 4(b). We see that in that case any qubit transformation can be obtained even at the lower amplification factor $K_\alpha = 4$. The dependence of the achievable qubit transformations on the parameters N, m and $K_\alpha$ is further explored in figure 5, which shows the percentage of the Bloch sphere that can potentially be covered at specific values of the parameters. We see that the number of revolutions of the electron around the ring is far more important than the number of sites. For m = 2 revolutions, an arbitrary single-qubit rotation can be achieved (fully covered Bloch sphere) with amplification factor $K_\alpha \approx 4$, while for N = 4 and m = 3 the factor $K_\alpha$ can be as low as 3. Discussion and conclusion The results presented here indicate that well-controlled arbitrary transformations of qubits, defined as localized pseudo-spin states of an electron on a ring, could be achieved in a quantum ring system where the position of the electron is controlled by a finite number of voltage gates. The efficiency of such an approach, however, depends on several parameters. As discussed in the previous section, the number of shifts of the electron's position depends strongly on the maximum amplification factor of the Rashba coupling achievable in a specific material by an external electric field. In simple III-V semiconductor heterostructures, amplification factors of about $K_\alpha = 2$ are feasible [27,28], which would lead to a larger number of electron revolutions around the ring. In more exotic systems, for example InAs nanowires [29], a much larger amplification factor of $K_\alpha = 6$ was measured; however, it is not clear whether such a system is suitable for constructing the quantum ring considered in our study. The time efficiency of the proposed transformation is to a large extent determined by the size of the ring used. At realistic values of the Rashba parameters, a radius of about 100 nm is required [23], resulting in a characteristic electron energy of about 100 µeV and a characteristic time $\tau_0 = \hbar/E \sim 10^{-11}$ s.
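Since equation (39) is just a product of SU(2) rotations, the achievable transformations can be explored directly. The following Python sketch composes $\tilde U_{\rm full}$ from a list of per-transition Rashba values and reads off the Bloch-sphere angles Θ and Φ of the final state; the particular values in `alphas` are placeholders, and the overall sign $(-1)^m$ from $U_z^\dagger(2\pi)$ is kept explicit even though it does not affect the spin direction.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def transition(alpha, phi_a):
    """One inter-site transition: exp(i*phi_a/2 * (alpha_vec . sigma)) with
    rotation axis alpha_vec = (-alpha, 0, 1) and angle phi_a*sqrt(1+alpha^2)."""
    return expm(0.5j * phi_a * (-alpha * sx + sz))

N, m = 6, 1
phi_a = 2 * np.pi / N
alphas = [0.45, 2.2, 0.45, 2.2, 0.45, 2.2]   # placeholder gate-tuned values, one per transition
assert len(alphas) == N * m

U_full = (-1.0)**m * np.eye(2, dtype=complex)
for a in alphas:
    U_full = transition(a, phi_a) @ U_full    # later transitions act from the left

# apply to the qubit |0> and read off the Bloch angles Theta, Phi
psi = U_full @ np.array([1, 0], dtype=complex)
psi = psi * np.exp(-1j * np.angle(psi[0]))    # fix the global phase
Theta = 2 * np.arccos(np.clip(np.abs(psi[0]), 0.0, 1.0))
Phi = np.angle(psi[1]) if abs(psi[1]) > 1e-12 else 0.0
print(f"Theta = {Theta:.3f} rad, Phi = {Phi:.3f} rad")
```

Scanning such lists of α values between $\alpha_{\rm in}$ and $\alpha_{\max}$ is essentially how the Bloch-sphere coverage of figures 4 and 5 can be mapped out.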
As shown in Appendix C, effective Landau-Zener transitions are achieved at transition times of a few tens of characteristic times, which still allows for several thousand electron transitions during a spin relaxation time of 100 µs, typical of semiconductor heterostructures [30]. Note, however, that the Landau-Zener type transition was chosen in our study for its simplicity, in order to demonstrate the spin transformations during the electron's revolution around the ring. In realistic applications, more efficient and faster ways of electron transport would most likely be applied; these are more demanding for a theoretical description but are based on the same phenomena as discussed in this paper. Several other aspects should be taken into account when designing real devices, such as the effects of temperature and, most importantly, the effects of the local gate potential used for the electron transport on the magnitude of the Rashba coupling, which might have an important effect on the spin properties of the pseudo-spin states used as the qubit basis. Although these effects might change the detailed behaviour of the analyzed system, its ability to perform spin transformations, presented in this paper, would probably not change significantly. where $u_k(x)$ is a periodic function of x, $u_k(x + x_0) = u_k(x)$. The symmetry of the electron states on the Rashba ring, described by the Hamiltonian of equation (1), is a bit more complicated, since it comprises both a translation in the azimuthal angle by $\varphi_a$ and a spin rotation around the z-axis by the same angle [25]. The transformation $T_{\rm rot}(\varphi_a)$ corresponding to this symmetry is generated by the combined translation and spin-rotation operator. Similarly to the one-dimensional system, the transformation should only change the phase of the ring Bloch function $\psi_j(\varphi)$. This is indeed true if the ring Bloch function is written as an ansatz similar to its one-dimensional counterpart, equation (A.4), with the function $u_j(\varphi)$ being periodic in φ, $u_j(\varphi + \varphi_a) = u_j(\varphi)$. Note that since $T_{\rm rot}$ is a spin operator, the Bloch function is accompanied by some spinor $\chi^*_s$, describing the spin part of the wavefunction, with the pseudo-spin index being $s = \pm\frac{1}{2}$. The periodic scalar function $u_j(\varphi)$ depends on the half-integer quantum number j, which is related to the total angular momentum of the electron. As shown in Section 3, the spin-dependent ring Hamiltonian of equation (1) can be transformed into a simplified form using the set of spin transformations U from equation (6), $U = U_\alpha U_z U_\varphi$. Since the spin part of the symmetry transformation $T_{\rm rot}$ is already applied to the transformed Hamiltonian H of equation (7) in the form of a rotation $U_z = \exp(i\varphi s_z/\hbar)$, H is invariant under the ordinary one-dimensional translation operator, similar to equation (A.3), $T(\varphi_a) = \exp(-i\varphi_a p_\varphi/\hbar)$. This means that H can for all practical purposes be treated as the Hamiltonian of a one-dimensional system, $H_{1D}$ of equation (A.2), and the Bloch states of this transformed Hamiltonian will therefore take a form similar to the one-dimensional Bloch state of equation (A.4), but with an added spin part $\chi^*_s$. This form differs from equation (A.7), since k in the exponent is a number instead of a spin operator. However, once transformed with the inverse transformation $U^\dagger$ of equation (6), the function takes the form of the ansatz of equation (A.7) with the correct symmetry properties.
As in the one-dimensional case, the function $u_k(\varphi)$ is periodic and determined solely by the detailed shape of the periodic potential $V(\varphi)$ [24], while the spinors $\chi^*_s$ and the allowed values of k are determined by the periodic boundary conditions of the original Bloch functions, $\psi_{js}(\varphi) = \psi_{js}(\varphi + 2\pi)$ [25]. Appendix B. Properties of the Wannier spin basis To calculate the coefficients $c_{nss'}$ transforming the Wannier states $|\phi_{ns}\rangle$ into the spin Wannier basis $|\tilde\phi_{ns}\rangle$, we first construct a basis of pure spin states $|\eta_{ns}\rangle$ localized at the sites of the potential wells, $\eta_{ns}(\varphi) = z_n(\varphi)\chi_s$ (B.1), with the orbital part $z_n(\varphi)$ being an arbitrary normalized function strongly localized around the coordinate $\varphi = n\varphi_a$, and the spin part being a pure spinor $\chi_\uparrow$ or $\chi_\downarrow$, quantized along the z-axis. We want the spin Wannier basis $|\tilde\phi_{ns}\rangle$ to resemble these states. When the definition of the Wannier states, equation (16), is used in this condition, we obtain equation (B.4). If we assume a strong periodic potential, then $w_{ns}(\varphi)$ is narrowly spread around $\varphi = n\varphi_a$. The integration in equation (B.4) therefore results in the elimination of the orbital parts of the wavefunctions and in the substitution $\varphi \to n\varphi_a$ in the spin rotations. Also, since $w_{ns}(\varphi)$ and $z_n(\varphi)$ are generally not orthonormal, the coefficients must be renormalized. This leads to equation (B.5). The approximations are rewarded with the fact that the expression is simple and independent of the details of the periodic potential used. In order to demonstrate that the coefficients result in a sufficiently good basis, we numerically calculate the Bloch and Wannier functions for the case of a periodic potential constructed as a sum of N = 6 potential wells of Gaussian shape. The potential $V(\varphi)$ is characterised by the potential depth $V_0$, corresponding to the integral of the potential over one potential minimum, $V_0 = \int_{-\pi}^{\pi} W(\varphi)\,d\varphi$, and by its width σ. Figure B1 shows a plot of the real and imaginary parts of both spin components of both spin Wannier states, $\tilde\phi_{1\uparrow}(\varphi)$ and $\tilde\phi_{1\downarrow}(\varphi)$, on site n = 1, for potential strength $V_0 = 10$ and Rashba coupling α = 1.5, calculated numerically on a grid with $N_{\rm grid} = 240$ sites. As we can see, for both functions one spin component is dominant and the other one is negligible, which is what we expect from a spin basis. This is the case even though the width of the functions is quite large compared to the inter-site spacing, which indicates that the choice of coefficients in equation (B.5) gives good results even when the assumptions made in their derivation are not fulfilled. The spin Wannier states in figure B1 are also compared with the bound state $\tilde\eta_{ns}(\varphi)$ in a single Gaussian potential well, equation (B.7), of the same depth and width, which is relevant for the transition of the electron between sites, further discussed in Appendix C. To verify that the spin properties of the spin Wannier basis $|\tilde\phi_{ns}\rangle$ correspond to the criterion of equation (23), we numerically calculate the expectation values of all three spin components, $\langle\mathbf{s}\rangle = \langle\tilde\phi_{ns}|\mathbf{s}|\tilde\phi_{ns}\rangle \equiv (\langle s_x\rangle, \langle s_y\rangle, \langle s_z\rangle)$. To compare the spin properties of the spin Wannier basis with those of a pure spin state, we calculate the normalized length L of the vector $\langle\mathbf{s}\rangle$ and the cosine of the angle that $\langle\mathbf{s}\rangle$ spans with the z-axis. For a pure spin state, both values are unity. Numerically calculated values of both quantities for the state $\tilde\phi_{1\uparrow}$, at the same N, σ and $N_{\rm grid}$ as used for figure B1, are plotted in figure B2 as a function of the potential strength $V_0$ for various values of α.
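The kind of numerical setup used for figures B1 and B2 can be reproduced in outline by a finite-difference diagonalization on the ring grid. The sketch below builds the periodic Gaussian-well potential and extracts the lowest band; the dimensionless units (ħ = 2m = R = 1), the sign convention of the wells and the normalization of the well depth are assumptions made for illustration.

```python
import numpy as np

N, N_grid = 6, 240             # wells on the ring, grid points
V0, sigma = 10.0, 0.1          # well depth (integral) and width, as for figure B1
phi = np.linspace(0.0, 2*np.pi, N_grid, endpoint=False)
dphi = phi[1] - phi[0]

# periodic potential: attractive Gaussian wells at phi = n*phi_a, including
# periodic images; amplitude chosen so the integral over one well is -V0
# (sign and normalization conventions are assumptions)
amp = V0 / (sigma * np.sqrt(2.0*np.pi))
V = np.zeros(N_grid)
for n in range(N):
    for shift in (-2*np.pi, 0.0, 2*np.pi):
        V -= amp * np.exp(-(phi - n*2*np.pi/N + shift)**2 / (2*sigma**2))

# H = -d^2/dphi^2 + V(phi) with a periodic finite-difference Laplacian
t = 1.0 / dphi**2
H = np.diag(2*t + V) - t*np.eye(N_grid, k=1) - t*np.eye(N_grid, k=-1)
H[0, -1] = H[-1, 0] = -t       # periodic boundary conditions

E = np.linalg.eigvalsh(H)
print("lowest band (N nearly degenerate levels):", np.round(E[:N], 4))
```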
As seen in figure B2, in the absence of SO coupling the numerically calculated L and cos Θ are both equal to 1, which indicates that in this limit the spin Wannier basis states $|\tilde\phi_{ns}\rangle$ are actually pure spin states. When the Rashba coupling is present, the parameters are no longer exactly one, but they quickly approach this value as the potential increases, indicating that the spin Wannier basis, obtained with the coefficients $c_{nss'}$ of equation (B.5), is indeed a very good approximation to pure spin states. Appendix C. Landau-Zener transitions Here we discuss the procedure of transferring the electron between two neighboring ring sites by changing the depth of the local potential minimum. As we see in figure B1, the spin Wannier basis functions $|\tilde\phi_{ns}\rangle$, calculated with the coefficients of equation (B.5), are in fact very similar in shape to the bound states of the electron in the potential consisting of only one potential well, labelled $|\tilde\eta_{ns}\rangle$. We therefore assume for the rest of the discussion that the spin Wannier states and the bound states are equivalent, and that $|\tilde\phi_{ns}\rangle$ is also a stationary state of the potential with a single potential minimum at site n. The Landau-Zener transition between neighboring potential minima is realized in the following manner. We assume the initial potential on the ring to be a single potential minimum at site n, $V(\varphi, t = 0) = W(\varphi - n\varphi_a)$. We then start to slowly decrease the depth of the potential at site n and increase the depth at site n+1. If the rate β of the voltage change is small, $\beta \ll V_0$, this results in a slow transition of the electron from a superposition of spin Wannier states on site n to a superposition of states on site n+1 [31]. The probability of finding the electron on site n or n+1 depends on the magnitudes of $c_s(t)$ and $d_s(t)$, $P_n(t) = \sum_s |c_s(t)|^2$, $P_{n+1}(t) = \sum_s |d_s(t)|^2$ (C.5), and during a slow transition the value of $P_n$ changes from 1 to 0 and that of $P_{n+1}$ from 0 to 1. What is important for the spin transformation is the relation between the coefficients of the state in the spin Wannier basis before ($c_s$) and after ($d_s$) the electron transition. The lowest-order term of the time evolution operator $T(t) = \exp(-iHt/\hbar)$ coupling the states $|\tilde\phi_{ns}\rangle$ and $|\tilde\phi_{n+1,s}\rangle$ is proportional to the hopping matrix $\tilde t^+_{nss'}$ of equation (29). The state after the Landau-Zener transition is also normalized, which leads us to the prediction that the coefficients of the final state in the spin Wannier basis are related to the initial coefficients as in equation (C.6). The numerical test shown in figure C1 was performed with inter-site distance $\varphi_a = 2\pi/6$, with the initial depth of the potential well $V_0 = 15$ and width σ = 0.1, on a computational grid of $N_{\rm grid} = 90$ sites. The rate of the potential change was set to $\beta = \omega_0/40$, where $\omega_0$ is the natural frequency of the system. The time dependence of the local potential on sites n and n+1 is shown in the inset of panel (b), in red and blue respectively. From figure C1 it is evident that the numerical results agree very well with the theoretical prediction, from which we conclude that the equation for the coefficients of the state in the spin Wannier basis after the Landau-Zener transition, equation (C.6), is indeed a good approximation for the analysis of spin transformations.
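As a schematic cross-check of this picture, the two-site transfer can be caricatured by a textbook two-level Landau-Zener sweep: the on-site energies cross linearly in time while a fixed tunneling element g couples the sites. The sketch below integrates the Schrödinger equation and compares the final occupation of the initial level with the Landau-Zener formula; the values of g and the sweep rate are illustrative and unrelated to the ring parameters above.

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 0.25       # inter-site tunneling matrix element (illustrative)
rate = 0.1     # sweep rate of the on-site detuning, d(Delta)/dt (illustrative), hbar = 1

def rhs(t, psi):
    delta = rate * t                           # on-site energy difference Delta(t)
    H = np.array([[delta/2, g],
                  [g, -delta/2]], dtype=complex)
    return -1j * (H @ psi)

T = 2000.0                                     # sweep long enough to be effectively adiabatic
sol = solve_ivp(rhs, (-T, T), np.array([1.0, 0.0], dtype=complex),
                rtol=1e-8, atol=1e-10)

P_stay = abs(sol.y[0, -1])**2                  # probability of staying in the initial diabatic state
P_LZ = np.exp(-2*np.pi * g**2 / rate)          # Landau-Zener prediction
print(f"numerical P_stay = {P_stay:.4f}, Landau-Zener formula = {P_LZ:.4f}")
```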
8,295
2020-04-08T00:00:00.000
[ "Physics" ]
AC conductivities of a holographic Dirac semimetal We use the AdS/CFT correspondence to compute the AC conductivities for a (2+1)-dimensional system of massless fundamental fermions coupled to (3+1)-dimensional Super Yang-Mills theory at strong coupling. We consider the system at finite charge density, with a constant electric field along the defect and an orthogonal magnetic field. The holographic model we employ is the well studied D3/probe-D5-brane system. There are two competing phases in this model: a phase with broken chiral symmetry, favored when the magnetic field dominates over the charge density and the electric field, and a chirally symmetric phase in the opposite regime. The presence of the electric field induces Ohm and Hall currents, which can be straightforwardly computed by means of the Karch-O'Bannon technique. Studying the fluctuations around the stable configurations in linear response theory, we are able to derive the full frequency dependence of the longitudinal and Hall conductivities in all the regions of the phase space. Introduction The AdS/CFT correspondence is a duality between low-energy effective theories of string theory and supersymmetric gauge theories. It was originally conjectured as an equality between Type IIB Supergravity on the $AdS_5 \times S^5$ background and the supersymmetric $\mathcal{N} = 4$ $SU(N_c)$ Super Yang-Mills quantum field theory in the limit $N_c \to \infty$ and $\lambda \to \infty$, with $\lambda = g_{YM}^2 N_c$ [1][2][3]. Since the quantum field theory is in a strong coupling regime when the gravity side is a low-energy effective theory, the correspondence is also a strong/weak coupling duality. For this reason it has proven to be a formidable tool for evaluating relevant physical quantities of strongly coupled field theories by means of their gravity duals. A (2+1)-dimensional semimetal is an example of a physical system for which the AdS/CFT correspondence seems to be particularly well suited. This can be motivated using graphene as a representative of semimetals. Although some of its properties can be studied through perturbative approaches, there is some theoretical evidence, like the small Fermi velocity ($v_F \sim c/300$), and some experimental evidence [4,5], suggesting that interactions in graphene may be strong. If this is the case, an accurate description of the physics of graphene requires a non-perturbative approach, and in this scenario the AdS/CFT correspondence represents the best analytical tool at our disposal. The study of Dirac semimetals with holographic techniques can be approached using either bottom-up (see for instance [6][7][8] for recent applications) or, as we do in this paper, top-down models, based on D-brane constructions. In particular we consider the well studied D3/probe-D5-brane system, where the D5-probes intersect the D3-branes on a (2+1)-dimensional defect, as depicted in figure 1. This turns out to be a good holographic model to describe the physics governing charge carriers in graphene, as can be seen by considering the field theory dual of the system, which consists of fundamental matter particles living on the (2+1)-dimensional defect and interacting through $\mathcal{N} = 4$ Super Yang-Mills degrees of freedom in 3+1 dimensions [9][10][11]. Taking zero asymptotic separation between the D3- and D5-branes corresponds to having massless fundamental particles on the defect. This is exactly what we want for graphene, where charge carriers are known to be massless at the kinetic level.
Thus in the dual string theory picture we can interpret the (2+1)-dimensional brane intersection as the holographic realization of the graphene layer. The geometry of the D5-brane probes at the boundary is fixed to be $AdS_4 \times S^2$. If no external scale is introduced, it turns out that the whole geometry of the D5-brane worldvolume is actually given by $AdS_4 \times S^2$, and this gives a global $SO(3)\times SO(3)$ symmetry to the theory. When an external magnetic field B is turned on, the D5-brane geometry changes: the probe brane pinches off before reaching the Poincaré horizon (Minkowski embedding) and the $SO(3)\times SO(3)$ symmetry is broken to $SO(3)\times U(1)$. In the dual field theory this can be viewed as chiral symmetry breaking due to the formation of a fermion-antifermion condensate [12,13]. The introduction of either a finite charge density ρ or a finite temperature T opposes this condensation, giving rise to a more interesting phase diagram, with a transition from the phase with broken symmetry to the symmetric one as the ratio ρ/B or T/B increases [14]. At zero temperature the chiral symmetry breaking transition happens at $\rho/B = \sqrt{7}$ and it turns out to be a BKT phase transition [15]. For small T it is of second order [16] as ρ/B is varied, and for small ρ it is of first order as T/B is varied. When the charge density is small but finite, the D5-brane geometry still breaks the chiral symmetry, but in a different fashion compared to the zero charge case, since this time the D5-brane worldvolume reaches the horizon (black hole embedding). This can be simply understood in the holographic picture, where charge carriers are represented by F1-strings which, having higher tension than the D5-branes, pull the latter down to the horizon. The D3/probe-D5-brane setup was also used to model double monolayer semimetal systems formed by two parallel sheets of a semimetal separated by an insulator [17][18][19]. In this case one has to consider two stacks of probe branes (a stack of D5 and one of anti-D5) to represent holographically the two semimetal layers. The presence of the two layers introduces another parameter in the model, namely the separation between them, and a new channel for chiral symmetry breaking, driven by the condensation between a fermion on one layer and an antifermion on the other one. The aim of this paper is to derive the AC conductivity matrix for a (single layer) (2+1)-dimensional semimetal, such as graphene, using the holographic D3/probe-D5-brane model. In particular we will consider the D3/probe-D5 system with mutually perpendicular electric and magnetic fields at finite charge density. The presence of the electric field E is necessary in order to have non-trivial Ohm and Hall currents. When E is different from zero, the on-shell action for the probe branes generally becomes complex at a critical locus on the brane worldvolume, usually called the singular shell, and in order to avoid this one has to turn on the Ohm and Hall currents and suitably fix their values in terms of the parameters of the system (e.g. E, B, ρ, . . . ) [20]. The same system we consider, also at finite temperature, was studied before in [21], and the values of the DC currents were derived by imposing the reality condition on the on-shell action. The holographic derivation of the AC conductivity matrix for systems involving probe Dp-branes, similar to the one we are considering, was addressed by several papers in the literature.
For example, in [22] probe flavour Dp-branes were considered in the context of a neutral Lifshitz-invariant quantum critical theory, and the AC conductivity with non-trivial charge density, temperature and electric field and vanishing magnetic field was obtained. The authors of ref. [23] studied probe Dp-branes rotating in an internal sphere direction and derived the AC conductivity of the system considering non-zero electric field and charge density and vanishing temperature. The results of [22,23] are both compatible with a finite temperature regime, as suggested for instance by the presence of a finite peak at low frequency. In [23] this is a consequence of the fact that there is a horizon induced by the rotation of the Dp-branes, and therefore an effective non-zero induced Hawking temperature proportional to the frequency of rotation. In the system we consider we expect to find, at least in some regimes, similar physics, since when the singular shell is outside the Poincaré horizon it plays the role of an induced horizon, resulting in a finite effective temperature. The strategy we use to evaluate the AC conductivity matrix is the following. We focus on the linear response regime and we fluctuate the gauge and scalar fields around a fixed background. Then we solve the equations of motion of the action which governs the dynamics of the system, i.e. the DBI action. We obtain the equations of motion for the gauge field fluctuations $A^{(1)}_a(t, r) = e^{-i\omega t} a_a(r)$ and we solve them numerically. The AC conductivities in the linear response regime can be evaluated using the Kubo formula, where $G^R_{j_i j_j}$ is the retarded current-current Green's function. Using the holographic dictionary this can be computed in terms of the r-dependent part of the gauge field fluctuations, $a_i(r)$. The paper is structured as follows. In section 2 we describe in detail the holographic model we consider. We show its action, discuss how the currents are naturally fixed by the reality conditions which must be imposed on the on-shell Routhian, and we show the phase diagram of the system. In sections 3 and 4, a fast algorithm aside, we proceed as follows: in section 3 we derive the effective action for the fluctuations of the D3/D5 system in a very general framework, considering both scalar field and gauge field fluctuations. Section 4 is devoted to the computation of the AC conductivity matrices for all the relevant phases of the system. For each of these phases we show some plots of the Ohm and Hall conductivities. We conclude with section 5, where we discuss the obtained results. The holographic model The holographic model we consider is the D3/probe-D5-brane system. In this section we briefly summarize the setup and the allowed configurations for this system. These will constitute the background configurations around which we fluctuate in order to study the conductivities. D-brane setup We start by considering a stack of N D3-branes, which as usual we replace with the $AdS_5 \times S^5$ geometry that they generate in the near-horizon limit. In the coordinate system we use, the $AdS_5 \times S^5$ metric reads as in equation (2.1), where $d\Omega^2 = d\theta^2 + \sin^2\theta\, d\varphi^2$ and $d\tilde\Omega^2 = d\tilde\theta^2 + \sin^2\tilde\theta\, d\tilde\varphi^2$ are the metrics of two 2-spheres, $S^2$ and $\tilde S^2$. The AdS boundary is located at r = 0 and the Poincaré horizon at r = ∞. We now embed $N_5$ D5-branes as probes in this background. We choose $\sigma^a = (t, x, y, r, \theta_1, \varphi_1)$ as the D5 worldvolume coordinates and we also allow the D5-branes to have a non-trivial profile along ψ. The choice of the embedding is summarized in table 1.
The dynamics of the D5-branes in the probe approximation regime is governed by the DBI action (2.2), where $T_{D5} = \left((2\pi)^5 g_s \alpha'^3\right)^{-1}$ is the D5-brane tension and $\gamma_{ab}$ is built from the induced metric $g_{ab}$ on the D5-brane worldvolume and the field strength $F = dA$ of the U(1) gauge field A living on the D5. Note that we do not include the Wess-Zumino term in the action, since it plays no role in our setup. With our ansatz for the embedding, the induced geometry of the D5-brane is given in equation (2.4). In order to have a finite charge density, an external magnetic field orthogonal to the defect and a longitudinal electric field, we make the choice (2.5) for the worldvolume gauge field (in the $A_r = 0$ gauge). The $A_t(r)$ term is the one responsible for the finite charge density; E and B are constant background electric and magnetic fields along the x and z directions respectively. The two functions $A_x(r)$ and $A_y(r)$, as we will shortly see, are in general necessary in order to have a physical configuration; indeed they encode the information about the Ohm and Hall currents. Plugging the induced metric (2.4) and the worldvolume gauge field (2.5) into the DBI action (2.2) and integrating over all the worldvolume coordinates but r, we get $S = -\mathcal{N}_5 \int dr\, L\!\left(\psi(r), \psi'(r), A_a'(r); r\right)$, with the Lagrangian density (2.6) of DBI square-root form, $L = \frac{\sin^2\psi(r)}{r^4}\sqrt{\left(1+(B^2-E^2)r^4\right)\left(1+r^2\psi'^2(r)\right)+\ldots}$. We immediately see that $A_t$, $A_x$ and $A_y$ are cyclic coordinates, and thus their conjugate momenta, which represent the charge density ρ and the currents $j_x$ and $j_y$ respectively, are constant; they are given in equation (2.7). The presence of cyclic coordinates simplifies the problem considerably, since we can immediately solve the relations (2.7) for the gauge field functions $A_a(r)$. It is also useful to consider the Routhian (density) R, i.e. the Lagrangian Legendre-transformed with respect to the cyclic coordinates, which is given in equation (2.8). The equation of motion for the only non-trivial variable ψ is then simply given by the Euler-Lagrange equation for the Routhian. We could think of the conserved momenta ρ, $j_x$ and $j_y$ as parameters for the various physical configurations of the system, just like the external fields E and B. However, as we will see in the next subsection, this is only true for the charge density ρ, since the currents are actually subject to physical constraints that uniquely fix their values in terms of the other parameters. The currents If we take a careful look at the expression (2.8) for the Routhian, we notice a potentially critical issue. The square-root term $\sqrt{\xi\chi - a^2}$ seems quite dangerous, since it can become imaginary in certain regions of the brane worldvolume. Indeed from eqs. (2.9)-(2.11) we obtain equation (2.12). We see that near the boundary $\xi\chi - a^2 \simeq \sin^4\psi$, i.e. it is positive, and the Routhian is real. However, moving toward the Poincaré horizon this term may change sign. If we want to have a physically acceptable configuration we have to avoid this. We now examine the conditions that are needed in order for the Routhian to stay real, distinguishing two cases, E > B and E < B. Currents for E > B. When E > B it is simple to understand what can cause problems for the Routhian. Indeed from the definition of ξ in (2.9) we see that in this case ξ has a zero at a finite positive value of r = $r_s$, given in equation (2.13). The locus of points on the brane worldvolume with $r = r_s$ is usually called the singular shell.
In general it is quite obvious that when ξ is zero the combination $\xi\chi - a^2$ becomes negative, and this results in an imaginary Routhian. However, as pointed out by Karch and O'Bannon in ref. [20], we can prevent this problem by requiring that both χ and a also have a zero at the same point $r_s$. Imposing this condition fixes the values of the currents $j_x$ and $j_y$ to the expressions in equation (2.14). Currents for E < B. When the electric field is smaller than the magnetic field, the singular shell coincides with the Poincaré horizon. Nevertheless, also in this case, in order to fix the currents we can look at the sign of $\xi\chi - a^2$ in eq. (2.12). In particular we observe that this is positive near the boundary, while it is negative near the Poincaré horizon, where the $r^8$ contribution dominates. It is easy to check that in order for $\xi\chi - a^2$ to be always positive we have to choose the currents so as to cancel this $r^8$ contribution. In this way we obtain the values of the currents in equation (2.15). D5-brane configurations In order to build all the possible configurations for the D5-brane embeddings we have to explicitly solve the equation of motion for ψ coming from the Routhian (2.8). We look for solutions with a given asymptotic behavior near the boundary. In principle a term $c_1 r$ could also be present in this expansion, but we discard it since $c_1$ would correspond to the mass of the fermions in the dual defect theory, and in real graphene this is zero. The modulus $c_2$ is instead proportional to the chiral condensate, $c_2 \sim \langle\bar f f\rangle$. Setting $c_2 = 0$ gives the trivial constant solution ψ = π/2. This solution corresponds to the chirally symmetric configuration, which we denote $\chi_S$. Solutions with $c_2 \neq 0$ represent instead configurations with spontaneously broken chiral symmetry, $\chi_{SB}$. The solutions can be classified into black hole (BH) embeddings and Minkowski (Mink) embeddings, according to whether or not the brane worldvolume reaches the Poincaré horizon [25,26]. Minkowski embeddings are those for which the worldvolume pinches off at some finite radius $r_0$, i.e. $\psi(r_0) = 0$. For such configurations the arguments of the previous subsection do not apply: in this case the singular shell does not actually exist, since $r_s > r_0$. Thus the on-shell Routhian is always real and we do not need to impose any physical condition on the currents; in this case the currents can be safely set to zero. Table 2 summarizes the values of the currents for all the possible D5-brane embeddings. Note that Minkowski embeddings are possible only for neutral configurations, ρ = 0 [25,26]. This is due to the fact that in the string picture charge carriers are represented by F1-strings stretching from the D3-branes to the D5-branes. Since F1-strings have greater tension than D5-branes, they eventually pull the D5 worldvolume to the Poincaré horizon, giving rise to BH embeddings. [Figure 2: phase diagram for the D3/D5 system, indicating the Mink and BH embedding regions and the BKT transition; the blue region covers the chirally symmetric phase $\chi_S$ and the red region the spontaneously broken phase $\chi_{SB}$.] Phase diagram In order to derive the phase diagram of the system we have to compare the free energies of all the possible solutions in some thermodynamic ensemble, in order to determine which configuration is energetically favored. We choose to work in the ensemble where the density ρ, the magnetic field B and the electric field E are kept fixed.
With this choice the right quantity that defines the free energy is the on-shell Routhian. In the explicit computations of the solutions and their free energies it is actually convenient to reduce the number of relevant parameters (i.e. the dimension of the phase space) from three to two. This can be done, without loss of generality, thanks to the underlying conformal symmetry of the theory. We choose to measure everything (ρ and E, for instance) in units of the magnetic field B. The results of the analysis of the thermodynamics of the phases can be found in [21]. They are summarized by the phase diagram in figure 2. The two competing phases are the chirally symmetric one, $\chi_S$ (blue region), and the chirally broken one, $\chi_{SB}$ (red region). Analyzing the phase diagram through a vertical slicing, we see that when E < B, as ρ increases, the system undergoes a BKT transition at $\rho/B = \sqrt{7}$ from the $\chi_{SB}$ phase to the $\chi_S$ one. When E > B, instead, only the trivial ψ = π/2 solution is allowed, and thus the system is always in the symmetric phase $\chi_S$. In the non-symmetric region we also have to distinguish the zero density slice from the finite density area, since in the former the D5-brane configurations are Minkowski embeddings, while in the latter they are BH embeddings. The fluctuations In this section we review how to introduce the fluctuations for the D3/D5 system and we show their equations of motion. We will do this by deriving the effective action for the fluctuation fields [27,28]. At first, the effective action and its equations of motion will be constructed for a generic setup of the D3/D5 system, and we will eventually specialize it to the case of interest. The effective action for the fluctuations and the open string metric As we discussed in the previous section, in the low-energy limit the dynamics of the D3/D5 system is encoded in the DBI action shown in eq. (2.2). We use the static gauge, where the embedding functions $X^\mu$ are split into two groups: the worldvolume coordinates $X^a = \sigma^a$, $a = 0, 1, \ldots, 5$, and the transverse scalars $Z^I$. Exploiting the absence of mixed terms $G_{aI}$ in the background $AdS_5 \times S^5$ metric (2.1), we can simply write the pull-back metric tensor $g_{ab}$ as in equation (3.2). The embedding functions $Z^I$ and the gauge fields $A_a$ can be written as sums of background terms and small perturbations, where $\epsilon$ is just a small constant parameter controlling the perturbative expansion. The background functions $Z^{(0)I}$ and $A^{(0)}_a$ are those of the configurations discussed in section 2. The strategy to build the effective action for the fluctuations is to expand the Lagrangian density up to second order in $\epsilon$. We start by considering the expansion of the pull-back metric (3.2) and of the field strength $F_{ab}$. In this way we obtain explicit expressions for the terms in the expansion (3.4) of the Lagrangian, through equation (3.9). Clearly, in order to obtain the effective action for the fluctuations, the quantity we are interested in is just $L_2$. Now we want to express this Lagrangian in terms of the so-called open string metric, $s_{ab}$, which represents the effective geometry seen by open strings in the presence of external fields [29,30]. The inverse open string metric $s^{ab}$ ($s^{ab}s_{bc} = \delta^a_c$) can be defined as the symmetric part of the inverse matrix $\gamma^{ab}$, with $s^{ab} = s^{ba}$ and $\theta^{ab} = -\theta^{ba}$. This relation can be inverted, and it can be shown that equation (3.11) provides the definition of the open string metric as a combination of the pull-back metric and the gauge fields. With our choice for the D5-brane embedding (see table 1), the worldvolume coordinates are $\sigma^a = (t, x, y, r, \theta_1, \varphi_1)$.
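The split of $\gamma^{ab}$ into $s^{ab}$ and $\theta^{ab}$ and the inversion in (3.11) are plain linear algebra and can be illustrated numerically. The sketch below builds $\gamma_{ab} = g_{ab} + F_{ab}$ from placeholder matrices (any $2\pi\alpha'$ normalization absorbed into F), inverts it, and reads off $s^{ab}$, $\theta^{ab}$ and the open string metric $s_{ab}$:

```python
import numpy as np

# placeholder 4x4 induced metric (diagonal, mostly-plus) and field strength
g = np.diag([-1.0, 1.0, 1.0, 1.0])
F = np.zeros((4, 4))
F[0, 1], F[1, 0] = 0.3, -0.3      # electric field component
F[1, 2], F[2, 1] = 0.5, -0.5      # magnetic field component

gamma = g + F                      # gamma_ab = g_ab + F_ab (normalization absorbed into F)
gamma_inv = np.linalg.inv(gamma)

s_up = 0.5 * (gamma_inv + gamma_inv.T)    # inverse open string metric s^{ab}
theta = 0.5 * (gamma_inv - gamma_inv.T)   # antisymmetric part theta^{ab}
s_down = np.linalg.inv(s_up)              # open string metric s_{ab}

# sanity checks: s^{ab} s_{bc} = delta^a_c and theta^{ab} is antisymmetric
assert np.allclose(s_up @ s_down, np.eye(4))
assert np.allclose(theta, -theta.T)
print("open string metric s_ab:\n", np.round(s_down, 4))
```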
However, we will not consider fluctuations along the $S^2$ wrapped by the D5-branes. This means in particular that $A^{(1)}_{\theta_1} = A^{(1)}_{\varphi_1} = 0$, and thus the indices of the gauge field fluctuations effectively run only over (t, x, y, r). Given that, and using eq. (3.8) and eq. (3.11), we can write the effective action $S_{\rm eff} \sim L_2$ as in equation (3.13), where the Levi-Civita symbol is defined by $\epsilon^{txyr\theta_1\varphi_1} = -\epsilon_{txyr\theta_1\varphi_1} = 1$. Note that the last term of $S_{\rm eff}$ is a topological term, which appears only if there are two non-vanishing components of $\theta^{ab}$ with all different indices in the subset a, b = (t, x, y, r). We can now plug the pull-back metric components (3.6) into the effective action in order to write it as a sum of kinetic terms, mass terms and interaction terms for the fluctuating scalar and gauge fields, where the coefficients P, Q, R, S, T of equation (3.15) are built from the index combinations $(s^{ad}s^{bc} - s^{ac}s^{bd} + s^{ab}s^{cd} + \theta^{ad}\theta^{bc} - \theta^{ab}\theta^{cd})$, $(s^{cd}\theta^{ab} - s^{bd}\theta^{ac} + s^{ad}\theta^{bc} - s^{ac}\theta^{bd} + s^{ab}\theta^{cd})$ and $(s^{cd}\theta^{ab} - 2s^{bc}\theta^{ad} + 2s^{ac}\theta^{bd})$. From the Lagrangian (3.14) we obtain the general equations of motion for both the embedding functions and the gauge fields. The coefficients (3.15) are very complicated in general; however, when we specialize them to the case under consideration, many simplifications occur. First of all, since we consider background solutions with a non-trivial transverse profile of the D5-branes only along ψ, we also consider only one scalar perturbation field along the same direction, i.e. $Z^{(1)I} = \psi^{(1)}\delta^{I\psi}$. With this assumption, and using the background specification of section 2, the non-vanishing components of the coefficients (3.15) are just those given in equation (3.18). To simplify the notation, we denote the background profile $\psi^{(0)}$ simply as ψ. The conductivities In this section we show the results for the Ohm and Hall conductivities obtained from the holographic D3/probe-D5 model introduced in section 2. Notice that the DC conductivities are already known, since by definition they can simply be calculated from the currents $j_x$ and $j_y$, equation (4.1). Actually, using the currents determined in section 2.2, what we obtain is the full non-linear DC conductivity tensor. In this section we will instead focus on linear response theory, which allows us to derive the frequency-dependent conductivities. As a first step we solve the equations for the gauge field fluctuations $A_a$ with the (zero momentum) ansatz (4.2). We also fix the gauge by choosing $a_r = 0$. Then the conductivities $\sigma_{ij}$ are obtained through the Kubo formula,
Thus, even though we considered a zero temperature background, the presence of the electric field induces an effective thermal heat bath. JHEP12(2018)109 Although the theta tensor (4.7) has apparently enough non zero components to give rise to a topological term, it turns out that when these components are plugged into (3.13) they yield Q = 0. Thus the effective action for the fluctuations is just given by the Maxwell action. Nevertheless, due to the form of the open string metric (4.6), without vanishing components in the 4-dimensional (t, x, y, r) sub-manifold (unless for zero density), the equations of motion for the gauge fluctuations are still quite complicated. We can simplify them slightly by making a change of coordinates that kills the mixed radial components s tr , s xr , s yr of the open string metric, in such a way that the latter becomes 7 (4.10) Now we have all the ingredients to write down the equations of motion for the gauge fields fluctuations A (1) a using the ansatz (4.2) and the gauge choice a r = 0. The a t component can be easily decoupled and one is left just with the equations of motion for a x and a y . In the near-horizon limit, r → r s , both these equations become Performing a Frobenius expansion we get that the correct behavior near the open string metric horizon is where T eff is the effective temperature in eq. (4.9). Therefore, near the singular shell we can write the solution as (4.13) where the first term takes into account the right infalling behavior near the singular shell while χ is a regular function which can be expanded analytically in powers of (r − r s ). In particular we can express the near-singular shell shape of χ i as follows where the coefficients c x 1 , c x 2 , . . . , c y 1 , c y 2 , . . . can be easily determined as functions of ω, r s , ρ and of the two moduli c x 0 , c y 0 . 7 The change of coordinates is of the form dt → dt + ftr(r) dr , dx → dx + fxr(r) dr , dy → dy + fyr(r) dr , with the three f functions fixed in such a way to get rid of the mixed radial components of the open string metric. It turns out that this change of coordinates does not affect the computation of the conductivities since the behavior of the f functions near the boundary is of order O(r 4 ). Thus we can safely proceed with this transformed metric. JHEP12(2018)109 DC conductivities. The DC conductivities can be easily extracted from the equations of motion in an analytical way, since they only require the knowledge of the solution for the fluctuations up to the linear order in ω in the small frequency limit. Then the strategy is to expand the functions χ x and χ y in powers of the frequency ω as follows The solutions for the χ (k) i functions can be obtained analytically. Imposing regularity at the singular shell and using the holographic formula (4.4) at the leading order in ω we found the following results for the DC conductivities (4.16) It is straightforward to check that these conductivities are in perfect agreement with the expressions of the currents in eq. (2.14). Indeed we can extract the linear conductivities from the latter as follows: we add a small perturbation ε along the j = x, y direction to the background electric field, E → E + εĵ (ĵ unit vector pointing along j) and then, according to eq. (4.1), we read the conductivities σ ij as the coefficients of the linear term in ε of the current j i . AC conductivities. 
In order to compute the full frequency dependent conductivity we have to solve the equations for the fluctuations and then plug the solutions in the formula (4.4). Though linear, these equations cannot be solved analytically so we used a numerical technique. 8 The boundary conditions of the differential equations are fixed at the singular shell using (4.13) and (4.14). In the following we show some plots of the conductivities computed with our model in the E > B sector. Without loss of generality, the magnetic field B has been set to 1 in all plots. Figure 3 shows only the real part of the conductivities, since the imaginary one can be straightforwardly determined by means of the Kramers-Kronig relation, relating the real and the imaginary parts of the retarded Green's function as follows where P denotes the principal value of the integral. Nevertheless in figure 4 we show some examples of Im[σ xx (ω)] for completeness. From figure 4 we observe that the imaginary part of the conductivities goes to zero not only in the high frequency limit, but also in the low frequency one, ω → 0. This is also true in general for all the other cases. Looking at the plots in figure 3 we can immediately note that all the real parts of the conductivities go to a constant in the high frequency limit ω → ∞, i.e. This is a standard behavior of the (2+1)-dimensional systems and it is consistent since the conductivities are dimensionless in this case. It can be easily checked that the low frequency trend of the conductivities is consistent with the DC values determined by the Karch-O'Bannon method. Even if the brane system is translationally invariant, we do not see the presence of the delta function Drude peak, as it can be argued from the imaginary part of the conductivities, which in fact goes to zero in the low frequency limit. 9 Furthermore we recall that the analytical property of the sum rule implies that the following relation must hold if there is no delta function peak [31]. We checked that this relation is fulfilled for all the longitudinal and transverse conductivities we have considered. As it is well known, the reason why we do not see the delta function Drude peak is that we are studying the system in the probe approximation limit which introduces dissipation without breaking the translational invariance: in this regime the gluon sector of the 3 + 1 dimensional Super Yang-Mills theory plays the role of the lattice in solid state physics and, due to its large density, it can absorb a large amount of momentum standing basically still [20]. Note that as the ratio E/B → 1 the value of the DC conductivity grows and consequently we see the emergence of a Drude-like peak at small frequency. This peak eventually becomes a delta function, δ(ω), when E = B and the effective temperature felt by open strings becomes zero. This delta function is not related to momentum conservation, but to an additional conserved quantity that probe branes have at zero temperature only, the charge current operator [32]. When the (effective) temperature is small but finite the weak non-conservation of this current is then responsible for the appearance in the conductivity of the Drude-like peaks we observed. The peaks are indeed associated to poles in the Green's function, whose presence is a general feature in low energy effective theories with approximately conserved operators (quasihydrodynamics), as pointed out recently in [33]. 
Very similar results for the AC conductivity have been found in [22], considering non-vanishing temperature, charge density and electric field in the context of a neutral Lifshitz-invariant theory, and in [23], studying rotating Dp-branes at zero temperature, which induce an effective horizon. In both of these papers the authors found the same standard behavior at large frequencies and a finite peak in the limit ω → 0. Here we managed to obtain very similar physics in the context of the D3/probe-D5-brane system. This is possible because, although we are considering zero temperature, there is an effective temperature induced by the singular shell, which plays the role of a horizon. Conductivities for E < B When E < B the open string metric takes a different form. It has no finite-radius horizon, and indeed we know that the singular shell is located at the Poincaré horizon. The antisymmetric tensor $\theta^{ab}$ changes accordingly. Differently from the previous case, now the $\theta^{ab}$ tensor gives rise to a non-vanishing topological term, $Q \neq 0$, in the effective action for the fluctuations. In this regime the D3/D5 system has two stable phases, the chirally symmetric and the chirally broken one (see figure 2). So we have to further distinguish between these two cases. Symmetric phase (ρ > √7 B). When the charge density ρ is above the threshold value $\sqrt{7}B$, the system is still in the chirally symmetric phase, just as for E > B. Thus, also in this case, the gauge fluctuations decouple from the scalar ones (the coefficients of the effective action (3.14) are those shown in eq. (4.5)). The effective action we need to consider in order to study the gauge fluctuations then follows, and the equations for the $a_x$ and $a_y$ fluctuations in the r → ∞ limit lead to solutions of the form given in eq. (4.25), where again the $\chi_i$ admit a power series expansion near the Poincaré horizon. We will use the form of the solutions given in eq. (4.25) in order to fix the boundary conditions in the numerical integration of the equations of motion. DC conductivities. Also in this case the zero-frequency results for the conductivities can be extracted analytically; they are again consistent with the expressions for the currents (2.15). AC conductivities. In the following we show some plots of the conductivities for E < B and ρ > √7 B. Without loss of generality, the magnetic field B has been set to 1 in all plots. Looking at the plots in figure 5, we immediately recognize for the real parts of the longitudinal conductivities the same standard large frequency behavior as in the E > B case, i.e. they are constant in the limit ω → ∞. In the low energy limit, instead, they vanish, as they should in order to be consistent with the DC conductivities. At
We have very similar plots for Im[σ yy ], while the imaginary parts of the transverse conductivities vanish for every frequency. This is consistent with the fact that their real parts are just constant. From the plots in figure 5 we notice that when the electric field is small (e.g. E = 0.1) the Ohm conductivities σ xx and σ yy are almost equal, while they become clearly different for higher values of the electric field (e.g. E = 0.8). This is consistent with the fact that the background electric field E is what actually breaks the rotational symmetry on the 2-dimensional semimetal sheet. Non symmetric phase (ρ < √ 7 B). As we see from the phase diagram in figure 2, when E < B and ρ < √ 7 B the system is in the chirally broken phase. Therefore in this case we have to deal with background worldvolume configurations for the probe D5 with non-trivial profile along ψ. These can be determined by solving (numerically) the equations of motion of the DBI action (2.2). The fact that ψ is not constant makes the computations much more involved. Indeed, when ψ = π/2 the gauge sector does not decouple anymore from the scalar one, as we can see from the action for the fluctuations (3.14) along with (3.18). We have then to consider the whole action with scalar fluctuations ψ (1) (r) only along ψ, which, to simplify the notation, we denote simply as Ψ. Then the effective action for the fluctuations S eff JHEP12(2018)109 assumes the following expression When the charge density is less than √ 7 B but finite the D5-branes have black hole embeddings, namely they do reach the Poincaré horizon. From the r → ∞ limit of the equations of motion derived from this action, we find the following behavior for the gauge and scalar fluctuations near the Poincaré horizon where the functions χ i (r) and χ Ψ (r) admit analytical expansions near the Poincaré horizon. We shall use these expansions to fix the boundary conditions in the numerical integration of the equations of motion. When the charge density vanishes the D5-branes configurations are Minkowski embeddings. In this case the boundary conditions for the fluctuation fields have to be fixed at the point where the D5-brane worldvolume pinches off. AC conductivities. In the following we show some plots of the conductivities for E < B and ρ < √ 7 B. Without loss of generality, the magnetic field B has been set to 1 in all plots. We start with the case of finite charge density, i.e. 0 < ρ < √ 7 B. The behavior of the real part of the longitudinal conductivities for small and large frequencies is the same as for the symmetric phase (E < B) case. Indeed, they again approach to a constant in the high frequency limit and they go to zero as ω → 0 consistently with the vanishing DC conductivities. At intermediate frequencies we notice the presence of some peaks, which become narrower and higher as the charge density goes to zero. For the real parts of the transverse conductivities we observe instead a different behavior with respect to the one seen for the symmetric phase. Indeed in this case they are not just trivially constant, but they vary with the frequency. They start from the DC value, have extremal points for intermediate frequencies and become constant in the high frequency limit. Again the smaller the charge density the higher are the amplitudes of the peaks. For completeness figure 8 shows some examples of the imaginary part of the longitudinal σ xx conductivity. From these plots we observe the same behavior of the symmetric E < B case for these imaginary parts. 
The same happens for the other longitudinal conductivity σ yy . For the transverse conductivities we have a different situation with respect to the symmetric case, as it happens for the real parts. Indeed, the imaginary parts are not zero, but they vary with the frequency in a way very similar to the longitudinal case. Also in this case we see that as the value of the background electric field approaches zero the system tends to recover the 2-dimensional rotational symmetry: indeed for small E we have σ xx σ yy and σ xy −σ yx . At zero charge density, where the background configurations for the D5-branes are Minkowski embeddings, all the real parts of the conductivities identically vanish, except for the presence of delta function peaks in the longitudinal conductivities, that can be identified looking at their imaginary parts. In figure 9 we show, as examples, two plots of the imaginary part of the longitudinal conductivity. Note that when ρ = 0, σ xx = σ yy and that the Hall conductivities are identically zero, so the conductivity matrix still has a rotational symmetry, even in presence of the electric field. JHEP12(2018)109 The behavior of the conductivities in figure 9 confirms our previous observation that the peaks in the real part of the conductivities tend to become delta functions in the zero charge density limit. Discussion We used the D3/probe-D5-brane system as a top-down holographic model for a Dirac semimetal like graphene. In particular, we considered the system at finite charge density ρ and in the presence of mutually orthogonal electric and magnetic fields at zero temperature. The phase diagram, depicted in figure 2, shows two stable phases for the system: the one with broken chiral symmetry, favored when E < B and ρ < √ 7 B, and the chirally symmetric one, favored in the remainder part of the phase space. Studying the fluctuations JHEP12(2018)109 around stable background configurations we were able to compute the AC conductivity matrices for the system. All the conductivities derived in our model have the expected behavior in the small and high frequency regimes. Indeed in the ω → 0 limit we recover exactly the DC values that can be obtained using the Karch-O'Bannon method to fix the currents. In the high frequency limit the real part of the conductivities goes to a constant; this is a standard behavior of any (2+1)-dimensional systems where the conductivity is dimensionless. The imaginary parts go to zero both in the low and high frequency limits. When E > B the system is in the metallic phase. The real part of the conductivities stays finite in the low frequency regime: this is evident from the plots in figure 3, and it is also confirmed by the vanishing of the imaginary parts of the conductivities at zero frequency, since a delta function peak would appear as a divergence in the imaginary part. Moreover, it is worth noticing that as E → B, a Drude-like peak does emerge at small frequencies. This is the expected behavior in probe brane systems [22,23,34] and it is due to the fact that in this limit the effective temperature felt by open string excitations tends to zero and the system approximately recovers the conservation of the charge current operator [32]. It is possible to compare the behavior of the conductivities we found with some experimental measures performed on graphene or similar materials. 
For example, it is found that for high quality graphene on silicon dioxide substrates, the AC conductivity in the THz frequency range is well described by a classical Drude model [35]. The assumptions behind this model are that there must be an electric field E which accelerates the charge carriers and that the scattering events are instantaneous and isotropic. Under these hypotheses, the conductivity of high quality graphene should look as in figure 13 (c-e) of [35]. Very similar experimental results have also been found in [36] (see figure 2 reported there). This experimental picture is compatible with what we found for the AC conductivity in the case E > B. In most of our plots, the similarity with the Drude model and with the experimental measurements is striking (even if we start to see deviations when the charge is high). In the E < B, ρ > √ 7 B case we obtained a trivial frequency dependence for the transverse AC conductivities, which therefore are fixed only by Lorentz invariance [24,37]. In the chirally broken phase, namely when E < B, ρ < √ 7 B, a peculiar and particularly interesting behavior in the conductivity does emerge. From the plots in figure 7 we clearly notice the presence of some peaks in the conductivity which become sharper as the charge density decreases and eventually turn into delta functions when ρ is exactly zero. These peaks can be interpreted as resonances that appear when the system is (almost) neutral and that are otherwise concealed by the presence of the charge density. These resonances are related to the chiral condensates, which indeed are present only in this region of the phase space. It would be worthwhile to investigate further whether this interpretation is correct. If it is, the presence of the peaks would be a remarkable outcome of our model, since it would show that the effects of the chiral condensates can be observed in the optical conductivities.
10,840.2
2018-12-01T00:00:00.000
[ "Physics" ]
Mathematical modelling and deep learning algorithms to automate assessment of single and digitally multiplexed immunohistochemical stains in tumoural stroma Whilst automated analysis of immunostains in pathology research has focused predominantly on the epithelial compartment, automated analysis of stains in the stromal compartment is challenging and therefore requires time-consuming pathological input and guidance to adjust to tissue morphometry as perceived by pathologists. This study aimed to develop a robust method to automate stromal stain analyses using 2 of the commonest stromal stains (SMA and desmin) employed in clinical pathology practice as examples. An effective computational method capable of automatically assessing and quantifying tumour-associated stromal stains was developed and applied on cores of colorectal cancer tissue microarrays. The methodology combines both mathematical models and deep learning techniques, with the former requiring no training data and the latter requiring as much training data as possible. The novel mathematical model was used to produce a digital double-marker overlay allowing for fast automated digital multiplex analysis of stromal stains. The results show that deep learning methodologies in combination with mathematical modelling allow for an accurate means of quantifying stromal stains whilst also opening up new possibilities for digital multiplex analyses. Introduction The rise of digital pathology and image analysis in recent years has opened up the possibility of semi-automatic and automatic methods being developed, allowing for relevant immunostains to be detected and to inform treatment, diagnosis, etc. 1,2 Use of both mathematical modelling 3,4 (using methods such as variational segmentation and clustering) and deep learning 5,6 (using convolutional neural networks (CNNs)) can provide effective pipelines for assessing stains and segmenting regions of interest in microscopy images. The 2 methods are often studied and applied independently due to large differences in how they operate. Variational methods (mathematical models) typically segment images using models which specify pixels/regions of interest based on analytically defined criteria, for example: intensity, shape, smoothness of the region, etc. Though offering an explainable, robust framework for segmentation, histological images often need inhomogeneous, irregular regions segmented, and thus applications have been limited. 3,4 The introduction of deep learning methods in recent years has helped to improve automated histopathology image analysis beyond previous methods. Deep learning methods have been demonstrated to be more effective than classic machine learning methods in segmenting histological images, 7,8 and clustering methods have shown potential to separate tissue micro-environment components like immune cells and cancer-associated fibroblasts. 9
While the predominant focus of published literature is on epithelial immunostain evaluation, 3,8,9 tumour-associated stromal stain analyses are challenging, as the compartment is morphologically complex, including muscles, vessels (small and large), and acellular components which may be stained alongside the stromal cells. As the stromal cells are mainly spindle shaped, there is inherent variability in their size which adds to the complexity of assessment. Accurately assessing a stromal stain expression pattern is therefore difficult; yet, given the importance of tumour-associated stroma in terms of the functional biology of tumours, 10 more and more stromal stains will need to be accurately assessed. This current study, conceived as a methodology development initiative, addressed both mathematical (variational) models and deep learning (artificial intelligence)-based automated assessment of stromal stains. Alpha-smooth muscle actin (SMA) and desmin, being the 2 most common stromal immunohistochemical stains currently used in a clinical setting, were chosen as exemplars. They signify cellular stromal content, and pathologists evaluate skeletal muscle and smooth muscle differentiation, respectively, from these 2 stains. Also, functionally, these markers have been found to be expressed in colorectal cancer-associated fibroblasts 11 or desmoplastic tumoural stroma. 12 Tissue microarrays (TMAs) are powerful economising tools that allow for the study of multiple tissue samples simultaneously, and it is therefore no surprise that they have been incorporated into digital pathology methodologies, including assessment strategies for prognostication. 13,14 In the current study, TMAs stained with SMA and desmin were assessed singly and with dual digital overlay as proof of principle that stromal stains may be efficiently assessed using such techniques on tissue cores. Colorectal cancer (CRC) tissue was chosen as an exemplar, as it poses a significant health burden, being a leading cause of mortality throughout the world, with 1.9 million new cases diagnosed each year and a 5-year survival rate of 50%. 15 The principal cause of death in patients with CRC is metastasis to the liver or lungs, occurring in 25% of patients at diagnosis. 16 It has been shown that the tumour stroma plays a vital role in the process of epithelial-mesenchymal transition (EMT), a crucial process in invasion and metastasis of CRC. Tumours with high stromal content have been shown to be associated with poor prognosis, and stromal stains such as SMA have previously shown their ability to detect cancer-associated fibroblasts (CAFs). 17 Stromal immunostains may therefore have potential utility in determining patient outcome. This paper therefore addresses the need for novel methods for the automated detection and quantification of stromal stains to serve as an adjunct tool for helping pathologists with their assessments.
Tissue cores and staining Anonymised tissue cores (in the form of a TMA) were provided from a random cohort of colorectal cancer cases from Nottingham University Hospitals NHS Trust [Ethics approved by Health Research Authority, East Midlands - Leicester Central Research Ethics Committee, REC reference: 23/EM/0079; IRAS project ID: 313393]. The tumour cores were selected for each case from 3 different tumoural areas: luminal, central, and peripheral, to account for tumoural heterogeneity. These cores were stained with clinical grade antibodies SMA and desmin and counterstained by haematoxylin [Ventana Benchmark Ultra]. Digital images of SMA- and desmin-stained TMA cores were obtained using a DP200 scanner (Roche) (×40 magnification). Digital images of colorectal tissue cores from the Human Protein Atlas (proteinatlas.org) 18,19 stained with vimentin and SMA were also assessed. In total, 6 cores with vimentin and 12 cores with SMA were available. Manual annotations To inform both methodologies, 113 cores were manually annotated using the hand-guided "Wand-Tool" in QuPath under consultant pathologist guidance. All cores were annotated including their stromal compartments, with care taken to highlight large blood vessels and muscularis if present. This was done as the SMA and desmin stains were specifically assessed in the stroma, excluding confounding staining in these anatomical structures. These annotations formed the basis of both methodologies tested. Manual assessment of stain Where manual assessment of stain was performed, histopathologists used an eye estimate of the percentage of stromal area stained with SMA and desmin (mentally accounting for the area of muscle or large vessels); a stain intensity of 0, 1, and 2 was allocated on manual judgement. An H-score was produced, and a median cutoff was used to classify cores as "high" or "low" stromal staining. Chi-squared analysis was used to assess significant correlations between manual and automated assessment, as outlined later. Adjusted residuals indicate where the significance in the tables arises from. Stroma region detection by deep learning The detection of the stromal region was done using deep learning methods, as manual annotations were available. Stain segmentation was not achievable via deep learning due to the lack of such training data. Deep learning method A U-Net model 20 (shown in Fig. 1) was applied for the stromal region segmentation task. Each tissue core was an RGB image, which was pre-processed and fed into the U-Net. The final layer of the network was a softmax function, with the model's output being a predicted mask map U_k, k = 0, 1, …, K, where K is the number of classes. In this case, K = 3 and the 3 classes were "background", "stroma", and "muscle". These classes account for acellular stroma, cellular stroma, and muscularis propria/muscle in vessel walls. The extra class (i.e., muscle) was included in consideration of the high visual similarity between stroma and muscle regions. The loss function was the commonly used cross-entropy loss. For pre-processing, the images of cores cropped from 40× magnification at various sizes were first downsized to 256×256 for both training and testing. Random rotation and flipping operations were adopted on images for data augmentation during training.
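The training setup described above can be sketched as follows. This is a minimal illustration, assuming a standard U-Net implementation named `UNet` and a data loader `loader` yielding 256×256 cores with integer label masks; the optimiser and learning rate are illustrative choices, not the authors' reported configuration.

```python
import random
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

NUM_CLASSES = 3  # background, stroma, muscle

def augment(image, mask):
    # random 90-degree rotation and horizontal flip, applied identically
    # to the image and its label mask
    k = random.randint(0, 3)
    image = torch.rot90(image, k, dims=(-2, -1))
    mask = torch.rot90(mask, k, dims=(-2, -1))
    if random.random() < 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    return image, mask

model = UNet(in_channels=3, num_classes=NUM_CLASSES)  # assumed implementation
criterion = nn.CrossEntropyLoss()   # cross-entropy loss, as in the text
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative

for images, masks in loader:        # cores already downsized to 256x256
    images, masks = augment(images, masks)
    logits = model(images)          # (B, 3, 256, 256)
    loss = criterion(logits, masks) # softmax is folded into the loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```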
Stain detection by mathematical modelling For the task of stain segmentation, an unsupervised variational method was used, as the exhaustive manual annotation of the stained region necessary for a deep learning method was unfeasible. The adopted method is the region-based convex relaxation variant of the Mumford-Shah method, as proposed and studied 21 for the segmentation of multi-channel images. Denote a d-channel colour image as f = (f_1, …, f_d) with colour channels f_i : Ω → R for i = 1, …, d, where Ω ⊂ R² is the image domain. The method utilises 3 stages: the convex relaxed Mumford-Shah model in the first stage, lifting the image into a larger image space by combining another colour representation in the second stage, and a thresholding strategy to obtain the segmentation in the final stage. The first stage was achieved by minimising the corresponding functional, as in Xiaohao et al., 21 for each channel f_i separately to achieve a smooth image u = (u_1, …, u_d). For TMA cores, images are given as RGB and so d = 3. In the second stage, dimension lifting was performed by using the Lab colour space. The 3 channels in the Lab colour space are: perceived lightness (L), green-red colours (a), and blue-yellow colours (b). The Lab space was designed so that a numerical change is proportional to a similar perceived change in colour. It is noted in George 22 that the Lab colour space is better suited than RGB for certain challenging image segmentation tasks. For the particular case of stain detection in cores stained by SMA and desmin, the stains are coloured brown, and in particular, the b channel in the Lab space segments brown colours well. The dimension lifting was done simply by concatenating the RGB channels of the restored image u with the transformed image in the Lab space. Let u′ = (u′_1, u′_2, u′_3) be the Lab transform of the RGB image u. After concatenation, the vector-valued image u* with d = 6 is used in the third stage to achieve stain segmentation by thresholding. Fig. 2 shows a typical image of an SMA-stained tumour core and each of the 6 channels utilised. The third stage achieves segmentation by thresholding. Threshold values can be set manually or found automatically using the k-means algorithm. K-means is an unsupervised clustering method which partitions a set of pixels into K clusters based on their intensities. Similarly coloured pixels will be grouped into the same cluster. For this particular application of stain detection, we addressed the challenges arising from the tasks of SMA and desmin stain detection. Threshold values were determined for both cores separately, and in addition, a method was developed which grades the intensity of the SMA staining based on a heatmap. Heatmaps were generated using the intensity of certain channels of u*, the output from the variational model. Finally, in the mathematical modelling, an image registration method is applied which aligns an SMA image with its desmin counterpart, allowing for the region of double staining to be identified.
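A minimal sketch of the dimension-lifting and clustering stages described above is given below, with the variational smoothing stage omitted for brevity (the raw core image stands in for the smoothed image u). The file name is a placeholder.

```python
import numpy as np
from skimage import io, color
from sklearn.cluster import KMeans

u = io.imread("core.png")[..., :3] / 255.0  # placeholder path; RGB in [0, 1]
u_lab = color.rgb2lab(u)                    # Lab transform: channels L, a, b

# dimension lifting: concatenate RGB and Lab into a 6-channel image u*
u_star = np.concatenate([u, u_lab], axis=-1)

# third stage: k-means on the 6-vector pixels (K = 3, as in the text)
pixels = u_star.reshape(-1, 6)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(pixels)
labels = labels.reshape(u.shape[:2])
```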
SMA stain detection While k-means is a popular choice for clustering an image, segmenting the SMA stain using k-means alone is not sufficient: objects of a similar intensity were identified in the same cluster as the brown stain (such as muscle, blood vessels, and fibroblasts). Changing the number of clusters in the k-means algorithm hinders performance further, as the staining and the similarly coloured unwanted objects were not distinct enough to warrant a new cluster. Therefore, segmentation of the stain only is achieved using 2 steps: an initial k-means run on the whole 6-vector image, followed by a second k-means run on just the b channel in the Lab space. First, the k-means algorithm is performed on the whole image u*, so that the image domain is split into 3 regions, Ω = Ω_1 ∪ Ω_2 ∪ Ω_3. The final cluster, Ω_3, is a good approximation of the cellular stroma, and as such is later used in the results to provide ratios of stained cellular stroma. However, to achieve segmentation of the stain only, Ω_3 is further refined using the k-means algorithm again with 3 clusters restricted to the domain Ω_3, but the image used is the b channel only of the 6-vector image, i.e. u′_3. The resulting cluster containing the stain only is defined as Ω_SMA. A second run of k-means on the b channel is effective at partitioning miscellaneous objects from the stain. To ensure an automatic method, both Ω_3 and Ω_SMA are taken as the clusters from the respective run of k-means with the largest value in the b channel. Desmin stain detection For desmin-stained images, the process was slightly different, as the desmin staining is sparse to absent in the cores. The steps included running the variational model (2.5), lifting into Lab space, and running k-means on the 6-vector image u*. However, the refinement of the k-means result was done by simple thresholding, unlike in the previous SMA case, where k-means was run twice. The k-means algorithm assumes that the sizes of the clusters are roughly similar, and so k-means would not effectively separate the stain from other objects, as the stain is not large enough to form a distinct cluster. Moreover, unlike SMA images, the stain in desmin images is rather distinct, and so a simple operation like a predefined threshold value is effective. Therefore, the domain containing desmin stain only, Ω_Desmin, is found by thresholding the image in the domain Ω_3. Further detail can be found in the results section, in which further development of the model is discussed. Grading SMA stains As well as assessing the spatial stain distribution, the strength of the stain was also assessed based on the intensity value of the pixels in the segmented stromal area. In SMA images, a darker colour implies a stronger stain while a lighter colour is representative of a fainter stain. A method was developed to classify stains into 3 grades by thresholding a heatmap. To construct the heatmap, the output of the variational model in Lab space was used, but scaled by a factor of 10 in the a and b channels to obtain a vector-valued image w = (u′_1, 10u′_2, 10u′_3). These 2 channels show a distinction between different shades of brown, allowing for the differentiation of intensity values. Then, the original Lab image u′ = (u′_1, u′_2, u′_3) and the scaled Lab image w are used as input to the MATLAB function imcolordiff, which calculates the colour difference between images. The output of the function is a heatmap. Most SMA images produced a heatmap with an approximate maximum value of 1.25, though in some cases a maximum of 1.4 was noted. Classifying the different grades was done by selecting thresholds applied to the heatmap on the domain Ω_SMA; denoting the 3 grades as Ω_Gi, i = 1, 2, 3, each grade was defined as the set of pixels whose heatmap values fall within the corresponding threshold band. A sketch of the two-stage k-means refinement and the desmin thresholding is given below.
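Continuing from the sketch above (reusing `u_star` and `labels`), the following illustrates the two-stage k-means refinement for SMA and the simple b-channel thresholding for desmin. Cluster selection by largest mean b-channel value follows the text; the desmin threshold assumes b has been rescaled to the units in which ρ₀ = 0.65 is quoted.

```python
import numpy as np
from sklearn.cluster import KMeans

b = u_star[..., 5]  # b channel of the Lab space (u'_3 in the text)

# Omega_3: first-stage cluster with the largest mean b value
omega3_id = max(range(3), key=lambda k: b[labels == k].mean())
omega3 = labels == omega3_id

# SMA: second k-means run, restricted to Omega_3, on the b channel only
b_vals = b[omega3].reshape(-1, 1)
sub = KMeans(n_clusters=3, n_init=10).fit_predict(b_vals)
sma_id = max(range(3), key=lambda k: b_vals[sub == k].mean())
omega_sma = np.zeros_like(omega3)
omega_sma[omega3] = sub == sma_id   # stained pixels only

# desmin: simple thresholding of b over Omega_3 (rho_0 = 0.65 per the text,
# under the rescaling assumption noted above)
omega_desmin = omega3 & (b > 0.65)
```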
Alignment of SMA and desmin cores by image registration With both SMA and desmin cores segmented, the level of double-biomarker-positive stroma (i.e., the region of the stroma that is stained by both biomarkers) can be assessed. Simple overlaying is not sufficient, as the SMA and desmin images are not usually aligned. Image registration methods are used to align 2 images. The aim of image registration is to find a deformable transformation y(x) : R² → R² which maps an image T to a fixed image R, with T and R defined on Ω ⊂ R², such that T(y(x)) ≈ R(x). The transformation is usually written as y(x) = x + φ(x), where φ(x) = (φ_1(x), φ_2(x)) is the displacement vector field. In the case of mapping SMA images to desmin images, the core aim is to map the segmented stain from the SMA image using an appropriate map, such that the mapped SMA stain is aligned with the segmented desmin stain. This allows for the comparison of regions where staining is positive in both images. To do this, a variational registration model is implemented which combines a regularisation term on the displacement field with the Mutual Information (MI) similarity measure of Pluim et al., 23 defined as MI(T, R) = Σ_{t,r} p_{T,R}(t, r) log( p_{T,R}(t, r) / (p_T(t) p_R(r)) ), where p_T and p_R are the probability distribution functions (PDFs) of grey values in T and R, and p_{T,R} is the joint PDF of grey values. After finding a transformation from the SMA image to the desmin image, assessing the regions of double staining is simple. The transformation is applied to the SMA stain segmentation, and the double-stained region is determined by the intersection of the transformed segmented SMA stain and the segmented desmin stain.
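A minimal sketch of the MI similarity measure above, estimated from a joint grey-value histogram, follows; the registration model itself (the regulariser and optimiser) is omitted, and the bin count is an illustrative choice.

```python
import numpy as np

def mutual_information(t, r, bins=64):
    """MI(T, R) = sum over bins of p_TR * log(p_TR / (p_T * p_R))."""
    joint, _, _ = np.histogram2d(t.ravel(), r.ravel(), bins=bins)
    p_tr = joint / joint.sum()             # joint PDF of grey values
    p_t = p_tr.sum(axis=1, keepdims=True)  # marginal PDF of T
    p_r = p_tr.sum(axis=0, keepdims=True)  # marginal PDF of R
    nz = p_tr > 0                          # avoid log(0)
    return float((p_tr[nz] * np.log(p_tr[nz] / (p_t @ p_r)[nz])).sum())
```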
Results Combining outputs from both the mathematical model (MM) and the deep learning method (DLM) leads to interesting results, based on their ability to segment out stroma from epithelium (DLM), as well as their ability to identify positive stromal cells (MM) and remove large vessels or muscularis components from the stromal compartment assessment (DLM). In order to identify and segment the stromal region in each TMA core, the DLM alone is utilised, owing to the availability of training data for stromal regions and the MM's inability to differentiate between cells. The regions within the stromal compartment which had taken up the SMA stain were then identified using the MM, owing to the DLM's inability to function without training data and the MM's ability, without training data, to differentiate contrasting colours within the stromal region. Finally, the MM was used to identify cases with both SMA and desmin positivity. Development of method Due to the nature of stromal stains such as SMA or desmin, the complexity of accurately identifying and quantifying the specific stromal cell component requires a multi-layered approach. For the segmentation of SMA, running k-means only once would not be sufficient. The difficulty of using the k-means algorithm to partition the SMA images was due to its tendency to cluster the brown staining with unwanted similar colours, such as intensely haematoxylin-stained immune cells. An example of the standard k-means output is shown in Fig. 3 on a given SMA image. The image domain is clustered into 3 clusters with the k-means algorithm, such that Ω = ∪_{i=1}^{3} Ω_i. The cluster containing the stain (Ω_3) contains additional unwanted objects that differ in colour value only slightly, as shown in Fig. 3e, in which the mask of the cluster multiplied with the RGB image is shown. The difference between the stained pixels and other pixels becomes more obvious when examining the b channel in the Lab space, as shown in Fig. 3f, where the stained pixels have a larger intensity in this channel. Therefore, including a second k-means run on the b channel only allows for the differentiation of the stromal stain from other objects. Fig. 4 shows the refinement of Ω_3 from Fig. 3, where the unwanted objects are in one cluster (refined cluster 1) and the stained pixels are in another cluster (refined cluster 2, i.e., Ω_SMA). Note that Ω_3 and Ω_SMA are detected automatically by taking the cluster from the respective run of k-means with the largest value in the b channel. For desmin stain detection, the k-means algorithm was not suitable for further partitioning the image domain, as k-means assumes the size of each cluster to be relatively equal. Due to the low levels of desmin staining observed in the TMA cores, the required cluster containing the stain would also need to be small to accommodate the level of desmin stain. As a result, the k-means output on desmin images tended to group brown staining with unwanted objects. Moreover, the intensity of staining in desmin images in the b channel of the Lab space is rather distinct, and therefore simple thresholding is a suitable solution to acquire Ω_Desmin. To formally define this, the stain on the desmin image, Ω_Desmin, is found by thresholding the b channel over the domain Ω_3 after an initial k-means run, i.e. Ω_Desmin = {x ∈ Ω_3 : u′_3(x) > ρ_0}, where ρ_0 = 0.65. In principle, the tolerance for choosing ρ_0 for this particular application of desmin staining is wide, as the stains are distinct enough. The typical k-means output for desmin images is shown in Fig. 5, in which the cluster containing the stain (Ω_3) contained both stained pixels and many unwanted objects. To refine the segmentation, the b channel (displayed in Fig. 5f) is thresholded to obtain Ω_Desmin. An example of refining the initial k-means cluster in this way can be found in Fig. 6. To conclude the development of the methods, a summary of the overall method, involving segmentation of the stroma by deep learning, stain segmentation of both SMA and desmin cores by the mathematical model, and alignment of the 2 cores via registration, is shown in Fig. 7. Analysis of stromal segmentation Statistical comparisons between average scores for manually annotated stromal segmentation and automated DLM stromal segmentation (cases n = 35, cores n = 84, a subset of the in-house dataset) are presented in Table 1 with a chi-squared analysis. It was found that the 2 methods were significantly correlated with each other (p ≤ 0.001), whereby both methods would correctly identify cases as either low or high percentage stroma. Only 4 cases differed on final outcome. Moreover, 4-fold cross-validation was adopted for the evaluation, where the 113 images were split into a training set and test set with a ratio of 75:25 in each of the 4 independent experiments. Further, 20% of the training set was randomly selected as the validation set. The mean values and standard deviations of the corresponding performance metrics are reported in Table 2.
The DLM and manual segmentation of the stromal region were also compared with a trainee histopathologist's manual scoring of the percentage stroma (cases n = 32, cores n = 59, a subset of the in-house dataset), presented in Table 3 with a chi-squared analysis. Both methods displayed significant association with the histopathological assessment; however, the deep learning method correctly categorised a further 2 cases compared to the manual segmentation method. To clearly determine stromal stain expression, the DLM was designed to exclude regions of muscle and large blood vessels where possible. Fig. 8 is representative of cores without muscle regions, whilst Fig. 9 is representative of cores with substantially large muscularis regions. It is observed that the trained model can recognise the stromal regions with high accuracy. However, as seen in Fig. 9, in some difficult cases the model still had a tendency to mistakenly recognise muscle regions as stromal regions. This is mainly due to the high morphological similarity between stromal cells and muscle fibres. Analysis of SMA-stained stroma Using the SMA segmentation result, 2 scores were produced. The first score, denoted as SMA 1, was the percentage of stain taken up with respect to the area of the entire stromal compartment. This stromal compartment is segmented using the DLM, and the stained region is segmented and quantified using the MM. The second score, denoted as SMA 2, is the percentage of stain taken up with respect to the area of the cellular stroma, which is closer to the way in which a histopathologist would quantify a stromal stain. The cellular stroma region is taken as Ω_3, as defined in Section 2.5.1, which is the initial output of the k-means algorithm before refinement. Some examples can be found in Fig. 10, and quantitative results for the SMA scoring using these 2 methods can be found in the first and second columns of Table 4, respectively. Additionally, in Fig. 11, the associated heatmaps generated by the MM are shown, allowing for the classification of the stain into intensity grades 1-3. This allows an image-based H-score to be calculated, capturing both the intensity and the percentage of SMA/desmin positivity from the segmented stromal cells, thus offering an effective automated means to accurately quantify stromal biomarker expression. The scores for intensities 1-3 (as well as the negative area, denoted as intensity 0) are shown in Table 4, in which the number reflects the percentage of stromal stain categorised into the respective grade, specifically for the area of the stromal cells within the stromal compartment. Subsequently, the image-analysis-based H-score is calculated and displayed in the following column. Analysis of desmin-stained stroma Quantitative scoring on the cores stained by desmin is done by the MM only. Fig. 12 shows some example cores. Quantitative results for the scoring of these cores can be found in the second-to-last column of Table 4. This score is the percentage of stain taken up with respect to the area of the entire stromal compartment, defined exactly the same as the first SMA score. In order to generate this score, the stromal region of the core must be found. To acquire this, the SMA core is registered to the desmin core, and the DLM stromal segmentation output is also registered to provide the stromal segmentation of the desmin core.
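The image-based H-score described above can be computed from the grade percentages in the usual way; a minimal sketch (with illustrative numbers) is:

```python
def h_score(pct_g1, pct_g2, pct_g3):
    """Combine percentages of stromal-cell area at intensity grades 1-3
    into an H-score in [0, 300]."""
    return 1 * pct_g1 + 2 * pct_g2 + 3 * pct_g3

# e.g. 20% grade 1, 10% grade 2, 5% grade 3 -> H-score of 55
assert h_score(20, 10, 5) == 55
```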
Overlay of SMA- and desmin-positive stroma Analysing the region of double positivity first requires alignment of the SMA image to the desmin image, as in general the 2 images do not coincide with each other. As described in Section 2.5.4, this is achieved by registering the SMA image to the desmin image. Some sample results are displayed in Fig. 13, in which the SMA and desmin images are displayed in the first 2 columns, and the registered SMA image is shown in the third column. With the same transformation, the segmented SMA stain (as shown in Fig. 10) is moved and displayed where it coincides with the desmin stain (as shown in Fig. 12), which is displayed in the final 2 columns. Quantitative results for the scoring of these cores can be found in the final column of Table 4. This is a score of the percentage of stain taken up in both cores with respect to the entire stromal compartment. Bland-Altman Bland-Altman plots are provided to demonstrate the differences between scores provided by the MM and 2 histopathologists. Plots comparing both methods of SMA scoring are given (firstly, as a percentage of the stromal compartment and secondly, as a percentage of stromal cells). In Fig. 14, the Bland-Altman plots for the first method of scoring SMA cores are shown, together with the histograms of the scores. It is noted that the manual scores by the histopathologists have a tendency to underestimate "low" scores and overestimate "high" scores, and therefore further Bland-Altman plots are constructed in Figs. 15 and 16, in which the data have been split into "low" and "high" according to the median cutoff of the histopathologists' scores in Fig. 15, and the median cutoff of the MM method in Fig. 16. Similar plots for the second method of scoring SMA cores are shown in Figs. 17 and 18. Finally, the Bland-Altman plot for scoring on desmin cores is shown in Fig. 19. Human Protein Atlas The proposed method was applied to CRC TMA images from the Human Protein Atlas. 18,19 Two independent stromal stains were analysed: 12 cores stained by SMA and 6 cores stained by vimentin. Cores were assessed digitally using the MM and manually by a histopathologist. Stain assessments from the manual evaluation and the proposed model correlate well, though the correlation for stromal detection was not as strong. Some example stain segmentations of SMA are shown in Fig. 20 and of vimentin in Fig. 21. A Bland-Altman plot of the H-score comparison is shown in Fig. 22.
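For reference, a Bland-Altman comparison of two scoring methods can be sketched as follows; the scores here are synthetic stand-ins, not the study's data.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
mm = rng.uniform(0, 40, 59)          # synthetic MM scores per core
hp = mm + rng.normal(0, 3, mm.size)  # synthetic manual scores per core

mean, diff = (mm + hp) / 2, hp - mm
bias, sd = diff.mean(), diff.std(ddof=1)

plt.scatter(mean, diff, s=12)
plt.axhline(bias, color="k")                      # mean difference (bias)
for lim in (bias - 1.96 * sd, bias + 1.96 * sd):  # 95% limits of agreement
    plt.axhline(lim, color="k", linestyle="--")
plt.xlabel("Mean of MM and manual score (%)")
plt.ylabel("Manual minus MM score (%)")
plt.show()
```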
Discussion In histopathology practice, in contrast to epithelial stains, which are relatively easy to annotate and assess both manually and digitally, stromal stains present unique challenges. These may be due to stromal composition, cellular or acellular; irregular stromal cell morphology; or apparent interdigitation/overlap. This study therefore aimed to develop an efficient automated workflow to analyse stromal staining which overcomes the aforementioned challenges. The holistic approach involved assessment of both the percentage stromal area and the percentage of stromal cells that were stained. Confounding staining in smooth muscle fibres of the muscular bowel wall and muscular vessel walls was accounted for. Two approaches for the automated determination and quantification of stromal stains within CRC were developed. In order to determine whether the 2 methods were comparable to the current clinical standard assessment, 2 histopathologists (1 at an early training stage (HP1) and 1 a specialist (HP2)) examined the stained cores. The DLM tackled the problem of stromal detection and was found to be successful at determining high and low percentage stroma, comparable with the pathologists' manual estimation. Knowing the methods were comparable to clinical estimations of percentage stroma, quantification of stromal stains followed using the MM, which relies on the segmented stromal compartment from the DLM; this was similarly comparable to manual scoring, as corroborated by the chi-squared tests. As demonstrated in the Bland-Altman plots, of particular interest is the potential unconscious bias towards extreme scores in the histopathologists' assessments. It is noted that there may be a tendency to underestimate cores with low staining and overestimate cores with high staining, as depicted by the histograms of each histopathologist. This is an advantage of the MM, as it is objective and has inherent quantitative accuracy. The SMA scores obtained by the MM discussed so far are representative of the area taken up by the stain with respect to the entire stromal compartment, cellular and acellular. The MM is also capable of giving a score of the area stained with respect to the cellular stroma only. This latter method is representative of how a stromal stain such as SMA would be quantified to help clinicians and avoid over- or underestimation. The struggle to mentally account for acellular components may also be relieved. In contrast to the SMA stain, which has a wider natural range of variability between tumours, desmin, as a stromal stain in colorectal cancer, is skewed towards the underexpressed range. Here also, the MM proves more accurate and inherently outperforms manual scorers in the sub-1% range. Quantification via pixel intensity is therefore likely to be more accurate in quantifying scantly stained stromal stains at relatively fast speeds. In the rapidly transforming field of digital pathology, segmentation of epithelium and stroma has been attempted by several studies using mathematical modelling, including a level-set-based active contour method and clustering. 4 However, the use of a simple piecewise-constant intensity distribution assumption is likely to lead to poor performance on complex images. Other researchers 3 were able to differentiate between tumour and non-tumour epithelium in WSIs of CRC by training a self-organising map. This approach was effective, but if it needed to be colour-independent, as when assessing stromal stains, it would perhaps struggle at quantification. In contrast to previous mathematical models, the MM presented in this paper is novel in that it was primarily built to assess staining in the stromal compartment as opposed to concentrating on epithelial stains. As the starting point of the analysis, the use of a relaxed Mumford-Shah variational model allows for piecewise-smooth intensity distributions, meaning a more sophisticated segmentation output is produced when compared with a piecewise-constant assumption. Moreover, the MM method utilises the 6 colour channels jointly and separately where appropriate, improving on results where colour channels are treated individually. The methods used in this paper avoid pre-selecting tumour regions before algorithm application and/or selecting a stromal hotspot. 24,25 In contrast to the proposed method, the aforementioned studies all require user interaction when processing new unseen data, and are therefore not fully automated.
In particular, the DLM is able to provide a segmentation of tumour and stromal regions on SMA-stained TMA cores automatically, without any user interaction. Tumour and stromal regions are also segmented on desmin images using the same DLM by registering the 2 cores together, thereby moving the stromal region segmented on the SMA image to the desmin image. While there exist automated methods of tumour epithelium segmentation, 3,14,26,27 these works typically focus on the sole task of segmentation rather than quantifying immunostain expression. The main novelty of the proposed work is in its ability both to segment out the stromal compartment and to produce multiple H-scores for multiple stromal stains without the need to alter the DLM. With the stromal and epithelium regions detected by the DLM, and the stained regions detected by the MM, the proposed method can produce scores for the ratio of stained stroma with respect to the total stromal area. The ability of the MM to automatically categorise regions into grades based on the intensity of the stain, providing an image-based H-score, parallels commonplace methods like H-scores done by practicing pathologists, with the advantage of providing objectivity in assessment. Further to this, the MM is also able to detect only the stromal cells within the region, which gives the proposed method the ability to produce a score for the ratio of stained stroma with respect to the area of the stromal cells. While the presented results achieve this on SMA and desmin staining, there is no limit to the number of stromal stains this method could quantify at a single time. With minor alterations to the clustering method to account for the differences in stromal stain expression, it would be possible to detect the stained regions with very similar methods. Due to the complexity of the multiple pathways involved in cancer, multiple factors contribute to cancer progression. IHC multiplexing has been an innovative tool to extrapolate data regarding several protein interactions within tissues; however, this is an intricate and often costly process. Therefore, the ability to carry out digital multiplexing of stromal stains is extremely desirable. To the best of our knowledge, the use of registration techniques to align multiple versions of a TMA slide to facilitate a digital multiplexing method has not been published. Aligning 2 TMA slides via registration, using a rigid transformation with normalised cross-correlation as the similarity measure, has been attempted previously, 28 but did not incorporate the additional information of the stain segmentation as in the proposed method. It is therefore an advantage of the MM that precise regions of double staining can be detected.
The proposed registration method seems to be rather robust in aligning the 2 images, which allows for accurate merging of the 2 stained segmentations. This would prove extremely useful in novel immunohistochemical studies to help elucidate pathways, and perhaps produce tools for multi-overlay image panels to help predict operative functional pathways in tumours, which, in the long run, could alter patient stratification and prognosis. To provide further validation of the methodology, we applied the MM to open access images of CRC cores stained with SMA and vimentin, with vimentin being another mesenchymal stain used commonly in clinical practice. The results highlighted the ability of the model to function regardless of a user's staining protocol or the reagents used to stain tissue. This is often a pitfall of many automated quantification methods, as changes in staining intensities lead to discrepancies between results and may inhibit the algorithm from working correctly. The accurate quantification of a third-party core stained with vimentin also emphasises the methodology's ability to segment and quantify any stromal stain. However, a potential limitation of the proposed workflow is that a limited set of data was available for training, and so tuning the DLM was infeasible. Therefore, a larger training set would be desirable to further improve the accurate segmentation and quantification of stromal stains. Conclusions In summary, the combination of the DLM and MM provides a framework to accurately quantify stromal stains. Starting with segmenting the stromal compartment as well as the stained region of the stroma, the method developed allows for objective, accurate quantification of stromal immunostains. This will help both clinicians and researchers to better assess the prognostic implications and help understand the contribution of the mesenchymal microenvironment to tumour development and progression. The MM uses image registration to register 2 cores stained by different markers (SMA and desmin) in order to detect regions of double staining. In future, such multiplexing with accurate quantification procedures will help pave the way for research into understanding the functional pathways activated or inactivated together in the tumour-associated stromal compartment.
Fig. 20. An example of a result of CRC tissue stained with SMA (https://www.proteinatlas.org/ENSG00000107796-ACTA2/tissue/colon) from the Human Protein Atlas. 19
Fig. 21. An example of a result of CRC tissue stained with vimentin (https://www.proteinatlas.org/ENSG00000026025-VIM/tissue/colon) from the Human Protein Atlas. 19
Fig. 1. The architecture of U-Net for image segmentation. C represents the number of channels.
Fig. 3. Mathematical output (b)-(d) of the 3 clusters on the given SMA image (a). In addition, (e)-(f) show the RGB image, and the b channel from the Lab colour space, of the stain cluster.
Fig. 4. Output of the refined clusters on the image stained by SMA from Fig. 3. The top row shows miscellaneous objects and the bottom row shows the stain.
Fig. 5. The k-means output (b)-(d) of the 3 clusters on the given desmin image (a), the RGB image of Ω_3 (e), and the b channel from the Lab colour space (f).
Fig. 6. Output of the refined clusters on the image stained by desmin from Fig. 5. The top row shows miscellaneous objects and the bottom row shows the stain.
Fig. 7. An overview of the mathematical method.
Fig. 8. Examples of segmentation results on cores without significant muscle regions. Those in the upper row are the original images, while those in the lower row are the segmentation results. The orange colour represents predicted stroma regions found using the DLM, and the blue colour represents the brown staining found using the MM.
Fig. 9. Examples of segmentation results on cores with significant muscle regions. Those in the upper row are the original images, while those in the lower row are the segmentation results. The orange colour represents predicted stroma regions found using the DLM, and the blue colour represents the brown staining found using the MM.
Fig. 10. A compilation of results from the mathematical model detecting SMA staining. The first column shows the original image, the second column shows the binary segmented stain, the third column shows the segmented stain in RGB, and the final column shows the segmented stain overlaid on the original image.
Fig. 11. Heatmaps from the images shown in Fig. 10. The first column displays the original image, the second column shows the resulting heatmap, and the third, fourth, and fifth columns show the stain designated as grades 1, 2, and 3, respectively.
Fig. 12. A compilation of results from the mathematical model detecting desmin staining. The first column shows the original image, the second column shows the binary segmented stain, the third column shows the segmented stain in RGB, and the final column shows the segmented stain overlaid on the original image.
Fig. 13. A compilation of double stain analysis. Results for SMA and desmin stain segmentation can be found in Figs. 10 and 12, respectively. In the first column, the SMA image is shown; in the second column, the desmin image is shown; and in the third column, the registered SMA image, aligned with the desmin image, is displayed. In column 4, the binary region of double staining is displayed, and in the final column, the double-stained region is overlaid onto the original desmin image.
Fig. 14. The first row shows the histograms of the scores provided by histopathologist 1 (H1), histopathologist 2 (H2), and the MM. In the second row, Bland-Altman plots display the discrepancy in SMA scoring comparing the 2 methods with the scoring of H1 and H2. The MM correlates more with H2.
Fig. 15. Bland-Altman plots of SMA scoring based on splitting the data into "low" and "high" according to the median cutoff of the respective histopathologist's scores. Plots for cases scored as "low" by the respective histopathologist are on the left, and plots for cases scored as "high" are on the right.
Fig. 16. Bland-Altman plots of SMA scoring based on splitting the data into "low" and "high" according to the median cutoff of the MM method. Plots for cases scored as "low" by the MM are on the left, and plots for cases scored as "high" are on the right.
Fig. 17. The first row shows the histograms of the scores provided by histopathologist 1 (H1), histopathologist 2 (H2), and the MM for SMA staining as a percentage of the stromal cells. In the second row, Bland-Altman plots display the discrepancy in SMA scoring comparing the MM with the scoring of H1 and H2.
Fig. 18. The first row shows Bland-Altman plots of SMA scoring on stromal cells based on splitting the data into "low" and "high" according to the median cutoff of the respective histopathologist. The second row shows Bland-Altman plots of SMA scoring on stromal cells based on splitting the data into "low" and "high" according to the median cutoff of the MM.
Table 1. Manual stromal segmentation versus automated stromal segmentation cross-tabulation.
Table 2. Mean values and standard deviations of the stroma segmentation performance of the DLM. Four-fold cross-validation is adopted. There are 3 categories for the outputs.
Table 3. Manual and deep learning stromal segmentation versus histopathologist's manual assessment cross-tabulation.
Table 4. Quantitative scores for the cores shown in Figs. 10, 11, 12, and 13, including the first method of SMA scoring, negative scores as well as the 4 intensity grades of SMA stromal stain (G0, G1, G2, and G3), the associated H-score, the second method of SMA scoring, the desmin score, and the double positive score.
9,490.6
2023-11-01T00:00:00.000
[ "Medicine", "Mathematics", "Computer Science" ]
Deep Learning-Based Intrusion System for Vehicular Ad Hoc Networks The increasing use of the Internet in vehicles has made travel more convenient. However, hackers can attack intelligent vehicles through various technical loopholes, resulting in a range of security issues. Due to these security issues, the safety protection technology of the in-vehicle system has become a focus of research. Using the advanced autoencoder network and recurrent neural network in deep learning, we investigated an intrusion detection system based on the in-vehicle system. We combined the two algorithms to realize the efficient learning of the vehicle's boundary behavior and the detection of intrusive behavior. In order to verify the accuracy and efficiency of the proposed model, it was evaluated using real vehicle data. The experimental results show that the combination of the two technologies can effectively and accurately identify abnormal boundary behavior. The parameters of the model are self-iteratively updated using the time-based back propagation algorithm. We verified that the model proposed in this study can reach a detection accuracy of nearly 96%. Introduction In recent years, intelligent vehicles, a fusion of Internet technology and the machinery manufacturing industry, have resulted in the development of comprehensive information services for travel and daily commutes [Unluturk, Oguz and Atay (2015); Contreras-Castillo, Zeadally and Guerrero-Ibañez (2017)]. Whether intelligent network vehicles can achieve high security and complete availability of their information is crucial for the development of intelligent vehicles. Historically, the computer systems in cars have been isolated from the outside world, and so the safety of these systems has been ignored. However, in recent years, hackers have proven that cars that utilize networked computing platforms can be compromised [Sedjelmaci, Senouci and Abu-Rgheff (2014); Li and ...]. An anomaly-based IDS will predict this situation as abnormal behavior, so that the IDS will produce a high false alarm rate. Therefore, we must consider the following factors when designing an IDS model for automobiles: (1) The designed IDS model can achieve a lower communication load and consume less storage space when deployed on platforms with limited computing and storage resources. It can adapt to the characteristics of a strongly dynamic topology and high real-time processing requirements in automobile communications. (2) It is necessary to consider the complex communication topology of the vehicle. For example, when the vehicle encounters non-malicious abnormal behavior, it should recognise this behavior and categorize it correctly through autonomous learning. (3) For the known types of attacks, we can achieve a higher alarm rate, while in the face of non-malicious abnormal behavior, we can identify the behavior through independent learning to achieve a lower false alarm rate. We needed to solve the problems related to the existing IDS, which only performs efficient detection for specific types of attacks [Tyagi and Dembla (2014); Sun, Yan, Zhang et al. (2015)], and improve the detection efficiency of the IDS on the vehicle system. This study presents a new IDS model based on an in-vehicle system using an advanced autoencoder and recurrent neural network in deep learning. Because of the limited computing and storage capacity of the vehicle system, and the need to avoid a running burden on it, it is not suitable to use high-dimensional data in vehicle information systems.
We used an advanced autoencoder network with corresponding sparse term constraints to reduce the dimension of the data on the CAN bus. The advanced autoencoder can learn alternative representations of high-dimensional data by supervised learning. The data, after dimension reduction using the advanced autoencoder network, retain all the useful information, so they can be regarded as data with the noise removed. We used matrix operations and activation functions to complete the data recovery. The data processed by the advanced autoencoder not only avoid the loss of useful information but also eliminate the invalid noise in the data, so that the data can be sent to the classifier in a lower dimension for learning after being processed, reducing the cost of model training, inference, and storage, and correspondingly improving the performance of the model [Li (2019)]. We took into consideration the poor performance of the embedded hardware in a vehicle system and maximized the use of the vehicle terminal hardware resources; we used the recurrent neural network combined with the SoftMax classifier to achieve the classification of the feature data. This method ensures a shorter processing time and maximizes the classification ability of the model without adding too much of an extra burden. The data on the CAN bus are evenly spaced time-series data, and the recurrent neural network has a strong ability to process sequence data and includes a simple global parameter-sharing mechanism [Yang, Wu, Wang et al. (2018)]. In addition, it has a weak dependence on the data's contextual background when training the model, so the training time is shorter than that of a traditional Convolutional Neural Network (CNN) and some Recurrent Neural Network (RNN) variant networks, such as the Long Short-Term Memory (LSTM) [Yuan, Zhang, Shi et al. (2019)]. The trained model achieved a higher accuracy and lower false alarm rate compared with the traditional IDS model, even in the face of non-malicious exceptional circumstances, because the recurrent neural network can effectively learn the sequential characteristics of the behaviors in the vehicle data when training the model. The trained IDS model can also learn pre-order features autonomously to correctly identify the data characteristics of the driving environment. To address the vanishing gradient problem that the recurrent neural network may encounter during training as the depth of the model increases, we introduced the time-based back propagation (BPTT) algorithm to complete the training process of the whole model. In terms of the overall training efficiency, unlike the traditional CNN or RNN variants, which have more gate units and parameters to train and therefore struggle to adapt to the limited storage and computing resources of the automobile, the IDS model proposed in this study can complete the training of the model in a short time. The model proposed in this study can achieve a higher efficiency than the traditional intrusion detection model in terms of its detection accuracy because it considers the mechanical characteristics and timing of the vehicle.
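A minimal sketch of an autoencoder with a sparsity constraint for reducing the dimension of CAN-bus feature vectors is given below. The input width, code width, and penalty weight are illustrative assumptions, not the paper's exact architecture, and the batch is a synthetic stand-in.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, in_dim=64, code_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(code_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
beta = 1e-3                     # weight of the sparsity term (illustrative)

x = torch.rand(32, 64)          # stand-in batch of CAN-derived features
recon, code = model(x)
loss = mse(recon, x) + beta * code.abs().mean()  # reconstruction + L1 sparsity
opt.zero_grad(); loss.backward(); opt.step()
# `code` is the lower-dimensional, denoised representation fed to the classifier
```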
The rest of the study is organized as follows: the second part elaborates on the relevant background knowledge of the Internet of Vehicles and the related work on deep learning in IDS. The third part describes the methodology of this article and how the autoencoder and recurrent neural network are applied to the IDS of the vehicle system. In the fourth part, we use real cars to evaluate and compare the performance of our proposed models with traditional models. Finally, we summarize the problems raised and explain future directions of this work. Background knowledge Here, we introduce the basics of the Internet of Vehicles and deep learning. The architecture of the Internet of Vehicles The Internet of Vehicles refers to the use of a new generation of mobile communication technology to achieve full-scale network connections within the vehicle, vehicle-to-person, vehicle-to-vehicle, vehicle-to-road, and vehicle-to-service-platform. A new format for automotive and transportation services was built by improving the level of automotive intelligence and enhancing self-driving capabilities. It can provide users with intelligent, comfortable, safe, energy-saving, and efficient comprehensive services by improving traffic efficiency and boosting the driving experience of cars [Li, Zhong, Chen et al. (2019)]. The topological diagram of Internet of Vehicles communication is shown in Fig. 1. The main communication entities include the Road Side Unit (RSU), the intelligent connected vehicle, the pedestrians on the road, and the official traffic authority for road communication. Each intelligent connected vehicle is composed of several devices, such as the Telematics Box (T-Box), Electronic Control Units (ECUs), and GPS, which can help the vehicle to make contact with other entities in the communication scene. For example, the T-Box can complete the corresponding communication process through the built-in communication module and a special automobile SIM card, combined with the Dedicated Short-Range Communications (DSRC) protocol in 802.11p or Long Term Evolution-Vehicle. In an intelligent connected vehicle, the in-vehicle bus network is formed using the bus communication protocol to connect the vehicle's ECU nodes. Fig. 2 shows a simplified version of the intelligent network communication schematic diagram. The ECU is not only the core component of whole-vehicle communication but also the essential communication unit of in-vehicle communication. The node receives different message information from the bus to complete the specified command action [Li, Zhong, Chen et al. (2019)]. If different ECU nodes need to communicate with each other, they need to implement the in-vehicle bus protocol. The most famous bus protocol used in the vehicle is the CAN protocol. The CAN is a standard for the in-vehicle internal bus system, which can provide enough communication information for the ECUs. The CAN bus is a reliable and economical serial bus for the vehicle network [Seo, Song and Kim (2018)]. When communicating, each ECU node sends data to the bus in a competitive way. The priority for bus access control is obtained according to the priority domain in the data frame: ECU nodes with a low value in the arbitration field get priority to send their data, and the other nodes wait for the bus to be idle and compete again. This way of broadcasting improves the real-time performance of data communications, which is why Robert Bosch GmbH introduced the CAN protocol in the 1980s. In modern cars, there are more than 50 ECUs, and the communication speed between ECUs reaches 1 Mbit/sec [Martinelli, Mercaldo, Orlando et al. (2018)].
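The arbitration rule just described — the frame with the lowest arbitration-field value wins the bus — can be illustrated with a small sketch; the frame representation here is a simplified stand-in for the actual CAN frame layout.

```python
# Toy sketch of CAN bus arbitration: among contending ECUs, the frame
# whose arbitration field has the lowest value wins; the others back off
# and retry when the bus is idle.
def arbitrate(pending_frames):
    """pending_frames: list of (arbitration_id, payload) tuples."""
    return min(pending_frames, key=lambda f: f[0])  # lowest ID has priority

frames = [(0x2A0, b"speed"), (0x100, b"brake"), (0x7FF, b"infotainment")]
winner = arbitrate(frames)  # -> (0x100, b"brake"): lowest arbitration ID wins
```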
However, the vehicle network uses broadcast for the corresponding communication, and the communication process is often not authenticated, which makes it easy for attackers to access the CAN bus. Research status of deep learning in intrusion detection In current Internet of Vehicles application scenarios, the in-vehicle IDS is usually deployed in the form of hardware or software on the vehicle. By collecting data from each ECU node or the CAN bus and completing the corresponding analysis, it can ensure the safety of drivers and passengers by promptly notifying the emergency response back-end system in case of any abnormal behavior [Gao, Li, Xu et al. (2019)]. However, in the early stage of the development of the Intelligent Connected Vehicle (ICV), the ECUs installed on vehicles only had a small storage space and low computing power because of the limited technology of the single-chip microcomputer. When facing a network attack, it is challenging to apply an existing IDS model directly to the vehicle system of an ICV. Many scholars have proposed IDSs based on the in-vehicle system to address the fact that existing IDSs cannot be directly used in the vehicle system. Liu et al. [Liu, Li and Man (2015)] presented an anomaly-based intrusion detection model. In this model, the network layer and MAC layer are monitored to analyze the normal behavior characteristics of the mobile nodes, and the outliers are detected by data mining. In order to verify their work, they used NS2 to validate this model. The proposed method can achieve anomaly detection with a high efficiency. However, with an increasing number of detection nodes, the overall detection efficiency of the model decreases. Besson et al. [Besson and Leleu (2016)] developed a distributed vehicle intrusion detection model based on AWISSENET. This model searched for trusted services and related paths. The proposed IDS model was tested on a heterogeneous test platform, and the experiments showed that the proposed model can be applied to different wireless networks. Lauf et al. [Lauf, Peters and Robinson (2010)] presented an anomaly intrusion detection model based on Vehicle Ad Hoc Networks (VANETs). In this model, the contextual background of the interacting nodes at the network application layer is monitored to enable the learning of the attack behaviors' characteristics. The model also uses the density function and the global behavior maximum function to realize the detection of abnormal behavior. The model was experimentally verified to have a relative improvement in computational cost compared with an anomaly intrusion detection model using only a traditional context-based approach. However, when presented with some unusual traffic scenarios, the model still has a high false alarm rate. A distributed IDS for wireless sensor networks based on reputation detection was proposed by Banković et al. [Banković, Moya, Araujo et al. (2010)]. In this model, each node in the network is assigned a reputation by the trained model to realize the evaluation and detection of malicious nodes. The corresponding experiments showed that the proposed intrusion detection system separated the malicious nodes from the network and inhibited the spread of malicious activities. However, the model still has a high alarm rate in detecting malicious nodes that refresh their reputation. Cho et al. [Cho, Hong, Lee et al. (2013)] developed a local IDS model for wireless sensor networks.
The model reduces the dimension of the data coding by introducing the Bloom filter proposed by Burton Howard Bloom, thereby reducing the storage and calculation requirements of the model. Experimental results obtained by simulating the corresponding wireless communication environment show that the proposed method can detect potential DoS attacks. Many scholars have found that using cryptography or network-layer protocols to implement IDSs for malicious behavior detection often emphasizes feature engineering and feature selection, which cannot effectively solve the problem of classifying the massive intrusion data found in real network application environments [Yin, Zhu, Fei et al. (2017)]. It is difficult to achieve self-directed learning of attack features to identify the corresponding attacks in the face of changeable attack strategies [Mershad and Artail (2012); Dong and Wang (2016); Yin, Zhu, Fei et al. (2017)]. With the successful application of machine learning and deep learning in many fields, scholars in various countries are now combining these technologies with intrusion detection systems to achieve efficient detection of external attackers and internal malicious behaviors. Medhat et al. [Medhat, Ramadan and Talkhan (2015)] presented an intelligent intrusion detection model based on a wireless sensor network. This model combines supervised learning with unsupervised learning to train the IDS model: supervised learning is used to train sensor nodes, while unsupervised learning is used to train base station nodes and convergence nodes. During this process, the learned rules form a binary decision tree to judge normal and abnormal behavior. The experimental results show that the model can achieve a lower time complexity and more accurate detection of attack data. Ronak et al. [Ronak, Ganesh, Akshay et al. (2016)] provided a distributed intrusion detection system for wireless sensor networks based on Naive Bayes and Apache Mahout. The proposed model can detect multiple attacks autonomously and has strong robustness. Peraković et al. [Peraković, Periša, Cvitić et al. (2017)] developed an intrusion detection model based on an artificial neural network. This model uses a supervised learning method to learn traffic labels and recognize abnormal traffic. However, this model obtained only 82% accuracy because of similarities between the parameter values of legitimate traffic and UDP DDoS attacks. Anzer et al. [Anzer and Elhadef (2018)] proposed an anomaly intrusion detection model based on deep learning. The model represents the network traffic data using a fully connected neural network. Experiments show that the model proposed in that study can learn feature data more efficiently than some traditional machine learning algorithms, such as AdaBoost, random forest, and SVM. It also achieves higher accuracy and a lower false alarm rate than some anomaly intrusion detection models based on traditional deep learning [Kwon, Kim, Kim et al. (2017)]. Pavani et al. [Pavani and Damodaram (2013)] proposed a neural network based on the multi-layer perceptron to detect malicious behavior in Mobile Ad Hoc Networks (MANET). Their experimental results showed that the proposed model can effectively detect grey hole and black hole attacks. Leinmüller et al. [Leinmüller, Held, Schäfer et al.
(2014)] proposed an intrusion detection system based on the Intelligent Proportional Overlapping Score (POS). Their IDS model uses the information received from the trace files of VANETs to reduce additional features and thereby improve the performance and safety of the autonomous vehicle. The features extracted from the tracking files help distinguish normal vehicles from abnormal vehicles. The model also uses an artificial neural network to learn the audit data and detect attack behavior. Experiments showed that their proposed model can effectively detect black hole attacks by learning the data characteristics of the trace files. However, these technologies have many limitations, such as the need to interact with high-level human experts, the demand for a large amount of expert knowledge when processing the data, or the need to differentiate data and operation modes to achieve accurate recognition [Tan, Li, Xia et al. (2019)]. These processes are not only labor-intensive and expensive but also error-prone [Zhao, Yan, Chen et al. (2019)]. Using a large amount of training data also results in considerable system overhead, which makes deployment challenging in a heterogeneous and highly dynamic Internet of Vehicles environment. Deep learning, as an advanced subset of machine learning, can overcome some limitations of shallow learning. Preliminary deep learning research has shown that its superior hierarchical feature learning can improve on, or at least match, the performance of shallow learning techniques [Hou, Saas, Chen et al. (2016)]. Abnormal behavior changes with time, and intruders adjust their network attacks to evade existing intrusion detection solutions [Kumar and Venugopalan (2017)]. Deep learning can analyze network data at a deeper level and quickly identify abnormal data and related patterns. Moreover, deep models have a pronounced black-box character, so it is difficult for attackers to manipulate the internal structure of the detection system. Deep learning has been used for intrusion detection in the Internet of Things, and especially in the Internet of Vehicles. The basic idea is to use deep learning to learn the boundary features of vehicle behavior and then to design a corresponding classifier based on these boundary features. Using this classifier, we can achieve efficient classification and anomaly detection of an entity's behavior data. Due to the limited computing and storage capacity of traditional automobile ECUs, many advanced and efficient deep learning algorithms cannot be directly applied to the automobile [Kang and Kang (2016)]. However, the development of intelligent vehicle information systems has improved the calculation and storage capacity of the on-board ECU, and the efficiency of processing real-time tasks has also been greatly improved [Johansson, Törngren and Nielsen (2015)]. The recurrent neural network and the autoencoder with sparsity constraints are further developments of deep learning. By learning the boundary features, a multilayer recurrent neural network with learning ability was established to predict and perceive unknown behavior, and a relevant optimization algorithm was used to optimize the parameters of the model.
Through experiments, we found that a model designed with deep learning is more robust than traditional IDSs based on statistics or signatures [Qiao, Li and Chen (2018); Mohammadi and Namadchian (2017); Liang (2017)].

Research on intelligent vehicular intrusion detection system based on deep learning

3.1 Structural design of intrusion detection algorithm

The detection flow chart of the in-vehicle intrusion detection model based on the advanced autoencoder and recurrent neural network is shown in Fig. 3. The proposed model consists of three steps: (1) data pre-processing; (2) feature extraction, in which the advanced autoencoder extracts the correlation features between the data; and (3) classification, in which the recurrent neural networks combined with a SoftMax classifier classify the corresponding data. Since the experimental data collected from the vehicles consisted of both continuous and discrete data, we needed to normalize the data using some pre-processing methods. The specific data are introduced in the fourth section. After the data pre-processing, the standard-format data were used as the input of the sparse autoencoder to complete the feature extraction, yielding data with highly sparse characteristics. We used these sparse features as the input of the recurrent neural network classifier and then used the recurrent neural networks and SoftMax to learn and classify the corresponding feature data. Finally, we used the classification results as the output to judge whether the relevant vehicle CAN bus data were abnormal.

Methodology

The autoencoder is a kind of deep learning network structure used to learn the coding structure of data. Its primary purpose is to learn high-dimensional complex data and extract a suitable coding expression to realize dimension reduction and related feature learning of high-dimensional data [Yuan, Zhang, Shi et al. (2019)]. Fig. 4 shows the model diagram of the autoencoder. We can see that the network structure is composed of two parts: one is the data encoder, represented by the function h = f(Wx + b), and the other is the decoder for data generation and reconstruction, represented by the function x′ = g(W^T h + b′). We used an unsupervised learning algorithm to optimize the constraint weight matrix W and the reconstruction weight matrix W^T so as to minimize the error between the input and output of the model, i.e., to make x′(i) ≈ x(i). In this work, we used the autoencoder with sparse regularization constraints to limit the number of features extracted from the data and complete the dimension reduction. The sparse autoencoder was obtained by adding L1 regular term constraints to the fundamental loss function to achieve effective feature extraction. The specific algorithm steps are shown in Algorithm 1.

Figure 4: Simplified auto-encoder model

Algorithm 1 Using the sparse autoencoder to extract the data features
Input: x(i) (i = 1, 2, …, n)
Output: x′(i) (i = 1, 2, …, n)
Note: t is the current training iteration of the sparse autoencoder, T is the total number of training iterations, and s_l is the number of nodes in layer l.
1: Coding: h = f(Wx + b)
2: While t < T:
3: Decoding: x′ = g(W^T h + b′); update the parameters by gradient descent on the loss, where J(W, b) is the loss function of the traditional autoencoder, with the expression J(W, b) = (1/n) Σᵢ ||x(i) − x′(i)||² + λ||W||²; the second term in the expression is a regular term, which can effectively avoid overfitting. However, because a sparse encoder is used in this study, the sparse constraint β Σⱼ KL(ρ || ρ̂ⱼ) is added to the right side of the equation. Using the KL distance to measure the difference between codes, the corresponding expression is KL(ρ || ρ̂ⱼ) = ρ log(ρ/ρ̂ⱼ) + (1 − ρ) log((1 − ρ)/(1 − ρ̂ⱼ)), where ρ̂ⱼ is the average value of the output of hidden layer node j, i.e., the mean activation produced at hidden unit j by the input vectors x(i), and ρ is the target sparsity.
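To make the extraction stage concrete, here is a minimal sketch of the sparse autoencoder in Keras, the framework named in the experiment section. Keras's built-in L1 activity regularizer stands in for the L1/KL sparsity constraints of Algorithm 1; the 16-dimensional code size, the activations, and the training settings are assumptions, while the 48-dimensional input matches the experiment section.

from tensorflow.keras import layers, models, regularizers

input_dim, code_dim = 48, 16              # code_dim is an assumption
x_in = layers.Input(shape=(input_dim,))
# L1 activity regularization penalizes non-sparse codes
code = layers.Dense(code_dim, activation='sigmoid',
                    activity_regularizer=regularizers.l1(1e-4))(x_in)
x_out = layers.Dense(input_dim, activation='sigmoid')(code)

autoencoder = models.Model(x_in, x_out)
encoder = models.Model(x_in, code)        # reused later to feed the RNN classifier
autoencoder.compile(optimizer='adam', loss='mse')
# autoencoder.fit(X_train, X_train, epochs=50, batch_size=64)
# sparse_features = encoder.predict(X_train)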
After using Algorithm 1 to perform multiple iterative trainings of the autoencoder, the optimal constraint weight matrix and reconstruction matrix were obtained. Using these two matrices minimizes the loss error between the data obtained after dimension reduction and the original data. A low-dimensional representation of the high-dimensional data features is obtained, and these data are used as the input for the subsequent recurrent neural network classifier. The recurrent neural network is usually composed of three parts: the input unit, the hidden unit, and the output unit. The essence of the model is a one-way flow from the input unit to the hidden layer unit, and then to the output unit, after the relevant time series data are fed in. In the hidden layer unit, the RNN stores the state information from the previous time step. When the current data stream enters the hidden layer unit, the unit combines the current data stream with the previously saved state to estimate the behavior state that may occur next. Hence, we usually regard the hidden layer unit as the storage of the whole network structure: it stores the state data of previous behavior and is used to calculate the state data of the next behavior. The corresponding network structure diagram is shown in Fig. 5.

Figure 5: Folded recurrent neural network structure

From Fig. 5, we can see that introducing a ring structure helps "remember" the previous relevant information and apply it to the current output calculation. The structure of the RNN is different from that of a traditional CNN: the result calculated by a hidden layer unit at the current time step is related to the output of the hidden layer unit at the previous time step, and the neurons between the hidden layer units exchange information. We used the advanced autoencoder designed in the previous section to reduce the data dimensions and complete the learning of the corresponding features. Then we used the re-encoded data to train the RNN model and used the relevant test data set to evaluate the accuracy of the model. From Fig. 3, we can also see that our RNN training process is divided into two stages: forward-propagation training and back-propagation training. In this research, we used the BPTT algorithm to complete this process. Forward propagation is responsible for calculating the predicted values of the samples under the corresponding weight matrices. In contrast, back-propagation updates the relevant weight matrices using the gradients computed from the accumulated residual.

Figure 6: Unfolded recurrent neural network structure

Fig. 6 is the completely unfolded structure of the RNN. We separate the structure of the standard recurrent neural network shown in Fig. 6 into the following three elements: (1) a given series of training samples x_i (where i = 1, 2, …, n); (2) the hidden layer state unit sequence h_i of the corresponding layer (where i = 1, 2, …, n); and (3) a series of predicted output values y_i (where i = 1, 2, …, n).
The other relevant parameters in the structure that participate in the calculation are as follows: U is the connection weight between the hidden layer unit at the previous time step and the hidden layer unit at the current time step; V is the connection weight between the hidden layer unit and the output layer unit; W is the connection weight between the input unit and the hidden layer unit. For the RNN shown in Fig. 6, we used the BPTT algorithm to complete the corresponding training process. The specific operation process is shown in Algorithms 2 and 3. We used the following objective function to evaluate the loss on each training sample (x_i, y_i) fed into the RNN model: f(θ) = L(y_i, ŷ_i) [Martens and Sutskever (2011)], where L evaluates the deviation between the actual label y_i and the predicted label ŷ_i. The loss function used was the cross-entropy L = −Σ_{i=1}^{n} y_i^T log(ŷ_i).

Algorithm 2 Forward Propagation Algorithm
Input: x_i (i = 1, 2, …, n)
Output: ŷ_i
1: for i from 1 to n do:
2: s_i = U h_{i−1} + W x_i + b
3: h_i = tanh(s_i), where tanh is the hyperbolic tangent activation function used in this study
4: o_i = V h_i + c
5: ŷ_i = softmax(o_i)

Algorithm 3 Back-Propagation Through Time
Input: <x_i, ŷ_i> (i = 1, 2, …, n)
Output: θ′ = {W′, U′, V′, b′, c′}
1: for i from 1 to n do:
2: accumulate the gradients of V and c from the output error at time step i. However, although the parameters W, U, and b are shared, they contribute not only to the output at time t but also to the hidden state h_{t+1} at time t + 1. Therefore, when deriving the gradients of W, U, and b, we need to proceed step-by-step backwards from the last time step.
6: for i from 1 to n do:
7: accumulate the gradients of W, U, and b by propagating the error backwards through time.

Data set description

In order to verify that the model proposed in this study can achieve efficient detection of vehicle behavior, we cooperated with a domestic automobile information security laboratory and carried out experiments on real intelligent vehicles. There are usually two ways to collect the internal parameters of the vehicle: 1. Passively monitor the data on the Electronic Control Suspension (ECS) network through the OBD-II diagnosis interface, or obtain the data from the car through the OBD diagnosis interface using the standard communication protocol. 2. Use a CAN converter to access the vehicle CAN bus and monitor the bus directly. We chose the second approach and collected data by connecting a USB-to-CAN converter directly to the OBD-II port, for two reasons: 1. After directly accessing the CAN bus with the USB-to-CAN converter, we could easily collect the required data by passive monitoring. 2. When obtaining data through the OBD port, it is often necessary to send request parameters to obtain the corresponding data, which is a complicated process. Through statistical analysis, we determined how the various essential parameters propagate on the bus, as shown in Tab. 1. These parameters are sent while the car is working normally, without any request process. After weighing the options, we chose to collect data on the Engine Control Module (ECM) bus, because many important parameters originate from senders on this bus. For example, the engine speed is directly calculated by the ECU and shared with other modules through this bus. Fig. 7 shows the environment configuration of the data collection, with the acquisition adapter connected to the bus where the engine ECU is located (Fig. 8).
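A hedged sketch of such passive CAN monitoring with the python-can library is shown below. The paper used Vehicle Spy with a Kvaser adapter; python-can appears here only as an open substitute, and the channel, bitrate, and frame count are assumptions.

# Passive CAN monitoring sketch (python-can); all configuration values assumed.
import can

bus = can.interface.Bus(bustype='kvaser', channel=0, bitrate=500000)
with open('can_log.csv', 'w') as log:
    log.write('timestamp,arbitration_id,dlc,data\n')
    for _ in range(100_000):          # passively read frames from the bus
        msg = bus.recv(timeout=1.0)
        if msg is None:               # stop when the bus goes quiet
            break
        log.write(f'{msg.timestamp},{msg.arbitration_id:X},{msg.dlc},{msg.data.hex()}\n')
bus.shutdown()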
The adopted CAN acquisition equipment was a Kvaser Leaf Light V2, and the software used for data collection on the computer was Vehicle Spy. We collected nearly 300,000 pieces of data related to the vehicle. We used Vehicle Spy to generate the corresponding data log file for the data collected on the CAN bus. Part of the data sample we collected is shown in Fig. 9. It can be seen from the figure that the collected data consist of a timestamp, data domain, data length, arbitration domain, description domain, and vehicle network status information. Because the arbitration domain and some other parameters involve confidential information shared between the experimental center and the automobile manufacturer, we blurred some regions in the graph.

Data extraction and calibration

We used the sklearn package in Python to initialize the data. It can be seen from Fig. 10 that most of the data transmitted on the engine's CAN bus were transmitted in the basic format. Messages with different IDs send one or more parameters. For example, in the 8-byte data field of a CAN data package, bytes 1 and 2 represent the high 8 bits and low 8 bits of one data item, respectively, while bytes 3 and 4 represent the high 8 bits and low 8 bits of speed, respectively. Moreover, the range of these raw values may span from 0x0000 to 0xFFFF. Therefore, in order to get the real speed or other parameters, we needed to convert them, generally through the linear conversion of Eq. (1), V = kX + B, where V is the actual value, X is the value transmitted in the CAN packet, k is a scale factor, and B is the deviation. The parameters acquired by Vehicle Spy could not be sent directly to the network for training. In order to get the parameters we needed, we also carried out a corresponding data parsing process. We used a Python script to extract the parameters from the Vehicle Spy CAN log and converted them into CSV files, as shown in Fig. 11.

Data normalization

The parameters in Fig. 11 (after column 3) all vary from 0 to 1. This is because the raw parameters vary over different ranges, and the standard remedy is data normalization. The purpose of data normalization is to standardize data of different dimensions and units to remove the differences between the data indicators. After normalization, different pieces of data are on the same level, which is convenient for comprehensive comparative evaluation. There are many ways to normalize, such as the Z-score of Eq. (2), x′ = (x − μ)/σ, and the Min-Max method of Eq. (3), x′ = (x − x_min)/(x_max − x_min). Because this application scenario imposed no special requirement, the Min-Max normalization method was selected, as it is the simplest. It is worth noting that when the Min-Max method is used to normalize the data, it is necessary to use unified, fixed maximum and minimum values; otherwise, the prediction becomes inaccurate due to differences between the two values across experiments.

Data interpolation

Because these parameters were transmitted serially via CAN datagrams, as shown in Fig. 11, data interpolation was also needed. There are many interpolation methods; we chose the forward interpolation method, as shown in Fig. 12. The horizontal axis in the figure is time, and the vertical axis is the parameter. The italicized and underlined values in the figure show the actually received parameter values, and the rest are values inserted using the forward interpolation method.
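A compact pandas sketch of this pre-processing chain: forward interpolation and fixed-bound Min-Max normalization as described above, plus the 10 ms sampling and Pearson screening discussed in the next subsections. The signal names, toy values, and physical ranges are hypothetical stand-ins for the confidential real signals.

# Pre-processing sketch: ffill interpolation, fixed-bound Min-Max scaling,
# 10 ms resampling, and Pearson correlation screening (all values made up).
import numpy as np
import pandas as pd

idx = pd.to_datetime([0, 3, 7, 12, 21], unit='ms')        # irregular arrival times
df = pd.DataFrame({'speed': [12.0, np.nan, 13.5, np.nan, 14.0],
                   'rpm':   [800.0, 900.0, np.nan, 950.0, np.nan]}, index=idx)

df = df.ffill()                                           # forward interpolation
bounds = {'speed': (0.0, 250.0), 'rpm': (0.0, 8000.0)}    # unified fixed ranges (assumed)
for col, (lo, hi) in bounds.items():
    df[col] = (df[col] - lo) / (hi - lo)
df = df.resample('10ms').nearest()                        # value nearest each 10 ms tick
print(df.corr(method='pearson'))                          # Pearson screening of signals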
Data sampling

After analysis, we found that the received data had much redundancy after interpolation. The redundancy was due to the interpolation of the data; in addition, the channel arbitration mechanism of CAN has a certain randomness, so the received data were not evenly distributed in time. For these reasons, the data needed to be sampled. The sampling method adopted here was to round to the nearest 10 milliseconds: each column of data was sampled on a 10 ms grid, and at each tick the data point closest to the tick was taken, so the amount of training data could be significantly reduced.

Data characteristics and correlation analysis

The characterization of the data in the ICV and the corresponding correlation analysis are usually completed in the following two ways. (1) Analyze the data flow generated by the car under normal driving conditions, for example, by analyzing the data packet transmission frequency, the state change rate, and the CAN bus utilization rate under normal driving conditions. (2) With the help of the working principles of the car, analyze the parameters of the car in different states and the correlations between them (such as the relationships among the RPM (Revolutions Per Minute), the speed of the car, and the air intake of the car under normal driving conditions). In the experiment, we used the Pearson correlation coefficient to analyze the collected data and find the group of data with the strongest correlation. As shown in Fig. 13, the overall trends of these data are related.

Figure 13: Changing trend of the automobile parameters

Through analysis, we can usually divide the data transmitted on a CAN bus into two categories according to their apparent characteristic relationships: data with obvious mechanical rules, such as the pedal position of the car and the speed and acceleration of the car, and data without apparent rules, such as the air intake of the engine and the status of the brake pedal. The reason the latter data do not have transparent characteristic relationships is that the corresponding set parameters are adjusted by the driver in real time according to road conditions. Vehicle parameters with corresponding relationships also present a normal threshold range under normal driving conditions, such as the driving speed of the vehicle, the automatic gear of the vehicle, and the speed of the vehicle engine, and their rate of change is likewise bounded by the upper limit of a normal threshold range. Fig. 14 shows that there is a limited rate of change among the speed of the car, the speed of the engine, and the gear under normal driving conditions. However, when we used Vehicle Spy to implement a replay attack or to forge driving state data on the vehicle CAN bus, the rates of change and the corresponding parameter threshold ranges among the three also changed. As shown in Fig. 15, the normal speed information and the replayed speed information are mixed because of the replay attack on the engine speed, which makes the waveform of the signal oscillate. As a result, we can infer that the abnormality occurs at the inflection points of the curve where the oscillation appears. Although we can see from the figure that the replayed speed does not lead to significant changes in the other parameters, we cannot conclude that replay or forgery attacks will never affect other parameters.
When we replay the corresponding abnormal data at a higher frequency, we can easily see a black curve by applying a low-pass filter at the corresponding receiver, as shown in Fig. 16. When the engine speed is replayed at a higher frequency, the corresponding vehicle speed shows a breakpoint, so the replay also has a certain impact on the vehicle speed.

Feature selection

After analyzing the strong correlations of the data in the previous subsection, we selected several vehicle parameters with a strong correlation as the data vectors of the whole model, such as the speed of the car, the engine speed, the engine intake pressure, the accelerator pedal position, and the automatic transmission gear. The time correlation of these parameters and their relationships with each other have definite characteristics. In order to verify that there is a strong correlation between the selected parameters, we used Eq. (4), the Pearson correlation coefficient r(X, Y) = cov(X, Y)/(σ_X σ_Y), to calculate the correlation of the selected parameters, as shown in Tab. 2. We can see that these parameters are positively correlated.

Experiment evaluation

The purpose of this research was to improve the detection efficiency using the sparse-feature-based autoencoder and the recurrent neural network and to improve the convergence speed of the whole network using BPTT. We used the alarm rate and the false alarm rate to evaluate the overall performance of the proposed model. They were calculated using Eqs. (5) and (6): alarm rate = TP/(TP + FN) and false alarm rate = FP/(FP + TN). Here, the true positives (TP) are the abnormal records correctly identified as abnormal, the false positives (FP) are the normal records incorrectly identified as abnormal, the true negatives (TN) are the normal records correctly identified as normal, and the false negatives (FN) are the abnormal records incorrectly identified as normal. We used the currently popular deep learning framework Keras to complete the training process of the model. We completed the corresponding experiment on a laptop. The experimental configuration was an ASUS FL8000U with a Core i7-8550U CPU and 16 GB of memory; the GPU was not used for acceleration. In order to compare the scheme proposed in this study with the machine learning method [Medhat, Ramadan and Talkhan (2015)], in the experiment we mapped the 16-dimensional data features to 48-dimensional data features using one-hot encoding, and these were used as the input of the autoencoder. Therefore, the neural network classifier in this study has 48 input nodes and 2 output nodes. In order to find a good training configuration, we set the number of hidden layer nodes in the neural network to 40, 60, 80, and 100, and the learning rate was set to 0.01, 0.05, and 0.1 during training. Tab. 3 shows the classification accuracy and convergence time of the model under the different parameters. The experimental results in Tab. 3 show that when the hidden layer is set to 80 nodes and the learning rate to 0.1, the model achieves high accuracy on both the training set and the test set. Although the method discussed in this study spends more time training the model, we tried using a GPU or an offline method to carry out the corresponding training process [Yin, Zhu, Fei et al. (2017); Kang and Kang (2016)]. At the same time, Tab. 3 also reflects the autonomous feature learning ability of the sparse autoencoder and the recurrent neural network's handling of the data's temporal characteristics. We can realize real-time detection of abnormal data at the millisecond level.
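Assuming the Keras framework named above, the following is a minimal sketch of the classifier stage under the best-performing setting in Tab. 3 (48 inputs, 80 hidden nodes, learning rate 0.1, two output classes); the SGD optimizer, the sequence-shaped input, and the commented-out training settings are assumptions.

# RNN classifier sketch for the reported best hyperparameters.
from tensorflow.keras import layers, models, optimizers

model = models.Sequential([
    layers.Input(shape=(None, 48)),         # sequences of 48-dim encoded features
    layers.SimpleRNN(80, activation='tanh'),
    layers.Dense(2, activation='softmax'),  # normal vs. abnormal
])
model.compile(optimizer=optimizers.SGD(learning_rate=0.1),
              loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(X_seq, y_onehot, epochs=30, batch_size=64)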
The comparison between the engine speed values predicted by the proposed model and the actual values is shown in Fig. 17. A small error is achieved between these values, and accurate prediction of the data is realized. In order to make the error results in Fig. 17 easier to observe, Fig. 18 shows the variance sequence between the actual and predicted values of the engine speed. Therefore, by exploiting the advantages of deep learning in feature extraction and classification, the model can significantly improve the detection efficiency and achieve high accuracy with a low false alarm rate.

Figure 19: Influence of fake vehicle RPM data on vehicle speed

Fig. 19 shows the signal graph generated when the vehicle driving data pass through the low-pass filter after a forgery attack on the vehicle in the normal driving state. We see that after the forged RPM data are injected into the CAN bus, the calculated results show an impact on the other vehicle parameters, as well as abnormal behavior in the predictions of those parameters. At this point, we can calculate the variance between the real value and the predicted value as a judgment index. If the calculated variance is higher than a specific safety-critical value, as at the abnormal point in Fig. 19, we can flag the corresponding abnormal behavior. In this way, no matter which parameter of the vehicle is forged or attacked, our model can detect the attack accurately with the help of the mechanical relationships among the vehicle data. Based on these test results, we can take emergency response measures, such as informing the administrator to control the communication link of the vehicle, so as to ensure the external network security of the vehicle. The comparison of the proposed scheme with earlier detection models is shown in Fig. 21. From the figure, we can see that the proposed scheme achieves nearly 96% in terms of the TPR index, while only 2%-3% in terms of the FPR index, which means it achieves a low false alarm rate. The figure shows that the model proposed in this study achieves higher detection efficiency and a lower false alarm rate compared with the deep-learning-based IDSs proposed by its predecessors.

Conclusion

Although our model has achieved encouraging results, we acknowledge that it is not perfect, and there is room for further improvement. The role of the Internet of Vehicles in people's lives will keep growing with the continuous development and combination of information technology and automotive machinery technology. However, due to the special limitations of Internet of Vehicles technology and the incompleteness of its existing security technology, government agencies and people of all countries should pay closer attention to the development trend of Internet of Vehicles security issues. Therefore, based on an analysis of the current security problems of the vehicle network, we propose using an advanced autoencoder and recurrent neural networks to improve the detection rate of abnormal behaviors in the vehicle system. Through experiments that evaluated our proposed model on real ICV data, we found that the model can classify vehicle behavior well and improve the safety of the vehicle system to a large extent. In the future, we hope to consider other kinds of deep learning technology to find more efficient solutions for ensuring the safety of the in-vehicle system, as well as to promote the use of these artificial intelligence methods in network security.
We believe that this work can also improve the efficiency of solving network security problems.
10,256.4
2020-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Study on the Collapse Process of Cavitation Bubbles Near the Concave Wall by Lattice Boltzmann Method Pseudo-Potential Model

In this paper, the lattice Boltzmann pseudo-potential model coupled with the Carnahan–Starling (C-S) equation of state and Li's force scheme is used to study the collapse process of cavitation bubbles near the concave wall. It mainly includes the collapse process of single and double cavitation bubbles in the near-wall region. Studies have shown that the collapse velocity of a single cavitation bubble becomes slower as the additional pressure is reduced, and the velocity of the micro-jet decreases accordingly. Moreover, the second collapse of the cavitation bubble cannot be found if the additional pressure is reduced further. When the cavitation bubble is located at different angles to the vertical direction, its collapse direction is always perpendicular to the wall. If the double cavitation bubbles are arranged vertically, the collapse process of the upper bubble becomes quicker as the relative distance increases. When the relative distance between the bubbles is large enough, no second collapse can be found for the upper bubble. On the other hand, when two cavitation bubbles are in the horizontal arrangement, the suppression effect between the cavitation bubbles decreases as the relative distance between the bubbles increases, and the collapse position of the cavitation bubbles moves from the lower part to the upper part.

Introduction

Cavitation is common in hydraulic and marine engineering, such as in water turbines and ship propellers [1]. It may cause great damage to the surrounding structures when the cavitation bubbles collapse in the near-wall region [2]. Therefore, it is vital to reveal the process of cavitation-bubble collapse and the effect of its interaction with the structures: for one thing, this can reduce the damage to the physical structure; for another, it can also be used to clean the surface of the structures. The process of cavitation-bubble collapse has been studied over the past years. Kornfeld and Suvorov first proposed the concept of the micro-jet and believed that the micro-jet was the main cause of structural damage [3]. There are many experimental results on cavitation-bubble collapse near a solid wall. Naudé and Ellis observed the micro-jet process in experiments with a bubble in the near-wall region [4]. Kling and Hammitt described in detail the dynamic process of cavitation-bubble collapse and the damage to aluminum by experimental methods [5,6]. Vogel et al. utilized high-speed camera technology to accurately measure the micro-jet and counter-jet velocities and carefully observed the evolution of the cavitation bubble at different distances between the bubble and the wall during the collapse [7]. Tomita and Kodama investigated a laser-induced cavitation-bubble collapse near a composite surface [8]. These experiments are mainly about bubble collapse near a solid wall.

Pseudo-Potential LBM-MRT

In the present study, the pseudo-potential LBM with multiple relaxation times (LBM-MRT) [15,24] and the external force scheme proposed by Li et al. [17] are used. The governing equation [17,25] can be expressed as follows:

f_α(x + e_α δ_t, t + δ_t) = f_α(x, t) − Λ̄_αβ [f_β(x, t) − f_β^eq(x, t)] + δ_t S_α,  (1)

where f_α is the density distribution function, e_α is the discrete velocity in the α direction, Λ̄ = M⁻¹ΛM is the relaxation matrix, M is the transformation matrix, Λ is the diagonal matrix, f^eq is the equilibrium distribution function, and S is the source term in the velocity space.
For the D2Q9 model, M is the standard transformation matrix given in [26]. The diagonal matrix can be written as Λ = diag(τ_ρ⁻¹, τ_e⁻¹, τ_ζ⁻¹, τ_j⁻¹, τ_q⁻¹, τ_j⁻¹, τ_q⁻¹, τ_υ⁻¹, τ_υ⁻¹). Multiplying both sides of the equation by M, Equation (1) can be rewritten in moment space [27] as follows:

m* = m − Λ(m − m^eq) + δ_t (I − Λ/2) S̄,  (4)

where I is the unit matrix, m_α = M_αβ f_β, m^eq_α = M_αβ f^eq_β, and S̄ = MS. The expression of m^eq is given by m^eq = ρ(1, −2 + 3|v|², 1 − 3|v|², v_x, −v_x, v_y, −v_y, v_x² − v_y², v_x v_y)^T, where ρ = Σ_α f_α is the density and v = Σ_α f_α e_α/ρ + δ_t F/(2ρ) is the macroscopic velocity. F is the total force, including the fluid-fluid interaction force, the solid-fluid interaction force, and the volume force. Because the key point of the present work is to study the interaction between bubbles, the fluid-solid interaction is not included; only the fluid-fluid interaction force is considered, and it is given by [28] as follows:

F(x) = −G φ(x) Σ_α ω(|e_α|²) φ(x + e_α) e_α,

where G is the interaction strength, φ(x) is the interaction potential [29], and ω is the weight coefficient. In the D2Q9 model, ω(1) = 1/3 and ω(2) = 1/12. The interaction potential is taken as φ(x) = √(2(p_eos − ρc_s²)/(Gc²)), in which c = 1 is the lattice constant and c_s = c/√3 is the lattice sound speed. In the present study, the pressure p_eos is calculated by the Carnahan-Starling (C-S) equation of state, which is given by [30] as follows:

p_eos = ρRT (1 + bρ/4 + (bρ/4)² − (bρ/4)³)/(1 − bρ/4)³ − aρ²,

where R is the ideal gas constant, T is the temperature, b = 0.18727RT_c/p_c, a = 0.4963R²T_c²/p_c, T_c is the critical temperature, and p_c is the critical pressure. T_c can be calculated from the parameters a, b, and R. The source term S in Equation (4) is given in [17]; it contains a parameter σ, which affects the thermodynamic consistency and the stability of the model. The streaming process is given by

f_α(x + e_α δ_t, t + δ_t) = f_α*(x, t),  (10)

where f_α* denotes the post-collision distribution.

Model Validation

Firstly, the verification of the model is conducted through Laplace's law. The pressure difference between the interior and the exterior of the droplet can be obtained from Laplace's law, ∆p = p_in − p_out = σ/R, where p_in is the pressure inside the droplet, p_out is the pressure outside the droplet, σ is the surface tension, and R is the radius of the droplet. In the simulation, a droplet is initially suspended at the center of the computational domain with initial radius RR. The domain has a 501 × 501 lattice mesh system. The saturated temperatures of the droplet are 0.5 T_c, 0.6 T_c, and 0.7 T_c, respectively. The periodic boundary is implemented on the four sides of the domain. The terminal value of R at the equilibrium state is slightly different from RR. A series of RRs is selected in the calculation, so a series of relationships between R and ∆p can be obtained. The results are shown in Figure 1. Secondly, the LBM-MRT is validated by the cavitation-bubble collapse process in the near-wall region. In this case, the initialized density field can be given by

ρ(x, y) = (ρ_l + ρ_g)/2 + (ρ_l − ρ_g)/2 · tanh[2(√((x − x_0)² + (y − y_0)²) − RR)/W],

where ρ_l and ρ_g are the liquid and the vapor density, respectively, x_0, y_0 are the coordinates of the bubble's center, RR indicates the radius of the bubble, and W denotes the width of the interface, which is set to 5 in all cases. The units in the present study are all denoted by lattice units. For example, the mass unit is mu, the length unit is lu, the time unit is tu, the corresponding speed unit is lu·tu⁻¹, the density unit is mu·lu⁻³, the pressure unit is mu·lu⁻¹tu⁻², and the lattice speed is c = lu/tu.
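As a small illustration of two pieces defined above, the following NumPy sketch evaluates the C-S equation of state and builds the tanh-profile initial density field. The coexistence densities rho_l and rho_g are assumed values, and T_c is derived from the stated relations for a and b.

# C-S equation of state and tanh-profile density initialization (sketch).
import numpy as np

a, b, R = 0.2, 1.0, 1.0
Tc = 0.3773 * a / (b * R)     # from a = 0.4963 R^2 Tc^2/pc and b = 0.18727 R Tc/pc
T = 0.5 * Tc

def p_cs(rho):
    # Carnahan-Starling equation of state
    eta = b * rho / 4.0
    return rho * R * T * (1 + eta + eta**2 - eta**3) / (1 - eta)**3 - a * rho**2

def init_density(nx, ny, x0, y0, RR, rho_l, rho_g, W=5.0):
    # vapor density inside the bubble, liquid outside, smoothed over width W
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing='ij')
    d = np.sqrt((x - x0) ** 2 + (y - y0) ** 2) - RR
    return (rho_l + rho_g) / 2 + (rho_l - rho_g) / 2 * np.tanh(2 * d / W)

rho = init_density(501, 501, 250, 250, RR=70, rho_l=0.45, rho_g=0.02)  # densities assumed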
In the numerical simulation, RR = 70 lu, and the distance between the center of the cavitation bubble and the wall is 1.5 × RR. In the diagonal matrix, the relaxation times are set as τ_ρ = τ_e = τ_j = 1.0, τ_ζ = τ_q = 0.91, and τ_υ = 0.57. The parameters in the equation of state are set as follows: a = 0.2, b = 1, and T = 0.5 T_c. The initial pressure difference between the inside and outside of the cavitation bubble is ∆p = 0.00543 mu·lu⁻¹tu⁻². The above parameters are used for all cases in this paper. Figure 2 illustrates the comparison of the cavitation-bubble collapse process between the experimental results [31] and the numerical results obtained with the LBM-MRT. It can be seen that the simulation agrees well with the experimental results. The cavitation bubble shrinks under the effect of the pressure difference in the initial stage. Obstructed by the straight wall, the speed of the bubble's bottom wall is small, and the lateral contraction is greater than that in the longitudinal direction, shaping the bubble into an ellipse. As the pressure difference between the inside and outside of the bubble gradually increases, a depression appears and gradually deepens at the top of the cavitation bubble. Then, the first collapse occurs when the upper bubble wall contacts the lower bubble wall. Figure 3a illustrates the shape of the bubble near a concave wall during the collapse process, computed by the boundary integral method [9].
The distance between the bubble center and the wall is 1.7 RR, and the origin of the coordinate system is at X* (= x/RR) = 0, Y* (= y/RR) = 0. It can be observed that the LBM results are basically consistent with the BIM results, which are similar to those of cavitation bubbles collapsing near a flat wall.

Collapse Process of the Cavitation Bubble Near the Concave Wall

After the model validation, the collapse process of a single cavitation bubble at different angles and additional pressures is studied in this section. Moreover, double cavitation bubbles with horizontal and vertical layouts are simulated. The evolution of the pressure field, the pressure at a characteristic point, and the micro-jet are discussed.

Evolution of the Single Bubble under Different Additional Pressures

The simulated layout is shown in Figure 4. The pressure boundary condition [32] is used for the upper boundary, and the half-way bounce-back boundary [33] is adopted for the concave wall.

Figure 4. Simulated layout for Cases 1-4 (RR is the initial radius of the bubble, λ is the distance between the center of the bubble and the concave wall, P_v is the vapor pressure, P_∞ is the ambient pressure, θ is the angle to the vertical, and P is the center point on the concave wall).

In this section, the effect of the additional pressure on the collapse process of a single cavitation bubble is discussed. The bubble is located at λ = 1.6 × RR. Four cases are studied, and the parameters are shown in Table 1. Due to the irregular boundary at the bottom, some special treatments are needed to obtain the information at the bottom boundary. It should be noted that the concave wall in the simulation is not a smooth semicircle but is composed of a series of polylines. The curvature is determined by a certain expression of the coordinates, such as y = x².
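Before turning to the treatment of the unknown populations described next, the following sketch shows one way to mark solid, boundary, and fluid lattice nodes for such a polyline-approximated wall; the domain size and the placement of the y = x² curve are illustrative assumptions.

# Classify lattice nodes for a polyline-approximated concave wall (sketch).
import numpy as np

nx, ny = 200, 200
x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing='ij')
solid = y < ((x - nx // 2) ** 2) / 50.0          # nodes below the curved wall
# boundary nodes: non-solid nodes with at least one solid neighbor
nbr_solid = np.zeros_like(solid)
for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
    nbr_solid |= np.roll(np.roll(solid, dx, 0), dy, 1)
boundary = (~solid) & nbr_solid
fluid = ~solid & ~boundary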
A key problem for the fluid nodes near the boundary is that f_α cannot be obtained in all directions by the streaming process. For convenience, three different kinds of nodes are introduced in the discussion: fluid nodes, boundary nodes, and solid nodes. The details are shown in Figure 5. Unlike the fluid nodes, the boundary nodes cannot obtain all populations in the streaming process, so it is necessary to determine the unknown populations first. For the center node in Figure 5, f_4, f_7, and f_3 can be obtained by the streaming process according to Equation (10). However, f_1, f_2, f_5, f_6, and f_8 cannot be obtained, because the solid nodes are not involved in the calculation. Therefore, additional constraints must be imposed at these nodes. For the halfway bounce-back boundary, the unknown population can be determined by f_i(x, t + δ_t) = f_ī*(x, t), where i is the direction of the unknown population, ī is the opposite direction of i, and f* is the population before streaming (i.e., the post-collision population).
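A minimal sketch of this halfway bounce-back treatment, assuming the common D2Q9 velocity ordering; the demo node uses the unknown directions from the example in the text.

# Halfway bounce-back: each unknown incoming population takes the
# post-collision value of the opposite direction at the same node.
import numpy as np

OPP = [0, 3, 4, 1, 2, 7, 8, 5, 6]   # opposite of each D2Q9 direction (assumed ordering)

def bounce_back(f_new, f_post, node, unknown_dirs):
    # f_new: populations after streaming, shape (9, nx, ny)
    # f_post: post-collision populations before streaming
    x, y = node
    for i in unknown_dirs:
        f_new[i, x, y] = f_post[OPP[i], x, y]
    return f_new

f_new, f_post = np.zeros((9, 4, 4)), np.random.rand(9, 4, 4)
f_new = bounce_back(f_new, f_post, node=(1, 0), unknown_dirs=[1, 2, 5, 6, 8])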
Figures 6 and 7 show the collapse process of the cavitation bubble under different additional pressures. Stage 1 refers to the beginning of the depression process, and Stage 2 refers to the moment at which the first collapse is about to occur. Obviously, with the decrease in the additional pressure around the bubble, the pressure at the top of the bubble also reduces, and the collapse velocity becomes slower accordingly. In Case 1, the collapse process takes only about 900 tu, while in Case 4 it takes about 2000 tu. It is found that, when the distance between the bubble and the wall is fixed, a pressure greater than a critical value is required to produce the second collapse. For example, there is no second collapse of the bubble in Case 4. The first and second collapses both generate pressure waves, and both waves are much larger than the additional pressure. Generally speaking, as the additional pressure around the bubble decreases, the process of cavitation-bubble collapse remains almost the same except for the increase in the collapse time. When ∆p is smaller than a critical value, the second collapse of the cavitation bubble cannot be found. Figure 8 shows the evolution of the pressure at point P under different additional pressures. The pressure generated by the collapse spreads in all directions and finally arrives at the concave wall. As the pressure around the bubble decreases, the collapse velocity reduces; correspondingly, the pressure wave takes more time to spread to the wall. Furthermore, the pressure peak decreases from 0.035 mu·lu⁻¹tu⁻² in Case 1 to 0.018 mu·lu⁻¹tu⁻² in Case 4. The pressure reaches its peak almost instantaneously when it acts on the wall and then begins to fall. Figure 9 shows the evolution of the micro-jet during cavitation-bubble collapse under different additional pressures. In the present study, the measurement of the micro-jet by Plesset and Chapman is adopted [34]. This method can effectively avoid the effect of the virtual velocity in the pseudo-potential model by treating the velocity of the bubble wall at the top of the cavitation bubble as the micro-jet velocity.
When the additional pressure around the bubble decreases, the velocity of the bubble collapse also reduces because the micro-jet's velocity decreases, and the peak of the micro-jet decreases accordingly. However, the peak occurs during the depression of the bubble rather than during the first collapse. Moreover, there is a slight attenuation after the micro-jet velocity reaches its maximum. In Case 4, due to the lower additional pressure around the bubble, the bubble collapse pattern is different from the other cases; for example, there is no extremum for the micro-jet.

Evolution of the Single Bubble at Different Angles to the Vertical Direction

In this section, the effect of the cavitation bubble's angle on the evolution of the collapse is discussed, and the parameters used for Cases 5 and 6 are shown in Table 2.

Table 2. Parameters of the single bubble in Cases 5-6.

Figures 10 and 11 illustrate the collapse process when the cavitation bubble is located at different angles to the vertical direction. In the initial stage, the external pressure of the cavitation bubble is larger than the pressure inside the bubble. Therefore, the cavitation bubble shrinks overall under the pressure difference, and a low-pressure area is formed in the near-wall zone. The moving speed at the bottom of the bubble is lower than at other positions on the interface, and the bubble subsequently evolves into an ellipse due to the blockage of the wall (t = 500 tu). After that, the pressure on the upper surface of the bubble continues to increase, causing the bubble to depress and forming a micro-jet. Unlike the case of a straight wall, the depression direction of the bubble in Cases 5 and 6 is not vertical but perpendicular to the concave wall. Under the effect of the pressure difference, the bubble undergoes its first collapse at t = 910 tu, and the collapse pressure is generated. At this time, the cavitation bubble evolves into a ring shape. The collapse pressure is much greater than the surrounding pressure and can reach 0.045 mu·lu⁻¹tu⁻². The ring-shaped bubble finally collapses under the combined influence of the collapse pressure and the surrounding pressure, which is called the second collapse. The pressure caused by the second collapse is smaller than that of the first, only 0.03 mu·lu⁻¹tu⁻². The direction of the micro-jet velocity changes due to the blockage of the wall surface after the second collapse, resulting in the formation of a vortex around the ring-shaped bubble. Moreover, the vortex lasts for quite a long time, and its existence leads to a low-pressure zone.
Evolution of the Double Bubbles with Vertical Arrangement

In this section, the collapse process of double cavitation bubbles in a vertical arrangement is studied, and the effect of the relative distance between the two cavitation bubbles on the collapse process is discussed. Figure 12 illustrates the computational layout. The parameters and the computational domain are the same as in the previous cases, some of which are shown in Table 3. Figure 13 displays the cavitation-bubble collapse for Cases 7-9. It can be observed that the upper bubble is far away from the rigid wall, so the effect of the wall on it is negligible. The upper bubble shrinks under the surrounding additional pressure and then sags and collapses. This process is similar to the collapse of a single bubble near a rigid wall. As λ2 increases, the pressure above the upper bubble increases, and the collapse velocity rises accordingly. In Case 9, no second collapse of the upper cavitation bubble can be found. From these three cases, it can be seen that the effect of the lower bubble on the upper cavitation bubble is similar to that of a rigid wall. With the increase in λ2, the interval between the first and the second collapses of the upper bubble becomes shorter, and eventually the second collapse does not occur. In Case 7, there is no obvious displacement at the top and bottom of the lower bubble, and the lateral velocities on both sides are relatively large. The lower cavitation bubble is elongated along the vertical direction. It seems that the influences of the upper bubble and the wall on it reach an approximate balance. The lower bubble tends to sag from the middle (t = 1110 tu), which is similar to the collapse of a bubble with rigid walls on both the top and the bottom boundary [35]. It can be observed in Case 7 that a vortex forms after the upper bubble collapses, and the pressure in the vortex zone is negative. In the other cases, the pressure in that zone is relatively close to that of the surrounding environment. The main reason is that the relative distance between the bubbles is rather small in Case 7, and the upper bubble is obviously affected by the obstruction of the lower one. Therefore, the direction of the micro-jet changes accordingly, resulting in a vortex zone, which later generates negative pressure. After the upper bubble collapses, the collapse process of the lower one differs from case to case.
For example, the mutual effect between cavitation bubbles decreases as λ2 increases. Besides, the shrinking center of the lower bubble shifts from its middle to its upper part.

Figure 14 shows the evolution of pressure at point P in Cases 7-9. It can be seen that the evolution of pressure at point P is similar for Cases 7 and 8: the peaks are all around 0.03 mu·lu−1 tu−2, and they appear at almost the same time. As the distance between the upper cavitation bubble and the wall increases, the time for the pressure wave to arrive at the wall also becomes shorter. In Case 9, the pressure peak is much smaller than in the other cases, only 0.018 mu·lu−1 tu−2. It is found that the pressure waves generated by the collapse of the two bubbles counteract each other. It can be seen that when the relative distance between the cavitation bubbles is large enough, the wall surface can be protected.

Figure 15 shows the evolution of the cavitation-bubble micro-jet in Cases 7-9. In the initial stage, the cavitation bubble is in the contraction stage, and the velocity of the bubble's upper surface is the same in all cases. A slight difference appears in the depression stage, when a micro-jet is generated. Around t = 850 tu, as the relative distance between the cavitation bubbles increases, the growth rate of the micro-jet velocity also increases, and its peak ascends accordingly. After the micro-jet velocity reaches its peak, a slight downward trend can be found, similar to the single bubble.

Evolution of the Double Bubbles with Horizontal Arrangement
In this section, the collapse process of double bubbles with horizontal arrangement is simulated, and the effect of the relative distance between the bubbles on the collapse process is discussed. The sketch of the computational layout is shown in Figure 16, and the parameters used are displayed in Table 4.
Figure 17 shows the collapse process of double cavitation bubbles with horizontal arrangement. It can be seen that the cavitation bubbles shrink under the effect of the environmental pressure at the beginning; then each cavitation bubble sags diagonally under the joint action of the wall and the adjacent bubble, generating a micro-jet. As the relative distance between the cavitation bubbles increases, the position of the depression gradually moves upward. With the deepening of the depression, the effect of the relative distance between the bubbles on the collapse process can be clearly observed. In Case 10, the bubble collapses from the lower part first, and it disappears rapidly under the effect of the collapse pressure generated by the lower part; then a low-pressure zone is generated where a vortex forms. In Case 11, the cavitation bubbles undergo the first collapse from the middle, indicating that the suppression effect between cavitation bubbles is nearly equal to the inhibition effect of the wall on the bubble. In Case 12, the upper part collapses first, and the bubbles disappear under the collapse pressure. In Case 10, the pressure in the vortex zone is extremely low, only about 0.001 mu·lu−1 tu−2. The pressure in the low-pressure zones for Case 12 is around 0.005 mu·lu−1 tu−2, which is larger than that in Case 10. Therefore, the pressure in the vortex zone after the collapse is rather lower when the cavitation bubbles are relatively close; in addition, as the distance between them increases, the pressure in this zone increases accordingly.

Discussion
In this section, a brief comparison is made between results by other authors about cavitation bubbles and the simulations in this paper. Mao et al. investigated the single- and double-bubble collapse near a flat wall by the single-relaxation-time lattice Boltzmann method (SRT-LBM) [19]. A series of collapse laws was found by analyzing the density and pressure fields under different initial conditions. Tomita studied the effect of the rigid-surface curvature on the bubble behavior by BIM [9]. In addition, they observed a mushroom-shaped bubble during the bubble-collapse stage and found that the boundary curvature increases the micro-jet velocity. Shervani-Tabar and Rouhollahi investigated a single-bubble collapse near a concave rigid wall by two different methods, BIM and the finite difference method [36]. They found that the micro-jet velocity increased with the decrease in the surface concavity. Xue et al.
numerically studied a single-bubble collapse near a convex wall with different curvatures by the multi-relaxation-time lattice Boltzmann method (MRT-LBM) and obtained a relationship between the micro-jet velocity and the initial pressure differences [37]. In the present study, an improved MRT-LBM model has been used to investigate the collapse of single and double bubbles near a concave rigid wall. It is found that discrepancies exist both in the bubbles' behavior at different angles and in the interactions between bubbles. More details of the work in this paper can be seen in the conclusion.

Conclusions
In this paper, the MRT-LBM model is used to simulate the collapse process of cavitation bubbles near a concave wall. Firstly, the model is verified against experimental results. After that, the model is used to simulate the collapse process of cavitation bubbles near the concave wall, including cases with different additional pressures and angles with the vertical direction, as well as double cavitation bubbles with vertical and horizontal arrangements. Moreover, the evolution of the pressure field, the characteristic-point pressure, and the micro-jet have been discussed in detail. Based on these studies, the following conclusions can be drawn:

1. The collapse process of a cavitation bubble is affected by the pressure of the surrounding environment. When the additional pressure around the environment decreases, the velocity of cavitation-bubble collapse becomes slower, and the duration of the collapsing process increases accordingly. Moreover, no second collapse of the cavitation bubble can be found when the additional pressure is lower than a critical value. When the angle of the cavitation bubble with the vertical direction changes, the collapse process is similar, but the depression direction is perpendicular to the concave wall. After the bubble collapses, a low-pressure zone is generated by the vortex.

2. When the double cavitation bubbles are arranged vertically, as the relative distance between the cavitation bubbles increases, the pressure above the upper bubble increases, and the velocity of collapse rises accordingly. With the increase in the relative distance, the interval between the first and the second collapses of the upper bubble becomes shorter; eventually the second collapse no longer occurs. After the upper bubble collapses, the collapse process of the lower one differs from case to case. For example, the mutual effect between cavitation bubbles decreases as the relative distance increases. Besides, the shrinking center of the lower bubble shifts from its middle to its upper part.

3. When the double cavitation bubbles are arranged horizontally, the mutual effect between the cavitation bubbles gradually decreases as the relative distance between the cavitation bubbles increases. The depression position of the cavitation bubble gradually moves from the lower part to the upper part as the relative distance increases. In Case 11, the cavitation bubbles collapse from the middle under the interaction between the cavitation bubbles and the influence of the wall on the bubbles. After the collapse, the pressure in the vortex zone increases accordingly as the relative distance between the cavitation bubbles increases.
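The simulations above rest on a lattice Boltzmann collision-streaming cycle. As a rough illustration of that cycle only, the following Python sketch implements a single-relaxation-time (BGK) D2Q9 step; it is a simplified stand-in for the improved MRT model actually used in the paper (an MRT scheme relaxes moments with separate rates, and a multiphase forcing term, not shown here, would be needed to sustain a bubble interface). All parameter values are illustrative assumptions.

import numpy as np

# D2Q9 lattice velocities and weights (standard ordering).
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    # Second-order Maxwell-Boltzmann equilibrium with cs^2 = 1/3.
    feq = np.empty((9,) + rho.shape)
    for i, (cx, cy) in enumerate(c):
        cu = 3.0 * (cx * ux + cy * uy)
        feq[i] = w[i] * rho * (1.0 + cu + 0.5 * cu**2 - 1.5 * (ux**2 + uy**2))
    return feq

def bgk_step(f, tau):
    # Macroscopic moments from the distributions.
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward equilibrium with a single time tau.
    f = f - (f - equilibrium(rho, ux, uy)) / tau
    # Streaming with periodic wrap-around (a wall would need bounce-back).
    for i, (cx, cy) in enumerate(c):
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

# Usage: f has shape (9, NX, NY); initialize at rest and iterate.
NX = NY = 64
f = equilibrium(np.ones((NX, NY)), np.zeros((NX, NY)), np.zeros((NX, NY)))
for _ in range(100):
    f = bgk_step(f, tau=0.8)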
9,111.2
2020-08-26T00:00:00.000
[ "Engineering", "Physics" ]
Exploring the Effectiveness of E-Learning in Fostering Innovation and Creative Entrepreneurship in Higher Education

This research aims to explore the effectiveness of implementing E-Learning in increasing the innovation and creativepreneur skills of students in higher education. By focusing on the digital learning environment, this research seeks to identify the impact of using E-Learning platforms on the development of students' creative and entrepreneurial ideas. The research method involves collecting quantitative data through online surveys and qualitative data through in-depth interviews with students and lecturers. The data are then analyzed to measure the effectiveness of E-Learning in supporting innovation and creativepreneurship skills. The results are expected to provide new insight into the potential of E-Learning for increasing students' creativity and entrepreneurial spirit in the digital era, and to provide a basis for developing more effective online learning strategies that support innovation and entrepreneurship in higher education.

INTRODUCTION
Higher education today cannot ignore the paradigm changes that occur as a result of advances in information technology. Innovation in education is becoming increasingly important, and one solution is the application of E-Learning [1]. This digital transformation opens the door to new possibilities for developing innovation and entrepreneurial skills among students [2]. Universities, as higher education institutions, have a great responsibility to prepare students not only with academic knowledge but also with creative and entrepreneurial skills relevant to the demands of the times [3].

In the context of globalization and increasingly fierce competition, students need to be equipped with skills that can increase their competitiveness in an ever-changing job market. E-Learning, as a digital learning tool, provides greater accessibility to educational resources and allows students to learn independently through digital platforms. Therefore, this research explores the extent to which the application of E-Learning can be effective in increasing innovation and creativepreneur skills among university students [4]. The literature review forms the main basis of this research, highlighting the positive impact of E-Learning in facilitating innovative learning and the development of entrepreneurial skills [5]. Several previous studies have indicated that the use of digital technology in the learning process can encourage creative and innovative thinking among students [6]. Therefore, this research is directed at exploring more deeply the effectiveness of implementing E-Learning in increasing innovation and creativepreneurship skills in higher education [7].
The urgency of this problem lies in the pressing need to ensure that the higher education system is able to keep up with current developments and provide optimal support for the development of students as innovators and entrepreneurs. In line with economic dynamics and technological developments, this research is expected to provide a clearer view of the potential of E-Learning as an effective tool for advancing creativity and entrepreneurial spirit among students [8], [9]. The aim of this research is to gain an in-depth understanding of the extent to which E-Learning can be effective in fostering innovation and creativepreneurship skills among college students. By analyzing quantitative and qualitative data, this research seeks to provide insights that can be the basis for developing better online learning strategies to support the growth of students' creativity and entrepreneurial spirit, so that they can become reliable agents of change in society [10].

LITERATURE REVIEW
In detailing the literature relevant to this study of the effectiveness of E-Learning in increasing innovation and creativepreneur skills in higher education, we can explore works that provide a strong theoretical and empirical foundation [11]. One significant source is the book "Teaching in a Digital Age: Guidelines for Designing Teaching and Learning" by Tony Bates [12]. This book discusses in depth the potential of E-Learning to improve the quality of learning and create an environment that supports student creativity. With a focus on effective learning design, Bates highlights how technology can be used to stimulate creative and innovative thinking among students [13].

In addition, the meta-analysis in "Evaluation of Evidence-Based Practices in Online Learning" by Means, Toyama, Murphy, Bakia, and Jones provides a deep understanding of the effectiveness of online learning [14]. Through this research, empirical evidence can be found about the positive impact of E-Learning on student learning achievement and skill development [15].

However, it is also important to pay attention to big-data analytics in the context of E-Learning, as discussed in "Guest Editorial - Learning and Knowledge Analytics: The Rise of Big Data" by Siemens and Gasevic. They highlight the important role of big-data analysis in providing deep insights into student learning patterns, which in turn can make a major contribution to improving creative and entrepreneurial aspects [16].

Meanwhile, the "NMC Horizon Report: 2014 Higher Education Edition" by Johnson, Adams Becker, Estrada, and Freeman is an important reference for identifying emerging educational technology trends [17]. This report provides insight into how technological developments can affect innovation and entrepreneurial skills in the E-Learning context.

Finally, the article by Conole and Dyke, "What are the affordances of information and communication technologies?", provides a basic understanding of how the features of information and communication technology, including E-Learning, can facilitate innovation and entrepreneurial learning. This entire body of literature, with its various perspectives, forms a solid basis for exploring the role of E-Learning in enhancing the creative and entrepreneurial aspects of students in higher education [18].
METHOD
This research aims to explore the impact of implementing E-Learning on the development of innovation and creativepreneur skills among university students [19]. The research method involves qualitative and quantitative approaches in order to gain an in-depth and measurable understanding of the influence of E-Learning in the learning context [20]. The scope of this research includes active students at several universities that have adopted an E-Learning system [21].

1. Approach: This research adopts both qualitative and quantitative approaches. A qualitative approach is used to gain an in-depth understanding of students' experiences in using E-Learning and its impact on creativity and entrepreneurship, while a quantitative approach is used to measure the effectiveness of E-Learning in increasing innovation and creativepreneurship skills through statistical data analysis [22].
2. Scope or Object: This research focuses on college students who use E-Learning. The research object involves the online learning process, student interaction with the E-Learning platform, and its impact on innovative development and entrepreneurial skills.
3. Variable Operational Definition/Research Focus Description:
4. Independent Variable: Implementation of E-Learning.
5. Dependent Variable: Level of student innovation and creativepreneurship skills.
6. Research Focus: Identifying the relationship between the use of E-Learning and increases in student innovation and creativepreneur skills [23].
7. Place: This research is carried out at several universities with a well-implemented E-Learning system. The choice of universities covers a variety of contexts and student characteristics.
8. Population and Sample/Informants: Population: Active students at universities who use E-Learning.
9. Sample: The sample is randomly selected from several universities representing various disciplines and semester levels [24].
10. Material: Secondary data in the form of recordings of student interactions with the E-Learning platform, as well as related literature.
11. Main Tools: Online surveys, in-depth interviews, and observations as the main data-collection instruments [25].
12. Data-collection techniques: Online Survey: to collect quantitative data regarding student perceptions of the effectiveness of E-Learning [26].
13. In-Depth Interviews: to gain deeper qualitative insights into students' experiences and the factors influencing their innovation and entrepreneurship [27].
14. Observation: to directly observe student interactions with the E-Learning platform [28].
15. Statistical Analysis: using statistical software to analyze survey data and identify patterns of relationships between variables [29].

Qualitative Analysis: A thematic approach is used to analyze interview and observation data, focusing on findings related to innovation and entrepreneurial skills. With a combination of qualitative and quantitative approaches, as well as the use of various data-collection and analysis techniques, this research is expected to provide a holistic and in-depth understanding of the role of E-Learning in increasing innovation and creativepreneurship skills among university students [30].
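As a concrete illustration of the statistical-analysis step (item 15 above), the following Python sketch computes a simple correlation between E-Learning usage and the two dependent variables. The file name and column names (survey_responses.csv, elearning_hours, innovation_score, creativepreneur_score) are hypothetical placeholders, not instruments from the study.

import pandas as pd
from scipy import stats

# Hypothetical survey export; one row per student respondent.
df = pd.read_csv("survey_responses.csv")

# Correlate E-Learning usage with each dependent variable.
for outcome in ["innovation_score", "creativepreneur_score"]:
    r, p = stats.pearsonr(df["elearning_hours"], df[outcome])
    print(f"{outcome}: Pearson r = {r:.2f}, p = {p:.4f}")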
Deep interview
Conduct in-depth interviews with experts in the field of business education, entrepreneurs who have implemented the Lean Startup methodology, and academics who specialize in entrepreneurship education. These interviews aim to gather diverse perspectives on the needs, challenges, and opportunities in designing a relevant curriculum.

Case Study
Conduct case studies on institutions that have implemented elements of the Lean Startup methodology in their curriculum. Analyze the impact of this implementation on student learning outcomes and their success after graduation.

Survey
Distribute a survey to students and alumni of business programs to assess their perceptions of the effectiveness of the education they received in preparing them for the world of digital business. The survey also aims to identify the gaps in knowledge and skills that they experience.

Data analysis
Use thematic analysis to identify key themes from interview and survey data. Apply content analysis to case-study data to extract best practices and lessons that can be applied in curriculum design (a minimal coding-tally sketch is given after this section).

Prototype Curriculum Design
Develop a curriculum prototype based on findings from the collected data. This prototype will be tested with focus groups consisting of prospective students, entrepreneurs, and educators to get feedback and iterate on the design.

Validation
Conduct pilots of the designed curriculum modules with student cohorts to test their effectiveness in real learning environments. Collect and analyze feedback from students and teachers to make further adjustments to the curriculum.

This method allows the research not only to identify theoretical principles that should be integrated into digital business study programs but also to understand how these principles can be practically applied in educational settings. The end result is a curriculum framework that can be adapted and adopted by educational institutions wishing to prepare their students for the challenges and opportunities in digital business.

RESULTS AND DISCUSSION
This research reveals important findings regarding the design and implementation of digital business study programs that integrate the Lean Startup methodology. First, an analysis of traditional business curricula compared with the needs of today's digital industry highlights a significant skills gap. Skills such as basic programming, data analysis, UX/UI understanding, and the ability to adapt and solve complex problems, which are increasingly important in the digital economy, are often not emphasized enough. This indicates the need for a more dynamic and responsive curriculum that can adapt to rapid changes in technology and business practices.

The Lean Startup methodology, which emphasizes iterative learning and product development responsive to customer feedback, was enthusiastically received by students and faculty. The implementation of these practices in the curriculum has increased student engagement and given students the tools to face real challenges in business. A prototype curriculum designed on Lean Startup principles demonstrated improvements in student learning outcomes, with many students reporting increased confidence in identifying market opportunities and developing viable business strategies.
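Returning briefly to the data-analysis step of the method above: the thematic analysis behind these findings amounts to coding interview segments and tallying theme frequencies. The Python snippet below sketches such a tally; the theme labels in coded_segments are purely illustrative assumptions, not codes from the study.

from collections import Counter

# Hypothetical theme codes assigned to interview segments during coding.
coded_segments = [
    "mvp_thinking", "customer_validation", "pivot",
    "mvp_thinking", "iteration", "customer_validation",
]

# Frequency table of themes, sorted from most to least common.
for theme, count in Counter(coded_segments).most_common():
    print(f"{theme}: {count}")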
Further discussion of the skills gap emphasizes the importance of education oriented towards skills and competencies that can be directly applied in a professional context. The curriculum should allow students to develop the necessary technical skills while also strengthening their soft skills through simulations, real projects, and interactions with industry professionals.

Figure 2. Business Plan

Discussion of the acceptance of the Lean Startup methodology suggests that this approach can help overcome some of the shortcomings of traditional business education by introducing students to concepts such as MVP, pivots, and customer validation. This emphasizes the importance of experience-oriented learning, where students can learn from failure and success in a controlled and supportive environment.

The prototype curriculum focused on Lean Startup has transformed the students' learning experience, enabling them to engage in a more dynamic and practical learning process. This not only enhances their theoretical knowledge but also strengthens their practical skills, increases students' job readiness, and prepares them to contribute effectively to the ever-evolving digital economy. This research shows that the integration of the Lean Startup methodology into digital business study programs offers significant potential to improve the quality and relevance of entrepreneurship education, providing a framework for educational institutions to design curricula that are adaptive and responsive to the needs of the evolving digital business industry.

The research also yielded in-depth insights into how the Lean Startup methodology can be integrated into digital business curricula to bridge the skills gap between traditional business education and the demands of the modern job market. Comparative analysis between traditional business curricula and digital industry needs shows a significant shortage of the technical and analytical skills required to operate effectively in the digital economy. These skills include not only aspects of information technology such as programming and data analysis, but also the design thinking, innovation, and creativity skills necessary to develop user-centered solutions.
Additionally, soft skills such as communication, teamwork, and adaptive leadership are necessary to lead in a fast-paced and often ambiguous environment. In terms of acceptance of the Lean Startup methodology, this research found that the approach resonates deeply with students and faculty: it is appreciated for its focus on action-based learning and rapid adaptation based on market feedback. A curriculum that adopts Lean Startup principles has been shown to increase student engagement and give students practical tools to face real challenges in business. Students involved in the prototype curriculum reported increased confidence in their ability to identify market opportunities, develop a minimum viable product (MVP), and iterate based on customer feedback.

These findings also suggest that a student-centered approach, which allows students to take ownership of their own learning process, is essential in entrepreneurship education. A flexible and adaptable curriculum, which allows students to explore their interests and apply their learning in real projects, is necessary to meet individual needs and career aspirations. This research further highlights the importance of collaboration between educational institutions and industry to ensure that curricula remain relevant and future-oriented. These partnerships can take a variety of forms, including internships, collaborative projects, and mentorship programs that allow students to gain hands-on experience and build their professional networks. Overall, the results and discussion in this research confirm that effective digital business education requires an innovative, adaptive, and sustainable approach that places students at the center of the learning process and prepares them to become leaders and innovators in an ever-changing global economy.
CONCLUSION
In the Industry 4.0 era, financial technology has undergone an unprecedented transformation, especially with the integration of FinTech, Crowdfunding, and Blockchain. The final conclusion of this research affirms that the integration of these three technologies has the potential to revolutionize the financial services sector, creating a new paradigm that is more inclusive, transparent, and efficient. FinTech, with its ability to simplify financial processes and increase accessibility, has become a major catalyst for innovation in the industry. Crowdfunding, on the other hand, has enabled individuals and small businesses to access funding sources that were previously difficult to reach, facilitating economic growth and financial inclusion. Meanwhile, Blockchain, with its distributed ledger, offers a revolutionary security and transparency solution, addressing many of the challenges faced by traditional financial systems. However, despite this great potential, several obstacles need to be overcome: immature regulations, cybersecurity challenges, and issues related to technology adoption are some of the areas that require special attention.

To realize the full potential of this integration, a balanced approach is needed that both exploits the opportunities offered by new technologies and addresses emerging challenges. In addition, collaboration between stakeholders from the public and private sectors, as well as the academic community, will be key to ensuring that this integration provides maximum benefit for society at large. Education and training will also play an important role in ensuring that individuals and organizations are equipped with the skills and knowledge necessary to utilize these technologies effectively. Thus, this research makes an important contribution to our understanding of the future of financial services in the Industry 4.0 era. By providing valuable insights and concrete recommendations, it serves as a guide for policymakers, industry practitioners, and other stakeholders in formulating strategies and initiatives that will shape a more inclusive, efficient, and sustainable future for the financial industry.

Figure 1. Research Method:
1. Study of literature: Conduct an extensive review of existing literature regarding the Lean Startup methodology, digital business pedagogy, and curriculum design; analyze academic documents and publications to identify best practices and relevant theories.
2.-7. In-depth interviews, case studies, survey, data analysis, prototype curriculum design, and validation, as described in the Method section above.

Grace Hardini: An academic who focuses on Economics and Business. Marviola is part of the Faculty of Economics & Business at IJIIS Incorporation. He has a strong educational background in economics, and his research interests include topics such as macroeconomics, international finance, and development economics. With a high dedication to research, Marviola has been actively involved in various research projects that contribute to understanding and developments in the field of economics. Contact email: <EMAIL_ADDRESS>

Tarisya Khaizure: A professional in the field of Computer Science from Pandawan Incorporation in New Zealand. Tarisya has a strong educational background in computer science and extensive practical experience in software development. His research interests include natural language processing, artificial intelligence, and web-based application development. Tarisya is actively involved in developing new technologies and continues to contribute to advances in the world of computing. Contact email: <EMAIL_ADDRESS>

Godwin: A computer scientist based at Rey Corporation in the United States. Gelard has a solid educational background in computer science and extensive experience in software development and artificial intelligence. The research and projects he participates in are often related to data analysis, cybersecurity, and intelligent-systems development. Gelard is known for his creativity in creating innovative technological solutions and for contributing to the development of computer science. Contact email: gerald.godwin@rey.zone
4,554.2
2024-03-06T00:00:00.000
[ "Education", "Computer Science", "Business" ]
First records of the seamoth, Pegasus nanhaiensis (Actinopterygii: Syngnathiformes: Pegasidae), from the southern South China Sea, with notes on fresh coloration Three seamoth specimens (45.5–56.9 mm standard length; SL) (Syngnathiformes: Pegasidae), originally identified as Pegasus laternarius Cuvier, 1829, but now recognized as representing P. nanhaiensis Zhang, Wang et Lin, 2020, a species recently described from the northern South China Sea off Yangjiang and Beihai, China, were obtained at a local fish market in Maha Chai, Samut Sakhon Province, Thailand on 6 July 2012, having been caught in the northern Gulf of Thailand. In addition, single specimens, reported as P. laternarius or Spinipegasus laternarius from Bidong Island, South China Sea off the Malay Peninsula (46.1 mm SL) and from Ko Kradat, Trat Province, eastern Gulf of Thailand (66.1 mm SL), were re-identified here as P. nanhaiensis. The Thai specimens and the Malaysian record represent the first records of P. nanhaiensis from Thailand and Malaysia, respectively, and from outside Chinese coastal waters. Additionally, the Bidong specimen is the southernmost record for the species. The fresh coloration of P. nanhaiensis is described for the first time.

Pegasus nanhaiensis was originally described on the basis of 17 specimens from the northern South China Sea (off Yangjiang and Beihai) (Zhang et al. 2020), no further specimens having been recorded since. However, three specimens, collected from the northern Gulf of Thailand prior to that description, were re-identified here as P. nanhaiensis, two having been reported as P. laternarius by Matsunuma (2013). These three specimens, therefore, represent the first records of P. nanhaiensis from the Gulf of Thailand and the first records outside Chinese coastal waters. In addition, previous records of P. laternarius (or as Spinipegasus laternarius) from the eastern Gulf of Thailand and the eastern Malay Peninsula were re-identified here as P. nanhaiensis. As Zhang et al. (2020) described the coloration of dry specimens only, a fresh color description of P. nanhaiensis is provided here for the first time.

Methods
Counts and measurements followed Osterhage et al. (2016) and Zhang et al. (2020). Measurements were made to the nearest 0.1 mm with digital calipers under a dissecting microscope. Standard length is abbreviated as SL. Terminology of body parts and determination of sex followed Palsson and Pietsch (1989). The following description was based solely on the three specimens from the northern Gulf of Thailand (Figs. 1-3). Photographs of the lateral view of tail rings I-VI (Fig. 3) were taken with a Nikon D850 camera using the internal focus-bracketing function (focus step width 1, number of shots 30); a set of multifocal images was then collated into an overall well-focused composite image using CombineZP (free software, available at https://combinezp.software.informer.com). Institutional codes follow Sabaj (2020).

Description
Measurements are given in Table 1. Body depressed, encased in bony plates. Eyes not visible in ventral view. Rostrum of male long, club-shaped, with many small surface spines; that of female very short, pointed. Mouth small, inferior, toothless. Gill opening restricted to small dorsolateral hole behind head. Two rows each with two small tubercles on dorsum of head.
Carapace comprising three pairs of dorsal plates (d 1-3), four pairs of dorsolateral plates (dl 1-4), paired superior pectoral-fin plates (pp.s.), and two paired extralateral plates (el 1-2); rounded hump-like tubercles on each dorsal plate (d 1-3); small posteriorly directed tubercles on lateral edges of each dorsolateral plate (dl 1-4) [KAUM-I. 47680 with hook-shaped tubercle between paired dorsal plates (d 2); absent in KAUM-I. 47679 and 47681]. Plastron comprising five paired ventrolateral plates (vl 1-5), paired gular plates (g), pectoral plates (p), ventral plates (v), anal plates (a) and inferior pectoral-fin plates (pp.i.), and an unpaired pre-anal plate (ip). Anus located between pre-anal plate and tail ring I. KAUM-I. 47679 with 6 inwardly directed spines (7 and 5 in KAUM-I. 47680 and 47681, respectively) on dorsal surface of ventrolateral plate (vl 1).

Fig. 4. Distributional records of Pegasus nanhaiensis. Yellow stars: type series localities (black arrow: type locality); red circles and striped area: localities of presently reported specimens (specimens from northern Gulf of Thailand were obtained at a fish market; their approximate collection locality indicated).

Small central tubercles on each pectoral and ventral plate; interventral and pre-anal plates with bulge, the latter plate with posteriorly directed tubercle; small, posteriorly directed tubercles on lateral edges of each vl 2-vl 4. Tail elongate, with 11 tail rings (I-XI); 9th and 10th tail rings fused together, anterior 8 rings mobile; small, posteriorly directed tubercles on corners of each tail ring, their tips sharply pointed; tubercles smaller on posterior tail rings; anteriorly directed spines on anterior of tubercles on tail rings IX, X, and XI; two paired caudolateral plates overlapping junctions between tail rings II and III and IV and V; dorsal surface of last tail ring lacking spine. Wing-like pectoral fins large, inserted horizontally, with 11 rays (10 and 12 rays on left and right sides, respectively, in KAUM-I. 47679), 5th ray stout, thicker than other rays. Pelvic fins with 1 spine and 2 rays; each pelvic fin separate, without membrane, inserted into an unpaired interventral plate; first spine very long, extended posteriorly. Dorsal and anal fins short, each with 5 soft rays, extending from center of dorsal and ventral tail ring II to center of tail ring IV, respectively.

Discussion
The presently reported specimens were consistent with the diagnosis of Pegasus nanhaiensis provided by Zhang et al. (2020), all having a rounded hump-like tubercle on each of dorsal plates I, II, and III; clear, distinctly bounded hexagonal patterns on the dorsal plates (d 1-3) and dorsolateral plates (dl 1-4); two paired caudolateral plates overlapping the junctions between tail rings II and III and IV and V (Fig. 3); and a bulge on the margin of the ventral plate connecting with the paired pelvic fins. Although the rostrum length in the female and the rostrum-tip width in males in this study differed slightly from the original description (6.9% of SL and 4.9%-5.0% of SL, respectively, in the presently reported specimens vs. 4.8%-6.3% and 3.0%-4.8%, respectively, in the type series; Table 1), such minor differences were regarded here as intraspecific variation. Pegasus nanhaiensis is similar to P.
laternarius in sharing 11 tail rings, a thickened fifth pectoral-fin ray, the fused 9th and 10th tail rings, and a wider carapace (carapace width 28.8%-37.0% of SL in the former, 24.7%-35.8% in the latter), whereas other congeners have 12 (in P. tetrabelos and P. volitans) or 14 (in P. lancifer) tail rings, a normal (not thickened) fifth pectoral-fin ray (in P. lancifer and P. volitans), the posterior 3 (in P. tetrabelos and P. volitans) or 7 (in P. lancifer) tail rings fused together, and a slender carapace (21. . Pegasus nanhaiensis can be distinguished from P. laternarius by the above-mentioned diagnostic characters (the latter with a pointed, roughly triangular tubercle on each of dorsal plates I, II, and III; no hexagonal pattern on the dorsal plates; three paired caudolateral plates on tail rings II and III, III and IV, and IV and V) (Palsson and Pietsch 1989; Osterhage et al. 2016; Zhang et al. 2020; this study). In addition, 16S rDNA and COI analyses placed P. nanhaiensis in a different clade from P. laternarius, separated by a genetic distance of 3.51-3.53 percentage points (Zhang et al. 2020).

Pegasus nanhaiensis was previously known only from the type specimens from the northern South China Sea, off Yangjiang and Beihai, China (Zhang et al. 2020), the three specimens described herein representing the first records of P. nanhaiensis from the Gulf of Thailand. In addition, a single specimen (ZMUC P 842, 66.1 mm SL), reported as P. laternarius by Palsson and Pietsch (1989: 23, fig. 11) from Ko Kradat, eastern Gulf of Thailand, and a single specimen (FRLM 55093, 46.1 mm SL), reported as Spinipegasus laternarius (Cuvier, 1829) by Hibino (2021: 14, unnumbered figs.) from off Bidong Island, east of the Malay Peninsula, South China Sea, were re-identified here as P. nanhaiensis, based on the clear, distinctly bounded hexagonal patterns on the dorsal plates (d 1-3) and dorsolateral plates (dl 1-4) visible in their respective photographs. The Bidong specimen represents the southernmost record of the species (Fig. 4), suggesting that P. nanhaiensis is widely distributed in coastal waters of the South China Sea.

The coloration of P. nanhaiensis was previously known only from dried specimens (Zhang et al. 2020), the fresh color description of the species being provided here for the first time. Although the dorsal and lateral body surfaces were dark brown, and the first four segments of the tail rings darker than the remaining tail rings, in the dried specimens (Zhang et al. 2020), the dorsal surface was yellow to dark yellowish-brown, and tail rings I-IV and the posterior half of VII and VIII were brown (remaining rings yellowish-white), in the presently reported fresh specimens from Thailand. The clear hexagonal patterns on the surface of the dorsal plate, found in fresh specimens of P. nanhaiensis (Fig. 1), were lost in preserved specimens (Fig. 2), which became indistinguishable from preserved P. laternarius on this basis (Figs. 2 and 5).

donating specimens and G. Hardy (Ngunguru, New Zealand) for reading the manuscript and providing help with English. This study was supported in part by JSPS KAKENHI Grant Numbers 20H03311 and 21H03651; the JSPS Core-to-core CREPSUM JPJSC-CB20200009; and the "Establishment of Glocal Research and Education Network in the Amami Islands" project of Kagoshima University, adopted by the Ministry of Education, Culture, Sports, Science, and Technology, Japan.
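The Methods section above notes that multifocal photographs of the tail rings were merged into a single well-focused composite with CombineZP. A minimal sketch of the same focus-stacking idea in Python with OpenCV is shown below; the file pattern is a hypothetical placeholder, and real stacking software also aligns frames first, which this sketch omits.

import glob
import cv2
import numpy as np

# Load the multifocal frames (hypothetical file pattern).
frames = [cv2.imread(p) for p in sorted(glob.glob("tail_rings_*.png"))]

def sharpness(img):
    # Per-pixel sharpness: absolute Laplacian of the grayscale image.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return np.abs(cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F))

stack = np.stack(frames)                           # (n, H, W, 3)
sharp = np.stack([sharpness(f) for f in frames])   # (n, H, W)

# For each pixel, keep the color from the sharpest frame.
best = sharp.argmax(axis=0)                        # (H, W) frame index
h, w = best.shape
composite = stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]

cv2.imwrite("composite.png", composite)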
2,210.6
2022-01-11T00:00:00.000
[ "Environmental Science", "Biology" ]
College English Writing Teaching Design Based on Constructivist Mode Constructivist theory, a key branch of cognitive psychology, is universally recognized as the theoretical basis for innovating traditional teaching. This paper attempts to apply constructivism to writing education, to help guide writing teaching design and class construction, in order to explore a proper, student-tailored writing-class teaching mode.

Introduction
Constructivist theory has long exerted great influence throughout the world on all fields, especially teaching and the classroom. The 21st century has witnessed a tremendous change in China's education, which has experienced a transformation from knowledge-cramming-oriented teaching to education oriented toward training students' innovation and integral qualities. In order to satisfy such needs, constructivism-guided teaching modes and methods have been applied increasingly in modern writing-teaching practice. This paper is aimed at the exploration of college English writing-class modes and teaching methods based on constructivism.

Learning Theory
According to constructivism, learners acquire their knowledge of the outside world in the process of mutual interaction with their surroundings, developing their own cognitive structure (Guo Xuan, 2006). The theory emphasizes the central position of the students. It requires the students' role to shift from stimulated passive receivers and objects into subjects processing knowledge and active constructors of meanings, as well as the teachers' part from mere knowledge givers into helpers and promoters of students. This helps prompt an effective revolution of the traditional teaching concepts, modes, and methods.

Teaching Mode
Under the guidance of learning theory, elements in the learning environment are fully utilized to help students build their knowledge and meanings. In this mode, the learning environment includes circumstances, cooperation, conversation, and construction. The teaching course involves teachers, students, teaching materials, media, etc., whose functions are in essence different from the traditional ones. Students become active, while teachers become organizers and supervisors (Cruickshank, D. R. et al., 2006). Teaching materials are no longer the only learning content, but something students feel like pursuing. Media also become cognitive instruments for creating situations, cooperating, and communicating for students and instructors, rather than mere means and methods for teachers.

Teaching Methods
Teaching methods guided by constructivism involve three types, namely Scaffolding Instruction, Anchored Instruction, and Random Access Instruction, the first two of which are applied in this research.
Scaffolding Instruction can trace its origin back to the Zone of Proximal Development theory proposed by the famous Soviet psychologist Vygotsky (Liu Dianzhong, 2007). This theory holds the distance between children's actual level of solving problems independently (the first development) and their potential level of settling problems under instructors' guidance (the second development) to be the "Zone of Proximal Development" (see Figure 1). The scaffolding teaching method stresses a learning theme, around which teachers construct a conceptual framework. With the help of tutors, students improve within the frame to achieve the final target of knowledge construction. In writing, teachers constantly lead students from one writing level up to another, higher one. For instance, learners can be given a theme, "My hometown", through which instructors can combine the previous description type with the new point, exposition, to enable students to grasp the new knowledge while mastering the old, raising writing skills step by step.

Anchored Instruction needs teachers to provide infectious real events and problems in class, which are compared to an "anchor" (Liu Yang, 2005). Once these events and problems are set, the teaching contents and progress are "anchored". The materials for writing projects mostly originate from real-life events. Writing comes from, and rises above, real life. Teachers need to lead students to pay close attention to what is going on around them. For example, when the topic "Introduction of a Chinese traditional festival" is given, it is not difficult to recall some rituals we observe for each custom (see Figure 2).

Random Access Instruction aims to let learners gain a comprehensive and intensive command of all knowledge and skills through many "accesses" to the same contents. Learning the same content should be done many times, in different contexts, with various targets, and with a view to diverse sides of the questions (Gao Wen, 1999). This mode enables learners to get a new understanding of knowledge (Xu Lihua, 2012).

In writing class, the writing teaching courses based on constructivism should involve such teaching links as context-setting, cooperative learning, exchange and discussion, and meaning-constructing.

Learners' Position
The student-centered concept in teaching design is the core principle of constructivism, which is embodied by three elements: pioneering spirit, knowledge exhibition, and self-feedback realization (Gagnon, G. W. et al., 2001). In writing, teachers should help students take their own experiences and ideas as the original material and integrate them with writing knowledge input from the writing course, creating their own unique essays. Meanwhile, learners can review and reread their works to achieve self-feedback.

Learning Context
Constructivism emphasizes the great influence of context on meanings and believes in the close relationship between learning and a certain social and cultural background. Learning context is an indispensable factor in writing a good article.

Cooperative Learning
The interaction and cooperation between learners and their surroundings are key to meaning construction. Under teachers' guidance and organization, students get together for discussion, communication, and debate about what and how to write, enabling the whole group (including teachers and each student) to share all ideas and intellect.
Learning Environment
Constructivism stresses the careful design of the learning environment rather than the teaching one. It is where learners can explore freely and study independently (Dick, W. et al., 2005). Our writing-course design makes full use of computers and the Internet to create a learning website for the students to seek materials, enjoy works of high quality, post their own articles, and evaluate and learn from others, providing a good learning atmosphere for the course as well as offering more opportunities for learners to interact liberally and actively.

Learning Materials
All resources in the course need to support learners' active exploration and knowledge acquisition, not merely instructors' presentation (Rowntree, D., 1990). Our teachers' job is to give necessary and adequate aid and instruction as to where, how, and how effectively to utilize these writing resources.

Learning Aims
The whole course design should be centered on meaning construction, which means the ultimate aim of studying is the acquisition of methods (Jiang Mei, 2007). Each writing course involves methods introduction, theme discussion, individual or group practice, etc., achieving meaning construction for learners.

Designing the Learning Environment
The writing course for this semester is held in a multimedia classroom, with adequate space, a free and open atmosphere, and multimedia equipment such as computers and slide projectors. Students can conduct mutual negotiations, group discussions, and individual design and display, thus improving and perfecting their own writing skills and works.

Designing the Evaluation of Learning Effect
Our writing course involves two individual evaluations. One is to let students assess and grade their own achievement, including conception, outline-making, the writing process, correction, and self-feedback. The other is to have students fill in a summary form about the writing lesson, making a qualitative evaluation of their mastery of writing knowledge and skills, their attitudes and methods, and their shortcomings and improvement.

Designing Intensive Training
The intensive training of the writing lesson contains input and output. Input means that during before- and after-class time students should read, appreciate, digest and absorb, and study and imitate large amounts of excellent famous works. Output is designed to arouse students' interest and desire in writing through the input, enabling them to construct and create their own inspiration and good works.

Teachers' Leading Role and Position
Just as a play is completed by the cooperation of the director and actors, a course is accomplished by the joint efforts of the teacher and students. The supervision of a teacher and the study of students are to a course what the guidance of a director and the performance of actors are to a play. Writing-course education based on the constructivist principle implies heavier responsibilities, a greater role, and a higher position for the teacher (Dou Shude, 2009). This writing course displays the author's careful preparation before the lesson, enlightenment and organization during the course, and feedback and complement after class, showing the teacher's leading role and position.
Real Learning Context
Learning is the process in which learners actively build the objective world, which requires real-life materials and prototypes in class (Li Qun, 2003). However, if the materials, without the teacher's planning and design, are presented to students in a disorderly way, study becomes aimless and blind exploration, communication and discussion turn into boundless free talk, and meaning construction is in vain, resulting in chaotic teaching activities and situations, which the author experienced previously. Accordingly, in this writing-course design, much time was spent collecting, organizing, and designing the resources and courseware, so as to conduct the writing course well and smoothly.

Applicability of the Constructivist Teaching Mode
The writing course is strongly practical, which suits the constructivist approach. But courses emphasizing theoretical and abstract knowledge should not be taught only through direct experience. Consequently, teachers are expected to select teaching methods and modes in accordance with actual teaching conditions in the course design. The teaching and learning theories should be applied to pedagogical practice tailored to diverse lessons and individuals.

Conclusion
In conclusion, concerning teaching modes and instructing methods, constructivism revolutionizes the traditional teaching patterns. However, constructivist theory must go through development and refinement in practice, and at the same time it should be combined with other theories, patterns, or modes in order to seek better teaching methods and learning instructions for a promising future of college English writing education as well as college English education in general.
2,227.4
2015-01-27T00:00:00.000
[ "Education", "Linguistics" ]
Dynamics of an Inverting Tippe Top The existing results about inversion of a tippe top (TT) establish stability of asymptotic solutions and prove inversion by using the LaSalle theorem. Dynamical behaviour of inverting solutions has only been explored numerically and with the use of certain perturbation techniques. The aim of this paper is to provide analytical arguments showing oscillatory behaviour of TT through the use of the main equation for the TT. The main equation describes time evolution of the inclination angle $\theta(t)$ within an effective potential $V(\cos\theta,D(t),\lambda)$ that is deforming during the inversion. We prove here that $V(\cos\theta,D(t),\lambda)$ has only one minimum which (if Jellett's integral is above a threshold value $\lambda>\lambda_{\text{thres}}=\frac{\sqrt{mgR^3I_3\alpha}(1+\alpha)^2}{\sqrt{1+\alpha-\gamma}}$ and $1-\alpha^2<\gamma=\frac{I_1}{I_3}<1$ holds) moves during the inversion from a neighbourhood of $\theta=0$ to a neighbourhood of $\theta=\pi$. This allows us to conclude that $\theta(t)$ is an oscillatory function. Estimates for a maximal value of the oscillation period of $\theta(t)$ are given.

Introduction
A tippe top (TT) is constructed as a truncated axisymmetric sphere with a small peg as its handle. The top is spun on a flat surface with the peg pointing upward. If the initial rotation is fast enough, the top will start to turn upside down until it ends up spinning on its peg. We call this interesting phenomenon an inversion. It is known that the TT inverts when the physical parameters satisfy the condition $1-\alpha<\gamma=\frac{I_1}{I_3}<1+\alpha$, where $0<\alpha<1$ is the eccentricity of the center of mass and $I_1$, $I_3$ are the main moments of inertia.

The TT and the inversion phenomenon have been studied extensively throughout the years, but the dynamics of inversion has proven to be a difficult problem. This is because even the most simplified model for the rolling and gliding TT is a non-integrable dynamical system with at least 6 degrees of freedom. The focus in many works has been on the asymptotics of the TT [1,5,9,10,13] or on numerical simulations for a TT [3,11,18].

In this paper we study equations of motion for a rolling and gliding TT in the case of inverting solutions and analyse dynamical properties of such solutions through the main equation for the TT [12,14]. We study the main equation for the TT for a subset of parameters satisfying $1-\alpha^2<\gamma<1$ and $\frac{1-\gamma}{\gamma+\alpha^2-1}=\frac{mR^2}{I_3}$, when it acquires a simpler form which enables detailed analysis of the deformation of the effective potential $V(\cos\theta,D,\lambda)$ during the inversion. We show that, during the inversion, a minimum of the effective potential moves from the neighbourhood of $\theta=0$ to the neighbourhood of $\theta=\pi$, and therefore the inclination angle $\theta(t)$ oscillates within a nutational band that moves from the north pole to the south pole of the unit sphere $S^2$. We also give estimates for the period of nutation of the symmetry axis.

The tippe top model
We model the TT as an axisymmetric sphere of mass m and radius R which is in instantaneous contact with the supporting plane at the point A. The center of mass CM is shifted from the geometric center O along its symmetry axis by αR, where 0 < α < 1. We choose a fixed inertial reference frame (X̂, Ŷ, Ẑ) with X̂ and Ŷ parallel to the supporting plane and with vertical Ẑ. We place the origin of this system in the supporting plane.
Let $(\hat x,\hat y,\hat z)$ be a frame defined through rotation around $\hat Z$ by an angle $\varphi$, where $\varphi$ is the angle between the plane spanned by $\hat X$ and $\hat Z$ and the plane spanned by the points $CM$, $O$ and $A$. The third reference frame $(\hat 1,\hat 2,\hat 3)$, with origin at $CM$, is defined by rotating $(\hat x,\hat y,\hat z)$ by an angle $\theta$ around $\hat y$. Thus $\hat 3$ is parallel to the symmetry axis, and $\theta$ is the angle between $\hat z$ and $\hat 3$. This frame is not fully fixed in the body. The axis $\hat 2$ points behind the plane of the picture in Fig. 1. We let $\boldsymbol s$ denote the position of $CM$ w.r.t. the origin of the frame $(\hat X,\hat Y,\hat Z)$, and the vector from $CM$ to $A$ is $\boldsymbol a=R(\alpha\hat 3-\hat z)$. The orientation of the body w.r.t. the inertial reference frame $(\hat X,\hat Y,\hat Z)$ is described by the Euler angles $(\theta,\varphi,\psi)$, where $\psi$ is the rotation angle of the sphere about the symmetry axis. With this notation, the angular velocity of the TT is $\boldsymbol\omega=-\dot\varphi\sin\theta\,\hat 1+\dot\theta\,\hat 2+(\dot\psi+\dot\varphi\cos\theta)\,\hat 3$, and we denote $\omega_3:=\dot\psi+\dot\varphi\cos\theta$. The principal moments of inertia along the axes $(\hat 1,\hat 2,\hat 3)$ are denoted by $I_1=I_2$ and $I_3$, so the inertia tensor $\mathbb{I}$ has components $(I_1,I_1,I_3)$ with respect to the $(\hat 1,\hat 2,\hat 3)$-frame. The axes $\hat 1$ and $\hat 2$ are principal axes due to the axisymmetry of the TT. The equations of motion for the TT are the Newton equations (1) for the rolling and gliding rigid body, where $\boldsymbol F$ is the external force acting on the TT at the supporting point $A$ and $\boldsymbol L=\mathbb{I}\boldsymbol\omega$ is the angular momentum w.r.t. $CM$. We assume that the TT is always in contact with the plane at $A$, so $\hat z\cdot(\boldsymbol a+\boldsymbol s)=0$ holds at all times. This system is known to admit Jellett's integral of motion $\lambda=-\boldsymbol L\cdot\boldsymbol a=RI_1\dot\varphi\sin^2\theta-RI_3\omega_3(\alpha-\cos\theta)$ (without loss of generality, we will assume in this paper that $\lambda$ is positive). The contact condition determines the vertical part of the external force, but the planar parts must be specified to make system (1) complete. We assume that the contact force has the form $\boldsymbol F=g_n\hat z-\mu g_n\boldsymbol v_A$, where $g_n\ge 0$ is the normal force and $-\mu g_n\boldsymbol v_A$ is a viscous-type friction force, acting against the gliding velocity $\boldsymbol v_A$. The quantity $\mu(\boldsymbol L,\hat 3,\dot{\boldsymbol s},\boldsymbol s,t)\ge 0$ is a friction coefficient. For this model of the rolling and gliding TT, it is easy to see [4,15] that the energy is decreasing, $\dot E=\boldsymbol F\cdot\boldsymbol v_A<0$, and that the $\hat y$ component of the friction force is the only force creating the torque necessary for transferring the rotational energy into the potential energy, thus lifting the $CM$ of the TT. This mechanism shows that the inversion phenomenon is created by the gliding friction. The asymptotic properties of this model have been analysed in previous works [1,5,9,10,13]. In the nongliding case, $\boldsymbol v_A=0$, the possible motions for the TT are either spinning in the upright ($\theta=0$) or in the inverted ($\theta=\pi$) position, or rolling around with fixed $CM$ with an inclination angle $\theta\in(0,\pi)$. The inclined rolling solutions are called tumbling solutions. If $1-\alpha<\gamma<1+\alpha$, where $\gamma=I_1/I_3$, every angle in the interval $(0,\pi)$ determines an admissible tumbling solution. Further, by a LaSalle-type theorem [13], it is known that for initial conditions such that the absolute value of the Jellett integral $|\lambda|$ is above the threshold value $\lambda_{\text{thres}}$, only the inverted spinning position is a stable asymptotic solution. For a TT built such that it satisfies the parameter condition $1-\alpha<\gamma<1+\alpha$ and for initial conditions with $\boldsymbol L\cdot\hat z$ such that $\lambda>\lambda_{\text{thres}}$, the inversion can take place.
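Since both the angular velocity and Jellett's integral are given above in closed form, a short numerical sketch can make the kinematics concrete. The sketch below is illustrative only: the values of $R$, $\alpha$, $I_1$, $I_3$ and the Euler-angle state are assumptions, not parameters from this paper.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper)
R, alpha = 0.02, 0.3          # radius [m], eccentricity of CM
I1, I3 = 8e-7, 9e-7           # principal moments of inertia [kg m^2]

def angular_velocity(theta, phi_dot, theta_dot, psi_dot):
    """Components of omega in the (1,2,3) frame:
    omega = -phi_dot*sin(theta) e1 + theta_dot e2 + (psi_dot + phi_dot*cos(theta)) e3."""
    return np.array([-phi_dot * np.sin(theta),
                     theta_dot,
                     psi_dot + phi_dot * np.cos(theta)])

def jellett(theta, phi_dot, omega3):
    """Jellett's integral: lambda = R*I1*phi_dot*sin^2(theta) - R*I3*omega3*(alpha - cos(theta))."""
    return R * I1 * phi_dot * np.sin(theta)**2 - R * I3 * omega3 * (alpha - np.cos(theta))

# A fast initial spin near the upright position theta ~ 0
theta, phi_dot, theta_dot, psi_dot = 0.1, 0.5, 0.0, 600.0
omega = angular_velocity(theta, phi_dot, theta_dot, psi_dot)
lam = jellett(theta, phi_dot, omega[2])
print("omega in body frame:", omega)
print("Jellett's integral :", lam)
```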
Since we are primarily interested in the dynamics of inversion and we want to consider solutions describing an inverting TT, the basic assumptions are that the TT in question satisfies the parameter constraint and that we have initial conditions such that $\lambda$ is above the threshold. Then we have a situation where an inverting solution becomes the only stable asymptotic solution, so the TT has to invert. Our aim is to describe the dynamics of inverting solutions. In our particular model, the assumptions about the reaction force $\boldsymbol F$ and the contact constraint yield the reduced equations of motion (2) for the rolling and gliding TT, where $\boldsymbol r=\boldsymbol s-(\boldsymbol s\cdot\hat z)\hat z$. We write the gliding velocity as $\boldsymbol v_A=\nu_x\cos\theta\,\hat 1+\nu_y\,\hat 2+\nu_x\sin\theta\,\hat 3$, where $\nu_x$, $\nu_y$ are the velocities in the $\hat 2\times\hat z$ and $\hat 2$ directions. Equations (2) can be written in the Euler form and then solved for the highest derivative of each of the variables $(\theta,\varphi,\omega_3,\nu_x,\nu_y)$. We then get a system which, if we add the equation $\frac{d}{dt}\theta=\dot\theta$, becomes a dynamical system of the form $(\dot\theta,\ddot\theta,\dot\varphi,\dot\omega_3,\dot\nu_x,\dot\nu_y)=(h_1(\theta,\dots,\nu_y),\dots,h_6(\theta,\dots,\nu_y))$. The value of the normal force $g_n$ can be determined from the contact constraint $(\boldsymbol a+\boldsymbol s)\cdot\hat z=0$, and it is
$$g_n=\frac{mgI_1+mR\alpha\big(\cos\theta(I_1\dot\varphi^2\sin^2\theta+I_1\dot\theta^2)-I_3\dot\varphi\omega_3\sin^2\theta\big)}{I_1+mR^2\alpha^2\sin^2\theta}.$$
We see that we get a complicated, nonlinear system for 6 unknowns.

The main equation for the tippe top
For further study of inverting solutions we need to clarify the logic of applying the main equation for the tippe top (METT) to analysing the motion of the TT. We also need to recall properties of the TT equations when the TT is only rolling on the supporting surface and the gliding velocity vanishes, $\boldsymbol v_A=\dot{\boldsymbol s}+\boldsymbol\omega\times\boldsymbol a=0$. It is the well-known [2,7] integrable case of the rolling axisymmetric sphere that was first separated by Chaplygin. We need to explain how the structure of the separation equations motivates the introduction of the METT and how this equation differs from the classical separation equation. For the purely rolling axisymmetric sphere, the constraint $\boldsymbol v_A=\dot{\boldsymbol s}+\boldsymbol\omega\times\boldsymbol a=0$ implies that the equations of motion (1) reduce to a closed system (9) for the vectors $\hat 3$ and $\boldsymbol\omega$. For this system the external force is dynamically determined. In the Euler angle form the equations give a fourth-order dynamical system for $(\theta,\dot\theta,\dot\varphi,\omega_3)$. The system (9) admits three integrals of motion. Since the system is conservative, the energy is an integral of motion. We also have Jellett's integral $\lambda=RI_1\dot\varphi\sin^2\theta-RI_3\omega_3(\alpha-\cos\theta)$ as well as the Routh integral $D=I_3\omega_3\sqrt{d(\cos\theta)}$. They allow us to eliminate $\dot\varphi$ and $\omega_3$ from the expression of the energy (10) to get the separation equation (11), $E=g(\cos\theta)\dot\theta^2+V(\cos\theta,D,\lambda)$, where $g(\cos\theta)=\frac{1}{2}I_3\big(\sigma((\alpha-\cos\theta)^2+1-\cos^2\theta)+\gamma\big)$ and $V(\cos\theta,D,\lambda)$ is the effective potential. The separable first-order differential equation (11) for $\theta$ determines the motion of the rolling TT. It is the Chaplygin separation equation for an axisymmetric sphere [2]. We shall show that for a certain choice of parameters the effective potential $V(z,D,\lambda)$ is convex in $z\in[-1,1]$, so, since $V(z,D,\lambda)\to\infty$ as $z\to\pm 1$, it has one minimum in the interval $[-1,1]$. This means that for fixed $E$ the solutions $\theta(t)$ describe nutational motion of the rolling TT between two bounding angles $\theta_1$, $\theta_2$ determined by the equation $E=V(\cos\theta,D,\lambda)$. The rolling and gliding TT has only $\lambda$ as an integral of motion. It is useful, however, to consider $D(\theta(t),\omega_3(t))=I_3\omega_3(t)\sqrt{d(\cos\theta(t))}$, which is now a time-dependent function.
We calculate its derivative using the equations of motion (3)-(7) for the rolling and gliding TT. Starting from the total energy of the TT, we define the function $\tilde E(\theta,\dot\theta,\dot\varphi,\omega_3)$, which we will call the modified energy function; its time derivative is computed from the equations of motion in the same way. With the use of the functions $D(\theta,\omega_3)$, $\tilde E(\theta,\dot\theta,\dot\varphi,\omega_3)$ we can write the TT equations of motion (3)-(7) in an equivalent integrated form [12,15]. These equations are as difficult as equations (3)-(7). However, if we treat $D(\theta(t),\omega_3(t))=:D(t)$ and $\tilde E(\theta(t),\dot\theta(t),\dot\varphi(t),\omega_3(t))=:\tilde E(t)$ as given known functions, then from $D(t)=I_3\omega_3\sqrt{d(\cos\theta)}$ and $\lambda=RI_1\dot\varphi\sin^2\theta-RI_3\omega_3(\alpha-\cos\theta)$ we can calculate $\dot\varphi$, $\omega_3$ and substitute them into expression (12) for the modified energy to obtain the METT [12,15], which involves only the function $\theta(t)$:
$$\tilde E(t)=g(\cos\theta)\dot\theta^2+V(\cos\theta,D(t),\lambda).$$
This equation has the same form as equation (11), but now it depends explicitly on time through the functions $D(t)$ and $\tilde E(t)$. Solving this equation by quadrature is therefore no longer possible. It is a first-order, time-dependent ODE which we can study provided that we have some quantitative information about the functions $D(t)$ and $\tilde E(t)$. The functions $D(t)$, $\tilde E(t)$ are usually unknown, but for inverting solutions we have qualitative information about their behaviour due to the conservation of the Jellett function $\lambda$. Thus we consider the motion of the TT as being determined by the three functions $(\lambda,D(t),\tilde E(t))$ and governed by the METT. Of particular interest regarding the inversion movement are the initial and final positions of the TT. The TT goes (asymptotically) from an initial angle close to $\theta=0$ to a final angle close to $\theta=\pi$, which means, since $\lambda=-\boldsymbol L\cdot\boldsymbol a$ is constant, that $\lambda=L_0R(1-\alpha)=L_1R(1+\alpha)$ (where $L_0$ and $L_1$ are the values of $|\boldsymbol L|$ at $\theta=0$ and $\theta=\pi$, respectively). This implies a fixed relation between the boundary values. The values $(D_0,\tilde E_0)$ and $(D_1,\tilde E_1)$ can be interpreted as the boundary values for the unknown functions $(D(t),\tilde E(t))$, so we assume that for inverting solutions $(D(t),\tilde E(t))$ moves from a neighbourhood of $(D_0,\tilde E_0)$ to a neighbourhood of $(D_1,\tilde E_1)$. The aim of the following sections is to analyse dynamical properties of the inverting solution as the symmetry axis of the TT moves from a neighborhood of $\theta=0$ to a neighborhood of $\theta=\pi$. In order to simplify the technical side of the analysis we choose special values of the parameters in the METT so that the effective potential $V(\cos\theta,D,\lambda)$ becomes rational, but we expect that the whole line of reasoning can be repeated in the general case when the potential depends algebraically on $z$ through $d(z)$. We show that $V(z,D,\lambda)$ is strictly convex and therefore has one minimum $z_{\min}$. We show also that, for inverting solutions, when $D(t)$ moves from $D_0$ to $D_1$, the potential deforms so that $z_{\min}=z_{\min}(D,\lambda)$ moves from a neighborhood of $z=1$ to a neighborhood of $z=-1$. On the unit sphere $S^2$ the angle $\theta(t)$ performs nutational motion within the nutational band $[\theta_-(t),\theta_+(t)]$ that moves from the neighborhood of the north pole to the neighborhood of the south pole. We shall give an estimate for the relation between the inversion time $T_{\text{inv}}$ and the maximal period of nutation $T_V(\tilde E(t),D(t))$, so that if $T_{\text{inv}}$ is an order of magnitude larger than $T_V$, say $T_{\text{inv}}>10T_V$, the angle $\theta(t)$ performs oscillatory motion within the moving nutational band.

The rational form of the METT
The effective potential in the separation equation (11) is an algebraic function in $z$, which complicates the analysis. We can however make a restriction on the parameters so that the second-degree polynomial $d(z)$ can be written as a perfect square. This makes the term $\sqrt{d(z)}$ a linear function of $z$ and the potential becomes a rational function [2,16].
We see that if $1-\alpha^2<\gamma<1$, and if we let the parameter $\sigma=\frac{1-\gamma}{\gamma+\alpha^2-1}$, then $d(z)$ is a perfect square, so for $\gamma$ in this range we can find physical values for $\sigma$ such that $\sqrt{d(z)}$ is a real polynomial in $z$. The interval $(1-\alpha^2,1)$ is a subinterval of $(1-\alpha,1+\alpha)$, the parameter range for $\gamma$ where complete inversion of the TT is possible. When $\sigma=\frac{1-\gamma}{\gamma+\alpha^2-1}$, we can rewrite the functions in the separation equation $E=g(\cos\theta)\dot\theta^2+V(\cos\theta,D,\lambda)$. This rational form of the effective potential is simpler to work with. We should note that the restriction on the parameter $\sigma$ implies that the moments of inertia $I_1$ and $I_3$ are no longer independent: since $\sigma=\frac{mR^2}{I_3}$, the condition fixes $\gamma=\frac{I_1}{I_3}$ in terms of $m$, $R$, $I_3$ and $\alpha$. The range of parameters making the effective potential a rational function provides the simplest non-trivial situation in which we can study properties of the potential in greater detail. We consider the potential function $\tilde V(z,D,\lambda)$, which differs from $V(z,D,\lambda)$ by a constant, since the constant affects neither the shape of $V(z,D,\lambda)$ nor the position of the minimum $z_{\min}$. The parameters in the function $f(z)=-\beta z+\frac{(az+b)^2}{1-z^2}$ (the expression inside the parentheses on the r.h.s. of (14)) are therefore defined in terms of $D$, $\lambda$ and the physical constants. Remember that $1-\alpha^2<\gamma<1$ and that the range of the parameters $a$, $b$ is determined by the range of $D$. We observe that the parameters $a$ and $b$ satisfy the relation $b+\alpha a=\lambda\gamma\alpha$. This is illustrated in Fig. 2, where the lines $b=a$ and $b=-a$ correspond to $D=D_1$ and $D=D_0$, respectively.

Proposition 1. The effective potential $V(z,D,\lambda)$ of (13) is convex for $z\in(-1,1)$ and for all real values of $D$ and $\lambda$.

Proof. We must show that $\frac{d^2}{dz^2}V(z,D,\lambda)\ge 0$ for $z\in(-1,1)$. Due to the form of $V$, it is enough to show that the rational function $\frac{(az+b)^2}{1-z^2}$ is convex for all $a$, $b$. Suppose first $a\neq\pm b$ and $ab\neq 0$. We look at the second derivative of this function and have to show that the third-degree polynomial $q(z)=2abz^3+3(a^2+b^2)z^2+6abz+a^2+b^2$ has no roots in the interval $[-1,1]$. To do this we apply the Sturm theorem [17]. We generate a sequence of polynomials $(q_0(z),q_1(z),q_2(z),\dots,q_m(z))$ recursively by starting from a square-free polynomial $q(z)$: $q_0(z)=q(z)$, $q_1(z)=q'(z)$ and $q_i=-\operatorname{rem}(q_{i-2},q_{i-1})$ for $i\ge 2$. Here $\operatorname{rem}(q_{i-2},q_{i-1})$ denotes the remainder after polynomial division of $q_{i-2}$ by $q_{i-1}$. By Euclid's algorithm, this terminates with the constant polynomial $q_m$. Let $S(\xi)$ be the number of sign changes in the sequence $(q_0(\xi),q_1(\xi),q_2(\xi),\dots,q_m(\xi))$ at the point $\xi$. The Sturm theorem states that for real numbers $c<d$ the number of distinct roots in $(c,d]$ is $S(c)-S(d)$. For our polynomial $q(z)$ the algorithm described above yields four polynomials $(q_0,q_1,q_2,q_3)$. When we look at this sequence of polynomials at the points $z=-1$ and $z=1$, we see that the number of sign changes at both points is the same, either 1 or 2, depending on whether $ab$ is positive or negative. Thus, according to Sturm's theorem, $q(z)$ has no roots in $(-1,1]$. Since $q(0)=a^2+b^2>0$ (and $q(-1)>0$), $q(z)$ is positive in $[-1,1]$ and we can conclude that $\frac{(az+b)^2}{1-z^2}$ is convex when $a\neq\pm b$ and $ab\neq 0$. Routine checking confirms that the function is also convex if $a=\pm b$ as well as if $ab=0$. Thus the function is convex for all values of the parameters $a,b\in\mathbb R$. If $\epsilon$ is reduced to 0.01, the bound for $\delta$ is tightened by one order of magnitude as well. Both the convexity of the potential and the motion of its minimum can be checked numerically, as in the sketch below.
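The sketch below first uses SymPy's Sturm-based root counting to confirm that $q(z)$ has no roots in $[-1,1]$ for a few sample pairs $(a,b)$, and then tracks the minimiser of $f(z)=-\beta z+\frac{(az+b)^2}{1-z^2}$ as $(a,b)$ slides along the line $b+\alpha a=\lambda\gamma\alpha$ from $b=-a$ ($D=D_0$) to $b=a$ ($D=D_1$). The numerical values of $\alpha$, $\gamma$, $\lambda$ and $\beta$ are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sympy import Poly, symbols

zs = symbols('z')

def roots_in_unit_interval(a, b):
    """Count real roots of q(z) = 2ab z^3 + 3(a^2+b^2) z^2 + 6ab z + a^2+b^2 in [-1, 1].
    SymPy's count_roots performs Sturm-sequence counting, as in Proposition 1."""
    q = Poly(2*a*b*zs**3 + 3*(a**2 + b**2)*zs**2 + 6*a*b*zs + a**2 + b**2, zs)
    return q.count_roots(-1, 1)

for a, b in [(1, 2), (3, -1), (-2, 1)]:
    assert roots_in_unit_interval(a, b) == 0   # no roots -> (az+b)^2/(1-z^2) is convex

# Drift of the minimum of f(z) along the line b + alpha*a = lam*gamma*alpha.
# alpha, gamma, lam, beta are illustrative values, not taken from the paper.
alpha, gamma, lam, beta = 0.3, 0.95, 5.0, 0.5

def f(z, a, b):
    return -beta*z + (a*z + b)**2 / (1.0 - z**2)

for w in np.linspace(-0.95, 0.95, 9):          # w = a/b: w = -1 is D = D0, w = +1 is D = D1
    b = lam*gamma*alpha / (1.0 + alpha*w)      # solve b + alpha*a = lam*gamma*alpha with a = w*b
    a = w*b
    zmin = minimize_scalar(f, args=(a, b), bounds=(-0.999, 0.999), method='bounded').x
    print(f"w = {w:+.2f}  ->  z_min = {zmin:+.3f}")
```

Running the loop shows $z_{\min}$ migrating from the neighbourhood of $z=1$ toward $z=-1$ as $w$ increases, which is the drifting-minimum behaviour used below.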
5 Oscillation of θ(t) within the deforming rational potential V(cos θ, D(t), λ)
As a toy TT inverts, we can see that the symmetry axis performs oscillations; equivalently, we say that it nutates. This is also apparent in simulations of the equations of motion [3,11,18], where graphs of the evolution of the inclination angle $\theta(t)$ show that it rises in an oscillating manner from an angle close to $\theta=0$ to an angle close to $\theta=\pi$. Here we say that a solution $\theta(t)$ is oscillatory on a time interval $[0,T]$ when $\dot\theta(t)$ changes sign a number of times in this interval. We consider solutions of the METT with $D(t)$, $\tilde E(t)$ describing inverting solutions of the TT equations, under the assumption that $D(t)$, $\tilde E(t)$ are slowly varying regular functions moving from a small neighborhood of $(D_0,\tilde E_0)$ to a small neighborhood of $(D_1,\tilde E_1)$. They are regular since $D(t)=D(\theta(t),\omega_3(t))$ and $\tilde E(t)=\tilde E(\theta(t),\dot\theta(t),\dot\varphi(t),\omega_3(t))$. We have further assumed that $(D(t),\tilde E(t))$ moves from $(D_0,\tilde E_0)$ to $(D_1,\tilde E_1)$. In the limiting case of constant $D$ and $\tilde E$, when the METT describes a purely rolling TT, the oscillating behaviour of $\theta(t)$ follows from the dynamical system representation of the second-order equation for $\theta(t)$. For the rolling TT the energy $\tilde E$ is an integral of motion for the $\theta$-equation obtained by differentiating (22). This means that $\tilde E=g(\cos\theta)y^2+V(\cos\theta,D,\lambda)$ is an integral of motion for the dynamical system
$$\dot\theta=y,\qquad \dot y=\frac{\sin\theta}{2g(\cos\theta)}\big(g_z(\cos\theta)y^2+V_z(\cos\theta,D,\lambda)\big).\qquad(23)$$
Trajectories of system (23) are lines of constant value of the energy $\tilde E$ and they are closed curves. The closed trajectories describe periodic solutions [2] with period $T$ defined by the integral $T=2\int_{\theta_1}^{\theta_2}\sqrt{\frac{g(\cos\theta)}{\tilde E-V(\cos\theta,D,\lambda)}}\,d\theta$, where the turning latitudes $\theta_1<\theta_2$ are defined by $\tilde E=V(\cos\theta_{1,2},D,\lambda)$. The modified energy $\tilde E(t)$ is bounded. That entails boundedness of $\boldsymbol\omega(t)=-\dot\varphi\sin\theta\,\hat 1+\dot\theta\,\hat 2+(\dot\psi+\dot\varphi\cos\theta)\,\hat 3$ and thus $|\dot\theta(t)|<B$ for some positive $B$. Since the potential $V(\cos\theta,D(t),\lambda)\to\infty$ as $\theta\to 0,\pi$, the curve $(\theta(t),\dot\theta(t))$ of each inverting solution is confined to the open rectangle $(\theta,\dot\theta)\in(0,\pi)\times(-B,B)$. The picture that emerges is that, for slowly varying $\tilde E(t)$, $D(t)$, the inverting trajectories of the METT stay (locally) close to the trajectories of system (23) and are traversed with almost the same velocity. The time of passing $T=t_2-t_1$ between two turning angles given by $V(\cos\theta_1,D(t_1),\lambda)=\tilde E(t_1)$ and $V(\cos\theta_2,D(t_2),\lambda)=\tilde E(t_2)$ is close to the half-period of the non-deforming potential. Thus initially the trajectory moves around $(\theta_{\min},\dot\theta=0)$, with $\theta_{\min}$ close to 0. As $D(t)\to D_1$ the minimum $\theta_{\min}$ moves toward $\theta=\pi$ (see Proposition 2), and for sufficiently slowly varying $D(t)$ the trajectory goes several times around $(\theta_{\min},\dot\theta=0)$ and drifts toward the point $(\theta=\pi,\dot\theta=0)$.

Figure 3. Plots of $(t,\theta(t))$ (left) and $(\theta(t),\dot\theta(t))$ (right) obtained by integrating equations (3)-(7) and (8) for two sets of parameters and initial values. Plots a and b correspond to parameter values for the rational potential in Example 1 and $\mu=0.3$. Plots c and d correspond to parameter values provided in [3], corresponding to an algebraic potential. The equations are integrated using the Python 2.7 open source library SciPy [8]. Plot a shows oscillations of $\theta(t)$ as it rises to $\pi$. The time of inversion, measured from the moment when $\theta(t)$ starts to rise at approximately 3 seconds, is about 4-5 seconds. Larger values of $\mu$ give a shorter time of inversion. Plot b shows the trajectory in the $(\theta,\dot\theta)$-plane.

In the next section we shall estimate the maximum value of the period of oscillations for all values of the modified energy $\tilde E$ and $D$. This will allow us to formulate a condition on the time of inversion needed for oscillatory behaviour of $\theta(t)$: basically, $T_{\text{inv}}$ has to be an order of magnitude larger than $T_{\text{upp}}$, the maximal period of oscillations within the potential $V(\cos\theta,D(t),\lambda)$. A toy integration of system (23) for frozen $D$ and $\tilde E$ is sketched below.
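The nutation described by system (23) is easy to reproduce numerically. The sketch below integrates (23) with SciPy for frozen $D$ and $\tilde E$, under two flagged simplifications: $g(\cos\theta)$ is replaced by a constant $g_0$ (so the $g_z$ term drops out), and $\beta$, $a$, $b$ are illustrative numbers rather than physical TT parameters. The qualitative picture, $\theta(t)$ oscillating between two fixed turning angles, is what matters here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed): rational potential f(z) = -beta*z + (a*z+b)^2/(1-z^2)
beta, a, b = 0.5, 0.2, 1.0
g0 = 1.0   # g(cos(theta)) frozen to a constant, a simplification for illustration only

def V_z(z):
    """Derivative of the rational effective potential with respect to z = cos(theta)."""
    u = a*z + b
    return -beta + (2*a*u*(1 - z**2) + 2*z*u**2) / (1 - z**2)**2

def rhs(t, s):
    # system (23) with g(cos(theta)) = g0 and g_z = 0
    theta, y = s
    return [y, np.sin(theta) * V_z(np.cos(theta)) / (2*g0)]

# Start at rest away from the potential minimum -> closed nutational orbit
sol = solve_ivp(rhs, (0.0, 40.0), [1.2, 0.0], max_step=0.01, rtol=1e-9)
theta = sol.y[0]
print("theta oscillates between %.3f and %.3f" % (theta.min(), theta.max()))
```

Because the frozen system conserves $\tilde E=g_0 y^2+V(\cos\theta)$, the printed turning angles stay fixed; letting $D(t)$ vary slowly would make this band drift, as described above.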
6 Estimates for the period of oscillation
A direct way of estimating the period of nutation is to study the explicit integral defining the period and to expand it w.r.t. a small parameter $\epsilon=\frac{2\beta}{b^2}$, with $b$ and $\beta$ given by (16) and (17), respectively. This is the technique used in [16], but in that paper the analysis is based on the assumption that $\omega_3$ is large. Here we use the more general assumption that $\lambda$ is only above the threshold value, so that $\lambda=C\lambda_{\text{thres}}$ with $C>1$. To simplify estimates we also assume that we consider curves $(D(t),\tilde E(t))$ such that $D_1<D<D_0$. Then, by using that $b=\frac{\alpha(\lambda+RD)}{\gamma+\alpha^2-1}$ satisfies $\frac{\alpha\gamma\lambda}{1+\alpha}<b<\frac{\alpha\gamma\lambda}{1-\alpha}$ for $D_1<D<D_0$, we obtain an estimate for $\epsilon$, valid for $1-\alpha^2<\gamma<1$, $0<\alpha<1$, similar to the estimate for equation (19). In the following we shall determine the dependence of the period $T(\epsilon)$ on the small parameter $\epsilon=\frac{2\beta}{b^2}$ and we shall find an estimate for the maximal value of $T(\epsilon)$ that is valid for all $D\in(D_1,D_0)$ and all values $\epsilon<1$, which means all values of the Jellett integral that are above the threshold value. As is known from the asymptotic analysis, these initial values of $\lambda$ lead to inversion of the TT. The period of oscillations for the potential $V(z,D,\lambda)$ is given by the integral
$$T=2\int_{z_1}^{z_2}\frac{\sqrt{g(z)}\,dz}{\sqrt{(1-z^2)\big(\tilde E-V(z,D,\lambda)\big)}},\qquad(25)$$
where $z_1<z_2\in(-1,1)$ are the two turning points defined by $\tilde E=V(z_{1,2},D,\lambda)$. This equation always has two solutions for $\tilde E>V(z_{\min},D,\lambda)$, since the potential is convex and tends to infinity as $z\to\pm 1$. In terms of the parameters $a$ and $b$, the potential (13) takes the rational form given above, and (by definition) the left turning point $z_1$ is given implicitly by the equation $\tilde E=V(z_1,D,\lambda)$. In the following we shall parametrise (similarly as in [16]) the remaining roots of $(1-z^2)(\tilde E-V(z,D,\lambda))$ by $z_1$; due to this they become solutions of a quadratic equation. We write the function in the denominator of (25) in the factorized form (26). Notice that $z=\pm 1$ are not roots, because $V(z,D,\lambda)$ has singularities at the points $z=\pm 1$. The quadratic polynomial in the parentheses of (26) determines the roots $z_2$, $z_3$ for any given turning point $z_1$; they are thus solutions of the quadratic equation (27). By $z_3$ we denote the root satisfying $z_3<-1$ and by $z_2$ ($>z_1$) the right turning point. The polynomial $(1-z^2)(\tilde E-V(z,D,\lambda))$ is then factorized in terms of the roots $z_1$, $z_2$, $z_3$, and we substitute this into the integral (25) for the period $T$. By the mean value theorem for integrals there exists a $z^*\in[z_1,z_2]$ such that the factor $\sqrt{g(z^*)}$ can be taken outside the integral. The remaining integral can be transformed to a standard complete elliptic integral of the first kind through the change of variables $z=z_2+(z_1-z_2)s^2$ [6]:
$$\int_{z_1}^{z_2}\frac{dz}{\sqrt{(z-z_1)(z_2-z)(z-z_3)}}=\frac{2K(k)}{\sqrt{z_2-z_3}},$$
where $k^2=\frac{z_2-z_1}{z_2-z_3}<1$ is a positive parameter. This integral has the standard expansion $K(k)=\frac{\pi}{2}\big(1+\frac{k^2}{4}+\frac{9k^4}{64}+\cdots\big)$. By using the roots of equation (27) and by expanding $k^2=\frac{z_2-z_1}{z_2-z_3}$ w.r.t. $\epsilon=\frac{2\beta}{b^2}$, we can show that $k^2=O(\epsilon)$ if $\epsilon$ is small. Indeed, after solving equation (27), the quantity $k^2$ can be written in terms of $z_1$ and $w=\frac{a}{b}$, and its expansion (31) in $\epsilon$ starts at first order. It should be noted that the minus sign here is misleading: when (31) is expressed in the parameters $D$, $\lambda$, one sees that the factor at $\epsilon$ is positive. We consider the parameter $k^2$ for $(z_1,w)\in[-1,1]\times[-1+\delta,1-\delta]$ with a certain small $\delta$. The values $w=\frac{a}{b}=\mp 1$ correspond (as Fig. 2 shows) to the upright and inverted spinning solutions, which are asymptotic solutions of the TT equations and are never attained during the inversion. The elliptic reduction above is verified numerically in the sketch below.
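The reduction to the complete elliptic integral is easy to sanity-check. The sketch below picks arbitrary roots $z_3<z_1<z_2$ (illustrative values, unrelated to the TT parameters), compares direct quadrature of the period-type integral with $2K(k)/\sqrt{z_2-z_3}$, and evaluates the leading terms of the series expansion of $K(k)$; note that SciPy's ellipk takes the parameter $m=k^2$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

# Illustrative roots z3 < z1 < z2 (assumed values, not TT-specific)
z3, z1, z2 = -1.8, -0.2, 0.7

integrand = lambda z: 1.0 / np.sqrt((z - z1) * (z2 - z) * (z - z3))

# Direct numerical quadrature (the endpoint singularities are integrable)
direct, _ = quad(integrand, z1, z2, limit=200)

# Closed form via the substitution z = z2 + (z1 - z2)*s^2:
#   integral = 2*K(k) / sqrt(z2 - z3),  with k^2 = (z2 - z1)/(z2 - z3)
k2 = (z2 - z1) / (z2 - z3)
closed = 2.0 * ellipk(k2) / np.sqrt(z2 - z3)   # ellipk expects m = k^2

print(f"quadrature: {direct:.10f}")
print(f"elliptic  : {closed:.10f}")

# Leading terms of the expansion K(k) ~ (pi/2)(1 + k^2/4 + 9 k^4/64)
approx = np.pi * (1 + k2/4 + 9*k2**2/64) / np.sqrt(z2 - z3)
print(f"series    : {approx:.10f}")
```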
Thus the nutational period behaves as in (32), where $z^*$ is some value between $z_1$ and $z_2$. In the leading factor at $\sqrt\epsilon$ we have $\frac{1-z_1^2}{1+w^2+2wz_1}\le 1$ on any rectangle $R_\delta$, and the function $g(z^*)$ is decreasing for $z^*\in(-1,1)$ with supremum $g(-1)=\frac{I_3(\alpha+1-\gamma)^2}{2(\gamma+\alpha^2-1)}$, so that $2\pi\sqrt{\frac{g(z^*)}{2mgR\alpha}}$ is bounded by its value at $z^*=-1$. We summarize these results in a proposition, part (ii) of which states that the leading factor is bounded. A better estimate could be difficult to find due to the complexity of the expression for $V(z,D,\lambda)$. It is actually not needed when we do a qualitative analysis of oscillations within a deforming potential. Here we wanted to see how the period of oscillations within the potential $V(z,D,\lambda)$ depends on the value of Jellett's integral $\lambda$, as stated in Proposition 3, in order to relate the time of inversion $T_{\text{inv}}$ to this period. The dependence $T_{\max}\sim\frac{1}{b}\sim\frac{1}{\lambda}$ implies that the frequency of oscillations within the potential behaves as $\frac{2\pi}{T_{\max}}\sim\lambda$. For formulating a sufficient condition for oscillating behaviour of $\theta(t)$ we need to know that there is an upper bound for the period of oscillations within $V(z,D,\lambda)$. To find a universal bound independent of the choice of $\lambda>\lambda_{\text{thres}}$ could be difficult, because the functions $h_1$ and $h_2$ have singularities at the boundary of the rectangle $(z_1,w)\in[-1,1]\times[-1,1]$. Finding a universal bound would require a detailed analysis of the interdependence between $z_1$ and $w$ during inversion. Therefore we restrict our estimate to the region $w\in[-1+\delta,1-\delta]$ with a certain suitable $\delta$ and $\epsilon<\frac{1}{C^2}$, meaning $\lambda>C\lambda_{\text{thres}}$ (see (24)). We consider the period $T$ given by (28) as a function of $\epsilon$, $w=\frac{a}{b}$ and $z_1\in[-1,1]$, but we drop here the assumption $\epsilon\to 0$. Let us take $\epsilon<0.9$, $w\in[-1+\delta,1-\delta]$ with $\delta=0.0001$. These are physically well-justified values, since $\epsilon<0.9$ means $\lambda>1.054\lambda_{\text{thres}}$, and $w=\pm 0.9999$ corresponds to an extremely vertical initial angular momentum $\boldsymbol L$ that is practically never taken by a toy TT.
7,326.2
2013-06-11T00:00:00.000
[ "Physics" ]
A chance-constraint approach for optimizing Social Engagement-based services
Social Engagement is a novel business model transforming final users of a service from passive into active components. In this framework, people are contacted by a company and asked to perform tasks in exchange for a reward. This gives rise to the complicated optimization problem of allocating the different types of workforce so as to minimize costs. We address this problem by explicitly modeling the behavior of contacted candidates through consolidated concepts from utility theory and by proposing a chance-constrained optimization model that optimally decides which users to contact, the amount of the reward proposed, and how many employees to use, in order to minimize the total expected cost of the operations. A solution approach is proposed and its computational efficiency is investigated through experiments.

Social Engagement (SE) is a new business paradigm involving the customers of a company in its operations. More precisely, people agree to perform specific services in exchange for a reward. This model has been enabled by the increase in the number of users connected on the web and by technologies able to gather people's information [1]. This gives companies the possibility to easily communicate with candidates and then to propose tasks in exchange for a reward. A concrete application of the SE paradigm is so-called crowd-shipping logistics, in which companies ask people to collect packages at a certain location and deliver them to the final user [2] [3]. By doing this, companies decrease not only their costs but also their environmental impact, since people accepting a delivery would usually take advantage of trips that they have to make anyhow for other activities. Another interesting application of SE occurs in an evolution of the Internet of Things (IoT) concept called opportunistic IoT (oIoT) [4]. Since IoT development is considerably slowed down by the difficulty and costs involved in building telecommunication networks capable of continuously transmitting the large amounts of data collected by sensors, through oIoT citizens share (in exchange for a reward) the internet connection of their devices (mobile phones, modems) so that nearby sensors can exploit it to communicate the gathered data. In this work, we do not concentrate on a specific application but rather on a very general SE-based setting, in order to embrace all the basic characteristics of such a business model. Effective planning of operations under the SE paradigm yields an interesting optimization problem. The decision-maker must decide how much he is willing to pay a candidate for each task, when and where to rely on employees and on candidates, which tasks to assign to the employees and for which tasks the candidates must be contacted, in order to minimize the total operational costs. It is important to note that the reward paid to a candidate is generally lower on average than the cost that the company bears for an employee. However, while an employee is obliged to accept and carry out the tasks assigned to him, there is no certainty that a candidate will accept a proposed task. Little attention has been devoted to the development of optimization models aimed at effectively scheduling companies' operations that exploit SE.
Only a few works [5] [6] [7] have tried to tackle the problem and, therefore, there is large room for improvement of existing approaches as well as for the design of more innovative and complete ones (as claimed regarding crowd-shipping in [3]). In particular, to the best of our knowledge, there is no published optimization model that explicitly accounts for individual candidate behaviour when planning SE-based operations. As already mentioned, one characteristic that makes the optimization problems deriving from the implementation of the SE paradigm challenging is the fact that candidates are not constrained a priori to respect a contract. This means that, once contacted, a candidate may not accept the task and, if we assume a purely rational, profit-maximizing behavior of the candidate, the rejection can happen because the proposed reward is lower than the candidate's expectation. It is therefore important to integrate into the decision-making process tools that allow monitoring the individual behavior of potential candidates. In this work, to account for individual behaviour, we rely on the candidate's willingness to accept (wta) a task, i.e., the minimum reward expected by a candidate to accept a task. The wta is a well-consolidated concept in utility theory and has long been used to explain human subject preferences in economics [8]. From the decision-maker's point of view, the candidate's wta is not deterministically known, since it depends on factors that are intrinsic to the candidates. Therefore, we consider the candidate's wta as a random variable. Thus, the probability of acceptance for a candidate will be equal to the probability that the offered reward is greater than or equal to the wta of the candidate. The adopted perspective is similar to [6]. However, instead of relying on a single random variable describing the number of candidates, we model each single candidate's behavior through a Bernoulli random variable. The parameter of such a Bernoulli random variable, i.e., the probability that the candidate accepts the task, is not fixed but depends on the proposed reward. This paper's contribution is twofold. First, we propose a novel mathematical model for SE-based services optimization. The formulation, which includes chance constraints [9], is the first one that explicitly accounts for each individual candidate's behaviour. Second, since the complexity of the proposed model and the explicit consideration of stochastic parameters do not allow us to obtain a simple solution, we derive a mixed-integer quadratic programming model that approximates the original model. This is done by making some reasonable hypotheses on the probability distribution of the wta of each candidate, and by exploiting the Markov inequality. Several computational experiments validate the suitability of our proposed model and solution approach. The rest of the paper is organized as follows. The optimization problem is defined and modeled in Section II. Our solution approach is described in Section III. Section IV presents the experimental results, while Section V concludes the paper.

II. THE SOCIAL ENGAGEMENT OPTIMIZATION PROBLEM
The social engagement optimization problem that we want to study considers a decision-maker (in general, a company) whose goal is to use people, in the following called candidates, in addition to employees, in order to perform a set of tasks.
In particular, we consider an urban environment divided into several geographical areas, such as mobile phone cells, neighborhoods of different markets, or simply geographical areas. Each of these areas is characterized by a number of tasks to perform, and each task is characterized by a different workload; thus a single task may require more than one candidate to be done. For example, in the crowd-shipping setting these tasks are the deliveries required by customers outside the store, while in the oIoT application these tasks consist in sharing an internet connection with smart sensors in the city. Each task can be performed either by using employees or candidates. Employees are more expensive and are available in small numbers, but they execute the tasks assigned to them. Candidates, instead, are less expensive and their quantity is virtually unlimited (since the number of people considered for SE is far greater than the number of tasks), but they can refuse to perform a task with a given probability. We assume that the acceptance probability increases as the offered reward increases. Please note that, in practice, an employee has greater productivity than a candidate. The goal of the decision-maker is to minimize the total operating costs while enforcing that, with high probability, all the tasks are performed. Let us consider a set I of tasks and a set M of candidates. For each task i, let W_i be the workload required, α_i be the required probability for its accomplishment, Δ_i^m be a random variable representing the wta of candidate m, and c_i be the cost of using an employee. Moreover, let B be the number of available employees and r > 1 be the ratio between the productivity of an employee and that of a candidate, i.e., the workload that a single employee can afford as compared to a candidate in the same time frame. We define the decision variables Q_i^m ∈ R_+ as the reward offered to candidate m to accept task i, and z_i ∈ N as the number of employees assigned to task i. Moreover, we consider the probability for candidate m to accept task i, called x_i^m ∈ [0, 1], and the random variables Y_i^m, distributed according to a Bernoulli distribution of probability x_i^m, which assume value 1 if candidate m accepts to perform task i. Then, the Social Engagement Optimization Problem (SEOP) can be formulated as follows. The total cost in (1) is expressed as the summation of the total expected cost offered as rewards (the reward Q_i^m is paid with probability x_i^m) and the sum of the costs paid for employees. Constraints (2) define the variables x_i^m as the acceptance probabilities, while constraints (3) and (4) ensure that Y_i^m follows a Bernoulli distribution. Constraints (5) are chance constraints enforcing a minimum probability of doing a given task, either by using employees or candidates. It is worth noting that ensuring that each task is performed with a given probability is less strict than requiring that all the tasks be performed with a given probability; nevertheless, enforcing this second condition would lead to overly conservative solutions. Finally, constraint (6) accounts for the limited number of employees. A small simulation check of the chance constraints (5) is sketched below.
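Before approximating the model, it is useful to be able to evaluate the chance constraints (5) for a given solution. The sketch below does this for a single task by simple Monte Carlo over the Bernoulli variables Y_i^m and, anticipating the check performed later in Section IV-B, compares the estimate with the Lyapunov-CLT normal approximation; all numeric values (W, r, z, α and the x_i^m) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative instance (assumed values): one task with workload W, z employees of productivity r
W, r, z, alpha = 12.0, 2.0, 3, 0.9
x = np.full(40, 0.25)            # acceptance probabilities x_i^m for |M| = 40 candidates

# Monte Carlo estimate of P(r*z + sum_m Y_i^m >= W), with Y_i^m ~ Bernoulli(x_i^m)
draws = rng.random((200_000, x.size)) < x
p_mc = np.mean(r * z + draws.sum(axis=1) >= W)

# Lyapunov-CLT approximation: sum_m Y_i^m ~ Normal(mu, sigma^2)
mu, sigma = x.sum(), np.sqrt((x * (1 - x)).sum())
p_clt = 1.0 - norm.cdf((W - r * z - mu) / sigma)

print(f"Monte Carlo : {p_mc:.4f}")
print(f"CLT approx  : {p_clt:.4f}")
print("chance constraint satisfied:", p_mc >= alpha)
```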
III. SOLUTION APPROACH
The optimization problem in (1)-(6) is difficult to solve due to the definition of x_i^m in constraints (2), of Y_i^m in constraints (3) and (4), and the chance constraints in (5). Hence, we approximate these constraints in order to get a model which can readily be solved with off-the-shelf solvers. Constraints (2) involve the cdf of the random variable Δ_i^m. We approximate it by means of a piece-wise linear function with J breakpoints. In particular, instead of constraints (2) we add a set of constraints of the form x_i^m ≤ k_j Q_i^m + q_j, where the k_j and q_j are obtained by imposing proper conditions (e.g., the passage through J points of the cdf). This choice is equivalent to enforcing x_i^m ≤ min[1, k_1 Q_i^m + q_1, ..., k_J Q_i^m + q_J], where the first term of the minimum comes from the definition of x_i^m. Since the approximation proposed in (10) leads only to concave functions (being the pointwise minimum of affine functions), and since a general cdf may be convex in some portion of its domain, the proposed approximation is not guaranteed to converge to the cdf for all distributions. In the following, for the sake of simplicity, we consider just J = 1 and we impose the passage through the point (0, 0), meaning that with zero reward the probability that the candidate will perform the task is 0, and through the point (Q̄_i^m, 1), where Q̄_i^m is a reward for which candidate m is willing to perform task i with a probability that we may approximate to 1. By making this choice, the obtained final approximation of constraints (2) is x_i^m ≤ Q_i^m / Q̄_i^m. Now let us consider the constraints in (5); these constraints can be rewritten as in (11). By using the Markov inequality, for each i ∈ I, the bound (12) holds, and it leads to the following constraint:
Σ_{m ∈ M} x_i^m ≥ α_i (W_i − r z_i).   (13)
Eq. (13) enforces that the expected workload from the candidates must be greater than α_i percent of the workload still needed. Moreover, by considering the bound provided by Eq. (13), we are reducing the feasible set; thus the condition in (11) will be satisfied for greater values of α_i. Then, the resulting approximation of the SEOP (SEOP_ap) is a mixed-integer quadratic model; a solver sketch of one plausible reading of it is given below.
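The SEOP_ap can be prototyped in a few lines with Gurobi, the solver used in the experiments below. The sketch is one plausible reading of the model rather than the paper's exact formulation: it assumes the substitution Q_i^m = Q̄_i^m x_i^m (so the expected-reward term becomes the convex quadratic Σ Q̄_i^m (x_i^m)^2), keeps the Markov-derived constraint (13) and the employee budget (6), and runs on made-up data.

```python
import gurobipy as gp
from gurobipy import GRB

# Illustrative instance (assumed data, not from the paper)
I, M = 3, 10                       # number of tasks, number of candidates
W     = [6.0, 4.0, 8.0]            # workloads W_i
alpha = [0.9, 0.8, 0.9]            # required probabilities alpha_i
c     = [5.0, 5.0, 5.0]            # cost c_i of one employee on task i
Qbar  = [[2.0 + 0.1 * m for m in range(M)] for i in range(I)]  # rewards giving x ~ 1
B, r  = 4, 2.0                     # employee budget, productivity ratio

mdl = gp.Model("seop_ap")
x = mdl.addVars(I, M, lb=0.0, ub=1.0, name="x")          # acceptance probabilities x_i^m
z = mdl.addVars(I, vtype=GRB.INTEGER, lb=0, name="z")    # employees z_i per task

# Expected reward cost: sum Q*x with Q = Qbar*x  ->  convex quadratic objective
mdl.setObjective(
    gp.quicksum(Qbar[i][m] * x[i, m] * x[i, m] for i in range(I) for m in range(M))
    + gp.quicksum(c[i] * z[i] for i in range(I)),
    GRB.MINIMIZE)

# Markov-based workload constraints (13) and the employee budget (6)
for i in range(I):
    mdl.addConstr(gp.quicksum(x[i, m] for m in range(M)) >= alpha[i] * (W[i] - r * z[i]))
mdl.addConstr(z.sum() <= B)

mdl.optimize()
print("cost:", mdl.ObjVal, " employees per task:", [int(z[i].X) for i in range(I)])
```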
IV. EXPERIMENTAL RESULTS
A. CPU results
We first study the CPU solving time with respect to the dimension of the SEOP_ap. In particular, as |I| and |M| grow, we evaluate the CPU time (sec), the time-to-best (sec) (the number of seconds from the start of the execution of Gurobi to the time at which it finds the best solution of the run), and the MIP gap (%) (computed as the percentage difference between the lower and upper objective bounds; in particular, we consider the smallest gap value that Gurobi has to reach before stopping its execution). The averages and standard deviations over 10 instances are shown in Table I. In all runs we set the solver time limit to 1 hour. Instances with |I| = 5 and |M| = 20 are solved almost instantaneously with 0 gap; the time-to-best equals the CPU time, since the differences are below hundredths of a second. For instances with |I| = 10, |M| = 40, and |I| = 20, |M| = 80, the CPU time increases, but the solver is still able to find the optimal solution within the time limit. For the instances with |I| = 10, |M| = 40 the time-to-best is near one half of the total CPU time, but solutions with gap below 5% are found by the solver already in the first minutes of the run. Instead, for the instances with |I| = 20, |M| = 80, the time-to-best is close to the whole computation time and no solution with gap below 5% is found in the first minute of the run. For instances of greater dimensions, the solver is not able to find the optimal solution in the given time limit; for this reason the CPU time is equal to 3600 seconds with a standard deviation of 0. Nevertheless, for instances with |I| = 50 and |M| = 200, the final gaps are, in several runs, smaller than 10%, while for instances with |I| = 100 and |M| = 400, the solver is not able to find a good bound in the allocated computational time; hence, a 100% MIP gap with 0 standard deviation is reported.

B. Approximation analysis
We now analyze the goodness of the SEOP_ap approximation. Since Δ_i^m is distributed as a Gumbel distribution with concave cdf, the proposed approximation converges to the exact function, and several techniques for developing good piece-wise approximations are available [10]. Thus, we are interested in quantifying how conservative the Markov inequality is with respect to Eq. (5). Hence, we compute the optimal solution of the SEOP_ap and we use it to calculate α̂. This can be done easily by noting that the Y_i^m are independent with respect to the index m, since the knowledge that candidate m performs a task does not provide any information related to the execution of the same task by other candidates. Thus, Σ_m Y_i^m is a sum of independent random variables distributed according to Bernoulli distributions of parameters x_i^m. Central limit theorems for non-identically distributed random variables are available and, in particular, by applying the Lyapunov central limit theorem it is possible to prove [11] that for large values of |M| (in practice |M| ≥ 30), Σ_m Y_i^m is approximately normally distributed (20). By using (20), we can compute α̂ by solving (21), where Φ is the cdf of a standard normal distribution. We report the obtained values of α̂. As we expected, the curve is above the line α̂ = α, since by using the Markov inequality we are considering an upper bound on the probability. Nevertheless, the results are close to the exact value, being on average 10% higher than the α set in the model. Thus, in real applications, the decision-maker may lower the values of the α's by 10% and get a solution compliant with the desired probability of execution.

V. CONCLUSIONS AND FUTURE WORKS
We proposed a new probabilistic model for SE-based services optimization encompassing the wta of the candidates involved in the business model. We show, by means of CPU experiments, that despite the difficult formulation, the model can be approximated into a tractable form able to provide timely solutions for crowd-shipping applications. However, since SE is a very seminal topic within the optimization field, we believe that a full-fledged experimental design to explore all the solution characteristics is needed. Some questions to answer are related to the performance of the method when non-concave distributions for the wta are considered, and to how the solutions of the model are related to the number of breakpoints used by the piece-wise wta approximation.
3,925.6
2022-09-04T00:00:00.000
[ "Computer Science" ]
The Relationship between CO2 Emissions and Military Effort
The relationship between environment and growth is a conventional theme in economics (Stern (2004); Dinda (2004); Dasgupta et al. (2002)), and so is the relationship between military effort and growth (Dunne and Birdi (2001); Dimitraki and Menla (2015); Dunne (2010)). However, there is almost no study that addresses the possible interactions between military effort, environment and growth. Although the merits of the contributions proposed within these separate lines of study are remarkable, we argue that they do not get a grip on all the aspects of military effort. This lack of connection in research leaves many empty spaces between these different, yet closely interacting, aspects. This article intends to contribute to filling this gap by jointly considering pollution, military effort and growth. From this perspective, we argue that there are two mechanisms through which military effort, measured here through military expenditure, may impact pollution. The first is a direct mechanism, through which military expenditure directly impacts pollution. The second is an "indirect" mechanism, by which military expenditure affects income, which in turn impacts pollution. We assert that the total effect of military expenditure on pollution is the result of these two effects. As far as we know, prior contributions have neglected this indirect effect, which might significantly affect pollution. To empirically investigate these direct and indirect effects of military expenditure on pollution, we use a sample of 120 countries covering the period 1981 to 2015. The remainder of the paper is organized as follows: section 2 examines the previous literature, section 3 outlines the methodology used within this paper, section 4 provides the results, and section 5 concludes.

The Literature Review
While empirical studies on the relationship between environment and military effort are very limited, studies on the relationship between economic growth and military effort abound. Table 1 summarizes the main contributions related to the relationship between military effort and economic growth. Among the reported findings: the economic power represented by GDP drives any increase in military spending, and not vice versa; a possible explanation for this result is that the increase in military spending has been rapid primarily as a result of the country's economic development. Pieroni (2007) examines the relationship between military spending and economic growth: for the first group of countries (those with a high military expenditure level) the author finds a negative relationship between the share of military spending and economic growth, while countries with a lower military burden show an insignificant relationship between economic growth and military burden.

The Effects of Economic Growth on Environment
A large body of literature posits a link between pollution and economic growth. The seminal work of Grossman and Krueger (1991) detected the relationship known as the Environmental Kuznets Curve (EKC). Table 2 examines the relation between pollution and economic growth. For Japan and Oceania, causality from income to CO2 emissions is obtained; finally, for the country groups of Africa and Asia, the relationship of causality is bi-directional.

Methodology and data
Econometrically, the use of multivariate cointegration is much recommended, since it offers the opportunity to verify the existence of the relationship and the direction of causality among the variables.
It presents an extremely powerful empirical framework to deal with the issue raised in this paper. In this study we examine the relationship between military expenditure and CO2 emissions and specify the direct and indirect effects of military expenditure on CO2 emissions.

Data
The data sample includes 4200 observations describing 120 different countries covering 35 years (from 1981 to 2015). We use these countries because they have complete data for the military expenditure variable and CO2 emissions. The indicator of military effort used in this paper is military expenditure per capita (MILexp). Biswas and Ram (1986); Deger and Sen (1983); Faini, Annez and Taylor (1984) and Leontief and Duchin (1983) used the military expenditure variable to study the link between military effort and economic growth.

Presentation of the Model
To handle both the indirect and direct effects of military expenditure on pollution, we use the joint estimation of two equations:
ECO2_it = β1 GDP_it + β2 GDP²_it + β′ Z_it + γ_i + κ_t + ε_it   (1)
GDP_it = δ MILexp_it + θ′ X_it + λ_i + τ_t + μ_it   (2)
where the subscripts i and t denote country and year. Equation (1) expresses emissions of CO2 per capita (ECO2) as a function of per capita income (GDP) and quadratic income. Equation (1) also includes Z, a vector of additional explanatory variables; these include the share of exports in GDP and the share of industry in GDP. Finally, γ_i and κ_t represent country and year specific effects, and ε_it and μ_it denote error terms. Equation (2) expresses per capita income as a function of year and country specific effects (τ_t and λ_i), military expenditure (MILexp) and X, a vector of other explanatory variables.

Instrumental variables
In equation (2), income is a function of military expenditure; consequently, this equation may suffer from a problem of endogeneity. To deal with this potential endogeneity, MILexp is instrumented in this equation. The instrumental variable solution is to find another variable that is highly correlated with MILexp but not correlated with the error term; we use the Human Development Index (IDH).

Identifying the effect of the military expenditure on pollution
The total effect of military expenditure per capita on emissions of CO2 (dECO2/dMILexp) decomposes into a direct and an indirect effect. The direct effect is defined as the impact of military expenditure on emissions of CO2 holding income fixed. The indirect effect is expressed as the product of the impact of military expenditure on income (δY/δMILexp) and the impact of income on emissions of CO2 (δECO2/δY). These effects can be expressed as
dECO2/dMILexp = δECO2/δMILexp + (δECO2/δY)(δY/δMILexp),
where ECO2, Y and MILexp denote emissions of CO2, income and military expenditure, respectively. A small symbolic sketch of this decomposition is given below.
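To make the decomposition concrete, the sketch below differentiates a stylized version of equations (1) and (2) symbolically with SymPy; the linear income equation and all coefficient values are illustrative assumptions, not the paper's estimates.

```python
import sympy as sp

MIL, Y = sp.symbols('MILexp Y')
b1, b2, b3, d1 = sp.symbols('beta1 beta2 beta3 delta1')

# Stylized forms of eq. (1) and eq. (2):
#   ECO2 = b1*Y + b2*Y**2 + b3*MIL    (b3 is the direct channel)
#   Y    = d1*MIL                      (d1 is the income channel)
ECO2 = b1*Y + b2*Y**2 + b3*MIL

direct = sp.diff(ECO2, MIL)                 # dECO2/dMIL holding income fixed
indirect = sp.diff(ECO2, Y) * d1            # (dECO2/dY) * (dY/dMIL)
total = sp.simplify(direct + indirect)
print("total effect:", total)               # b3 + d1*(b1 + 2*b2*Y)

# Numeric illustration with assumed coefficients
vals = {b1: 0.8, b2: -0.01, b3: 0.05, d1: 0.3, Y: 20.0}
print("total effect at Y = 20:", float(total.subs(vals)))
```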
Table 4 provides estimates of the per capita income equation. In the first column, military expenditure is treated as being exogenous with regard to income and is therefore not instrumented; in all subsequent columns, models (Y1) to (Y4), military expenditure is instrumented using 2SLS. All models use a random-effects specification, and * and ** denote significance at 5% and 10%, respectively. Model (Y4) expresses per capita income simply as a function of population growth and military expenditure. Models (Y1) to (Y3) include explanatory variables used by many studies (Levine and Zervos (1993); Mankiw et al. (1992); Levine and Renelt (1992)). These variables are the population growth rate (POPgr), the rate of inflation (INFL), and the share of exports in GDP.

Estimation Results
In Table 4, military expenditure is found to have a statistically significant positive impact on income in all models. This result agrees with Benoit (1973) but contradicts other contributions (Leontief and Duchin (1983); Deger and Sen (1983); Taylor et al. (1984)). The correlation between MILexp and the instrument (IDH) is high, whereas the correlation between the residuals of model (Y1) and the instrument is very low (see Table A2). The first-stage regression results validate the use of the variable IDH as an instrument (see Table A3): the obtained F value is high and the first-stage estimates are significant. This gives extra support to the validity of the instrument (IDH).

Table 5 provides estimates of the CO2 emissions equation (standard errors in parentheses; * denotes significance at 5%; all models use a fixed-effects specification). Industry share (INDsh) and exports of goods and services (EXP) are found to be positive, significant determinants of pollutant emissions. In this direction, Managi (2004) shows that trade liberalization causes an increase in emissions of CO2. In the same context, Tubb and Magnani (2007) and Cole (2004) argue that trade negatively affects emissions of many pollutants (CO2, SO2, NO2, etc.) in OECD countries. It is now possible to quantify the impact of military expenditure on emissions of CO2. Table 6 provides the indirect, direct and total effects of military expenditure on pollution for each of the two models presented in Table 5. Table 6 indicates a positive direct impact of military expenditure on emissions of CO2. For emissions of CO2, the indirect effect is also positive, yielding a positive total effect. This positive sign of the indirect effect reflects the signs of both the relationship between income and emissions of CO2 (δECO2/δY) and the relationship between military expenditure and income (δY/δMILexp). Consequently, a military-expenditure-induced reduction in income leads to a reduction in emissions of CO2, and vice versa.

Discussions and conclusions
The aim of this paper is to study the relationship between emissions of CO2 and military effort through a detailed empirical examination. Empirical results show that military expenditure has a positive indirect and direct effect on per capita CO2 emissions. This positive linkage between the two variables was found to gain statistical significance when military effort was instrumented as a determinant of income. A direct consequence of our results is that a reduction of military effort entails a reduction of emissions of CO2, and vice versa.
2,100
2018-11-13T00:00:00.000
[ "Economics", "Environmental Science" ]